The looming threat of diminishing returns on computer chip improvements hangs over the field of computer architecture. The Moore’s Law era seems to be winding down, bringing an end to the reliable trend of cheaper and more powerful chips every year. This silicon crisis has inspired solutions from many different fronts, changing the way we think of computer hardware design.
One popular approach, the application-specific integrated circuit (ASIC), offers significant performance boosts in many key computing areas by designing hardware uniquely outfitted for certain tasks. But application-specific means application-specific: these chips trade away all the flexibility of CPUs and graphics processing units (GPUs) for that performance boost.
In search of a better way, researchers from the University of Michigan, the University of Edinburgh, Arizona State University, and Arm, led by Prof. Ron Dreslinski, are working under a $9.5 million DARPA grant to develop a hardware architecture and software ecosystem that together can approach the power of ASICs with the flexibility of a CPU. Called Transmuter, this “software-defined hardware” can change how programs use the hardware available to them in real time, effectively acting as a reconfigurable computer.
Current high-efficiency systems like GPUs handle regular workloads well, where many threads follow the same code execution path. What they struggle with is sudden changes in a workload or frequent irregularities. ASICs, in contrast, can handle the varying demands of the particular task they were built for, but respond poorly to new code or to data with different characteristics.
To take on both of these shortcomings, Transmuter offers a reconfigurable approach. The system would use a runtime to monitor an application’s behavior and adapt to any new demands – altering how hardware is used for better load balancing and changing how it handles data with new characteristics, for example. The software would be able to adapt the hardware as applications are running, making changes to processor interconnect speeds, connectivity, and arbitration policies, as well as processor use to allow for power-saving measures.
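The monitor-and-adapt loop described above can be sketched in a few lines. This is purely an illustrative assumption of how such a runtime might be structured; the configuration knobs, thresholds, and function names below are hypothetical and are not Transmuter's actual interface.

```python
# Hypothetical sketch of a software-defined-hardware runtime loop.
# All names, knobs, and thresholds are illustrative assumptions,
# not the Transmuter project's real API.

from dataclasses import dataclass


@dataclass
class HardwareConfig:
    cache_mode: str     # e.g. shared vs. private last-level cache
    interconnect: str   # e.g. crossbar vs. ring topology
    active_cores: int   # cores left powered on


def profile(workload) -> float:
    """Stand-in for hardware counters: fraction of irregular accesses."""
    irregular = sum(1 for op in workload if op["random_access"])
    return irregular / len(workload)


def choose_config(irregularity: float) -> HardwareConfig:
    # Regular, streaming workloads favor wide parallelism; irregular
    # ones favor cache-friendly, lower-contention configurations.
    if irregularity < 0.25:
        return HardwareConfig("shared", "crossbar", active_cores=16)
    return HardwareConfig("private", "ring", active_cores=8)


# A mostly regular synthetic workload: 1 in 5 ops is a random access.
workload = [{"random_access": i % 5 == 0} for i in range(100)]
config = choose_config(profile(workload))
print(config.cache_mode, config.interconnect, config.active_cores)
# → shared crossbar 16
```

In a real system the profiling step would read hardware performance counters and the chosen configuration would be pushed down to the reconfigurable fabric, but the control structure, measure, decide, reconfigure, is the same.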
“A reconfigurable system allows designers to create a wide range of tailored configurations for specific workloads, while maintaining the flexibility and coding efficiency we are used to with today's CPUs and GPUs,” says Dreslinski.
As an example of this, Dreslinski describes a graph of all the connections on Twitter. While processing that graph, most connections are going to involve only a small set of individuals and produce only a very sparse dataset. If the hardware is configured to process a workload like that, and is then suddenly asked to process Justin Bieber’s connections, the density drastically changes. Transmuter’s software would be able to detect this imbalance and reconfigure the hardware to more effectively use the cache resources as soon as the workload changes.
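Dreslinski's Twitter example boils down to a density test: a typical account's row of the adjacency matrix is extremely sparse, while a celebrity hub's row is dense enough to deserve a different caching strategy. The sketch below shows that decision in isolation; the strategy names and the threshold are hypothetical, chosen only to make the idea concrete.

```python
# Illustrative sketch of density-driven reconfiguration (hypothetical
# names and threshold; not Transmuter's actual mechanism).

def pick_strategy(degree: int, num_vertices: int,
                  threshold: float = 0.01) -> str:
    """Choose a cache strategy based on how dense this vertex's
    neighbor list is relative to the whole graph."""
    density = degree / num_vertices
    return "dense-tiled" if density >= threshold else "sparse-gather"


num_users = 1_000_000
print(pick_strategy(degree=150, num_vertices=num_users))      # typical user
# → sparse-gather
print(pick_strategy(degree=100_000, num_vertices=num_users))  # hub account
# → dense-tiled
```

The point of the example is that the decision can be made while the application runs: when the runtime observes the workload crossing the density threshold, it can switch the hardware's cache configuration rather than limping along with the sparse-optimized one.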
With this smart balancing of available hardware and application demands, the team expects Transmuter to offer unprecedented energy efficiency and raw performance two orders of magnitude better than today’s CPUs, within an order of magnitude of ASIC designs. The system will be optimized for data-intensive algorithms, like image and video understanding and graph analytics.
This project is funded by DARPA’s Electronics Resurgence Initiative (ERI) and was accepted as part of its Software Defined Hardware thrust, a program that seeks to develop systems with near-ASIC performance on data-intensive algorithms without sacrificing programmability.
The ERI has provided a total of $44.4 million for five different projects involving U-M faculty. Dreslinski is also involved in “Domain-Focused Advanced Software-Reconfigurable Heterogeneous System on Chip (DASH-SoC)” with Prof. Hun-Seok Kim, “OpenROAD (Foundations and Realization of Open and Accessible Design)” with Prof. Dennis Sylvester, and “Fully-Autonomous System on Chip (SoC) Synthesis using Customizable Cell-Based Synthesizable Analog Circuits” with Prof. Dave Wentzloff.
Posted: July 24, 2018