College of Engineering News • Iowa State University

Frontiers of Computer Architecture

Joseph Zambreno, an associate professor in the Iowa State Electrical and Computer Engineering Department, is drawing from three frontiers of computer architecture – exascale computing, data mining and “fused” chipsets – to propose new approaches to data collection and chip design.

“Computer engineering is considered an enabling discipline,” Zambreno says. “We have the physicists, the chemists, the people who work on bioinformatics, who need us. They have algorithms that need as much computing power as they can get. If we can provide them efficient, scalable chips, that could be what leads to the breakthrough that eventually cures cancer or something of that nature.”

THE EXASCALE ERA
Today, most computer users measure space in terms of gigabytes and, more recently, terabytes. Large-scale data users like Google and Facebook measure their server farms in terms of petabytes, equal to 1,024 terabytes. A petabyte is a staggeringly large unit of measurement. To watch one gigabyte of high definition video, for example, you would have to watch for about seven minutes. To watch one petabyte of HD video, you would have to watch for 13 years and four months.
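
The arithmetic behind those viewing-time figures is easy to check. A minimal Python sketch, using decimal prefixes (1 PB = 1,000,000 GB), which is how the thirteen-years-and-four-months figure works out:

```python
# Quick check of the viewing-time arithmetic above, using decimal
# prefixes (1 PB = 1,000,000 GB) and the article's rough figure of
# seven minutes of HD video per gigabyte.
MINUTES_PER_GB = 7
GB_PER_PB = 1_000_000
MINUTES_PER_YEAR = 60 * 24 * 365.25

years_per_pb = MINUTES_PER_GB * GB_PER_PB / MINUTES_PER_YEAR
print(f"1 PB of HD video: ~{years_per_pb:.1f} years of viewing")          # ~13.3 years

# An exabyte is a thousand petabytes in decimal prefixes:
print(f"1 EB of HD video: ~{years_per_pb * 1000:,.0f} years of viewing")  # ~13,309 years
```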

“We can build petascale systems now if we are willing to expend significant amounts of money to do so,” Zambreno says. “But what do we need to do to be able to build that next-generation system, the exabyte system, that’s even a thousand times bigger than that? That’s what we’re working toward.”

Some of the world’s largest databases have already begun to break the exabyte (1,024 petabytes) barrier, but computing performance has lagged behind the rate of data expansion. In short, we have all this data, but not enough computing power to sort through it.

Zambreno is setting his sights on rectifying that situation, one step at a time. With funding provided by a National Science Foundation Computer Systems Research grant, he uses a Field-Programmable Gate Array (FPGA)-based machine programmed to act like an exascale-era chip, running numerous tests to find the strengths and weaknesses of multiple architectural setups.

“We’re testing out architecture ideas that won’t be ready to market for 10 years, if that,” Zambreno says. “We have software that simulates what the chip would look like and what its characteristics would be. We tend to optimize for either performance or power consumption, but other aspects like programmability or security are common too.”
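
To give a flavor of what that kind of simulation involves, here is a deliberately simplified sketch of design-space exploration: sweep candidate chip configurations and rank them by an objective such as performance per watt. The parameters and cost models below are invented for illustration and are not Zambreno’s actual simulator:

```python
# Hypothetical design-space sweep: score each candidate configuration
# with toy performance and power models, then pick the best tradeoff.
from itertools import product

def estimate(cores, ghz, cache_mb):
    perf = cores * ghz * (1 + 0.05 * cache_mb)   # toy throughput model
    power = cores * ghz ** 2 + 0.5 * cache_mb    # toy power model (watts)
    return perf, power

configs = product([8, 16, 32], [1.5, 2.0, 3.0], [8, 32])  # cores, GHz, cache MB
best = max(configs, key=lambda cfg: estimate(*cfg)[0] / estimate(*cfg)[1])
print("Best performance-per-watt configuration (cores, GHz, cache MB):", best)
```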

Though the actual creation of an exascale system is likely several years away, the push toward the exascale era has resulted in many useful breakthroughs.

“If you look at the processor in the iPhone or the Samsung Galaxy – they have processors that would have been state-of-the-art desktop processors just a few years ago,” Zambreno says. “By pushing to make state-of-the-art desktop processors that much better and more power-efficient, you get those really nice little side effects. It trickles down, and now your mobile phone is faster than your previous desktop.”

He doesn’t consider aiding in the design of exascale-era chips an accomplishment in itself, though. Zambreno is much more interested in what those future exascale chips could be used for.

DATA MINING AT EXASCALE
When processors are powerful enough to sort through exabytes of data, finding useful patterns in that data – data mining – will be vital to businesses and future researchers.

“Data mining, as an application, is still in its infancy,” Zambreno says. “People have written a whole bunch of software algorithms, but they haven’t really focused that much on what the architecture should look like for those algorithms.”

Data mining has an enormous number of potential uses, from businesses using it to predict the buying habits of potential customers to scientists employing it to map relationships between strands of DNA and study disease. Today, however, the gap between the amount of data available and the amount of data that processors are able to handle is widening. Zambreno’s work involves shrinking that gap and figuring out how to build computer architecture that takes full advantage of its increased power.

“As transistors get smaller and smaller and we can fit more and more on a chip, what do we do with them?” he asks. “We can add extra cores, so we go from eight cores to 16, for example, but there are diminishing returns to where we can go with regard to that kind of acceleration. If we have all these transistors, let’s allocate some to work on data mining. We might as well spend part of these chips on something that could be really useful once we need it.”
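
The diminishing returns Zambreno mentions are commonly illustrated with Amdahl’s law, which bounds the speedup from adding cores by the fraction of a program that must run serially. A minimal sketch of that standard textbook model (not Zambreno’s own analysis):

```python
# Amdahl's law: overall speedup is limited by a program's serial fraction,
# no matter how many cores are added.
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# Even a program that is 90% parallelizable can never exceed 10x:
for cores in (8, 16, 64, 1024):
    print(f"{cores:>5} cores -> {amdahl_speedup(0.9, cores):.2f}x speedup")
# ->     8 cores -> 4.71x
#       16 cores -> 6.40x
#       64 cores -> 8.77x
#     1024 cores -> 9.91x
```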

Creating ever-larger and ever-faster chips will always be the goal of computer engineers, but creating a smarter chip is another part of Zambreno’s research.

FUSED CHIPS
Traditional computer architecture revolves around a Central Processing Unit (CPU) carrying out instructions, handling logic and performing computations while a Graphics Processing Unit (GPU) renders graphics, handles display output and works with highly parallel, multi-threaded tasks. Today’s “fused” chips, including the AMD Fusion, Intel’s Core processors with integrated HD Graphics and the NVIDIA Tegra, feature integrated CPU/GPU designs that promote faster interfacing and more efficient use of processor power. However, today’s “fused” chip model still uses a CPU and a GPU performing the same roles they always did, just in closer proximity. Zambreno wants to turn this line of thinking on its head.

“The trend now is the so-called ‘fused architecture,’ or a CPU and a GPU on the same die,” Zambreno says. “But it’s kind of just glued together at this point. In the past, your CPU would be in one place, your GPU would be somewhere else and they’d be connected with a fairly high-speed bus. It’s better now; they’re physically closer together, so things like locality are better and power efficiency is improved. But architecturally, it’s not that interesting. It’s sort of a logical consequence of what has been happening for years now.”

Funded by a National Science Foundation CAREER Award, Zambreno is working on a proposal for a hybrid chip, one that combines the best of a CPU and a GPU.

“We’re looking at what CPUs do very well and what GPUs do very well and figuring out how we can get the best of both worlds in terms of memory efficiency, computational density and power efficiency,” he says.
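
As a loose illustration of that “best of both worlds” goal, a runtime on such a hybrid chip might route each task to whichever style of core suits it. A toy sketch, with the heuristic, threshold and names invented for illustration rather than drawn from Zambreno’s design:

```python
# Toy dispatcher: branchy, serial work goes to CPU-style cores;
# wide, data-parallel work goes to GPU-style lanes.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    data_parallelism: int   # independent elements the task touches
    branch_heavy: bool      # dominated by unpredictable control flow

def dispatch(task: Task) -> str:
    if task.branch_heavy or task.data_parallelism < 1024:  # invented threshold
        return "CPU-style cores"   # low latency, strong single-thread performance
    return "GPU-style lanes"       # high throughput, wide data parallelism

for t in [Task("parse query", 1, True),
          Task("matrix multiply", 1_000_000, False)]:
    print(f"{t.name} -> {dispatch(t)}")
```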

Zambreno’s work in exascale computing, data mining and “fused” chip design represents the cutting edge of computer architecture. Still, Zambreno defines his work in terms of service to other fields.

“Our innovations [as computer engineers] are not very broadly impacting just by themselves,” he says. “Increasing processor power isn’t important unless you’re saying, ‘Now that we have that extra computing power, maybe that enables us to do things that we didn’t think were possible.’”
