Monday, April 19, 2010

Revamped Design Increases Computer Efficiency

It's hard to imagine, but single-core processors were once pushed to clock speeds so high that the resulting heat threatened to damage the very chips they were built on. Processors are now designed to split up the workload across multiple cores to counter this incredible heat. However, this has presented new challenges for today's engineers, on both the hardware and software level, when it comes to maintaining and increasing a computer's computational speed. Luckily, researchers at North Carolina State University seem to have just raised the bar in this matter by devising a way to increase the efficiency of modern processors by up to twenty percent.

To understand what is going on under the hood of these machines, a quick overview is in order:

Computers have a so-called “brain” known as the Central Processing Unit (CPU). Modern CPUs contain one or more cores, and a core is where the computations take place when executing an application or program such as your everyday web browser.

The calculations necessary to run a program can be split up into separate tasks called “threads”, a process known as parallelization. These threads can then be computed simultaneously on multiple cores, making for a very fast means of computation.
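As a rough illustration of that idea, here is a minimal C++ sketch (not from the article) that splits one job, summing a large array, into two independent threads so that each half can run on its own core:

```cpp
// Minimal sketch: summing two halves of an array on two threads.
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    std::vector<int> data(1'000'000, 1);
    long long left = 0, right = 0;
    std::size_t mid = data.size() / 2;

    // Each thread works on an independent half of the data, so both halves
    // can be computed at the same time on separate cores.
    std::thread t1([&] { left  = std::accumulate(data.begin(), data.begin() + mid, 0LL); });
    std::thread t2([&] { right = std::accumulate(data.begin() + mid, data.end(), 0LL); });
    t1.join();
    t2.join();

    std::cout << "sum = " << (left + right) << "\n";
}
```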

Unfortunately, some programs are difficult to split up into threads because of their sequential nature: each step depends on the outcome of an earlier one before computation can continue. This limits their use of multi-core systems and slows execution time.
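A small, hypothetical example of that kind of sequential dependence: in the loop below, each iteration needs the result of the previous one, so the work cannot simply be divided among cores.

```cpp
// Sketch of a sequentially dependent computation that resists parallelization.
#include <iostream>

int main() {
    double x = 0.5;
    for (int i = 0; i < 1000; ++i) {
        // Step i+1 depends on the outcome of step i, so the iterations must
        // run one after another on a single core.
        x = 3.9 * x * (1.0 - x);   // logistic map: x_{n+1} = r * x_n * (1 - x_n)
    }
    std::cout << "final value: " << x << "\n";
}
```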

Program execution traditionally takes place as a calculation step followed by a memory-management step that frees up memory and prepares data for storage. For difficult-to-parallelize programs, this two-step task can only be completed on a single core, slowing things down significantly.
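The traditional pattern looks roughly like the following sketch (my own illustration, not the authors' code): the same core alternates between computing and handing memory back to the allocator, so the memory-management work directly delays the next piece of computation.

```cpp
// Sketch of the traditional single-core pattern: compute, then manage memory,
// then compute again, all on one thread.
#include <cstdlib>
#include <iostream>

int main() {
    long long total = 0;
    for (int task = 0; task < 1000; ++task) {
        // Step 1: computation, using a freshly allocated buffer.
        int* buffer = static_cast<int*>(std::malloc(1024 * sizeof(int)));
        if (!buffer) return 1;
        for (int i = 0; i < 1024; ++i) buffer[i] = i;
        for (int i = 0; i < 1024; ++i) total += buffer[i];

        // Step 2: memory management. The same core must stop computing and
        // return the buffer to the allocator before the next task can start.
        std::free(buffer);
    }
    std::cout << "total = " << total << "\n";
}
```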

What the researchers did to achieve this twenty percent gain in efficiency is treat the memory-management step as a separate thread (MMT), allowing both steps to execute simultaneously. This ensures utilization of multiple cores, increasing processing speed. Programs frequently request that memory be allocated or freed for various reasons. These requests are now passed through a small layer of code, sitting between the application and the operating system, that determines how they should be satisfied.
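To make the idea concrete, here is a hedged sketch of what such an interposing layer might look like. This is not the researchers' implementation; the class name `DeferredFree` and its `release()` method are invented for illustration. The computation thread hands each deallocation request to a helper thread through a queue instead of doing the work itself.

```cpp
// Sketch of a thin layer that forwards free() requests to a helper thread,
// so the computation thread never stalls on memory management.
#include <condition_variable>
#include <cstdlib>
#include <mutex>
#include <queue>
#include <thread>

class DeferredFree {
public:
    DeferredFree() : worker_(&DeferredFree::run, this) {}

    ~DeferredFree() {
        {
            std::lock_guard<std::mutex> lock(m_);
            done_ = true;
        }
        cv_.notify_one();
        worker_.join();
    }

    // The program calls this instead of free(); the request is queued and the
    // helper thread performs the actual deallocation on another core.
    void release(void* p) {
        {
            std::lock_guard<std::mutex> lock(m_);
            pending_.push(p);
        }
        cv_.notify_one();
    }

private:
    void run() {
        std::unique_lock<std::mutex> lock(m_);
        for (;;) {
            cv_.wait(lock, [this] { return done_ || !pending_.empty(); });
            while (!pending_.empty()) {
                void* p = pending_.front();
                pending_.pop();
                lock.unlock();
                std::free(p);          // the real memory-management work
                lock.lock();
            }
            if (done_) return;
        }
    }

    std::queue<void*> pending_;
    std::mutex m_;
    std::condition_variable cv_;
    bool done_ = false;
    std::thread worker_;
};

int main() {
    DeferredFree mm;
    for (int i = 0; i < 1000; ++i) {
        void* buffer = std::malloc(1024);
        // ... the computation would use the buffer here ...
        mm.release(buffer);   // returns immediately; the helper thread frees it
    }
}
```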

Their method involves taking these memory-management requests and lumping them together, then predicting how the requests should be handled to satisfy the needs of the program. Dr. James Tuck, assistant professor of electrical and computer engineering at NC State, says: “As it turns out, programs are very predictable.” Predictable in the sense that it is possible to anticipate the next memory-management request before it has happened. So, by predicting requests in bulk, time is saved, since the work has already been done by the time the program asks for it. These requests are then completed via the MMT on a separate core from the one running the computation thread. “It's like having two cooks in a kitchen...” Dr. Tuck explains. One cook does the mixing while the other prepares the ingredients. The mixing, or computation, is the important part of satisfying the program's requests, and the preparation of the ingredients is what the MMT takes care of.
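The "lumping together" part can be sketched on its own, again as my own illustration rather than the paper's code: the computation thread collects a batch of deallocation requests and hands the whole batch to a second thread (the second cook) at once, rather than synchronizing on every single request. The batch size of 64 below is an arbitrary assumption.

```cpp
// Sketch of batching memory-management requests for a helper thread.
#include <cstdlib>
#include <mutex>
#include <thread>
#include <vector>

int main() {
    std::mutex m;
    std::vector<void*> handoff;   // shared batch awaiting the helper thread
    bool finished = false;

    // Helper thread: repeatedly takes whatever batch is waiting and frees it.
    // (A real implementation would block on a condition variable instead of
    // spinning; the loop is kept simple here.)
    std::thread helper([&] {
        for (;;) {
            std::vector<void*> batch;
            {
                std::lock_guard<std::mutex> lock(m);
                batch.swap(handoff);
                if (batch.empty() && finished) return;
            }
            for (void* p : batch) std::free(p);
        }
    });

    // Computation thread: allocates and computes, but only synchronizes once
    // per batch of 64 requests rather than once per request.
    std::vector<void*> local;
    for (int i = 0; i < 10000; ++i) {
        void* buffer = std::malloc(256);
        // ... computation on buffer would happen here ...
        local.push_back(buffer);
        if (local.size() == 64) {
            std::lock_guard<std::mutex> lock(m);
            handoff.insert(handoff.end(), local.begin(), local.end());
            local.clear();
        }
    }

    {
        std::lock_guard<std::mutex> lock(m);
        handoff.insert(handoff.end(), local.begin(), local.end());
        finished = true;
    }
    helper.join();
}
```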

This exploitation of parallelization resulted in some interesting findings, the most significant being that the MMT approach requires no changes to existing applications. In layman's terms, MMT allows for a boost in speed without having to alter pre-existing code, and is effectively transparent at the user level. This is good news, considering that a large overhaul of complicated programs such as common web browsers and word processors is not necessary, saving lots of time and money for a possible twenty percent increase in speed.

The scientific article, “MMT: Exploiting Fine-Grained Parallelism in Dynamic Memory Management,” can be found at http://www.ece.ncsu.edu/arpers/Papers/MMT_IPDPS10.pdf

A visual of an Intel Core 2 Duo processor.

1 comment:

  1. For me, the best and most effective part of this post is graphs 2-6--you show me how computers process, and what's in the way of faster processing. (Seriously--I didn't know this before. 1+1=2 stuff for you...)

    The first paragraph doesn't make me want to get there, though... "Single processor" is jargon, believe it or not... For a computer-y audience, the paragraphs I like are too basic, and for me (computer dummy), that jargon is off-putting... Mixed audience messages.
