“Hiding” network latency for fast memory in data centers

By News Staff | July 28, 2020

Sharing server memory between applications in large computer clusters is still a major goal for the cloud and high-performance computing communities. With fast networking technology, the memory available across a data center's server racks could be managed by schedulers as though it were a single pooled resource, providing a major boost to speed and performance.

A service developed by U-M researchers called Infiniswap made this technology — called “memory disaggregation” — feasible in 2017, but it still suffered from several latency overheads that made real-world adoption unlikely. Now, a new system from the same lab called Leap improves upon this and other disaggregation solutions by applying a technique called prefetching to remote memory environments.

The prefetcher allows nearly all applications to run as if they were working with local memory. “This prefetching solution helps to hide the network latency, and the data path makes sure the operating system has no overhead,” says project researcher Hasan Al Maruf, a PhD student in the Computer Science and Engineering division.
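
For readers curious what this kind of prefetching looks like in practice, the sketch below simulates the general idea on a toy page-fault trace: watch the deltas between recent page accesses, and when most of them agree on one stride, fetch a few pages ahead along it before the application asks for them. This is a simplified illustration only; the window size, prefetch depth, function names, and trace are invented for this example, and it does not reflect Leap's actual in-kernel implementation.

```c
/*
 * Illustrative trend-detection prefetcher over a simulated page-fault trace.
 * All names, constants, and the trace below are made up for illustration;
 * this is not Leap's actual kernel code.
 */
#include <stdio.h>

#define WINDOW          8   /* recent accesses to inspect            */
#define PREFETCH_DEPTH  4   /* pages to fetch ahead of a clear trend */

/* Stand-in for issuing an asynchronous read of a remote page. */
static void fetch_remote_page(long page)
{
    printf("  prefetch page %ld\n", page);
}

/*
 * Inspect the deltas between the last WINDOW accesses; if a majority of
 * them agree on a single stride, prefetch PREFETCH_DEPTH pages along it.
 * Irregular accesses produce no majority, so nothing is fetched.
 */
static void maybe_prefetch(const long *history, int count)
{
    if (count < WINDOW)
        return;

    long best_stride = 0;
    int  best_votes  = 0;
    int  first       = count - WINDOW + 1;   /* first delta index */

    for (int i = first; i < count; i++) {
        long stride = history[i] - history[i - 1];
        int  votes  = 0;
        for (int j = first; j < count; j++)
            if (history[j] - history[j - 1] == stride)
                votes++;
        if (votes > best_votes) {
            best_votes  = votes;
            best_stride = stride;
        }
    }

    /* Prefetch only when a clear majority of the deltas agree. */
    if (best_stride != 0 && 2 * best_votes > WINDOW - 1)
        for (int k = 1; k <= PREFETCH_DEPTH; k++)
            fetch_remote_page(history[count - 1] + k * best_stride);
}

int main(void)
{
    /* A mostly sequential fault trace with one noisy access at page 500. */
    long trace[] = { 100, 101, 102, 103, 500, 104, 105, 106, 107, 108 };
    int  len     = (int)(sizeof(trace) / sizeof(trace[0]));

    for (int i = 0; i < len; i++) {
        printf("fault on page %ld\n", trace[i]);
        maybe_prefetch(trace, i + 1);
    }
    return 0;
}
```

In a real remote-memory setting, the fetch step would issue an asynchronous read over the fast network so the data is already local by the time the application touches it, which is how the network latency gets hidden from the application.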

Author: News Staff

Contact Michigan IT News staff at umit-cio-newsletter@umich.edu.