Lecture: Processing Data Where It Makes Sense: Enabling In-Memory Computation
Today’s computing systems are overwhelmingly designed to move data to
computation. This design choice causes inherent performance and energy
bottlenecks:
(1) data access from memory is already a key bottleneck as applications become more data-intensive,
(2) energy consumption is a key constraint, especially in mobile and server systems,
(3) data movement is very expensive in terms of bandwidth, energy, and latency, much more so than computation. These trends are felt especially severely in today's data-intensive server systems and energy-constrained mobile systems.
At the same time, conventional memory technology is facing many scaling challenges, not only in terms of energy and performance but also reliability. For instance, DRAM technology suffers from the RowHammer problem, as we discovered and rigorously analyzed. RowHammer is the phenomenon that repeatedly accessing a row in a modern DRAM chip predictably causes errors in physically adjacent rows. It is a prime (and very likely the first) example of how a circuit-level failure mechanism in DRAM can cause a practical and widespread system security vulnerability.
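To make the access pattern concrete, below is a minimal sketch in the style of published RowHammer test code, assuming an x86 machine and two addresses addr_a and addr_b that happen to map to different rows of the same DRAM bank; the cache-line flushes are what force every access to go all the way to DRAM.

```c
#include <emmintrin.h>  /* _mm_clflush (x86 SSE2 intrinsic) */
#include <stdint.h>

/* Repeatedly activate two "aggressor" rows in the same DRAM bank.
 * addr_a and addr_b are assumed to map to different rows of one bank.
 * Flushing the cache lines after each read ensures the next read is
 * served by DRAM rather than the cache. After enough iterations, bits
 * in rows physically adjacent to the aggressors may flip on
 * vulnerable chips. */
static void hammer(volatile uint8_t *addr_a,
                   volatile uint8_t *addr_b,
                   long iterations)
{
    for (long i = 0; i < iterations; i++) {
        (void)*addr_a;                      /* DRAM read: activate row A */
        (void)*addr_b;                      /* DRAM read: activate row B */
        _mm_clflush((const void *)addr_a);  /* evict so the next read    */
        _mm_clflush((const void *)addr_b);  /* reaches DRAM again        */
    }
}
```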
As a result of such serious technology scaling challenges, memory system architects are open to organizing memory in different ways and making it more intelligent, at the expense of slightly higher cost. The emergence of 3D-stacked memory with a logic layer, as well as the adoption of error-correcting codes inside the latest DRAM chips, is evidence of this trend.
In this talk, we will discuss some recent research that aims to practically enable computation close to data. After reviewing motivating trends in applications as well as technology, we discuss at least two promising directions for processing-in-memory (PIM): (1) performing massively parallel bulk operations in memory by exploiting the analog operational properties of DRAM, with low-cost changes (the sketch after this paragraph illustrates the data movement this avoids);
(2) exploiting the logic layer in 3D-stacked memory technology to accelerate important data-intensive applications. For both approaches, we describe and tackle relevant cross-layer research, design, and adoption challenges in devices, architecture, systems, and programming models. Our focus will be on developing in-memory processing designs that can be adopted at low cost in real computing platforms and real data-intensive applications, spanning machine learning, graph processing, and genome analysis.
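As a rough illustration of the cost that the first direction targets, the sketch below computes a bulk bitwise AND in the conventional way: every word of both source vectors is read from DRAM into the CPU and every result word is written back, so the loop is bound by memory bandwidth and pays the full data-movement energy cost. An in-DRAM design in the spirit of direction (1) would instead perform the same operation across entire rows inside the memory arrays; the pim_bulk_and name in the comment is a hypothetical placeholder, not an interface presented in the talk.

```c
#include <stddef.h>
#include <stdint.h>

/* Conventional bulk bitwise AND: dst[i] = a[i] & b[i].
 * All of a and b must travel from DRAM through the cache hierarchy to
 * the CPU, and all of dst must travel back, even though the computation
 * itself is trivial. An in-memory design would replace this loop with a
 * row-wide operation performed inside DRAM, e.g. a (hypothetical) call
 * such as pim_bulk_and(dst, a, b, n). */
static void bulk_and_cpu(uint64_t *dst, const uint64_t *a,
                         const uint64_t *b, size_t n)
{
    for (size_t i = 0; i < n; i++)
        dst[i] = a[i] & b[i];
}
```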
Info
Day: 2018-10-06
Start time: 12:45
Duration: 00:40
Room: G 61
Track: Computer Science
Speakers
Juan Gómez Luna
Prof. Onur Mutlu