Today's emerging applications are extremely demanding in terms of storage and computing power. For instance, the Internet of Things (IoT) combined with edge computing is expected to transform not only all aspects of our lives but also the Integrated Circuit (IC) and computing world. Emerging applications require computing power that was typical of supercomputers only a few years ago, but under constraints on size, power consumption, and guaranteed response time that are typical of embedded applications. Both today's computer architectures and the device technologies used to manufacture them are facing major challenges that make them incapable of delivering the required functionalities and features. Computers face the three well-known walls [1]: (1) the Memory wall, due to the increasing gap between processor and memory speeds and the limited memory bandwidth, which makes memory access the performance and power killer for memory-access-dominated applications such as big data; (2) the Instruction-Level Parallelism (ILP) wall, due to the increasing difficulty of extracting enough parallelism from software/code to keep parallel hardware, now the mainstream, fully utilized; and (3) the Power wall, as the practical power limit for cooling has been reached, meaning no further increase in CPU clock speed. On the other hand, nanoscale complementary metal-oxide-semiconductor (CMOS) technology, which has been the enabler of the computing revolution, also faces three walls [2]: (1) the Reliability wall, as technology scaling leads to reduced device lifetime and higher failure rates; (2) the Leakage wall, as static power becomes dominant at smaller technology nodes (due to volatile technology and lower Vdd) and may even exceed dynamic power; and (3) the Cost wall, as the cost per device obtained through pure geometric scaling of process technology is plateauing. All of these factors have led to the slowdown of traditional device scaling. For computing systems to continue delivering sustainable benefits to society for the foreseeable future, alternative computing architectures and paradigms have to be explored in light of emerging device technologies.
Computation-in-memory (CIM) is one of the alternative computing architectures that has been attracting a lot of attention, as it seems to have huge potential to deliver order-of-magnitude improvements in energy efficiency; see, for example, [3–5]. CIM may make use of traditional memory technologies such as SRAM [6, 7] and DRAM [8, 9], as well as emerging device and memory technologies such as resistive random-access memory (RRAM) [10, 11], spin-transfer torque magnetic random-access memory (STT-MRAM) [12, 13], phase-change memory (PCM) [14, 15], or even the ferroelectric field-effect transistor (FeFET) [16, 17]. Research on CIM has been exploring different aspects of the computing-engine design stack, including devices, circuit design, architectures, compilers, automation and tools, algorithms, and applications.
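As a brief illustration of the principle (not drawn from any paper in this issue), the canonical CIM primitive in resistive crossbars is an analog matrix-vector multiplication: operands stored as device conductances are multiplied by applied voltages via Ohm's law, and bit-line currents sum the products via Kirchhoff's current law. The following sketch models this behavior numerically; the specific conductance and voltage values are illustrative assumptions.

```python
import numpy as np

# Conductance matrix G (siemens): each entry models one programmable
# resistive cell (e.g., RRAM or PCM) storing a matrix operand.
G = np.array([[1.0e-6, 2.0e-6],
              [3.0e-6, 4.0e-6]])

# Input vector encoded as word-line voltages (volts).
V = np.array([0.5, 1.0])

# Ohm's law per cell (I = G * V) plus Kirchhoff's current law on each
# bit line yields the matrix-vector product in a single read operation:
# I_j = sum_i V_i * G[i, j]
I = G.T @ V

print(I)  # bit-line currents in amperes
```

The multiply-accumulate thus happens where the data resides, avoiding the processor-memory data movement that underlies the memory wall.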
This special issue intends to capture the state of the art, explore different aspects of CIM full-stack design, and show its potential applications and benefits. The inherent characteristics of CIM force the revision of existing design methods. Of all the submissions received from experts in the field, only 12 could be accepted for inclusion in this issue. These 12 papers cover four major aspects of CIM: (1) circuit design concepts, (2) architectures, (3) applications, and (4) automation tools.
Said Hamdioui
Delft University of Technology, The Netherlands
Elena-Ioana Vatajelu
TIMA, CNRS, INPG Université Grenoble Alpes, France
Alberto Bosio
École Centrale de Lyon, Institute of Nanotechnology, France
Guest Editors