An In-Memory-Computing Structure with Quantum-Dot Transistor Toward Neural Network Applications: From Analog Circuits to Memory Arrays

    This chapter appeared previously in the International Journal of High Speed Electronics and Systems. To cite this chapter, please cite the original article as follows: Y. Zhao, F. Jain and L. Wang, Int. J. High Speed Electron. Syst., 33, 2440059 (2024), doi:10.1142/S0129156424400597.

    https://doi.org/10.1142/9789811297427_0008
    Abstract:

    The rapid advancements in artificial intelligence (AI) have demonstrated great success in various applications, such as cloud computing, deep learning, and neural networks. However, most of these applications rely on fast computation and large storage, which poses significant challenges to the hardware platform. There is therefore growing interest in exploring new computation architectures to address these challenges. Compute-in-memory (CIM) has emerged as a promising solution to the data-transfer frequency and energy-consumption limitations of traditional computer architectures. Non-volatile memory devices, such as quantum-dot transistors, have been widely used in CIM to provide high-speed processing, low power consumption, and large storage capacity. Matrix-vector multiplication (MVM), or the dot product operation, is a primary computational kernel in neural networks. CIM offers an effective way to optimize the performance of the dot product by intertwining processing and memory elements. In this paper, we present a novel design and analysis of a quantum-dot transistor (QDT) based CIM that performs efficient MVM or dot product operations inside the memory array itself. Our proposed approach offers the energy-efficient, high-speed data processing that is critical for implementing AI applications on resource-limited platforms such as portable devices.
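
    For intuition, the following is a minimal, idealized sketch in Python of the crossbar-style in-memory dot product that CIM architectures exploit. It is a generic model for illustration only, not the QDT circuit analyzed in this chapter; the function name, array sizes, and conductance values are hypothetical.

    # Idealized crossbar model of in-memory matrix-vector multiplication:
    # weights are stored as cell conductances G, inputs are applied as
    # word-line voltages V, and each bit-line current is the analog dot
    # product I_j = sum_i V_i * G_ij (Ohm's law plus Kirchhoff's current law).
    import numpy as np

    def crossbar_mvm(voltages, conductances):
        """Bit-line currents for input voltages and a conductance matrix."""
        # voltages: shape (rows,); conductances: shape (rows, cols)
        return voltages @ conductances  # shape (cols,): one summed current per bit line

    # Hypothetical example: a 4x3 array of programmed conductances (siemens)
    rng = np.random.default_rng(0)
    G = rng.uniform(1e-6, 1e-4, size=(4, 3))   # stored weights, encoded as conductances
    V = np.array([0.2, 0.0, 0.5, 0.3])         # input activations, encoded as voltages

    I = crossbar_mvm(V, G)                      # currents read out by column ADCs
    print(I)

    In such a scheme the multiply-accumulate happens in the analog domain where the data already reside, which is the source of the energy and latency savings discussed above.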