  • Article (No Access)

    Quantum-Dot Transistor Based Multi-Bit Multiplier Unit for In-Memory Computing

    In-memory computing is an emerging technique for meeting the fast-growing demand for high-performance data processing. It provides fast processing and high throughput by operating on data stored in the memory array rather than performing complicated operations and data movement between the processor and external storage. The most important computation for data processing is the dot product, which is also the core computation in applications such as deep-learning neural networks and machine learning. As multiplication is the key function within the dot product, improving its performance is critical to faster in-memory processing. In this paper, we present a design capable of performing in-memory multi-bit multiplications. The proposed design is implemented using quantum-dot transistors, which enable multi-bit computation within the memory cell. Experimental results demonstrate that the proposed design provides reliable in-memory multi-bit multiplications with high density and high energy efficiency. A statistical analysis using Monte Carlo simulations is performed to investigate the effects of process variations and errors.
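
    The role of multiplication inside the dot product can be illustrated with a minimal behavioral sketch (an illustration under simplifying assumptions, not the paper's QDT circuit): a multi-bit multiply is decomposed into single-bit partial products summed with binary weights, and the dot product is then built from those multiplies. The function names and the 4-bit width below are assumptions chosen only for the example.

        # Behavioral sketch (illustrative assumption, not the proposed QDT circuit):
        # a multi-bit multiplication decomposed into 1-bit partial products with
        # binary weights, then reused inside a dot product.

        def multibit_multiply(a: int, b: int, bits: int = 4) -> int:
            """Multiply two unsigned 'bits'-wide operands via bit-wise partial products."""
            product = 0
            for i in range(bits):                       # bit i of operand a
                a_i = (a >> i) & 1
                for j in range(bits):                   # bit j of operand b
                    b_j = (b >> j) & 1
                    product += (a_i & b_j) << (i + j)   # partial product weighted by 2^(i+j)
            return product

        def dot_product(xs, ws, bits: int = 4) -> int:
            """Dot product built from the multi-bit multiplier above."""
            return sum(multibit_multiply(x, w, bits) for x, w in zip(xs, ws))

        if __name__ == "__main__":
            xs, ws = [3, 5, 7, 2], [1, 4, 2, 6]
            assert dot_product(xs, ws) == sum(x * w for x, w in zip(xs, ws))
            print(dot_product(xs, ws))                  # 49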

  • Article (Free Access)

    A Multi-Bit Non-Volatile Compute-in-Memory Architecture with Quantum-Dot Transistor Based Unit

    Recent advances in artificial intelligence (AI) have shown remarkable success in numerous tasks, such as cloud computing, deep learning, and neural networks. Most of these applications rely on fast computation and large storage, which poses various challenges to the hardware platform. Hardware performance has become the bottleneck, and there has therefore been considerable interest in exploring new computation architectures in recent years. Compute-in-memory (CIM) has drawn researchers' attention and is considered one of the most promising candidates for addressing these challenges. CIM is an emerging technique for meeting the fast-growing demand for high-performance data processing: it offers fast processing, low power, and high performance by blurring the boundary between processing cores and memory units. A key aspect of CIM is performing matrix-vector multiplication (MVM), or the dot product operation, through the intertwining of processing and memory elements. As the primary computational kernel in neural networks, the dot product operation is the main target for performance improvement. In this paper, we present the design, implementation, and analysis of a quantum-dot transistor (QDT) based CIM, from the multi-bit multiplier to the dot product unit, and then to the in-memory computing array.
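
    As a rough illustration of the MVM-centric view described above, the following is a minimal behavioral sketch under simplifying assumptions (it models neither the QDT device nor the analog periphery): multi-bit weights are treated as stored in a memory array, each column accumulates the products of its stored weights with the input vector, and the column sums form the output of the matrix-vector multiplication. The array size and bit widths are arbitrary choices for the example.

        # Idealized behavioral model (assumption): a memory array storing a weight
        # matrix W; applying an input vector x yields per-column accumulated
        # products, i.e., the matrix-vector multiplication y = x @ W.
        import numpy as np

        rng = np.random.default_rng(0)

        rows, cols = 8, 4                          # 8 inputs, 4 outputs
        W = rng.integers(0, 4, size=(rows, cols))  # 2-bit weights "stored" in the array
        x = rng.integers(0, 4, size=rows)          # 2-bit input vector

        # Each column j accumulates sum_i x[i] * W[i, j] where the data resides.
        y = np.array([sum(int(x[i]) * int(W[i, j]) for i in range(rows))
                      for j in range(cols)])

        assert np.array_equal(y, x @ W)            # matches a conventional MVM
        print(y)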

  • Article (No Access)

    An In-Memory-Computing Structure with Quantum-Dot Transistor Toward Neural Network Applications: From Analog Circuits to Memory Arrays

    The rapid advancements in artificial intelligence (AI) have demonstrated great success in various applications, such as cloud computing, deep learning, and neural networks, among others. However, the majority of these applications rely on fast computation and large storage, which poses significant challenges to the hardware platform. Thus, there is growing interest in exploring new computation architectures to address these challenges. Compute-in-memory (CIM) has emerged as a promising solution to the limitations of traditional computer architectures in terms of data-transfer frequency and energy consumption. Non-volatile memory, such as quantum-dot transistors, has been widely used in CIM to provide high-speed processing, low power consumption, and large storage capacity. Matrix-vector multiplication (MVM), or the dot product operation, is a primary computational kernel in neural networks. CIM offers an effective way to optimize the performance of the dot product operation by performing it through an intertwining of processing and memory elements. In this paper, we present a novel design and analysis of a quantum-dot transistor (QDT) based CIM that performs efficient MVM or dot product operations inside the memory array itself. Our proposed approach offers the energy-efficient, high-speed data processing that is critical for implementing AI applications on resource-limited platforms such as portable devices.
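
    To connect the array-level view above to the neural-network use case, here is a minimal sketch (an illustration under stated assumptions, not the proposed analog circuits) of a fully connected layer whose weight matrix is treated as if it resided in the memory array, so inference reduces to in-array dot products followed by an activation computed by peripheral logic. The layer sizes and the ReLU choice are assumptions made for the example.

        # Minimal sketch (illustrative assumption): a fully connected neural-network
        # layer expressed as the MVM a CIM array would perform, with the activation
        # applied outside the array by peripheral/digital logic.
        import numpy as np

        def cim_layer(x: np.ndarray, W: np.ndarray, b: np.ndarray) -> np.ndarray:
            """One dense layer: the x @ W term is the in-array dot-product workload."""
            y = x @ W + b            # one dot product per output neuron
            return np.maximum(y, 0)  # ReLU applied by peripheral logic

        rng = np.random.default_rng(1)
        x = rng.random(16)                    # input activations
        W = rng.standard_normal((16, 8))      # weights conceptually stored in the array
        b = np.zeros(8)
        print(cim_layer(x, W, b).shape)       # (8,)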

  • Chapter (No Access)

    Quantum-Dot Transistor Based Multi-Bit Multiplier Unit for In-Memory Computing

    In-memory computing is an emerging technique for meeting the fast-growing demand for high-performance data processing. It provides fast processing and high throughput by operating on data stored in the memory array rather than performing complicated operations and data movement between the processor and external storage. The most important computation for data processing is the dot product, which is also the core computation in applications such as deep-learning neural networks and machine learning. As multiplication is the key function within the dot product, improving its performance is critical to faster in-memory processing. In this paper, we present a design capable of performing in-memory multi-bit multiplications. The proposed design is implemented using quantum-dot transistors, which enable multi-bit computation within the memory cell. Experimental results demonstrate that the proposed design provides reliable in-memory multi-bit multiplications with high density and high energy efficiency. A statistical analysis using Monte Carlo simulations is performed to investigate the effects of process variations and errors.

  • Chapter (No Access)

    A Multi-Bit Non-Volatile Compute-in-Memory Architecture with Quantum-Dot Transistor Based Unit

    Recent advances in artificial intelligence (AI) have shown remarkable success in numerous tasks, such as cloud computing, deep learning, and neural networks. Most of these applications rely on fast computation and large storage, which poses various challenges to the hardware platform. Hardware performance has become the bottleneck, and there has therefore been considerable interest in exploring new computation architectures in recent years. Compute-in-memory (CIM) has drawn researchers' attention and is considered one of the most promising candidates for addressing these challenges. CIM is an emerging technique for meeting the fast-growing demand for high-performance data processing: it offers fast processing, low power, and high performance by blurring the boundary between processing cores and memory units. A key aspect of CIM is performing matrix-vector multiplication (MVM), or the dot product operation, through the intertwining of processing and memory elements. As the primary computational kernel in neural networks, the dot product operation is the main target for performance improvement. In this paper, we present the design, implementation, and analysis of a quantum-dot transistor (QDT) based CIM, from the multi-bit multiplier to the dot product unit, and then to the in-memory computing array.

  • Chapter (No Access)

    An In-Memory-Computing Structure with Quantum-Dot Transistor Toward Neural Network Applications: From Analog Circuits to Memory Arrays

    The rapid advancements in artificial intelligence (AI) have demonstrated great success in various applications, such as cloud computing, deep learning, and neural networks, among others. However, the majority of these applications rely on fast computation and large storage, which poses significant challenges to the hardware platform. Thus, there is growing interest in exploring new computation architectures to address these challenges. Compute-in-memory (CIM) has emerged as a promising solution to the limitations of traditional computer architectures in terms of data-transfer frequency and energy consumption. Non-volatile memory, such as quantum-dot transistors, has been widely used in CIM to provide high-speed processing, low power consumption, and large storage capacity. Matrix-vector multiplication (MVM), or the dot product operation, is a primary computational kernel in neural networks. CIM offers an effective way to optimize the performance of the dot product operation by performing it through an intertwining of processing and memory elements. In this paper, we present a novel design and analysis of a quantum-dot transistor (QDT) based CIM that performs efficient MVM or dot product operations inside the memory array itself. Our proposed approach offers the energy-efficient, high-speed data processing that is critical for implementing AI applications on resource-limited platforms such as portable devices.