Embedded SRAM designs with a high noise margin between read and write, low power, low supply voltage, and high speed have become essential in VLSI embedded applications. A complete embedded SRAM design with self-timed synchronization is proposed, based on a CMOS eight-transistor (8T-cell) memory cell circuit. The cell uses the traditional six-transistor (6T-cell) cross-coupled inverters with two additional NMOS transistors forming a separate read buffer. The read buffer pre-charges the read bit-line while the read clock is low and evaluates it while the read clock is high, thereby maintaining one active line per column and eliminating the traditional sense amplifier with all of its synchronization schemes. Simulation results show that a 128 × 128-bit embedded SRAM operates at a maximum frequency of 200 MHz for write and read clock cycles with a 1.62 V power supply, with a total average power consumption of 22.60 mW. All simulations were performed in a 0.18 μm TSMC process with single poly and three metal layers, yielding a cell area of 2.2 × 3.0 μm². The circuit is not meant to replace the 6T-cell SRAM; rather, it is attractive for high-density applications with an automated design road-map, such as graphics and network processor chips. In such applications, memories appear in many irregular geometries and locations all over the chip with storage sizes below 20 kbit, and they are exposed to large substrate noise as well as strong coupling from wire routing.
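As an illustration of the self-timed, single-ended read described above, the following Python behavioral sketch models the read bit-line over the two read-clock phases. It is only a logical model of the precharge/evaluate behaviour, under the common assumption that the two-transistor read stack discharges the bit-line when the stored node driving it is high; the function and signal names (read_bitline_level, read_clk, stored_q, rbl_prev) are illustrative and not taken from the paper.

def read_bitline_level(read_clk: int, stored_q: int, rbl_prev: float = 1.0) -> float:
    """Logical model of one phase of the single-ended read buffer (sketch only).

    While the read clock is low, the read bit-line (RBL) is pre-charged high.
    While the read clock is high, the two-NMOS read stack conditionally
    discharges RBL: it pulls RBL low only when the stored node driving the
    stack gate is high, so the cell contents are never disturbed.
    """
    if read_clk == 0:
        return 1.0                                  # pre-charge phase: RBL driven to VDD
    return 0.0 if stored_q == 1 else rbl_prev       # evaluate phase: conditional discharge

# A stored '1' discharges the bit-line during evaluation, a stored '0' leaves it
# pre-charged, so a simple inverter (no sense amplifier) resolves the value.
for q in (0, 1):
    precharged = read_bitline_level(read_clk=0, stored_q=q)
    evaluated = read_bitline_level(read_clk=1, stored_q=q, rbl_prev=precharged)
    print(f"stored={q}: read bit-line after evaluation = {evaluated}")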
The graphics processing unit (GPU) has emerged as a high-quality platform for computer vision systems. In this paper, we propose a straightforward system consisting of a registration and a fusion method running on the GPU, which produces good results at high speed compared with non-GPU-based systems. Our GPU-accelerated system ports existing methods to the GPU platform. The registration method uses point correspondences to estimate a registering transformation with incremental parameters in a coarse-to-fine manner, while the fusion algorithm uses multi-scale methods to fuse the results of the registration stage. We evaluate performance by executing the same methods in both CPU-only and GPU-equipped environments. The experimental results provide convincing evidence of the efficiency of our system, which is tested on pairs of aerial images taken by electro-optical and infrared sensors to provide visual information of a scene for environmental observatories.
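To make the registration step more concrete, the sketch below outlines a coarse-to-fine, incremental estimation of a registering transformation from point correspondences. It is a minimal NumPy illustration that assumes an affine parameterization and per-level correspondence lists; estimate_affine and coarse_to_fine are our own illustrative names, not the authors' GPU implementation.

import numpy as np

def estimate_affine(src: np.ndarray, dst: np.ndarray) -> np.ndarray:
    """Least-squares affine transform mapping src points to dst points.

    src, dst: (N, 2) arrays of corresponding points.
    Returns a 2x3 matrix A such that dst ≈ [x, y, 1] @ A.T for each src point.
    """
    src_h = np.hstack([src, np.ones((src.shape[0], 1))])   # homogeneous coordinates
    A, *_ = np.linalg.lstsq(src_h, dst, rcond=None)         # (3, 2) least-squares solution
    return A.T                                              # (2, 3)

def coarse_to_fine(correspondences):
    """Refine the transform from the coarsest level to the finest.

    correspondences: list of (src, dst) point-pair arrays, ordered coarse -> fine,
    with coordinates already scaled to each pyramid level.
    """
    transform = np.hstack([np.eye(2), np.zeros((2, 1))])    # start from the identity
    for src, dst in correspondences:
        src_h = np.hstack([src, np.ones((src.shape[0], 1))])
        warped = src_h @ transform.T                         # apply the current estimate
        update = estimate_affine(warped, dst)                # incremental correction
        cur3 = np.vstack([transform, [0, 0, 1]])             # compose: new = update ∘ current
        upd3 = np.vstack([update, [0, 0, 1]])
        transform = (upd3 @ cur3)[:2, :]
    return transform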
Relief texture mapping is an image-based rendering technique that supports the representation of 3D surface details and view-motion parallax. It has the potential to significantly increase the visual realism of rendered geometry while keeping the system load constant. In this paper, FPGA (Field Programmable Gate Array) technology is applied to this three-dimensional image-warping method. A relief texture mapping system has been implemented on a reprogrammable, reconfigurable FPGA board. The algorithm is optimized for the specific architecture, and the framework is tailored to the available circuit resources, so it can be flexibly adapted to other structures. In our design, we exploit the inherent parallelism of the algorithm by concatenating multiple warping engines and carefully organizing data in memory. Experimental results show high image quality with improved rendering speed.
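The idea of concatenating multiple warping engines can be pictured with the following software sketch, which partitions the texture rows into bands and assigns one engine per band. The depth-dependent horizontal shift used here is only a stand-in for the actual relief texture pre-warp, and warp_engine and render are illustrative names, not part of the paper's FPGA design; on the FPGA the bands would be processed concurrently, whereas here they simply run in sequence.

import numpy as np

def warp_engine(texture, depth, shift_per_depth, row_range, out):
    """One 'engine': forward-warp its band of rows with a nearest-neighbor
    horizontal shift proportional to the per-texel depth (a stand-in warp)."""
    h, w = texture.shape
    for y in range(*row_range):
        for x in range(w):
            xs = int(round(x + shift_per_depth * depth[y, x]))
            if 0 <= xs < w:
                out[y, xs] = texture[y, x]

def render(texture, depth, shift_per_depth=2.0, n_engines=4):
    """Partition the rows into bands and run one engine per band."""
    h, _ = texture.shape
    out = np.zeros_like(texture)
    band = (h + n_engines - 1) // n_engines
    for e in range(n_engines):
        warp_engine(texture, depth, shift_per_depth,
                    (e * band, min((e + 1) * band, h)), out)
    return out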