This study explores the implementation of the nonlinear autoregressive Volterra (NARV) model on a field-programmable gate array (FPGA)-based hardware simulation platform and accomplishes the identification of the Hodgkin–Huxley (HH) model. First, a physiologically detailed single-compartment HH model is used to generate experimental data sets, with the electrical behavior of the neuron described by its membrane potential. Then, based on the injected input current and the output membrane potential, a second-order NARV model is constructed and implemented on the FPGA-based simulation platform. The NARV modeling method is data-driven and requires no detailed physiological information, while FPGA-based hardware simulation provides a real-time, high-performance platform that overcomes the drawbacks of software simulation. The proposed method is therefore capable of handling the nonlinearities and uncertainties of nonlinear neural systems and may help advance the development of clinical treatment devices.
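The abstract does not reproduce the model equations; as a rough illustration of the second-order Volterra expansion underlying a NARV model, the following Python sketch evaluates y[n] = k0 + Σ_i k1[i]·x[n−i] + Σ_{i,j} k2[i,j]·x[n−i]·x[n−j] for an input current sequence. The memory depth, the toy kernel values, and the omission of the autoregressive (lagged-output) terms are all simplifying assumptions.

```python
import numpy as np

# Assumed memory depth and toy kernels; the paper estimates its kernels
# from HH-model input-current / membrane-potential data.
M = 8
rng = np.random.default_rng(0)

def volterra2(x, k0, k1, k2):
    """Second-order Volterra series:
    y[n] = k0 + sum_i k1[i] x[n-i] + sum_{i,j} k2[i,j] x[n-i] x[n-j]"""
    y = np.full(len(x), k0, dtype=float)
    for n in range(len(x)):
        # lagged-input vector (zero-padded before t = 0)
        lags = np.array([x[n - i] if n - i >= 0 else 0.0 for i in range(M)])
        y[n] += k1 @ lags + lags @ k2 @ lags
    return y

k0 = 0.1
k1 = rng.normal(size=M) * 0.5
k2 = rng.normal(size=(M, M)) * 0.05
current = rng.normal(size=200)           # toy injected-current samples
potential = volterra2(current, k0, k1, k2)
print(potential[:5])
```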
A four-variable dynamical system incorporating a memristor is proposed to investigate how multi-scroll attractors depend on the initial value of the memory variable, and a description of the physical background is provided. It is found that appropriate initial values of the memory variable can induce different numbers of scrolls; as a result, resetting the initial values changes the profile of the attractors, which also depends on the calculation period. Time-delayed feedback is used to stabilize the dynamical system, so that the dependence on initial values is suppressed and the multi-scroll attractors are controlled by choosing an appropriate time delay and feedback gain in the controller. Furthermore, the system is verified on an FPGA circuit platform, where a memristor describes the memory effect of the variable associated with magnetic flux. It is confirmed experimentally that the multi-scroll attractors can be stabilized and the dependence on initial values suppressed.
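The paper's four-variable memristive equations are not given in the abstract, so the sketch below only illustrates the time-delayed feedback technique it relies on: a Pyragas-type term u(t) = K·(x(t−τ) − x(t)) added to one equation of a placeholder chaotic system (the Rössler system stands in for the memristive model), integrated with a simple Euler scheme and a delay buffer. All parameter values are assumptions.

```python
import numpy as np

def simulate(K=0.2, tau=1.0, dt=0.001, T=200.0):
    """Pyragas-type time-delayed feedback u(t) = K*(x(t-tau) - x(t)),
    added to the first equation of a placeholder chaotic system."""
    steps = int(T / dt)
    delay = int(tau / dt)
    x = np.zeros(steps); y = np.zeros(steps); z = np.zeros(steps)
    # initial values (the paper studies how initials shape the attractor)
    x[0], y[0], z[0] = 1.0, 1.0, 1.0
    a, b, c = 0.2, 0.2, 5.7                  # Rossler parameters (assumed)
    for n in range(steps - 1):
        u = K * (x[n - delay] - x[n]) if n >= delay else 0.0
        x[n + 1] = x[n] + dt * (-y[n] - z[n] + u)
        y[n + 1] = y[n] + dt * (x[n] + a * y[n])
        z[n + 1] = z[n] + dt * (b + z[n] * (x[n] - c))
    return x, y, z

x, y, z = simulate()
print(x[-5:])
```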
This paper designs a novel classification hardware framework based on neural networks (NNs). It uses the COordinate Rotation DIgital Computer (CORDIC) algorithm to implement the NN activation function. Training was performed in software using an error back-propagation algorithm (EBPA) implemented in C++; the final weights were then loaded into the hardware framework to perform classification. The hardware framework was developed in the Xilinx ISE 9.2i environment using VHDL as the programming language. Classification tests were performed on benchmark datasets from the UCI machine learning repository, and the results were compared with competitive classification approaches on the same datasets. Extensive analysis reveals that the proposed hardware framework is more efficient than existing classifiers.
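The abstract does not state which activation function or CORDIC mode is implemented; a common choice is hyperbolic-mode CORDIC, which produces tanh (and, from it, the logistic sigmoid) using only shifts and adds in hardware. The following Python sketch shows the idea in floating point; the iteration count and the sigmoid choice are assumptions.

```python
import math

def cordic_tanh(z, n_iter=16):
    """Hyperbolic CORDIC in rotation mode.  Starting from (x, y) = (1, 0)
    and driving the residual angle z to zero, the vector converges to
    K*(cosh z, sinh z), so tanh z = y/x and the gain K cancels.
    Iterations 4 and 13 must be repeated for convergence, and |z| must
    stay below ~1.118 (argument reduction is omitted here)."""
    x, y = 1.0, 0.0
    i, repeats, done = 1, {4, 13}, set()
    while i <= n_iter:
        d = 1.0 if z >= 0 else -1.0
        e = 2.0 ** -i
        x, y = x + d * y * e, y + d * x * e
        z -= d * math.atanh(e)
        if i in repeats and i not in done:
            done.add(i)              # run this iteration a second time
        else:
            i += 1
    return y / x

def cordic_sigmoid(t):
    # logistic sigmoid via the identity sigmoid(t) = (1 + tanh(t/2)) / 2
    return 0.5 * (1.0 + cordic_tanh(t / 2.0))

for t in (-1.0, 0.0, 0.5, 1.0):
    print(t, cordic_sigmoid(t), 1.0 / (1.0 + math.exp(-t)))
```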
In quantum key distribution (QKD), an error correction algorithm is used to reconcile the erroneous bits between the keys at the two ends. Existing deployed QKD systems have low key rates, generally on the order of kbps, so the performance demands on data processing steps such as error correction are modest. To meet the needs of future high-speed QKD systems, this paper uses the Winnow algorithm to realize high-speed parity and Hamming error correction on a Field Programmable Gate Array (FPGA) and explores the performance limits of the algorithm. The FPGA implementation reaches bandwidths on the order of Mbps by choosing different sifted-key block lengths for different error rates, achieves higher error correction efficiency by reducing the information leaked during error correction, and thereby improves the secure key rate of the QKD system, supporting future high-speed QKD systems.
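As background on the Winnow step described above, here is a minimal Python sketch of one block iteration: the two parties publicly compare block parities and, where the parities differ, exchange Hamming syndromes so the receiver can correct a single error. The block length and the omission of the privacy-maintenance bit discarding that a real Winnow run performs are simplifying assumptions.

```python
from functools import reduce
import random

def parity(bits):
    return sum(bits) % 2

def syndrome(bits):
    """Hamming syndrome: XOR of the (1-based) positions of all set bits."""
    return reduce(lambda a, b: a ^ b,
                  (i + 1 for i, bit in enumerate(bits) if bit), 0)

def winnow_block(alice, bob):
    """One Winnow step on a single block (single-error model): if the
    public parities differ, Alice reveals her syndrome and Bob flips the
    bit indicated by the XOR of the two syndromes."""
    if parity(alice) != parity(bob):
        pos = syndrome(alice) ^ syndrome(bob)   # 1-based error position
        if 1 <= pos <= len(bob):
            bob[pos - 1] ^= 1
    return bob

# toy example: 8-bit sifted-key block with one flipped bit
random.seed(1)
alice = [random.randint(0, 1) for _ in range(8)]
bob = alice.copy()
bob[5] ^= 1                        # channel error
bob = winnow_block(alice, bob)
print(alice == bob)                # True when exactly one error occurred
```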
This paper introduces a new technique for analyzing the behavior of global interconnects in FPGAs at nanoscale technology nodes. Using this enhanced modeling method, more accurate expressions for the propagation delay of global interconnects in nano-FPGAs are derived. To verify the proposed model, delay simulations were performed at the 45 nm, 65 nm, 90 nm, and 130 nm technology nodes with both the proposed method and the conventional pi-model technique, and the results of the two methods were compared against HSPICE simulations. The propagation delays computed with the proposed model match HSPICE more closely than those of conventional techniques such as the pi-model: across the technology nodes considered, the difference between the proposed model and HSPICE is (0.29–22.92)%, whereas it is (11.13–38.29)% for the conventional model.
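The paper's enhanced delay expressions are not reproduced in the abstract; the sketch below instead shows the conventional pi-model baseline it is compared against: the line's total resistance R and capacitance C are lumped as C/2 - R - C/2, and the 50% delay is estimated as 0.69 times the Elmore delay. The driver, line, and load values are assumptions.

```python
# Single-pi lumped approximation of a distributed RC line:
# C/2 -- R -- C/2, driven by source resistance Rs into load CL.
def pi_model_delay(Rs, R, C, CL):
    """Elmore delay of the pi approximation; 0.69x approximates the
    50% step-response delay."""
    elmore = Rs * (C + CL) + R * (C / 2.0 + CL)
    return 0.69 * elmore

# assumed values for a global line in a nanoscale node
Rs = 250.0       # driver resistance [ohm] (assumed)
R = 1.0e3        # total line resistance [ohm] (assumed)
C = 200e-15      # total line capacitance [F] (assumed)
CL = 10e-15      # load capacitance [F] (assumed)
print(f"50% delay ~ {pi_model_delay(Rs, R, C, CL) * 1e12:.1f} ps")
```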
Field Programmable Gate Arrays (FPGAs), one of the most popular circuit implementation platforms, provide a flexible and powerful substrate for many applications. IC designs are configured onto an FPGA through bitstream files, but the configuration process can be attacked through side-channel analysis (SCA) to extract critical design information, even when the bitstream is encrypted. Many successful attacks on FPGA cryptographic systems during bitstream loading have been reported that recover the entire design. Current countermeasures, mostly random masking methods, are effective but introduce considerable hardware complexity, making them unsuitable for resource-constrained scenarios such as Internet of Things (IoT) applications. In this paper, we propose a new secure FPGA masking scheme to counter SCA. By exploiting the FPGA partial reconfiguration feature, the proposed technique provides a lightweight and flexible solution for masking the FPGA decryption process.
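The abstract does not detail the proposed scheme; for context, this is a minimal Python sketch of first-order Boolean masking, the class of random-masking countermeasure the paper contrasts its partial-reconfiguration approach with. A secret is split into two random shares so that each share alone is statistically independent of the secret; linear (XOR) operations can then be computed share-wise.

```python
import secrets

def mask(value, bits=32):
    """Split a secret into two Boolean shares: value = s0 XOR s1.
    Each share alone is uniformly random, so first-order leakage of a
    single share carries no information about the secret."""
    m = secrets.randbits(bits)
    return value ^ m, m

def masked_xor(a_shares, b_shares):
    # XOR is linear over GF(2), so it can be computed share-wise
    return a_shares[0] ^ b_shares[0], a_shares[1] ^ b_shares[1]

def unmask(shares):
    return shares[0] ^ shares[1]

key_word = 0xDEADBEEF                    # toy secret (hypothetical)
data_word = 0x12345678
ks, ds = mask(key_word), mask(data_word)
print(hex(unmask(masked_xor(ks, ds))))   # == hex(key_word ^ data_word)
```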
Continuous improvement in the performance of energy harvesters in recent years has broadened their range of applications, and the proliferation of IoT devices has made radio frequency (RF) signals a viable source for energy harvesting. Integrating a maximum power point tracking (MPPT) controller into an RF energy harvester is necessary to ensure maximum power transfer under variable input power conditions. This paper presents an FPGA implementation of a machine learning (ML) model for maximum power point tracking in RF energy harvesters. A supervised ML model, a feedforward neural network (FNN), is designed that tracks the maximum power point with good accuracy. The model was trained with a stochastic gradient descent (SGD) optimizer and a mean squared error (MSE) loss function. Simulation results of the VHDL-translated model show good agreement between expected and obtained values. The proposed ML-based MPPT controller was implemented on an Artix-7 Field Programmable Gate Array (FPGA).
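As a rough software analogue of the FNN described above, the following Python sketch trains a small feedforward network with plain SGD and an MSE loss on a toy target; the network sizes, input features, and target function are assumptions, not the paper's design.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed interface: the FNN maps measured harvester conditions to the
# control setting that tracks the maximum power point.
n_in, n_hidden, n_out = 2, 8, 1
W1 = rng.normal(scale=0.5, size=(n_in, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.5, size=(n_hidden, n_out)); b2 = np.zeros(n_out)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

X = rng.uniform(-1, 1, size=(256, n_in))
Y = 0.5 * X[:, :1] + 0.3 * X[:, 1:] ** 2         # toy MPP target (assumed)

lr = 0.05
for epoch in range(200):                          # plain SGD with MSE loss
    for i in rng.permutation(len(X)):
        x, y = X[i:i + 1], Y[i:i + 1]
        h, y_hat = forward(x)
        g = 2.0 * (y_hat - y)                     # dMSE/dy_hat
        gh = (g @ W2.T) * (1.0 - h ** 2)          # backprop through tanh
        W2 -= lr * (h.T @ g);  b2 -= lr * g[0]
        W1 -= lr * (x.T @ gh); b1 -= lr * gh[0]

_, pred = forward(X)
print("final MSE:", float(np.mean((pred - Y) ** 2)))
```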