

  • Article · No Access

    Nonbinary Low-Density Parity Check Decoding Algorithm Research-Based Majority Logic Decoding

    Among decoding algorithms for nonbinary low-density parity check (NB-LDPC) codes, the iterative hard-reliability-based majority logic decoding (IHRB-MLGD) algorithm has poor error-correction performance, essentially because hard information is used in both the initialization and the iterative process. To address the partial loss of information when reliabilities are assigned at initialization, the proposed algorithm modifies the initialization assignment, determining it from the probability that a given number of erroneous bits occurs in a symbol and from the Hamming distance. In addition, whereas IHRB-MLGD uses hard decisions in the iterative decoding process, the improved algorithm introduces soft-decision information into the iterations, which improves error-correction performance while only slightly increasing decoding complexity, and it refines the reliability-accumulation process, making the algorithm more stable. Simulation results indicate that the proposed algorithm achieves better decoding performance than the IHRB algorithm.
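    The initialization idea described above can be sketched as follows. This is a minimal illustration, not the paper's algorithm: it assumes symbols of GF(2^m) are represented as m-bit integers, that bit errors are independent with probability p, and that a candidate symbol at Hamming distance d from the hard-decision symbol therefore occurs with probability p^d (1-p)^(m-d). The function name and interface are hypothetical.

    ```python
    from itertools import product

    def init_reliability(hard_symbol: int, m: int, p: float) -> dict:
        """Illustrative sketch (not the paper's exact rule): assign each
        GF(2^m) candidate symbol an initial reliability based on its
        Hamming distance d to the hard-decision symbol, using the
        probability p**d * (1-p)**(m-d) that exactly those d bits flipped."""
        reliabilities = {}
        for bits in product((0, 1), repeat=m):
            cand = int("".join(map(str, bits)), 2)
            d = bin(cand ^ hard_symbol).count("1")  # Hamming distance
            reliabilities[cand] = (p ** d) * ((1 - p) ** (m - d))
        return reliabilities
    ```

    Because the distances over all 2^m candidates follow the binomial expansion, the reliabilities sum to 1, and the hard-decision symbol itself receives the largest value whenever p < 0.5.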

  • Article · No Access

    A GENERALIZED ABFT TECHNIQUE USING A FAULT TOLERANT NEURAL NETWORK

    In this paper, a sensitivity measure is defined to evaluate the fault tolerance of neural networks, and we show that the sensitivity of a link is closely related to the amount of information passed through it. Based on this observation, we prove that the output error caused by s-a-0 (stuck-at-0) faults in an MLP network follows a Gaussian distribution. The UDBP (Uniformly Distributed Back Propagation) algorithm is then introduced to minimize the mean and variance of the output error. An MLP network trained with UDBP then contributes to an algorithm-based fault-tolerant (ABFT) scheme that protects a nonlinear data-processing block. A systematic real convolution code guarantees that faults corrupting the processed data produce notable nonzero values in the syndrome sequence, so a majority logic decoder can easily detect and correct single faults by observing that sequence. Simulation results demonstrating the detection and correction of random s-a-0 faults are also presented.
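    The general ABFT principle the abstract relies on can be sketched with the simplest checksum code rather than the paper's systematic real convolution code. The sketch below is an assumption-laden illustration: it protects a linear operation y = A·x by appending a column-sum check row to A, so that a nonzero syndrome flags a fault in the computation. The function name is hypothetical.

    ```python
    import numpy as np

    def abft_matvec(A: np.ndarray, x: np.ndarray):
        """Illustrative checksum ABFT (not the paper's convolution code):
        extend A with a row holding its column sums, compute the extended
        product, and compare the check entry against the sum of the data
        entries. A nonzero syndrome indicates a fault."""
        Ac = np.vstack([A, A.sum(axis=0)])  # append checksum row
        yc = Ac @ x                          # fault-prone computation
        syndrome = yc[-1] - yc[:-1].sum()    # should be ~0 if fault-free
        return yc[:-1], syndrome
    ```

    A decoder such as the majority logic decoder in the abstract would monitor the stream of syndromes: a run of (near-)zero values means the processed data are trusted, while a notable nonzero value localizes a fault to be corrected.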