Color plays an important role in object recognition and visual working memory (VWM). Decoding color VWM in the human brain helps us understand the mechanisms of visual cognitive processing and evaluate memory ability. Recently, several studies showed that color can be decoded from scalp electroencephalogram (EEG) signals during the encoding stage of VWM, which processes visible information with strong neural coding. Whether color can be decoded from other VWM processing stages, especially the maintaining stage, which processes invisible information, is still unknown. Here, we constructed an EEG color graph convolutional network model (ECo-GCN) to decode colors during different VWM stages. Based on graph convolutional networks, ECo-GCN takes into account the graph structure of EEG signals and may be more efficient for color decoding. We found that (1) decoding accuracies for colors during the encoding, early maintaining, and late maintaining stages were 81.58%, 79.36%, and 77.06%, respectively, exceeding that during the pre-stimuli stage (67.34%), and (2) the decoding accuracy during the maintaining stage could predict participants’ memory performance. The results suggest that EEG signals during the maintaining stage may be more sensitive than behavioral measurement for predicting human VWM performance, and that ECo-GCN provides an effective approach to exploring human cognitive function.
We show nearly work-optimal parallel decoding algorithms that run on the EREW PRAM in O(log n) time with O(n/(log n)^(1/2)) processors for text compressed with the LZ1 and LZ2 methods, where n is the length of the output string. We also present pseudo work-optimal EREW PRAM decoders for finite-window compression and LZ2 compression requiring logarithmic time with O(dn) work, where d is the window size and the alphabet size, respectively. Finally, we observe that EREW PRAM decoders requiring O(log n) time and O(n/log n) processors are possible under the non-conservative assumption that the computer word length is O(log^2 n) bits.
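For orientation, a minimal sequential sketch of what these decoders compute: a finite-window (LZ77-style) decoder consuming (offset, length, literal) triples. The token format is an assumption for illustration; the abstract's contribution is performing this reconstruction in parallel on the EREW PRAM.

```python
def lz77_decode(tokens):
    """Sequentially decode a stream of (offset, length, literal) triples."""
    out = []
    for offset, length, literal in tokens:
        if length:
            start = len(out) - offset
            for i in range(length):      # byte-by-byte copy handles overlapping matches
                out.append(out[start + i])
        if literal is not None:
            out.append(literal)
    return "".join(out)
```

Note the copy loop reads positions that the same token may have just written, which is exactly the self-referential dependency that makes parallel decoding non-trivial.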
The i-p sequence is one of the most common encodings for a binary tree. This paper gives constant-time BSR parallel algorithms for decoding and for drawing a binary tree from its i-p sequence.
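As a point of comparison, here is a sequential sketch of reconstructing a binary tree from its inorder ("i") and preorder ("p") traversal sequences; treating the i-p sequence this way is an assumption for illustration, and the paper's BSR algorithms achieve the decoding in constant parallel time rather than the linear time used here.

```python
def build_tree(preorder, inorder):
    """Rebuild a binary tree as nested (value, left, right) tuples."""
    pos = {v: i for i, v in enumerate(inorder)}  # value -> inorder index
    it = iter(preorder)                          # consume roots in preorder order

    def helper(lo, hi):
        if lo > hi:
            return None
        root = next(it)                          # next preorder symbol is the subtree root
        i = pos[root]                            # split inorder range at the root
        return (root, helper(lo, i - 1), helper(i + 1, hi))

    return helper(0, len(inorder) - 1)
```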
In this paper, the location–allocation problem of a three-stage supply chain network, including suppliers, plants, distribution centers (DCs), and customers, is investigated. With respect to total cost, the aim is to determine which plants and DCs to open and to design transportation trees between the facilities. Considering that the capacities of suppliers, plants, and DCs are limited and that there is a limit on the maximum number of opened plants and DCs, a mixed-integer linear programming (MILP) model of the problem is presented. Since multi-stage supply chain networks are recognized as NP-hard problems, a meta-heuristic algorithm, namely GAIWO, combining the best features of the genetic algorithm (GA) and invasive weed optimization (IWO) and applying a priority-based encoding with a four-step backward decoding procedure, is designed to solve the problem. For small-size problems, the efficiency of GAIWO is verified against solutions from the GAMS software. For larger problems, the performance of the proposed approach is compared with four evolutionary algorithms with respect to both the structure of GAIWO and the efficiency of the proposed encoding–decoding procedure. Besides the usual evaluation criteria, the Wilcoxon test and a chess rating system are used to evaluate and rank the algorithms. The results show the higher efficiency of the proposed approach.
Data processing across multiple domains is an important concept on any platform; it deals with both multimedia and textual information. Whereas textual data processing handles structured or unstructured data and computes quickly with no compression of the data, multimedia data processing requires algorithms in which compression is needed. This involves processing videos and their frames and compressing them into compact forms so that both storage and access can be performed quickly. There are different ways of performing compression, such as fractal compression, wavelet transforms, compressive sensing, and contractive transformations. One way of performing such compression is to work with the high-frequency components of multimedia data. One of the most recent topics is fractal transformation, which exploits block symmetry and achieves a high compression ratio. Yet there are limitations, such as the speed and cost of performing proper encoding and decoding with fractal compression. Swarm optimization and related algorithms make fractal compression practical. In this paper, we review multiple algorithms in the fields of fractal-based video compression and swarm intelligence for optimization problems.
In this paper, a critical analysis of the joint source-channel coding scheme proposed by Honary et al. is given. We show mathematically that although the algorithm reduces the overall number of decoding states, it increases both the time and space complexity of the decoding process. Moreover, the hardware complexity of the proposed scheme is analyzed, and it is observed that the joint decoding algorithm is, first, extremely complex to implement and, second, significantly increases the number of components required, thereby increasing power consumption. A mathematical proof that the joint decoding scheme has the same error performance as a separate scheme is also given. Finally, simulation results show that the joint decoding scheme takes significantly longer to decode than a separate scheme with the same error performance.
While maximum-likelihood (ML) is the optimum decoding scheme for most communication scenarios, practical implementation difficulties limit its use, especially for multiple-input multiple-output (MIMO) systems with a large number of transmit or receive antennas. Tree-search decoder structures such as the sphere decoder and the K-best decoder present an interesting trade-off between complexity and performance. Many algorithmic developments and VLSI implementations have been reported in the literature, with widely varying performance, area, and power metrics. In this semi-tutorial paper we present a holistic view of sphere decoding and K-best decoding techniques, identifying the key algorithmic and implementation trade-offs. We establish a consistent benchmark framework to investigate and compare the delay cost, power cost, and power-delay-product cost incurred by each method. Finally, using this framework, we propose and analyze a novel architecture and compare it with other published approaches. Our goal is to elucidate the overall advantages and disadvantages of each proposed algorithm within one coherent framework.
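To make the K-best trade-off concrete, here is a minimal breadth-first K-best detection sketch for a toy real-valued system y = Rx + n with upper-triangular R (the names, sizes, and real-valued alphabet are assumptions, not the paper's benchmark setup): at each antenna level, every surviving candidate is expanded over the symbol alphabet and only the K lowest partial-distance paths are kept, bounding complexity at the risk of pruning the ML solution.

```python
def k_best_detect(R, y, alphabet, K):
    """Breadth-first K-best tree search; returns the best full-length candidate."""
    n = len(y)
    paths = [([], 0.0)]                       # (partial symbol vector, accumulated metric)
    for level in range(n - 1, -1, -1):        # detect from the last layer upward
        expanded = []
        for sym_tail, metric in paths:
            for s in alphabet:
                x = [s] + sym_tail            # symbols for levels level..n-1
                resid = y[level] - sum(R[level][level + j] * x[j]
                                       for j in range(len(x)))
                expanded.append((x, metric + resid ** 2))
        paths = sorted(expanded, key=lambda p: p[1])[:K]   # prune to the K best
    return paths[0][0]
```

A sphere decoder explores the same tree depth-first with a shrinking radius instead of a fixed survivor count, which is the core algorithmic trade-off the paper benchmarks.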
The need for effective packet transmission to deliver high performance in wireless networks makes it essential to find shortest network paths efficiently and quickly. This paper presents a reduced uncertainty-based hybrid evolutionary algorithm (RUBHEA) to solve the dynamic shortest path routing problem (DSPRP) effectively and rapidly. A genetic algorithm (GA) and particle swarm optimization (PSO) are integrated as a hybrid algorithm to find the best solution within the search space of dynamically changing networks. GA and PSO share the context of individuals to reduce uncertainty in RUBHEA, and various regions of the search space are explored and learned. By employing a modified priority encoding method, each individual in both GA and PSO represents a potential solution to the DSPRP. A comprehensive statistical analysis compares the performance of RUBHEA with various state-of-the-art algorithms, showing that RUBHEA is considerably superior (reducing the failure rate by up to 50%) to similar approaches as the number of nodes in the network increases.
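A classic priority-based encoding decodes a node-priority vector into a concrete path; the sketch below illustrates the base idea (the paper's modified encoding is not reproduced here, and the graph and priorities are hypothetical): starting from the source, the walk always steps to the unvisited neighbor with the highest priority until the destination is reached.

```python
def decode_path(adj, priority, src, dst):
    """Decode a node-priority chromosome into a path from src to dst."""
    path, node, visited = [src], src, {src}
    while node != dst:
        candidates = [v for v in adj[node] if v not in visited]
        if not candidates:
            return None                       # dead end: chromosome decodes to no path
        node = max(candidates, key=lambda v: priority[v])
        visited.add(node)
        path.append(node)
    return path
```

Because any priority vector decodes to at most one path, GA crossover and PSO position updates can operate on the priorities directly without producing malformed routes.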
In securities trading, low latency helps investors take the leading position in the market. Conventionally, market data are decoded by software running on general-purpose computers. However, the serial structure of software and complex operating-system scheduling cause high latency. This paper presents an accelerator for decoding market data based on a field-programmable gate array (FPGA). We propose a pipeline in the accelerator in which every stage works independently and in parallel. Furthermore, we present a mechanism for encoding templates that avoids reconstructing the accelerator and decreases the cost when a template is renewed. We evaluate the accelerator with real Financial Information eXchange (FIX) messages and FIX Adapted for STreaming (FAST) templates, attaining an average latency of 447 ns.
The paper elaborates on the encoding and decoding of numerical and nonnumerical data. Proposed are general criteria leading to the distortion-free interfacing mechanisms that help transform information between the systems (or modelling environments) operating at different levels of information granularity. Distinguished are three basic categories of information: numerical, interval-valued, and linguistic (fuzzy). As all of them are dealt with here, the paper subsumes the current studies concentrated exclusively on representing fuzzy sets through their numerical representatives (prototypes). The algorithmic framework in which the distortion-free interfacing is completed is realized through neural networks. Each category of information is treated separately and gives rise to its own specialized architecture of the neural network. Similarly, these networks require carefully designed training sets that fully capture the specificity of the reconstruction problem. Several carefully selected numerical examples are aimed at the illustration of the key ideas.
Cloud computing (CC), which provides numerous benefits to customers, is a new revolution in information technology. The benefits include on-demand service, support, scalability, and reduced-cost usage of computing resources. However, with the prevailing techniques, system authentication remains challenging and leaves the system vulnerable. Thus, utilizing Barrel Shift-centric Whirlpool Hashing-Secure Diffie Hellman ASCII Key-Elliptic Curve Cryptography (BSWH-SDHAK-ECC), hashed access policy (AP)-based secure data transmission is presented in this paper. The data owner (DO) first registers their information. The user logs in and verifies their profile based on the registration. After successful verification, the user selects the data to upload to the cloud server (CS). The AP is created; after that, the image attributes are extracted, and a hash code is produced for the AP using the BSWH approach. Concurrently, the selected image is compressed using the Adaptive Binary Shift-based Huffman Encoding (ABSHE) technique and then encrypted using the SDHAK-ECC algorithm. Lastly, the created AP along with the encrypted image is uploaded to the CS. The data user sends a request to access and download the data; the AP is then provided to the user by the data owner. Next, the user sends it to the CS, which checks it against the stored AP. When the AP matches the cloud AP, the encrypted data is downloaded and decrypted. Finally, the experimental outcomes reveal that the proposed model achieves a higher security value of 0.9970, showing the proposed framework's efficient performance in contrast to the prevailing techniques.
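For context on the compression step, here is a minimal classic Huffman coder; ABSHE is an adaptive, binary-shift variant of this base technique, and the variant itself is not reproduced here.

```python
import heapq
from collections import Counter

def huffman_codes(data):
    """Build a prefix-free code table {symbol: bitstring} from symbol frequencies."""
    freq = Counter(data)
    if len(freq) == 1:                        # degenerate single-symbol input
        return {next(iter(freq)): "0"}
    heap = [[f, [sym, ""]] for sym, f in freq.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)              # two least-frequent subtrees
        hi = heapq.heappop(heap)
        for pair in lo[1:]:
            pair[1] = "0" + pair[1]           # prepend branch bits
        for pair in hi[1:]:
            pair[1] = "1" + pair[1]
        heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
    return dict(heap[0][1:])
```

Frequent symbols receive shorter codewords, so image attributes with skewed value distributions compress well before encryption.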
A perfect t-code in a graph Γ is a subset 𝒞 of V(Γ) such that every vertex of Γ is within distance t of exactly one vertex of 𝒞. In this paper, we present a new family of perfect t-codes in Cayley graphs of groups. We exploit the subgroups of a group to create perfect t-codes by restricting the elements of a left transversal of the subgroup in the given group. We also introduce a new decoding algorithm for all such perfect t-codes in Cayley graphs. These codes can correct every t-error pattern.
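The defining property can be checked directly on any explicit graph; the sketch below does so by BFS (illustrative only, with a hypothetical adjacency-dict graph rather than the paper's Cayley-graph construction): a code is perfect exactly when each vertex's radius-t ball contains exactly one codeword.

```python
from collections import deque

def is_perfect_t_code(adj, code, t):
    """Check that every vertex is within distance t of exactly one codeword."""
    for v in adj:
        dist = {v: 0}                         # BFS ball of radius t around v
        q = deque([v])
        while q:
            u = q.popleft()
            if dist[u] == t:
                continue
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        if sum(1 for c in code if c in dist) != 1:
            return False
    return True
```

For example, on the 6-cycle the set {0, 3} is a perfect 1-code, while {0, 2} is not.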
We define and study a class of codes obtained from scrolls over curves of any genus over finite fields. These codes generalize Goppa codes in a natural way, and the orthogonal complements of these codes belong to the same class. We show how syndromes of error vectors correspond to certain vector bundle extensions, and how decoding is associated to finding destabilizing bundles.
During the elongation cycle of protein biosynthesis, the specific amino acid coded for by the mRNA is delivered by a complex that is comprised of the cognate aminoacyl-tRNA, elongation factor Tu and GTP. As this ternary complex binds to the ribosome, the anticodon end of the tRNA reaches the decoding center in the 30S subunit. Here we present the cryo-electron microscopy (EM) study of an Escherichia coli 70S ribosome-bound ternary complex stalled with an antibiotic, kirromycin. In the cryo-EM map the anticodon arm of the tRNA presents a new conformation that appears to facilitate the initial codon–anticodon interaction. Furthermore, the elbow region of the tRNA is seen to contact the GTPase-associated center on the 50S subunit of the ribosome, suggesting an active role of the tRNA in the transmission of the signal prompting the GTP hydrolysis upon codon recognition.
In translation, elongation factor Tu (EF-Tu) molecules deliver aminoacyl-tRNAs to the mRNA-programmed ribosome. The GTPase activity of EF-Tu is triggered by ribosome-induced conformational changes of the factor that play a pivotal role in the selection of the cognate aminoacyl-tRNAs. We present a 6.7-Å cryo-electron microscopy map of the aminoacyl-tRNA·EF-Tu·GDP·kirromycin-bound Escherichia coli ribosome, together with an atomic model of the complex obtained through molecular dynamics flexible fitting. The model reveals the conformational changes in the conserved GTPase switch regions of EF-Tu that trigger hydrolysis of GTP, along with key interactions, including those between the sarcin-ricin loop and the P loop of EF-Tu, and between the effector loop of EF-Tu and a conserved region of the 16S rRNA. Our data suggest that GTP hydrolysis on EF-Tu is controlled through a hydrophobic gate mechanism.