Data redundancy consumes substantial storage space when setting up or operating cloud and fog storage, and deduplication solutions designed for static environments must be revised for the dynamic nature of the cloud. Data deduplication solutions help minimize and control this issue by eliminating duplicate data from cloud storage systems. Because it can improve both storage economy and security, data deduplication (DD) over encrypted data is a crucial problem in computing and storage systems. In this research, a novel approach to building secure deduplication systems across cloud and fog environments is developed using a modified cryptographic model for deduplication (MCDD) and convergent encryption (CE); each file is encrypted twice, once with MCDD and once with CE. The approach targets the two most significant objectives of such systems: data redundancy must be minimized, and the data must be protected by a robust encryption method. The approach is well suited to tasks such as a user uploading new data to cloud or fog storage, and it eliminates data redundancy by detecting redundancy at the block level. The testing results indicate that the proposed methodology can surpass several state-of-the-art techniques in computational efficiency and security.
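Convergent encryption derives the key from the content itself, so identical plaintext blocks produce identical ciphertexts and can be deduplicated after encryption. The sketch below illustrates only that CE, block-level side; the block size, hash, cipher, and dedup index are assumptions, and the paper's MCDD layer is not reproduced.

```python
# Sketch of block-level deduplication with convergent encryption (CE).
# Assumptions: 4 KiB blocks, SHA-256 as the convergent key derivation, and
# AES-GCM with a key-derived nonce so identical blocks yield identical
# ciphertexts (required for dedup). This is NOT the paper's MCDD layer.
import hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

BLOCK_SIZE = 4096
store = {}  # ciphertext-hash -> ciphertext (simulated cloud/fog store)

def put_block(block: bytes):
    key = hashlib.sha256(block).digest()          # convergent key K = H(block)
    nonce = hashlib.sha256(key).digest()[:12]     # deterministic nonce from K
    ct = AESGCM(key).encrypt(nonce, block, None)  # same block -> same ciphertext
    tag = hashlib.sha256(ct).hexdigest()          # dedup index on the ciphertext
    if tag not in store:                          # upload only unseen blocks
        store[tag] = ct
    return tag, key                               # user keeps (tag, key) to decrypt

def upload(data: bytes):
    return [put_block(data[i:i + BLOCK_SIZE])
            for i in range(0, len(data), BLOCK_SIZE)]

recipe = upload(b"A" * 8192 + b"B" * 4096)   # two identical "A" blocks dedup to one
print(len(store))                            # -> 2 stored ciphertext blocks
```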
The relationship between the absence of redundancy in relational databases and fourth normal form (4NF) is investigated. A relation scheme is defined to be redundant if there exists a legal relation defined over it which has at least two tuples that are identical on the attributes in a functional dependency (FD) or multivalued dependency (MVD) constraint. Depending on whether the dependencies in a set of constraints or the dependencies in the closure of the set are used, two different types of redundancy are defined. It is shown that the two types of redundancy are equivalent and that their absence in a relation scheme is equivalent to the 4NF condition.
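A textbook-style instance (not taken from the paper) illustrates the definition:

```latex
% Scheme R(Course, Teacher, Book) with the nontrivial MVD Course ->> Teacher.
% The legal relation r below contains two tuples, u1 and u2, that agree on
% the attributes {Course, Teacher} of the dependency, so R is redundant;
% equivalently, Course is not a superkey of R, so R violates 4NF.
\[
r = \{(c, t_1, b_1),\ (c, t_1, b_2),\ (c, t_2, b_1),\ (c, t_2, b_2)\},
\qquad
u_1 = (c, t_1, b_1),\quad u_2 = (c, t_1, b_2).
\]
```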
This paper proves the uniqueness theorem for 3-layered complex-valued neural networks in which the threshold parameters of the hidden neurons can take non-zero values. That is, if a 3-layered complex-valued neural network is irreducible, then the 3-layered complex-valued neural network that approximates a given complex-valued function is uniquely determined up to a finite group of transformations of the learnable parameters of the network.
Redundancy in constraints and variables is usually studied in linear, integer, and non-linear programming problems. However, the main emphasis has so far been on linear programming problems. In this paper, an algorithm that identifies redundant objective functions in multi-objective stochastic fractional programming problems is provided, and a solution procedure is illustrated. This reduces the number of objective functions in cases where redundant objective functions exist.
In this paper, two novel approaches to unsupervised feature selection based on spectral clustering are proposed. In the first method, spectral clustering is applied to the features, and the centers of the clusters, together with their nearest neighbors, are selected; these features have minimum mutual similarity (redundancy) because they belong to different clusters. Next, the samples of the data set are clustered with spectral clustering, and a specific pseudo-label is assigned to the samples of each cluster. Then, using the obtained pseudo-labels, the information gain of each feature is computed, which ensures maximum relevancy. Finally, the intersection of the features selected in the two previous steps is determined, which simultaneously guarantees maximum relevancy and minimum redundancy. The second approach is very similar to the first; its only, but significant, difference is that it selects one feature from each cluster and sorts all the features by their relevancy. By appending the selected features to a sorted list and ignoring them in the next step, the algorithm continues with the remaining features until all features have been appended to the sorted list. Both proposed methods are compared with state-of-the-art methods, and the results confirm the performance of our approaches, especially the second one.
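A minimal sketch of the first approach's pipeline, under several assumptions (scikit-learn spectral clustering, mutual information as a stand-in for information gain, medoid features as cluster "centers", and arbitrary cluster counts), might look as follows:

```python
# Rough sketch of the first approach: cluster the features spectrally, keep a
# representative (medoid) per feature cluster, derive pseudo-labels by
# clustering the samples, score relevancy with mutual information, and
# intersect the two selections. Cluster counts and the medoid choice are
# assumptions, not the paper's exact settings.
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.feature_selection import mutual_info_classif

def select_features(X, n_feat_clusters=10, n_sample_clusters=5, top_k=20):
    # Step 1: spectral clustering over the features (columns of X).
    f_labels = SpectralClustering(n_clusters=n_feat_clusters,
                                  random_state=0).fit_predict(X.T)
    reps = set()
    for c in range(n_feat_clusters):
        idx = np.where(f_labels == c)[0]
        centroid = X[:, idx].mean(axis=1)
        # medoid: the cluster member closest to the cluster mean
        reps.add(idx[np.argmin(np.linalg.norm(X[:, idx].T - centroid, axis=1))])

    # Step 2: spectral clustering over the samples -> pseudo-labels.
    pseudo = SpectralClustering(n_clusters=n_sample_clusters,
                                random_state=0).fit_predict(X)

    # Step 3: relevancy of each feature w.r.t. the pseudo-labels.
    relevancy = mutual_info_classif(X, pseudo, random_state=0)
    relevant = set(np.argsort(relevancy)[::-1][:top_k])

    # Step 4: intersection -> low redundancy and high relevancy.
    return sorted(reps & relevant)
```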
We present a new approach to algorithm-based fault tolerance (ABFT) and parity-checking techniques in the design of high-performance computing systems. The ABFT technique employs real convolution error-correcting codes to encode the input data; systematic real convolution encoding is used to reduce the round-off error introduced by the output decoding process. This paper proposes an efficient method to detect arithmetic errors by comparing convolution-code parity computed at the output with an equivalent parity value derived from the input data. Numerical data-processing errors are detected by comparing the parity values associated with the convolution code; the comparable sets are numerically very close, although not identical, because of round-off differences between the two parity-generation processes. The effects of internal failures and round-off error are modeled by additive error sources located at the output of the processing block and at the input of the threshold detector. This model combines the aggregate effects of errors and applies them to the respective outputs.
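The compare-two-parities-against-a-threshold idea can be sketched with a simplified weighted-checksum check on a linear operation; the paper's systematic real convolution encoding is not reproduced here, and the weights, sizes, and threshold below are assumptions.

```python
# Simplified ABFT-style check for a linear operation y = A @ x.
# A weighted parity is computed once through the data path (w @ y) and once
# directly from the input ((w @ A) @ x); the two agree up to round-off, so a
# threshold separates round-off noise from genuine arithmetic errors.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((256, 256))
x = rng.standard_normal(256)
w = rng.standard_normal(256)          # parity weight vector

def faulty_matvec(A, x, inject=False):
    y = A @ x
    if inject:
        y[17] += 1.0                  # simulated arithmetic fault
    return y

def abft_check(A, x, y, tau=1e-8):
    p_out = w @ y                     # parity from the output
    p_in = (w @ A) @ x                # parity predicted from the input
    return abs(p_out - p_in) > tau    # True -> error detected

y_ok = faulty_matvec(A, x)
y_bad = faulty_matvec(A, x, inject=True)
print(abft_check(A, x, y_ok), abft_check(A, x, y_bad))   # -> False True
```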
Electronic equipment used in harsh environments such as space has to cope with many threats. One major threat is intense radiation, which gives rise to Single Event Upsets (SEUs) that lead to control flow errors and data errors. In the design of embedded systems for space, the use of radiation-tolerant equipment may therefore be a necessity. However, even if the higher cost of such a choice is not a problem, the efficiency of such equipment is lower than that of COTS equipment. Therefore, using COTS components with appropriate measures to handle the threats may be the optimal solution, in which power, performance, reliability, and cost are optimized simultaneously. In this paper, a novel method is presented for control flow error detection in multitask environments with lower memory and performance overheads than other methods reported in the literature.
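As a rough, generic illustration of signature-based control flow checking (not the paper's multitask scheme; the block signatures and update rule below are assumptions), consider:

```python
# Generic signature-monitoring sketch: every basic block is given a
# compile-time signature; at run time a signature register is updated on
# block entry and checked against the expected value, so an illegal jump
# between blocks (e.g., caused by an SEU) is flagged.
EXPECTED = {"B0": 0x1, "B1": 0x3, "B2": 0x7}   # assumed per-block signatures

class CFChecker:
    def __init__(self):
        self.sig = 0x0
    def enter(self, block):
        self.sig = (self.sig << 1) | 0x1       # runtime signature update
        if self.sig != EXPECTED[block]:        # mismatch -> control flow error
            raise RuntimeError(f"control flow error detected at {block}")

cfc = CFChecker()
for block in ("B0", "B1", "B2"):               # legal path B0 -> B1 -> B2
    cfc.enter(block)

cfc_bad = CFChecker()
cfc_bad.enter("B0")
# cfc_bad.enter("B2")                          # illegal jump B0 -> B2 would raise
```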
Due to the warpage of thinned wafers and the surface roughness of silicon dies, the quality of through-silicon vias (TSVs) varies during the fabrication and bonding process. If one TSV exhibits a defect during manufacturing, the probability of multiple defects occurring in the neighboring TSVs increases; that is, faulty TSVs (FTSVs) tend to be clustered, which significantly reduces the yield of three-dimensional integrated circuits (3D-ICs). To resolve clustered TSV faults, router-based and ring-based redundant TSV (RTSV) architectures have been proposed; however, their repair rate is low and their hardware overhead is high. In this paper, we propose a novel cross-cellular RTSV architecture that utilizes area more efficiently while maintaining high yield. Simulation results show that the proposed architecture achieves a higher repair rate and a more cost-effective overhead than the router-based and ring-based methods.
This paper presents a 6.4-GS/s 16-way 10-bit time-interleaved (TI) SAR ADC for wideband wireless applications. A two-stage master–slave hierarchical sampling network, which is immune to the time skew of multi-phase clocks, is introduced to avoid time-skew calibration for design simplicity and hardware efficiency. To achieve low distortion and fast sampling at acceptable power consumption, a track-and-hold (T&H) buffer with improved linearity and energy efficiency, based on a current-feedback compensation scheme, is proposed. Together with its low output impedance, the buffer obtains adequate bandwidth to cover the entire ADC Nyquist sampling range. Moreover, a split-capacitor DAC combined with a novel non-binary algorithm is adopted in the single-channel ADC, enabling shorter DAC settling time as well as lower switching energy. The capacitor mismatch effect and the related design trade-offs are discussed, and behavioral models are built to evaluate the effect of capacitor mismatch on the ENOB. An asynchronous self-triggered SAR logic is designed and optimized to minimize the delay on logic paths and match the accelerated DAC and comparator. With these techniques, the 10-bit sub-ADC achieves a 400-MHz conversion rate with only 3.5-mW power consumption. The circuit is designed and simulated in a TSMC 28-nm HPC process, and the results show that the overall ADC achieves 54.6-dB SNDR and 58.1-dB SFDR at the Nyquist input while consuming 127 mW from 1-V/1.5-V supplies, corresponding to a Walden FoM of 45 fJ/conversion-step.
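A minimal behavioral model of the kind mentioned, under simplifying assumptions (plain binary weights, no redundancy, ideal comparator, Gaussian capacitor mismatch), can estimate how mismatch degrades the ENOB:

```python
# Behavioural sketch (assumptions, not the paper's model): a 10-bit
# binary-weighted SAR conversion in which the comparator decisions use the
# actual (mismatched) capacitor weights while the digital output assumes
# nominal binary weights. SNDR is estimated against a least-squares fit and
# ENOB = (SNDR - 1.76 dB) / 6.02.
import numpy as np

def sar_convert(vin, weights):
    """One SAR conversion; vin in [0, 2**nbits) in LSB units, MSB first."""
    bits, residual = [], vin
    for w in weights:
        b = residual >= w
        residual -= w * b
        bits.append(b)
    return np.array(bits, dtype=float)

def enob(sigma_mismatch=0.002, nbits=10, npts=8192, seed=1):
    rng = np.random.default_rng(seed)
    nominal = 2.0 ** np.arange(nbits - 1, -1, -1)           # 512, 256, ..., 1
    actual = nominal * (1 + sigma_mismatch * rng.standard_normal(nbits))
    vin = (2**nbits - 1) / 2 * (1 + 0.99 * np.sin(2 * np.pi * 127 / npts
                                                  * np.arange(npts)))
    codes = np.array([sar_convert(v, actual) @ nominal for v in vin])
    # remove gain/offset with a least-squares fit, then estimate SNDR
    A = np.vstack([vin, np.ones_like(vin)]).T
    fit = A @ np.linalg.lstsq(A, codes, rcond=None)[0]
    err = codes - fit
    sndr = 10 * np.log10(np.var(fit) / np.var(err))
    return (sndr - 1.76) / 6.02

print(enob(0.0), enob(0.002))   # ideal ~10 b vs. degraded ENOB with mismatch
```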
This paper presents a predictive noise-shaping (NS) Successive Approximation Register (SAR) Analog-to-Digital Converter (ADC) that improves conversion speed by 25% compared with a counterpart having 0.3% less redundancy. It first investigates the Signal-to-Noise-and-Distortion Ratio (SNDR) degradation that occurs when the first 4 MSBs are predicted with a second-order predictor at a lower Oversampling Ratio (OSR, e.g., 8) than required in prior state-of-the-art works. It then compares the SNDR of the same predictor with and without bit-weight redundancy in the capacitor array, and designs with various levels of redundancy and OSR are compared in terms of SNDR. Both MATLAB and Cadence simulation results verify that, by introducing 0.3% more redundant bit weight, either 8 dB more SNDR at the same bandwidth or a 25% speed improvement can be obtained while maintaining the SNDR.
As Graphics Processing Units (GPUs) evolve toward general-purpose computations beyond inherently fault-tolerant graphics programs, soft-error reliability becomes a first-class citizen in program design. In particular, safety-critical systems utilizing GPU devices need to employ fault-tolerance techniques to recover from errors in hardware components. While software-level redundancy approaches based on replication of the application code offer high reliability for safe program execution, it is essential to implement the redundancy using the parallel execution units of the target architecture so as not to hurt performance with redundant computations. In this work, we propose redundancy approaches using the parallel GPU cores and implement a compiler-level redundancy framework that enables the programmer to configure a target GPGPU program for redundant execution. We run redundant executions of GPGPU programs from the PolyBench benchmark suite by applying our kernel-level redundancy approaches and evaluate their performance with respect to the parallelism level of the programs. Our results reveal that redundancy approaches utilizing the parallelism offered by GPU cores yield higher performance for redundant executions, while programs that already make full use of the parallel GPU cores in their original form suffer overhead caused by contention among redundant threads.
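The duplication-with-comparison idea behind such redundant execution can be sketched at a high level; the framework itself operates on CUDA kernels at compile time, whereas the toy below merely replicates a NumPy "kernel" on two threads and checks agreement.

```python
# Conceptual sketch of duplication-with-comparison redundancy: the same
# "kernel" is launched twice on parallel workers and the results are compared;
# a mismatch indicates a suspected soft error and triggers re-execution.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def kernel(a, b):
    return a @ b                          # stand-in for a GPGPU kernel

def redundant_launch(a, b):
    with ThreadPoolExecutor(max_workers=2) as pool:
        futs = [pool.submit(kernel, a, b) for _ in range(2)]   # two replicas
        r1, r2 = (f.result() for f in futs)
    if not np.array_equal(r1, r2):        # mismatch -> soft error suspected
        raise RuntimeError("redundant executions disagree; re-execute kernel")
    return r1

rng = np.random.default_rng(0)
out = redundant_launch(rng.standard_normal((256, 256)),
                       rng.standard_normal((256, 256)))
```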
Aiming to resolve the conflict between security and efficiency in the design of chaotic image encryption algorithms, an image encryption algorithm based on information hiding is proposed following the “one-time pad” idea. A random parameter is introduced to ensure a different keystream for each encryption, giving the scheme one-time-pad characteristics and markedly improving its security without a significant increase in algorithmic complexity. The random parameter is embedded into the ciphered image with information-hiding technology, which avoids a separate negotiation for its transmission and makes the algorithm easier to apply. Algorithm analysis and experiments show that the algorithm is secure against chosen-plaintext, differential, and divide-and-conquer attacks, and that the ciphered images have good statistical properties.
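A conceptual sketch of the one-time-pad-style construction, with a logistic-map keystream and LSB embedding standing in for the paper's chaotic system and information-hiding method (all parameters below are assumptions):

```python
# Conceptual sketch only: a fresh random parameter perturbs the keystream for
# every encryption ("one-time pad" behaviour), and that parameter is hidden in
# the LSBs of the ciphered image so no separate negotiation is needed for it.
# (A full scheme would embed the parameter without destroying the cipher bits
# needed for exact decryption; that bookkeeping is omitted here.)
import numpy as np

def keystream(x0, r, n):
    x, out = x0, np.empty(n, dtype=np.uint8)
    for i in range(n):                       # logistic map x <- r*x*(1-x)
        x = r * x * (1.0 - x)
        out[i] = int(x * 256) & 0xFF
    return out

def encrypt(img, secret_x0=0.3731, r=3.99):
    rand_param = np.random.randint(0, 2**16)             # new for every encryption
    x0 = (secret_x0 + rand_param / 2**20) % 1.0           # perturb the initial state
    cipher = img ^ keystream(x0, r, img.size).reshape(img.shape)
    bits = (rand_param >> np.arange(16)) & 1              # 16 parameter bits
    cipher.flat[:16] = (cipher.flat[:16] & 0xFE) | bits   # hide them in LSBs
    return cipher

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
c1, c2 = encrypt(img), encrypt(img)          # same image, different ciphertexts
print(np.mean(c1 != c2))                     # close to 1: keystreams differ
```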
The naïve Bayes model is a simple but often satisfactory supervised classification method. The original naïve Bayes scheme does, however, have a serious weakness, namely the harmful effect of redundant predictors. In this paper, we study how to apply a regularization technique to learn a computationally efficient classifier that is inspired by naïve Bayes. The proposed formulation, combined with an L1 penalty, is capable of discarding harmful, redundant predictors, and a modification of the LARS algorithm is devised to solve the resulting problem. We handle both real-valued and discrete predictors, ensuring that our method is applicable to a wide range of data. In the experimental section, we empirically study the effect of redundant and irrelevant predictors. We also test the method on a high-dimensional data set from the neuroscience field, where there are many more predictors than data cases. Finally, we run the method on a real data set that combines categorical and numeric predictors. Our approach is compared with several naïve Bayes variants and other classification algorithms (SVM and kNN) and is shown to be competitive.
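As a rough analogue of the effect described (not the paper's LARS-based procedure), a standard L1-penalized logistic regression zeroes out deliberately duplicated predictors:

```python
# Illustration of an L1 penalty discarding redundant predictors: five columns
# are duplicated, and the sparsity-inducing penalty drives many of the
# resulting coefficients to exactly zero. The data set is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=500, n_features=10, n_informative=5,
                           n_redundant=0, random_state=0)
X_dup = np.hstack([X, X[:, :5]])                 # append 5 redundant copies

clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
clf.fit(X_dup, y)
print(np.sum(np.abs(clf.coef_) < 1e-8))          # several coefficients zeroed
```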
Computer-based systems have increased dramatically in scope, complexity, and pervasiveness, and most industries are highly dependent on computers for their basic day-to-day functioning. Safe and reliable software operation is an essential requirement for many systems across different industries. The number of functions to be included in a software system is decided during software development, and any software system must be constructed so that execution can resume even after the occurrence of a failure, with minimal loss of data and time. Software systems that can continue execution in the presence of faults are called fault-tolerant software: when a failure occurs, one of the redundant software modules is executed to prevent system failure. Fault-tolerant software systems are usually developed by integrating COTS (commercial off-the-shelf) software components, the motivation being that they reduce overall system development cost and time. In this paper, reliability models for fault-tolerant consensus recovery blocks are analyzed. In the first optimization model, we formulate a joint optimization problem in which maximization of software system reliability and minimization of execution time for each function are considered under a budgetary constraint. The second model addresses the issue of compatibility among the alternatives available for different modules. Numerical illustrations are provided to demonstrate the developed models.
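A much-simplified illustration of the selection-under-budget idea, assuming independent alternate failures and a perfect acceptance test rather than the paper's consensus-recovery-block reliability model (the reliability and cost figures are invented):

```python
# Choose, for one function, a subset of COTS alternatives maximising the
# reliability of a recovery block under a cost budget. With independent
# failures and a perfect acceptance test, R = 1 - prod(1 - r_i).
from itertools import combinations

alternatives = [  # (reliability, cost) of each COTS candidate -- assumed values
    (0.90, 3.0), (0.85, 2.0), (0.80, 1.5), (0.75, 1.0),
]
BUDGET = 4.5

def block_reliability(subset):
    r = 1.0
    for rel, _ in subset:
        r *= (1.0 - rel)
    return 1.0 - r

best = max(
    (c for k in range(1, len(alternatives) + 1)
       for c in combinations(alternatives, k)
       if sum(cost for _, cost in c) <= BUDGET),
    key=block_reliability,
)
print(best, round(block_reliability(best), 4))   # best feasible subset
```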
Redundancy is a widely used technique for building computing systems that continue to operate satisfactorily in the presence of faults occurring in hardware and software components. The principal objective of applying redundancy is to achieve reliability goals subject to techno-economic constraints. Owing to the abundance of applications in both industrial and military organizations, especially embedded fault-tolerant systems such as telecommunication, distributed computer systems, and automated manufacturing systems, the reliability and dependability measures of redundant computer-based systems have become key concerns for system designers and production engineers. However, even with the best design of redundant computer-based systems, software and hardware failures may still occur through many failure mechanisms, leading to serious consequences such as huge economic losses and risk to human life. The objective of the present survey article is to discuss key aspects, failure consequences, and methodologies of redundant systems, along with the software and hardware redundancy techniques developed at the reliability engineering level. The methodological aspects, which depict the steps required to build a block diagram composed of components in different configurations as well as Markov and non-Markov state transition diagrams representing the structural system, are elaborated. Furthermore, we describe the reliability of a specific redundant system and compare it with a non-redundant system to demonstrate the tractability of the proposed models and their performance analysis.
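For the redundant-versus-non-redundant comparison, the textbook active-parallel case (not a result from the survey) reads:

```latex
% Active-parallel redundancy with n identical, independent components of
% component reliability R(t), compared with a single (non-redundant) unit.
\[
R_{\text{non-red}}(t) = R(t),
\qquad
R_{\text{parallel}}(t) = 1 - \bigl(1 - R(t)\bigr)^{n},
\]
\[
\text{e.g. } R(t) = 0.9,\ n = 2:\quad
R_{\text{parallel}} = 1 - (0.1)^{2} = 0.99 .
\]
```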
Homologous recombination is an important operator in the evolution of biological organisms. However, there is still no clear, generally accepted understanding of why it exists and under what circumstances it is useful. In this paper, we consider its utility in the context of an infinite population haploid model with selection and homologous recombination. We define utility in terms of two metrics — the increase in frequency of fit genotypes, and the increase in average population fitness, relative to those associated with selection only. Explicitly, we explore the full parameter space of a two-locus two-allele system, showing, as a function of the landscape and the initial population, that recombination is beneficial in terms of these metrics in two distinct regimes: a relatively landscape independent regime — the search regime — where recombination aids in the search for a fit genotype that is absent or at low frequency in the population; and the modular regime, where recombination allows for the juxtaposition of fit “modules” or Building Blocks (BBs). Thus, we conclude that the ubiquity and utility of recombination is intimately associated with the existence of modularity and redundancy in biological fitness landscapes.
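A minimal sketch of such a model, with illustrative fitnesses and an initial population in which the fit genotype AB is absent, shows the "search regime" benefit of recombination (the numbers below are assumptions, not the paper's parameter sweep):

```python
# Infinite-population haploid two-locus/two-allele recursion: selection
# followed by homologous recombination at rate r, acting on the genotype
# frequencies (AB, Ab, aB, ab) via the linkage disequilibrium D.
import numpy as np

def step(x, w, r):
    """x = freqs of genotypes (AB, Ab, aB, ab); w = their fitnesses."""
    x = x * w / np.dot(x, w)              # selection
    D = x[0] * x[3] - x[1] * x[2]         # linkage disequilibrium
    return x + r * D * np.array([-1.0, 1.0, 1.0, -1.0])   # recombination

w = np.array([1.3, 1.1, 1.1, 1.0])        # AB is the fit genotype (illustrative)
x0 = np.array([0.0, 0.5, 0.5, 0.0])       # AB absent: only recombination creates it

for r in (0.0, 0.5):
    x = x0.copy()
    for _ in range(50):
        x = step(x, w, r)
    print(f"r={r}: freq(AB)={x[0]:.3f}, mean fitness={np.dot(x, w):.3f}")
```

With r = 0 the fit genotype can never appear, whereas any positive recombination rate assembles it from the Ab and aB backgrounds, illustrating both metrics defined above.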
Computational motor control covers all applications of quantitative tools for the study of the biological movement control system. This paper provides a review of this field in the form of a list of open questions. After an introduction in which we define computational motor control, we describe: a Turing-like test for motor intelligence; internal models, inverse model, forward model, feedback error learning and distal teacher; time representation, and adaptation to delay; intermittence control strategies; equilibrium hypotheses and threshold control; the spatiotemporal hierarchy of wide sense adaptation, i.e., feedback, learning, adaptation, and evolution; optimization based models for trajectory formation and optimal feedback control; motor memory, the past and the future; and conclude with the virtue of redundancy. Each section in this paper starts with a review of the relevant literature and a few more specific studies addressing the open question, and ends with speculations about the possible answer and its implications to motor neuroscience. This review is aimed at concisely covering the topic from the author's perspective with emphasis on learning mechanisms and the various structures and limitations of internal models.
User studies in information science have recognised relevance as a multidimensional construct. An implication of multidimensional relevance is that a user's information need should be modeled by multiple data structures representing the different relevance dimensions. While the extant literature has attempted to model multiple dimensions of a user's information need, the fundamental assumption that a multidimensional model is better than a uni-dimensional model has not been addressed. This study seeks to test this assumption. Our results indicate that a retrieval system that models both the topicality and the novelty dimensions of a user's information need outperforms a system with a uni-dimensional model.
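One hedged way to operationalize a two-dimensional (topicality plus novelty) model, not necessarily the study's own, is an MMR-style combination of a topical score with dissimilarity to the documents already shown:

```python
# Sketch of a two-dimensional relevance ranking: topicality from TF-IDF
# cosine similarity to the query, novelty as dissimilarity to already-selected
# documents, combined with a weighting factor alpha. Corpus and alpha are
# illustrative assumptions.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = ["cloud storage deduplication", "cloud storage redundancy removal",
        "fault tolerant redundant hardware", "spectral feature selection"]
query = ["redundancy in cloud storage"]

vec = TfidfVectorizer().fit(docs + query)
D, q = vec.transform(docs), vec.transform(query)
topical = cosine_similarity(D, q).ravel()

selected, alpha = [], 0.7                       # alpha trades topicality vs novelty
for _ in range(len(docs)):
    novelty = np.array([1.0 if not selected else
                        1.0 - cosine_similarity(D[i], D[selected]).max()
                        for i in range(len(docs))])
    score = np.where(np.isin(np.arange(len(docs)), selected), -np.inf,
                     alpha * topical + (1 - alpha) * novelty)
    selected.append(int(np.argmax(score)))
print(selected)                                 # topical yet mutually novel order
```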
Using survey data from UK firms that engaged in downsizing, this paper explores the link between downsizing and innovation determinants. We suggest that the relationship between downsizing and innovation determinants is contingent upon the speed of implementing the downsizing and its severity. Overall, the results confirm our general proposition and shed new light on the relationship between downsizing and innovation enhancers and barriers to innovation.
Cellular pathways are ordinarily diagnosed with pathway inhibitors, related gene regulation, or fluorescent protein markers. In this paper, we suggest that they can also be diagnosed through the pathway-activation modulation of photobiomodulation (PBM). The effect of PBM on a biosystem function depends on whether the biosystem is in its function-specific homeostasis (FSH). An FSH, a negative-feedback response that allows the function to be performed perfectly, is maintained by its FSH-essential subfunctions and its FSH-non-essential subfunctions (FNSs). A function in its FSH is called normal, and a function far from its FSH is called dysfunctional. A direct PBM may self-adaptively modulate a dysfunctional function until it is normal, so it can be used to discover the optimum pathways for an FSH to be established. An indirect PBM may self-adaptively modulate a dysfunctional FNS of a normal function until the FNS is normal, and the normal function is then upgraded, so it can be used to discover the redundant pathways by which a normal function is upgraded.