The rapid expansion of infrastructure topology, especially noticeable in high-performance computing systems and data center networks, significantly increases the likelihood of failures in network components. While traditional (edge) connectivity has long been the standard for measuring the reliability of interconnection networks, this approach becomes less effective as networks grow more complex. To address this, two innovative metrics, named matroidal connectivity and conditional matroidal connectivity, have emerged. These metrics provide the flexibility to impose constraints on faulty edges across different dimensions and have shown promise in enhancing the edge fault tolerance of interconnection networks. In this paper, we explore the (conditional) matroidal connectivity of the k-dimensional folded Petersen network FPk, which is constructed by iteratively applying the Cartesian product operation to the well-known Petersen graph and possesses a regular, vertex- and edge-symmetric architecture with optimal connectivity and logarithmic diameter. Specifically, the faulty edge set F is partitioned into k subsets according to the dimensions of FPk. We then arrange these subsets in non-decreasing order of cardinality and impose the restriction that the cardinality of the i-th subset does not exceed 3·10^{i−1} for 1 ≤ i ≤ k. Subsequently, we show that FPk − F is connected whenever |F| ≤ ∑_{i=1}^{k} 3·10^{i−1}, and we determine the exact values of the matroidal connectivity and conditional matroidal connectivity.
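As a small illustration of this fault budget (a sketch built only from the bound stated above, with hypothetical helper names), the per-dimension budgets and the resulting total can be checked as follows:

# Hypothetical helpers illustrating the edge-fault budget stated in the abstract:
# after sorting the per-dimension fault counts in non-decreasing order, the i-th
# count may be at most 3 * 10^(i-1), so |F| <= sum_{i=1}^{k} 3 * 10^(i-1).

def dimension_budgets(k):
    """Budgets 3 * 10^(i-1) for i = 1..k."""
    return [3 * 10 ** (i - 1) for i in range(1, k + 1)]

def satisfies_fault_budget(fault_sizes):
    """fault_sizes: list of |F_i| for the k per-dimension subsets of F."""
    sizes = sorted(fault_sizes)                      # non-decreasing order
    return all(s <= b for s, b in zip(sizes, dimension_budgets(len(sizes))))

print(sum(dimension_budgets(3)))                     # 3 + 30 + 300 = 333
print(satisfies_fault_budget([250, 28, 3]))          # True
print(satisfies_fault_budget([250, 31, 3]))          # False: 31 exceeds the budget of 30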
To assess the effect of sports on users' health, this paper proposes the construction of a sports machinery error model based on wireless communication technology. A model for evaluating human health through sports is constructed, consisting of a data layer, a logic layer, and a display layer. The data layer obtains sports event data, real-time sports data, and health monitoring data, and transmits them to the logic layer. The logic layer fuses the human health data and extracts the characteristics of the human health information. Combined with wireless communication technology, these characteristics are fed into a long short-term memory (LSTM) neural network, which outputs sports health pattern recognition results after forward and backward passes, thereby realizing the construction of the sports machinery error model. The experimental results show that the model can effectively improve the human body's BMI value and reduce the maximum loss value, and that the output results have higher reliability and fit, faster iteration, and better overall performance.
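The recurrent stage of this pipeline can be sketched as below; this is a minimal illustration rather than the paper's exact architecture, and the layer sizes, number of pattern classes, and the bidirectional wrapper (standing in for the forward and backward passes) are assumptions:

# Minimal sketch: fused health-monitoring features -> LSTM -> pattern class.
import numpy as np
import tensorflow as tf

TIMESTEPS, FEATURES, NUM_PATTERNS = 30, 8, 4          # assumed dimensions

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(TIMESTEPS, FEATURES)),
    # the forward/backward processing is approximated by a bidirectional LSTM
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
    tf.keras.layers.Dense(NUM_PATTERNS, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Toy usage with random data standing in for fused sensor features.
x = np.random.rand(128, TIMESTEPS, FEATURES).astype("float32")
y = np.random.randint(0, NUM_PATTERNS, size=128)
model.fit(x, y, epochs=1, batch_size=32, verbose=0)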
Due to rapid growth in the integrated circuit (IC) industry, the demand for compact digital system design is high. However, continued technology scaling has made further reductions in transistor size increasingly challenging. In response to the growing demand for ultra-compact IC designs, the revolutionary quantum-dot cellular automata (QCA) technology has emerged as a promising solution. In the digital era, counters are widely adopted in peer-to-peer process flows to establish a mechanism for generating unique values for each identifier/number. In this work, a unique synchronous and asynchronous counter architecture is proposed together with reliable D and T flip-flop designs. The proposed QCA architecture is implemented and validated with the QCADesigner tool. Furthermore, in QCA technology, unreliable QCA designs can lead to frequent errors and malfunctions in the implemented logic. To overcome this challenge, the proposed design prioritizes cell placement (the relative positions of QCA cells) to make the circuit more robust. As a result, the circuit can still produce the expected functionality even if some QCA cells malfunction. Hence, to ensure the reliability of the proposed QCA architecture, a missing-cell defect analysis is carried out in comparison with existing state-of-the-art designs. Based on the comparison results, the proposed multiplexer, D flip-flop, and T flip-flop designs exhibit success rates of 67.28%, 77.04%, and 85.15%, respectively. The experimental results demonstrate that the proposed counter architecture outperforms existing architectures.
A modified definition of the stress-strength reliability function is proposed for discrete families of distributions. The newly introduced definition is used to derive expressions for the reliability function, its Uniformly Minimum Variance Unbiased Estimator (UMVUE), and an unbiased estimator of the variance of the UMVUE for different members of the discrete Power Series family of Distributions (PSD). The performance of the UMVUE is compared with the corresponding Maximum Likelihood (ML) estimator in terms of relevant measures. The modified definition is further applied to engineering data and real-world football data to demonstrate its usefulness.
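For orientation, the classical discrete stress-strength probability R = P(X > Y) can be computed by direct enumeration, as in the sketch below for geometric stress and strength variables; the paper's modified definition and its UMVUE are not reproduced here, and the distributions and parameters are illustrative assumptions:

# Illustrative only: classical discrete stress-strength R = P(X > Y)
# for X ~ Geom(p_x) (strength) and Y ~ Geom(p_y) (stress), support {1, 2, ...}.
from scipy.stats import geom

def stress_strength(p_x, p_y, n_terms=200):
    """Approximate P(X > Y) = sum_y P(Y = y) * P(X > y)."""
    return sum(geom.pmf(y, p_y) * geom.sf(y, p_x) for y in range(1, n_terms + 1))

print(round(stress_strength(0.2, 0.5), 4))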
Due to reliability, security, or performance-related issues, software systems can become nondependable during the early development phase. Currently, there is a lack of research addressing these dependability issues in software quality analysis. To bridge this gap, this study proposes a neutrosophic inference system (NIS)-based model to predict reliability, security, and performance attributes during the early phase. The NIS model accommodates uncertainty, imprecision, indeterminacy, and incompleteness of metric values using its truth, indeterminate, and false components. To enhance the prediction accuracy of the NIS model, a rule-base formation algorithm incorporating domain expert knowledge is proposed for the NIS. Finally, an artificial neural network (ANN) model is designed based on the estimated values of the reliability, security, and performance attributes to predict the total number of faults in software projects. Comparative analysis demonstrates that the proposed model outperforms existing models. The proposed methodology helps software developers assess software dependability from the earliest stage of software development.
At present, fault big data of open-source software (OSS) are released as open data sets. The fault detection phenomenon, in particular, depends on the various operational situations of the OSS. Various software reliability growth models have been actively proposed by researchers in the past. This paper applies a deep learning approach to OSS fault big data and proposes several reliability assessment measures based on deep learning. In this approach, the estimation range is expanded by embedding a Wiener process in the data preprocessing. Furthermore, this paper proposes performability as a novel reliability assessment measure derived from the proposed deep learning model. In particular, we develop a prototype of a 3D reliability assessment tool. Several illustrative examples based on the developed prototype and actual fault big data sets are presented in this paper.
The loading manipulator is an important actuator in modern artillery automatic loading systems, and it is crucial to evaluate its positioning accuracy accurately and efficiently. This paper presents a reliability analysis method based on the combination of a feedforward neural network (FNN) and mixed importance sampling (MIS) to study the positioning accuracy of the loading manipulator. First, a dynamic model of the manipulator that accounts for the control system is established. To improve the efficiency of the reliability analysis, a surrogate model based on the FNN is constructed. By searching for the most probable point (MPP) of each limit state equation, an MIS density function is constructed, after which importance samples can be drawn to evaluate the reliability of the positioning accuracy of the manipulator. Compared with the Monte Carlo simulation (MCS) method, the proposed method enjoys higher efficiency while ensuring accuracy. The results of the example show that the gear clearance and the friction coefficient have an important effect on the positioning accuracy of the loading manipulator and should be taken into account in the machining and installation process.
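A minimal sketch of the importance-sampling step is given below; the FNN surrogate is replaced by an explicit toy limit-state function, the mixed density is simplified to a single mode, and the MPP location is assumed rather than searched for:

# Importance-sampling sketch: draw samples from a density re-centred at an
# assumed MPP, then reweight by the ratio of nominal to proposal densities.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)

def limit_state(x):                      # surrogate stand-in: failure when g <= 0
    return 3.0 - x[:, 0] - x[:, 1]

dim = 2
nominal = multivariate_normal(mean=np.zeros(dim), cov=np.eye(dim))
mpp = np.array([1.5, 1.5])               # assumed MPP on the limit state g = 0
proposal = multivariate_normal(mean=mpp, cov=np.eye(dim))

samples = proposal.rvs(size=20000, random_state=rng)
weights = nominal.pdf(samples) / proposal.pdf(samples)
p_fail = np.mean((limit_state(samples) <= 0) * weights)
print(f"estimated failure probability: {p_fail:.2e}")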
Most real-time systems are embedded in portable, battery-powered devices that have strict limitations on power consumption. Safety-critical embedded systems, in particular, demand a high level of reliability. To effectively optimize both reliability and power consumption, it is crucial to consider both criteria with an accurate and stable model. Existing research on power and reliability models for embedded systems often lacks the accuracy required for safety-critical applications and fails to account for all hardware and software components. This paper proposes a machine learning-based optimization model designed to improve the accuracy and stability of reliability and power consumption assessments. The proposed model demonstrates a significant enhancement in accuracy compared to previous randomization models, showing a 2.75% improvement in reliability and a 0.88% improvement in power consumption relative to existing state-of-the-art models.
Connectivity plays an important role in measuring the fault tolerance and reliability of interconnection networks. The generalized k-connectivity of a graph G, denoted by κk(G), is an important indicator of a network's ability for fault tolerance and reliability. The bubble-sort star graph, denoted by BSn, is a well-known interconnection network. In this paper, we show that κ3(BSn) = 2n−4 for n ≥ 3; that is, for any three vertices in BSn, there exist 2n−4 internally disjoint trees connecting them in BSn for n ≥ 3. This attains the upper bound κ3(G) ≤ δ(G)−1 given by Li et al. for G = BSn.
Connectivity is an important index to evaluate the reliability and fault tolerance of a graph. As a natural extension of the connectivity of graphs, the g-component connectivity of a graph G, denoted by cκg(G), is the minimum number of vertices whose removal from G results in a disconnected graph with at least g components. Determining the exact value of cκg(G) is an important problem for distinguishing the fault tolerability of networks. However, the g-component connectivity of many well-known interconnection networks has not been explored, even for small values of g. For the n-dimensional alternating group networks ANn and the n-dimensional godan graphs EAn, we show that cκ5(ANn) = 4n−8 for n ≥ 6, and cκg(EAn) = (g−2)(n−2)+n for g ∈ {3,4,5} and n ≥ 4.
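For concreteness, instantiating the second formula at the smallest admissible parameters gives:
\[
c\kappa_3(EA_4) = (3-2)(4-2) + 4 = 6, \qquad c\kappa_5(EA_4) = (5-2)(4-2) + 4 = 10 .
\]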
Let G be a connected graph and S ⊆ V(G) with |S| ≥ 2. κG(S) denotes the maximum number k of edge-disjoint trees T1, T2, …, Tk in G such that V(Ti) ∩ V(Tj) = S for all distinct i, j ∈ {1, 2, …, k}. The generalized k-connectivity of G, denoted by κk(G), is defined as the minimum value of κG(S) over all S ⊆ V(G) with |S| = k. In fact, κ2(G) is exactly the traditional connectivity of G. The exchanged crossed cube ECQ(s,t), a variation of the hypercube, has better properties than other hypercube variations. In this work, we show that κ3(ECQ(s,t)) = s for 2 ≤ s ≤ t.
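As a small illustration of the definition (a standard example, not taken from this work), consider the complete graph K4 with vertices a, b, c, d and S = {a, b, c}:
\[
T_1:\; a\text{--}b\text{--}c \;(\text{edges } ab,\, bc), \qquad
T_2:\; \text{edges } ad,\, bd,\, cd .
\]
These trees are edge-disjoint with V(T1) ∩ V(T2) = S; since every edge incident to b is now used, no third such tree exists, so κK4(S) = 2 and hence κ3(K4) = 2.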
Identification, by algorithmic devices, of grammars for languages from positive data is a well-studied problem. In this paper, we are mainly concerned with the learnability of indexed families of uniformly recursive languages. Mukouchi introduced the notions of minimal and reliable minimal concept inference from positive data. He left open the question of whether every indexed family of uniformly recursive languages that is minimally inferable is also reliably minimally inferable. We show that this is not the case.
Applications of Artificial Intelligence (AI) are revolutionizing biomedical research and healthcare by offering data-driven predictions that assist in diagnoses. Supervised learning systems are trained on large datasets to predict outcomes for new test cases. However, they typically do not provide an indication of the reliability of these predictions, even though error estimates are integral to model development. Here, we introduce a novel method to identify regions in the feature space that diverge from training data, where an AI model may perform poorly. We utilize a compact precompiled structure that allows for fast and direct access to confidence scores in real time at the point of use without requiring access to the training data or model algorithms. As a result, users can determine when to trust the AI model’s outputs, while developers can identify where the model’s applicability is limited. We validate our approach using simulated data and several biomedical case studies, demonstrating that our approach provides fast confidence estimates (<0.2 milliseconds per case), with high concordance to previously developed methods (f-score>0.965). These estimates can be easily added to real-world AI applications. We argue that providing confidence estimates should be a standard practice for all AI applications in public use.
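A minimal stand-in for such a precompiled structure is sketched below; it is not the method of the paper, but it illustrates the idea of a compact summary of the training feature space that can be queried at prediction time without the original data rows or model internals (the class name and binning scheme are assumptions):

# Coarse per-bin counts of training features are built once; at prediction time
# a new case is mapped to its bin, and sparsely populated bins signal regions
# where training support (and hence expected model reliability) is weak.
import numpy as np

class BinnedSupport:
    def __init__(self, train_x, bins=10):
        self.edges = [np.linspace(col.min(), col.max(), bins + 1)
                      for col in train_x.T]
        idx = self._bin_indices(train_x)
        self.counts = {}
        for key in map(tuple, idx):
            self.counts[key] = self.counts.get(key, 0) + 1
        self.max_count = max(self.counts.values())

    def _bin_indices(self, x):
        cols = [np.clip(np.searchsorted(e, c) - 1, 0, len(e) - 2)
                for e, c in zip(self.edges, x.T)]
        return np.stack(cols, axis=1)

    def confidence(self, x):
        """Score in [0, 1]: relative training density of the query's bin."""
        idx = self._bin_indices(np.atleast_2d(x))
        return np.array([self.counts.get(tuple(i), 0) / self.max_count
                         for i in idx])

rng = np.random.default_rng(1)
train = rng.normal(size=(5000, 2))
support = BinnedSupport(train, bins=12)
print(support.confidence(np.array([[0.0, 0.0], [4.5, -4.5]])))  # dense region vs. far outlier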
A thorough understanding of reliability and radiation hardness is required in order to use compound semiconductors in space, or in other environments involving radiation and/or extended temperature operation. This paper discusses those issues for several types of compound semiconductors that are of interest for high-performance applications.
Through-silicon vias (TSVs) play a critical role in today's microelectronic technology, as they enable the fabrication of three-dimensional integrated circuits. Traditionally, copper has been used to fill TSVs. However, copper is prone to electromigration, and as TSVs become smaller, copper resistance increases significantly, reducing its potential as a TSV fill material at the nanoscale. A hybrid structure is proposed here in which carbon nanotube (CNT) bundles are grown vertically inside TSVs and encased with copper. The CNT bundles help increase the strength of the hybrid structure and are likely to enhance the reliability of the package. Thermo-mechanical stress analysis and reliability evaluations are conducted to determine the effect of the CNT bundles on stress distribution in the package and their impact on the reliability of other critical components, such as the solder bumps used to join the silicon layers. The finite element analysis shows that adding CNT material to the structure, even in small volume ratios, tends to redistribute the stress and refocus it inside the CNT material rather than at the interfaces. Interface stresses in low-strength materials typically cause delamination and failure in the package, so this redistribution of stress is likely to enhance the reliability of the TSVs. Additional reliability analysis of the solder joints shows that the CNT additions increase the number of cycles to failure by a factor of four. It is hypothesized that adding CNTs decreases the local CTE mismatch between the silicon layers and helps reduce the stress in the solder bumps. This hypothesis is supported by finite element simulations.
This article investigates the complexity and approximability properties of combinatorial optimization problems arising from the notion of Shared Risk Resource Group (SRRG). SRRGs were introduced to capture network survivability issues in which a single failure may break a whole set of resources, and they have been formalized as colored graphs, where a set of resources is represented by a set of edges of the same color. We consider the analogues of classical problems, such as determining paths or cuts with the minimum number of colors, or color-disjoint paths. These optimization problems are much more difficult than their counterparts in classical graph theory. In particular, standard relationships such as the Max-Flow Min-Cut equality no longer hold. In this article, we identify cases where these problems are polynomial, for example when the edges of a given color form a connected subgraph, and otherwise give hardness and non-approximability results for these problems.
We consider the problem of scheduling an application on a parallel computational platform. The application is a particular task graph: either a linear chain of tasks or a set of independent tasks. The platform is made of identical processors whose speed can be dynamically modified. It is also subject to failures: if a processor is slowed down to decrease energy consumption, it has a higher chance of failing. Therefore, the scheduling problem requires us to re-execute or replicate tasks (i.e., execute the same task twice, either on the same processor or on two distinct processors) in order to increase reliability. It is a tri-criteria problem: the goal is to minimize the energy consumption while enforcing a bound on the total execution time (the makespan) and a constraint on the reliability of each task. Our main contribution is to propose approximation algorithms for linear chains of tasks and for independent tasks. For linear chains, we design a fully polynomial-time approximation scheme. However, we show that there exists no constant-factor approximation algorithm for independent tasks unless P = NP, and in this case we propose an approximation algorithm with a relaxation on the makespan constraint.
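The trade-off driving this tri-criteria problem can be sketched under a commonly used model (assumed here, not taken from this work): running a task of weight w at speed s costs energy w·s², while the fault rate grows exponentially as the speed drops, so slowing down saves energy but may force re-execution to meet a reliability target.

# Toy illustration of the energy/reliability trade-off with re-execution.
import math

LAMBDA0, D, S_MIN, S_MAX = 1e-5, 3.0, 0.5, 1.0   # assumed model parameters

def fault_rate(s):
    return LAMBDA0 * 10 ** (D * (S_MAX - s) / (S_MAX - S_MIN))

def reliability(w, s):
    return math.exp(-fault_rate(s) * w / s)

def energy(w, s):
    return w * s ** 2

w, target = 100.0, reliability(100.0, S_MAX)      # target: reliability at full speed
for s in (1.0, 0.7, 0.5):
    single = reliability(w, s)
    double = 1 - (1 - single) ** 2                # success if either execution succeeds
    print(f"s={s:.1f}: one run E={energy(w, s):.1f} ok={single >= target}, "
          f"two runs E={2 * energy(w, s):.1f} ok={double >= target}")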
With the rapid development of cloud computing, many large-scale data centers are being built to provide increasingly popular online application services such as search, e-mail, WeChat, and microblogs. The reliability of a massive data center network is the likelihood that it performs its expected functions consistently well under the given conditions within a specified time interval. A typical approach to measuring the reliability of a system is to compute the mean time to failure (MTTF), which is the expected time until a certain number of subsystems become faulty. The higher the MTTF, the more reliable the system. In this paper, we explore the reliability of the data center network DCell when it is decomposed into smaller ones along the last dimension, under the server (node) failure model and the link failure model, respectively.
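To make this kind of measure concrete (a textbook illustration, not the paper's derivation for DCell): if a system contains n subsystems that fail independently at a constant rate λ, the expected time until the m-th subsystem becomes faulty is
\[
\mathbb{E}\,[T_{(m)}] \;=\; \frac{1}{\lambda}\sum_{i=0}^{m-1}\frac{1}{n-i},
\qquad\text{e.g. } n=4,\ m=2:\quad \mathbb{E}[T_{(2)}] = \frac{1}{\lambda}\Bigl(\tfrac14+\tfrac13\Bigr)=\frac{7}{12\lambda}.
\]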
Reliability evaluation of an interconnection network is of great significance for the construction and maintenance of the network. The super (edge-)connectivity, restricted (edge-)connectivity, and cyclic (edge-)connectivity are important parameters for evaluating network reliability. Compared with the hypercube Qn, the folded Petersen network Pn has better properties, including strong connectivity and symmetry. Moreover, it has a smaller diameter and more vertices than Qn with the same degree and connectivity. In this paper, based on the symmetric properties of Pn, we prove that Pn is super (edge-)connected, super restricted edge-connected, and super cyclically edge-connected. In addition, we obtain the restricted edge-connectivity and the cyclic edge-connectivity of Pn.
A system of simultaneously triggered clocks is designed to be stabilizing: if the clock values ever differ, the system is guaranteed to converge to a state where all clock values are identical, and they are subsequently maintained to be identical. For an N-clock system, the design uses N registers of 2 log N bits each and guarantees convergence to identical values within N^2 "triggers".