Software reliability is a critical factor in ensuring the quality and dependability of software systems. Historically, software errors were primarily attributed to coding mistakes. However, recent insights have revealed that human error is a dynamic phenomenon influenced by factors such as learning processes and fatigue. This paper presents an approach that incorporates tester fatigue into the debugging process, thereby supporting the development of more realistic software reliability growth models (SRGMs). The proposed approach utilizes S-shaped learning curves and an exponential fatigue function to account for the dynamic nature of human error. Additionally, interdependencies between faults are considered in the analysis. The quality, predictive capabilities, and accuracy of the proposed models are rigorously evaluated using three well-established fit criteria: mean squared error (MSE), mean absolute error (MAE), and the coefficient of determination (R²), applied to two failure datasets. By integrating the fatigue factor into the proposed models, a more comprehensive representation of software reliability dynamics is provided. This research contributes to the advancement of software reliability analysis and enhances the assessment of software system dependability.
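The three fit criteria named above are standard and easy to state concretely. The sketch below, with illustrative failure counts rather than the paper's two datasets, shows how MSE, MAE and R² would be computed for an SRGM's predictions.

```python
# Hypothetical sketch of the three fit criteria: MSE, MAE and R^2, applied to
# observed vs. model-predicted cumulative failure counts.
# The sample data below is illustrative, not from the paper's datasets.

def mse(obs, pred):
    return sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs)

def mae(obs, pred):
    return sum(abs(o - p) for o, p in zip(obs, pred)) / len(obs)

def r_squared(obs, pred):
    mean_obs = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1 - ss_res / ss_tot

observed  = [5, 12, 20, 26, 30]   # cumulative faults detected (illustrative)
predicted = [6, 11, 19, 27, 31]   # SRGM mean-value function output (illustrative)
print(mse(observed, predicted), mae(observed, predicted), r_squared(observed, predicted))
```

A lower MSE/MAE and an R² close to 1 indicate a better fit of the SRGM's mean-value function to the failure data.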
Catastrophic forgetting (CF) poses one of the most important challenges for neural networks in continual learning. During training, many current approaches replay previous data, which deviates from the constraints of an optimal continual learning system. In this work, an optimisation-enabled continually evolved classifier is used to address CF. Moreover, a few-shot continual learning model is exploited to mitigate CF. Initially, images undergo pre-processing using Contrast Limited Adaptive Histogram Equalisation (CLAHE). Then, the pre-processed outputs are utilised for classification through Continually Evolved Classifiers based on few-shot incremental learning. Here, the initial training is done using a CNN (Convolutional Neural Network) model, and then the pseudo incremental learning phase is performed. Furthermore, to enhance the performance, an optimisation approach, called Serial Exponential Sand Cat Swarm Optimisation (SExpSCSO), is developed. The SExpSCSO algorithm modifies Sand Cat Swarm Optimisation by incorporating the serial exponential weighted moving average concept. The proposed SExpSCSO is applied to train the continually evolved classifier by optimising its weights and thus improves the classifier's performance. Finally, the experimental analysis reveals that the adopted system achieved a maximal accuracy of 0.677, a maximal specificity of 0.592, a maximal precision of 0.638, a recall of 0.716 and an F-measure of 0.675.
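The serial exponential weighted moving average that SExpSCSO folds into Sand Cat Swarm Optimisation can be illustrated in isolation. The sketch below applies an EWMA serially to one coordinate of a hypothetical search agent's trajectory; the smoothing factor and positions are assumptions, and the full SCSO update is not reproduced.

```python
# Minimal sketch of the exponentially weighted moving average (EWMA) idea the
# abstract folds into Sand Cat Swarm Optimisation. Applying the EWMA serially
# to a candidate's position history smooths its trajectory; the smoothing
# factor `alpha` and the toy positions are assumptions for illustration.

def ewma(values, alpha=0.5):
    smoothed = values[0]
    out = [smoothed]
    for v in values[1:]:
        smoothed = alpha * v + (1 - alpha) * smoothed
        out.append(smoothed)
    return out

positions = [4.0, 2.0, 8.0, 6.0]   # one coordinate of a search agent over iterations
print(ewma(positions))             # serially smoothed trajectory
```

In the hybrid algorithm, such smoothed quantities would feed back into the swarm's position update rather than being reported directly.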
Identification, by algorithmic devices, of grammars for languages from positive data is a well-studied problem. In this paper we are mainly concerned with the learnability of indexed families of uniformly recursive languages. Mukouchi introduced the notions of minimal and reliable minimal concept inference from positive data. He left open the question of whether every indexed family of uniformly recursive languages that is minimally inferable is also reliably minimally inferable. We show that this is not the case.
This work evaluates the capability of a spiking cerebellar model embedded in different loop architectures (recurrent, forward, and forward&recurrent) to control a robotic arm (three degrees of freedom) using a biologically-inspired approach. The implemented spiking network relies on synaptic plasticity (long-term potentiation and long-term depression) to adapt and cope with perturbations in the manipulation scenario: changes in the dynamics and kinematics of the simulated robot. Furthermore, the effect of several degrees of noise in the cerebellar input pathway (mossy fibers) was assessed depending on the employed control architecture. The implemented cerebellar model managed to adapt in all three control architectures to different dynamics and kinematics, providing corrective actions for more accurate movements. According to the obtained results, coupling both control architectures (forward&recurrent) combines the benefits of the two and leads to higher robustness against noise.
Spiking Neural Networks (SNN) were shown to be suitable tools for the processing of spatio-temporal information. However, due to their inherent complexity, the formulation of efficient supervised learning algorithms for SNN is difficult and remains an important problem in the research area. This article presents SPAN — a spiking neuron that is able to learn associations of arbitrary spike trains in a supervised fashion allowing the processing of spatio-temporal information encoded in the precise timing of spikes. The idea of the proposed algorithm is to transform spike trains during the learning phase into analog signals so that common mathematical operations can be performed on them. Using this conversion, it is possible to apply the well-known Widrow–Hoff rule directly to the transformed spike trains in order to adjust the synaptic weights and to achieve a desired input/output spike behavior of the neuron. In the presented experimental analysis, the proposed learning algorithm is evaluated regarding its learning capabilities, its memory capacity, its robustness to noisy stimuli and its classification performance. Differences and similarities of SPAN regarding two related algorithms, ReSuMe and Chronotron, are discussed.
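SPAN's central transformation, spike trains into analog signals so that the Widrow–Hoff rule applies, can be sketched as follows. The exponential kernel, time grid and spike times are illustrative assumptions (SPAN itself uses alpha-shaped kernels), so this is a minimal sketch of the idea, not the published algorithm.

```python
# Sketch of SPAN's core idea: convolve spike trains with a kernel to obtain
# analog traces, then apply the Widrow-Hoff (delta) rule to those traces.
# The exponential kernel, time grid and spike times are illustrative
# assumptions; SPAN itself uses alpha-shaped kernels.
import math

def trace(spike_times, t_grid, tau=5.0):
    """Analog trace: sum of decaying exponentials triggered at each spike."""
    return [sum(math.exp(-(t - s) / tau) for s in spike_times if s <= t)
            for t in t_grid]

t_grid = [float(t) for t in range(0, 50)]
x   = trace([10.0, 20.0], t_grid)   # input spike train -> analog signal
y_d = trace([25.0], t_grid)         # desired output spike train
y_a = trace([40.0], t_grid)         # actual output spike train

w, eta = 0.5, 0.01
# Widrow-Hoff on the transformed (analog) signals, integrated over time:
dw = eta * sum(xi * (d - a) for xi, d, a in zip(x, y_d, y_a))
w += dw   # the weight moves so the actual spike approaches the desired time
```

Because the actual output spike here fires later than desired, the integrated error pushes the weight upward, illustrating how the rule shapes precise output spike timing.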
We study the computational capabilities of a biologically inspired neural model where the synaptic weights, the connectivity pattern, and the number of neurons can evolve over time rather than stay static. Our study focuses on the mere concept of plasticity of the model, so the nature of the updates is assumed to be unconstrained. In this context, we show that the so-called plastic recurrent neural networks (RNNs) are capable of precisely the same super-Turing computational power as static analog neural networks, irrespective of whether their synaptic weights are modeled by rational or real numbers, and moreover, irrespective of whether their patterns of plasticity are restricted to bi-valued updates or expressed by any other more general form of updating. Consequently, the incorporation of only bi-valued plastic capabilities in a basic model of RNNs suffices to break the Turing barrier and achieve the super-Turing level of computation. The consideration of more general mechanisms of architectural plasticity or of real synaptic weights does not further increase the capabilities of the networks. These results support the claim that the general mechanism of plasticity is crucially involved in the computational and dynamical capabilities of biological neural networks. They further show that the super-Turing level of computation reflects in a suitable way the capabilities of brain-like models of computation.
Classification and sequence learning are relevant capabilities used by living beings to extract complex information from the environment for behavioral control. The insect world is full of examples where the presentation time of specific stimuli shapes the behavioral response. On the basis of previously developed neural models, inspired by Drosophila melanogaster, a new architecture for classification and sequence learning is here presented under the perspective of the Neural Reuse theory. Classification of relevant input stimuli is performed through resonant neurons, activated by the complex dynamics generated in a lattice of recurrent spiking neurons modeling the insect Mushroom Bodies neuropile. The network devoted to context formation is able to reconstruct the learned sequence and also to trace the subsequences present in the provided input. A sensitivity analysis to parameter variation and noise is reported. Experiments on a roving robot demonstrate the capabilities of the architecture when used as a neural controller.
Neurons are the fundamental units of the brain and nervous system. Developing a good model of human neurons is very important not only to neurobiology but also to computer science and many other fields. The McCulloch and Pitts neuron model is the most widely used neuron model, but it has long been criticized as oversimplified in view of the properties of real neurons and the computations they perform. On the other hand, it has become widely accepted that dendrites play a key role in the overall computation performed by a neuron. However, the modeling of dendritic computations and the assignment of the right synapses to the right dendrite remain open problems in the field. Here, we propose a novel dendritic neural model (DNM) that mimics the essence of the known nonlinear interactions among inputs to the dendrites. In the model, each input is connected to branches through a distance-dependent nonlinear synapse, and each branch performs a simple multiplication on its inputs. The soma then sums the weighted products from all branches and produces the neuron's output signal. We show that the rich nonlinear dendritic response and the powerful nonlinear neural computational capability, as well as many known neurobiological phenomena of neurons and dendrites, may be understood and explained by the DNM. Furthermore, we show that the model is capable of learning and developing an internal structure, such as the location and type of synapses in the dendritic branch, that is appropriate for a particular task, for example, the linearly nonseparable problem, a real-world benchmark problem (Glass classification), and the directional selectivity problem.
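The branch-multiplication and soma-summation structure described above can be sketched in a few lines. The sigmoid synapse parameters below are hand-picked assumptions chosen to reproduce an XOR-like (linearly nonseparable) response, not values learned by the paper's model.

```python
# Toy sketch of the dendritic neuron model (DNM) described above: each input
# passes through a sigmoid synapse, each branch multiplies its synaptic
# outputs, and the soma sums the branch products. Weights and thresholds are
# illustrative assumptions, not learned values from the paper.
import math

def synapse(x, w, theta):
    """Sigmoid synapse; w and theta determine connection type and strength."""
    return 1.0 / (1.0 + math.exp(-5.0 * (w * x - theta)))

def dnm(inputs, branches):
    """branches: one list of (w, theta) per input; soma sums branch products."""
    soma = 0.0
    for branch in branches:
        prod = 1.0
        for x, (w, theta) in zip(inputs, branch):
            prod *= synapse(x, w, theta)
        soma += prod
    return soma

branches = [[(1.0, 0.5), (-1.0, -0.5)],   # branch 1: excitatory then inhibitory
            [(-1.0, -0.5), (1.0, 0.5)]]   # branch 2: mirrored connections
# XOR-like response: mixed inputs excite the soma more than matched ones
print(dnm([1.0, 0.0], branches), dnm([1.0, 1.0], branches))
```

With these hand-picked parameters the soma responds strongly to (1, 0) and (0, 1) but weakly to (0, 0) and (1, 1), the kind of linearly nonseparable behavior a single McCulloch-Pitts unit cannot produce.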
Space and time are fundamental attributes of the external world. Deciphering the brain mechanisms involved in processing the surrounding environment is one of the main challenges in neuroscience. This is particularly challenging when situations change rapidly over time because of the intertwining of spatial and temporal information. However, understanding the cognitive processes that allow coping with dynamic environments is critical, as the nervous system evolved in them due to the pressure for survival. Recent experiments have revealed a new cognitive mechanism called time compaction. According to it, a dynamic situation is represented internally by a static map of the future interactions between the perceived elements (including the subject itself). The salience of predicted interactions (e.g. collisions) over other spatiotemporal and dynamic attributes during the processing of time-changing situations has been shown in humans, rats, and bats. Motivated by this ubiquity, we study an artificial neural network to explore the minimal conditions necessary to represent a dynamic stimulus through the future interactions present in it. We show that, under general and simple conditions, the neural activity linked to the predicted interactions emerges to encode the perceived dynamic stimulus. Our results show that this encoding improves learning, memorization and decision making when dealing with stimuli with impending interactions compared to no-interaction stimuli. These findings are in agreement with theoretical and experimental results that have supported time compaction as a novel and ubiquitous cognitive process.
The behavioral effects of a standardized extract from Panax ginseng roots (G115), of a standardized extract from Ginkgo biloba leaves (GK501) and of their combination (PHL-00701) (Gincosan®) were examined in experiments on rats with undisturbed memory and on rats with experimentally-impaired memory (by alcohol or by muscarinic- and dopamine-receptor antagonists), using methods for active avoidance (shuttle-box) and passive avoidance (step-down and step-through). On multiple administration G115, GK501 and PHL-00701 exerted favorable effects on learning and memory. These effects varied with the dose and administration schedules, with the rat strain and with the behavioral method. Based on earlier results, we discuss the role of changes in brain biogenic amines induced by the extracts in their mechanism of action. The present results allow for ranking G115, GK501 and their combination PHL-00701 (Gincosan®) among cognition-enhancing (nootropic) drugs.
The precautionary principle was included in 1992 in the Rio Declaration and is part of important international agreements such as the Convention on Biological Diversity. Yet, it is not a straightforward guide for environmental policy because many interpretations are possible, as shown in this paper. Its different economic versions can result in conflicting policy recommendations about resource conservation. The principle does not always favor (natural) resource conservation (e.g., biodiversity conservation), although it has been adopted politically on the assumption that it does. The principle's consequences are explored for biodiversity conservation when the introduction of new genotypes is possible.
Optimal compression and decompression of fractal images can be performed by out-of-equilibrium stochastic systems which exhibit a learning behaviour. We show how stochastic systems of this type are able to learn the structure of classical fractal images in a simple situation.
In information processing systems for classification and regression tasks, global parameters are often introduced to balance the prior expectation about the processed data and the emphasis on reproducing the training data. Since over-emphasizing either of them leads to poor generalization, optimal global parameters are needed. Conventionally, a time-consuming cross-validation procedure is used. Here we introduce a novel approach to this problem, based on the Green's function. All estimations can be made empirically and hence can be easily extended to more complex systems. The method is fast since it does not require the validation step. Its performance on benchmark data sets is very satisfactory.
This paper is aimed at 3D object understanding from 2D images, including articulated objects in an active vision environment, using interactive and internet virtual reality techniques. Generally speaking, an articulated object can be divided into two portions: a main rigid portion and an articulated portion. It is more complicated than a "rigid" object in that the relative positions, shapes or angles between the main portion and the articulated portion have essentially infinite variations, in addition to the infinite variations of each individual rigid portion due to orientations, rotations and topological transformations. A new method generalized from linear combination is employed to investigate such problems. It uses very few learning samples, and can describe, understand, and recognize 3D articulated objects while the object's status is being changed in an active vision environment.
This paper proposes a general formalism for representation, inference and learning with general hybrid Bayesian networks in which continuous and discrete variables may appear anywhere in a directed acyclic graph. The formalism fuzzifies a hybrid Bayesian network into two alternative forms: the first form replaces each continuous variable in the given directed acyclic graph (DAG) by a partner discrete variable and adds a directed link from the partner discrete variable to the continuous one. The mapping between the two variables is not crisp quantization but is approximated (fuzzified) by a conditional Gaussian (CG) distribution. The CG model is equivalent to a fuzzy set, but no fuzzy logic formalism is employed. The conditional distribution of a discrete variable given its discrete parents is still assumed to be multinomial as in discrete Bayesian networks. The second form only replaces each continuous variable whose descendants include discrete variables by a partner discrete variable and adds a directed link from that partner discrete variable to the continuous one. The dependence between the partner discrete variable and the original continuous variable is approximated by a CG distribution, but the dependence between a continuous variable and its continuous and discrete parents is approximated by a conditional Gaussian regression (CGR) distribution. Obviously, the second form is a finer approximation, but restricted to CGR models, and requires more complicated inference and learning algorithms. This results in two general approximate representations of general hybrid Bayesian networks, which are called here the fuzzy Bayesian network (FBN) form-I and form-II. For the two forms of FBN, general exact inference algorithms exist, which are extensions of the junction tree inference algorithm for discrete Bayesian networks.
Learning fuzzy Bayesian networks from data differs from learning purely discrete Bayesian networks: not only are all the newly converted discrete variables latent in the data, but the number of discrete states for each of these variables, as well as the CG or CGR distribution of each continuous variable given its partner discrete parent or its combined continuous and discrete parents, must also be determined.
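The CG "fuzzification" of a continuous variable can be illustrated with a toy partner discrete variable. The states, means and variances below are assumptions; the point is that the membership of a continuous value in the discrete states is soft, unlike crisp quantization.

```python
# Illustrative sketch of the conditional Gaussian (CG) fuzzification described
# above: a continuous variable X gets a partner discrete variable D, and
# P(x | D=d) is a Gaussian whose parameters depend on d. The states, means,
# variances and uniform prior below are assumptions for illustration.
import math

def gaussian_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Partner discrete variable D with three states:
cg = {"low": (0.0, 1.0), "mid": (5.0, 1.0), "high": (10.0, 1.0)}
prior = {"low": 1 / 3, "mid": 1 / 3, "high": 1 / 3}

def posterior(x):
    """Soft (fuzzy) membership of x in each discrete state via Bayes' rule."""
    joint = {d: prior[d] * gaussian_pdf(x, *cg[d]) for d in cg}
    z = sum(joint.values())
    return {d: p / z for d, p in joint.items()}

print(posterior(4.0))   # mass concentrates on "mid", unlike crisp quantization
```

A value between two state means would split its posterior mass between them, which is exactly the soft mapping that distinguishes the CG model from hard discretization.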
This paper is concerned with an application of Hidden Markov Models (HMMs) to the generation of shape boundaries from image features. In the proposed model, shape classes are defined by sequences of "shape states", each of which has a probability distribution of expected image feature types (feature "symbols"). The tracking procedure uses a generalization of the well-known Viterbi method, replacing its search with a type of "beam-search" that allows the procedure, at any time, to consider less likely features (symbols) as well as to search for an instantiable optimal state sequence. We have evaluated the model's performance on a variety of image and shape types and have also developed a new performance measure defined by an expected Hamming distance between predicted and observed symbol sequences. Results point to the use of this type of model for the depiction of shape boundaries when it is necessary to have accurate boundary annotations as, for example, occurs in cartography.
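The beam-search variant of Viterbi described above can be sketched as follows. The toy HMM (two shape states, two feature symbols) and the beam width are illustrative assumptions; with only two states the pruning step happens to keep everything, but it shows where less likely partial sequences would be dropped.

```python
# Sketch of a beam-pruned Viterbi search: at each step only the `beam` most
# probable partial state sequences are kept. The toy HMM below (two shape
# states, two feature symbols) is an assumption for illustration.

def viterbi_beam(obs, states, start_p, trans_p, emit_p, beam=2):
    # paths: {state: (probability of best partial sequence, that sequence)}
    paths = {s: (start_p[s] * emit_p[s][obs[0]], [s]) for s in states}
    for o in obs[1:]:
        candidates = {}
        for s in states:
            prob, prev = max(
                (paths[p][0] * trans_p[p][s] * emit_p[s][o], p) for p in paths)
            candidates[s] = (prob, paths[prev][1] + [s])
        # beam pruning: keep only the `beam` most probable partial sequences
        paths = dict(sorted(candidates.items(),
                            key=lambda kv: kv[1][0], reverse=True)[:beam])
    return max(paths.values())

states = ["line", "curve"]
start_p = {"line": 0.6, "curve": 0.4}
trans_p = {"line": {"line": 0.7, "curve": 0.3},
           "curve": {"line": 0.4, "curve": 0.6}}
emit_p = {"line": {"edge": 0.8, "corner": 0.2},
          "curve": {"edge": 0.3, "corner": 0.7}}
prob, seq = viterbi_beam(["edge", "corner", "edge"], states, start_p, trans_p, emit_p)
print(seq)
```

Widening the beam lets the search retain state sequences built on less likely symbols, which is the behavior the paper's tracking procedure relies on.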
We present an approach for the development of Language Understanding systems from a Transduction point of view. We describe the use of two types of automatically inferred transducers as the appropriate models for the understanding phase in dialog systems.
SUSTAIN (Supervised and Unsupervised STratified Adaptive Incremental Network) is a network model of human category learning. SUSTAIN initially assumes a simple category structure. If simple solutions prove inadequate and SUSTAIN is confronted with a surprising event (e.g. it is told that a bat is a mammal instead of a bird), SUSTAIN recruits an additional cluster to represent the surprising event. Newly recruited clusters are available to explain future events and can themselves evolve into prototypes/attractors/rules. SUSTAIN has expanded the scope of findings that models of human category learning can address. This paper extends SUSTAIN to account for both supervised and unsupervised learning data through a common mechanism. The modified model, uSUSTAIN (unified SUSTAIN), is successfully applied to human learning data comparing unsupervised and supervised learning performance.
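SUSTAIN's recruitment rule, keeping an item in its most similar cluster unless that cluster's label prediction is surprising, can be sketched minimally. The distance measure, learning rate and data below are simplified assumptions, not the model's actual similarity and recruitment equations.

```python
# Toy sketch of SUSTAIN-style cluster recruitment: an item joins its most
# similar cluster unless that cluster predicts the wrong label (a "surprising
# event"), in which case a new cluster is recruited. The similarity measure,
# learning rate and data are simplified assumptions.

def similarity(item, cluster):
    return -sum((a - b) ** 2 for a, b in zip(item, cluster["center"]))

def learn(item, label, clusters):
    if clusters:
        best = max(clusters, key=lambda c: similarity(item, c))
        if best["label"] == label:
            # no surprise: nudge the winning cluster toward the item
            best["center"] = [c + 0.1 * (x - c) for c, x in zip(best["center"], item)]
            return clusters
    # surprising event (or first item): recruit a new cluster
    clusters.append({"center": list(item), "label": label})
    return clusters

clusters = []
learn([1.0, 1.0], "bird", clusters)     # first item -> new cluster
learn([0.9, 1.1], "bird", clusters)     # fits the existing "bird" cluster
learn([1.0, 0.9], "mammal", clusters)   # a bat: surprising -> new cluster
print(len(clusters))
```

The bat-like third item lands near the "bird" cluster in feature space, but its mismatched label triggers recruitment, mirroring the surprising-event mechanism in the abstract.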
Multi-label learning (MLL) problems abound in many areas, including text categorization, protein function classification, and semantic annotation of multimedia. An issue that severely limits the applicability of many current machine learning approaches to MLL is the large scale of such problems, which has a strong impact on the computational complexity of learning. These problems are especially pronounced for approaches that transform MLL problems into a set of binary classification problems for which Support Vector Machines (SVMs) are used. On the other hand, the most efficient approaches to MLL, based on decision trees, have clearly lower predictive performance. We propose a hybrid decision tree architecture, where the leaves do not give multi-label predictions directly but rather utilize local SVM-based classifiers that do. A binary relevance architecture is employed in the leaves, where a binary SVM classifier is built for each of the labels relevant to that particular leaf. We use a broad range of multi-label datasets with a variety of evaluation measures to evaluate the proposed method against related and state-of-the-art methods, both in terms of predictive performance and time complexity. On almost every large classification problem, our hybrid architecture outperforms the competing approaches in terms of predictive performance, while its computational efficiency is significantly improved as a result of the integrated decision tree.
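The binary relevance scheme used in the leaves can be sketched independently of the tree. A nearest-centroid classifier stands in for the per-label SVM so the sketch stays self-contained; the labels and data are illustrative assumptions.

```python
# Sketch of the binary relevance decomposition used in the leaves: one binary
# classifier per relevant label. A nearest-centroid stand-in replaces the SVM
# so the sketch is self-contained; labels and data are illustrative.

class CentroidBinary:
    """Stand-in for a per-label binary SVM: class centroids, nearest wins."""
    def fit(self, X, y):
        pos = [x for x, t in zip(X, y) if t]
        neg = [x for x, t in zip(X, y) if not t]
        self.pos_c = [sum(col) / len(pos) for col in zip(*pos)]
        self.neg_c = [sum(col) / len(neg) for col in zip(*neg)]
        return self
    def predict(self, x):
        d = lambda c: sum((a - b) ** 2 for a, b in zip(x, c))
        return d(self.pos_c) < d(self.neg_c)

def binary_relevance_fit(X, Y, labels):
    """Train one binary model per label; Y[i] is the label set of X[i]."""
    return {l: CentroidBinary().fit(X, [l in y for y in Y]) for l in labels}

def binary_relevance_predict(models, x):
    return {l for l, m in models.items() if m.predict(x)}

X = [[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]]
Y = [{"sports"}, {"sports"}, {"politics", "world"}, {"politics"}]
models = binary_relevance_fit(X, Y, {"sports", "politics", "world"})
print(binary_relevance_predict(models, [0.95, 1.0]))
```

In the proposed architecture, each leaf would fit such per-label binary classifiers (SVMs in the paper) only on the examples and labels relevant to that leaf.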
This article discusses some intelligence aspects of Chinese characters. Some basic concepts of two-dimensional pattern representation and artificial intelligence, such as semantic networks, forward chaining, deduction and the resolution principle, are used to analyze and interpret the syntactic structure, representation, semantics and evolution of Chinese characters. The concept of degrees of ambiguity and the principle of new characters are investigated. It is found that Chinese characters are actually not only artistically elegant and culturally rich but also semantically meaningful and intelligently sound. Finally, some topics for future research, such as intelligent pattern recognition for Chinese characters, automatic learning and translation, and knowledge-based Chinese language understanding, are discussed.