This paper explores the application of machine learning techniques for acquiring utility functions in message routing within Vehicular Ad Hoc Networks (VANETs), aiming to enhance routing performance. Message routing in VANETs employs a store-carry-and-forward protocol, in which vehicles (nodes) holding messages traverse suitable trajectories and opportunistically forward messages upon entering the communication range of other nodes. The core of these protocols is context-dependent decision-making on whether to forward messages to an encountered node and, if so, which message to transmit. To increase message delivery to the destination with short delay, we introduce a method that builds upon Wu et al.’s prior work on utility function acquisition through the structure learning of Bayesian networks. Our proposed method incorporates two key innovations: (1) integrating the destination information of forwarded messages as an attribute within the Bayesian network and (2) extending the crossover operation from genetic algorithms to optimize the attribute order in structure learning. Through extensive experiments conducted on the ONE simulator, our method demonstrates superior utility compared to existing baselines, showcasing its effectiveness in improving VANET message routing.
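The crossover over attribute orders can be illustrated with a standard order-preserving crossover from genetic algorithms. The sketch below is generic (function name, data, and fixed cut points are illustrative, not taken from the paper): the child inherits a segment from one parent ordering and fills the remaining positions with the attributes in the relative order of the other parent.

```python
def order_crossover(parent_a, parent_b, cut1, cut2):
    """Order crossover (OX) over two attribute orderings (permutations).

    Copies parent_a[cut1:cut2] into the child, then fills the remaining
    positions (wrapping after the segment) with the attributes of parent_b
    in the order they appear there. Illustrative sketch, not the paper's
    exact operator.
    """
    child = [None] * len(parent_a)
    child[cut1:cut2] = parent_a[cut1:cut2]
    used = set(parent_a[cut1:cut2])
    fill = [attr for attr in parent_b if attr not in used]
    positions = list(range(cut2, len(child))) + list(range(0, cut1))
    for pos, attr in zip(positions, fill):
        child[pos] = attr
    return child
```

Because the child is always a permutation of the attributes, every offspring remains a valid attribute order for structure learning.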
In financial network models for assessing systemic risk, a general understanding and prediction of precisely how a single financial institution is associated with systemic risk from the network perspective remains lacking. This paper proposes a framework for predicting and assessing system crises by inferring the cause–effect relationships between financial institutions and the system state, structured in three steps: an assessment stage for the system state based on the mean-variance framework, a prediction stage based on a Bayesian network and a reliability stage based on the Markov process. Applying the framework to monthly returns of financial institutions suggests, on the basis of the Bayesian network analysis, the need to pay attention to the insurance and broker sectors while regulating the banking system. Moreover, we find that the measure has predictive power both during tranquil periods and during financial crisis periods. The results can be applied to derive interventions in financial crisis management with regard to systemic risk prediction and system state reliability.
Functional MRI (fMRI) is a brain signal with high spatial resolution, and visual cognitive processes and semantic information in the brain can be represented and obtained through fMRI. In this paper, we design single-graphic and matched/unmatched double-graphic visual stimulus experiments and collect fMRI data from 12 subjects to explore the brain’s visual perception processes. In the double-graphic stimulus experiment, we focus on “matching” as the high-level semantic information, and remove tail-to-tail conjunctions by designing a model to screen the matching-related voxels. Then, we perform Bayesian causal learning between fMRI voxels based on transfer entropy, establish a hierarchical Bayesian causal network (HBcausalNet) of the visual cortex, and use the model for visual stimulus image reconstruction. HBcausalNet achieves an average accuracy of 70.57% and 53.70% in the single- and double-graphic stimulus image reconstruction tasks, respectively, higher than HcorrNet and HcausalNet. The results show that the matching-related voxel screening and causality analysis method in this paper can extract the “matching” information in fMRI, obtain a direct causal relationship between matching information and fMRI, and explore the causal inference process in the brain. This suggests that our model can effectively extract high-level semantic information in brain signals and model effective connections and visual perception processes in the visual cortex.
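The transfer entropy used to orient causal links between voxels can be sketched for discrete time series as below. This is a generic first-order estimator of TE(X → Y), not the paper's exact implementation: it measures how much knowing x_t reduces uncertainty about y_{t+1} beyond what y_t already provides.

```python
from collections import Counter
from math import log2

def transfer_entropy(x, y):
    """First-order transfer entropy TE(X -> Y) for discrete series, in bits.

    TE = sum over (y_{t+1}, y_t, x_t) of
         p(y1, y0, x0) * log2( p(y1 | y0, x0) / p(y1 | y0) ),
    with all probabilities estimated by simple empirical counts.
    """
    n = len(y) - 1
    triples = Counter((y[t + 1], y[t], x[t]) for t in range(n))
    pairs_yx = Counter((y[t], x[t]) for t in range(n))
    pairs_yy = Counter((y[t + 1], y[t]) for t in range(n))
    singles_y = Counter(y[t] for t in range(n))
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_joint = c / n
        p_cond_full = c / pairs_yx[(y0, x0)]
        p_cond_self = pairs_yy[(y1, y0)] / singles_y[y0]
        te += p_joint * log2(p_cond_full / p_cond_self)
    return te
```

When y simply copies x with a one-step lag, TE(X → Y) is large while TE(Y → X) stays small, which is the asymmetry exploited to direct edges in a causal network.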
An extensive, in-depth study of diabetes risk factors (DBRF) is of crucial importance to prevent (or reduce) the chance of suffering from type 2 diabetes (T2D). The accumulation of electronic health records (EHRs) makes it possible to model nonlinear relationships between risk factors and diabetes. However, current DBRF research mainly focuses on qualitative analysis, and the inconsistency of physical examination items means that risk factors are likely to be lost, which motivates us to study a novel machine learning approach for risk model development. In this paper, we use Bayesian networks (BNs) to analyze the relationship between physical examination information and T2D, and to quantify the link between risk factors and T2D. Furthermore, with the quantitative analysis of DBRF, we adopt EHRs and propose a machine learning approach based on BNs to predict the risk of T2D. The experiments demonstrate that our approach leads to better predictive performance than the classical risk model.
As Bayesian networks become widely accepted as a normative formalism for diagnosis based on probabilistic knowledge, they are applied to increasingly larger problem domains. These large projects demand a systematic approach to handle the complexity in knowledge engineering. The needs include modularity in representation, distribution in computation, as well as coherence in inference. Multiply Sectioned Bayesian Networks (MSBNs) provide a distributed multiagent framework to address these needs.
According to the framework, a large system is partitioned into subsystems and represented as a set of related Bayesian subnets. To ensure exact inference, the partition of a large system into subsystems and the representation of subsystems must follow a set of technical constraints. How to satisfy these constraints for a given system may not be obvious to a practitioner. In this paper, we address three practical modeling issues.
Alzheimer’s disease is the third most expensive disease, after only cancer and cardiopathy. It is also the fourth leading cause of death in the elderly after cardiopathy, cancer, and cerebral palsy. The disease lacks specific diagnostic criteria, and at present there is still no definitive and effective means for preclinical diagnosis and treatment. It is the only disease among the world’s top ten fatal diseases that can be neither prevented nor cured, and it has now been recognized as a global issue. Computer-aided diagnosis of Alzheimer’s disease (AD) is at this stage mostly based on images. This project combines multi-modality MRI/PET imaging with clinical scales and uses deep learning-based computer-aided diagnosis for AD, improving the comprehensiveness and accuracy of diagnosis. The project uses a Bayesian model and a convolutional neural network to train on the experimental data. The experiment improves the existing LeNet-5 network model to design and build a 10-layer convolutional neural network, trained with a back-propagation algorithm based on a gradient descent strategy, and achieves good diagnostic results. The test results were evaluated through the calculation of sensitivity, specificity and accuracy, and good results were obtained.
This paper presents a new method for learning the structure of Bayesian networks. Broadly speaking, we leverage Branch and Bound (B&B) to derive the best Directed Acyclic Graphs (DAGs) that describe the structure of the network. Our contribution consists in introducing two main heuristics: the first selects the graph with the best score among those that contain fewer cycles; the second eliminates the shortest cycle from the selected graph, aiming to reduce the number of explored nodes. Our experimental study shows that the suggested proposal improves the results for multiple data sets, as confirmed by the reduction in computation time and memory overhead.
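The second heuristic, eliminating the shortest cycle from a candidate graph, can be sketched with a BFS search for a shortest directed cycle followed by removal of its closing edge. This is a generic sketch only; how the paper's score guides which edge of the cycle to drop is not reproduced here.

```python
from collections import deque

def shortest_cycle(nodes, edges):
    """Return the node list of a shortest directed cycle, or None if acyclic."""
    adj = {u: [] for u in nodes}
    for u, v in edges:
        adj[u].append(v)
    best = None
    for s in nodes:                      # BFS from each node back to itself
        parent = {s: None}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            if s in adj[u]:              # edge u -> s closes a shortest cycle via s
                cycle = [u]
                while parent[cycle[-1]] is not None:
                    cycle.append(parent[cycle[-1]])
                cycle.reverse()          # [s, ..., u]; the edge (u, s) closes it
                if best is None or len(cycle) < len(best):
                    best = cycle
                break
            for v in adj[u]:
                if v not in parent:
                    parent[v] = u
                    queue.append(v)
    return best

def break_shortest_cycle(nodes, edges):
    """Drop the closing edge of a shortest directed cycle, if any."""
    cycle = shortest_cycle(nodes, edges)
    if cycle is None:
        return edges
    return [e for e in edges if e != (cycle[-1], cycle[0])]
```

Repeating this until no cycle remains turns a cyclic candidate graph into a DAG while removing as few edges as this greedy choice allows.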
Batch processes are common in traffic video data processing, such as traffic video image processing and intelligent transportation, and batch processing can increase the efficiency of resource conservation. However, owing to limited research on traffic video data processing conditions, batch processing activities in this area remain minimally examined. In this study, we developed a workflow system by employing database functional dependency mining. The Bayesian network, a focus area of data mining, provides users with an intuitive means of expressing causality, and graph theory is also used in data mining. The proposed approach relies on relational database functional dependencies to remove redundant attributes, reduce interference, and select a property order. The restoration of the selective hidden naive Bayes (SHNB) affects this property order when it is used only once; to take the hidden naive Bayes (HNB) influence into account, it is introduced twice rather than as one pair of HNB. We additionally designed and implemented algorithms for mining dependencies from a batch traffic video processing execution log.
In the case of expert systems, when premises from two different rules argue for the same conclusion, there exist a number of methods for combining the certainties in the conclusion, obtained from each premise individually, to yield a total certainty in the conclusion. When these certainties are interpreted as probability values, it is necessary in all these methods to make assumptions of independence concerning the premises and conclusion. It is proven here that there does not exist a procedure which yields, for all probability functions, meaningful bounds for the probability of the conclusion based on two premises in terms of the probabilities of the conclusion based on each premise individually. Additionally, it is proven that there does not even exist a procedure which yields, for all probability functions, meaningful bounds for the probability of the conclusion based on two premises in terms of the probabilities of all one- and two-member combinations of the premises and the conclusion.
Finally, suggestions are made as to when to make assumptions of independence and when to take other alternatives.
Existing Bayesian network (BN) structure learning algorithms based on dynamic programming have high computational complexity and are difficult to apply to large-scale networks. Therefore, this paper proposes MIDP, a dynamic programming BN structure learning algorithm based on mutual information. The algorithm uses mutual information to build the maximum spanning tree and the M-order matrix, and introduces a penalty coefficient d into the matrix-based node removal strategy, so as to reduce the number of scoring calculations and the time consumption of the algorithm. Simulation results show that, compared with the DP, SMDP and MEDP algorithms, the MIDP algorithm reduces the number of score calculations and the time consumption while maintaining accuracy when an appropriate value of d is selected.
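The first stage, building a maximum spanning tree over pairwise mutual information, can be sketched as follows. This is a generic Chow–Liu-style sketch (empirical MI from discrete columns plus a Kruskal-style tree); the paper's M-order matrix and penalty coefficient d are not modeled.

```python
from collections import Counter
from math import log2

def mutual_information(col_i, col_j):
    """Empirical mutual information (bits) between two discrete data columns."""
    n = len(col_i)
    ci, cj = Counter(col_i), Counter(col_j)
    cij = Counter(zip(col_i, col_j))
    return sum((c / n) * log2(c * n / (ci[a] * cj[b]))
               for (a, b), c in cij.items())

def max_spanning_tree(n_vars, weights):
    """Kruskal-style maximum spanning tree; weights maps (i, j) -> MI."""
    parent = list(range(n_vars))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    tree = []
    for (i, j), w in sorted(weights.items(), key=lambda kv: -kv[1]):
        ri, rj = find(i), find(j)
        if ri != rj:                       # keep edge only if it joins components
            parent[ri] = rj
            tree.append((i, j))
    return tree
```

Strongly dependent variable pairs end up adjacent in the tree, which is what makes the tree a useful skeleton for ordering nodes before the dynamic programming search.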
Underground utility tunnels accommodate various types of urban lifelines and are of great significance for improving citizens’ living standards. With their rapid development, large-scale underground utility tunnel systems are gradually becoming the operational lifeblood of China’s large cities. Currently, most underground utility tunnel risks are estimated and analyzed from a static perspective, and the analysis results are one-sided. This study proposes a dynamic risk evaluation framework for risk assessment and sensitivity analysis based on a Bayesian network. Combined with a case study of groundwater-related risk accidents in an electric power tunnel, operation and maintenance data of Beijing Future Science and Technology City from 2010 to 2018 are collected, and the conditional probabilities of the Bayesian network nodes are learned using the K2 algorithm. The overall evolution process of groundwater tunnel accidents, from beginning to end, is clearly described and displayed. Through sensitivity analysis and critical path analysis, the critical points of an accident and the probabilities of risk occurrence are identified and predicted. The proposed framework could facilitate underground utility tunnel management by controlling risk sources, mitigating risk damage and reducing risk losses.
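The K2 step can be illustrated with the classic Cooper–Herskovits score and a greedy parent search. This is a textbook sketch of K2 with an illustrative data layout (rows as tuples indexed by variable number), not the study's implementation:

```python
from collections import defaultdict
from math import lgamma

def k2_score(child, parents, data, arity):
    """Log K2 score of `child` given a candidate parent set (Cooper & Herskovits)."""
    r = arity[child]
    counts = defaultdict(lambda: defaultdict(int))
    for row in data:
        config = tuple(row[p] for p in parents)
        counts[config][row[child]] += 1
    score = 0.0
    for config, child_counts in counts.items():
        n_ij = sum(child_counts.values())
        score += lgamma(r) - lgamma(n_ij + r)   # log (r-1)! / (N_ij + r - 1)!
        for n_ijk in child_counts.values():
            score += lgamma(n_ijk + 1)          # log N_ijk!
    return score

def k2_parents(child, predecessors, data, arity, max_parents=2):
    """Greedily add the predecessor that most improves the K2 score."""
    parents, best = [], k2_score(child, [], data, arity)
    while len(parents) < max_parents:
        gains = [(k2_score(child, parents + [c], data, arity), c)
                 for c in predecessors if c not in parents]
        if not gains or max(gains)[0] <= best:
            break
        best, chosen = max(gains)
        parents.append(chosen)
    return parents
```

Once the parent sets are fixed, the same counts normalized per parent configuration give the conditional probability tables of the network nodes.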
It is crucial for enterprises to clearly identify user needs during the process of formulating product design improvement plans. Therefore, it is essential to comprehensively and accurately identify user needs, explore the reasons behind the emergence of these needs, and incorporate user opinions into the process of product design improvement. A method is proposed to comprehensively and accurately capture user requirements and address the challenge of identifying the underlying causes of user requirements. This method utilizes online comments and operational data to identify user requirements and their influencing factors. First, text sentiment analysis techniques are employed to quantify the importance and performance values of product feature topics. Second, we construct a quadrant model to identify product features requiring improvement, and the original negative comments related to these features are traced. However, the quadrant model alone is insufficient to reflect specific product issues that users are concerned about. Therefore, a functional structure model based on product issues is designed to filter and identify factors that influence user requirements using operational data. Finally, a Bayesian network inference approach is utilized to identify the key influencing factors on user requirements, enabling analysis of the causes behind user requirements and the proposal of product design improvement strategies. The feasibility and effectiveness of the proposed method are validated through experiments conducted on heavy-duty truck data. By analyzing the original negative comments related to the power characteristics, specific user demands regarding the insufficient power of the product were identified, such as “obviously insufficient power when climbing slopes” and other issues. 
Based on the vehicle power system functional structure model, combined with expert knowledge and operational data, factors related to the state of parts and user behavior that may affect “insufficient vehicle power” were identified. Based on the analysis results, suggestions were made to improve the engine intake air temperature control strategy and to enhance vehicle performance by promoting correct user behavior through informational campaigns.
In software architecture design, we explore design alternatives and make decisions about adoption or rejection of a design from a web of complex and often uncertain information. Different architectural design decisions may lead to systems that satisfy the same set of functional requirements but differ in certain quality attributes. In this paper, we propose a Bayesian Network based approach to rational architectural design. Our Bayesian Network helps software architects record and make design decisions. We can perform both qualitative and quantitative analysis over the Bayesian Network to understand how the design decisions influence system quality attributes, and to reason about rational design decisions. We use the KWIC (Key Word In Context) example to illustrate the principles of our approach.
Context: Companies must make a paradigm shift in which both short- and long-term value aspects are employed to guide their decision-making. Such need is pressing in innovative industries, such as ICT, and is the core of Value-based Software Engineering (VBSE).
Objective: This paper details three case studies where value estimation models using Bayesian Network (BN) were built and validated. These estimation models were based upon value-based decisions made by key stakeholders in the contexts of feature selection, test cases execution prioritization, and user interfaces design selection.
Methods: All three case studies were carried out according to a framework called VALUE — improVing decision-mAking reLating to software-intensive prodUcts and sErvices development. This framework includes a mixed-methods approach, comprising several steps to build and validate company-specific value estimation models. The building process uses as input the key stakeholders’ decisions (gathered using the Value tool), plus additional input from the key stakeholders.
Results: Three value estimation BN models were built and validated, and the feedback received from the participating stakeholders was very positive.
Conclusions: We detail the building and validation of three value estimation BN models, using a combination of data from past decision-making meetings and also input from key stakeholders.
Multiple diagnosis methods using Bayesian networks are rooted in numerous research projects about model-based diagnosis. Some of this research exploits probabilities to make a diagnosis. Many Bayesian network applications are used for medical diagnosis or for the diagnosis of technical problems in small or moderately large devices. This paper explains in detail the advantages of using Bayesian networks as graphic probabilistic models for diagnosing complex devices, and then compares such models with other probabilistic models that may or may not use Bayesian networks.
Bayesian networks are powerful tools for knowledge representation and reasoning over partial beliefs under uncertainty. In the last decade, Bayesian networks have been successfully applied to a variety of problem domains and many Bayesian networks have been established. In many real-world applications, each Bayesian network established may be a local model of the whole knowledge domain, and it is desirable to combine all local models into a global and more general representation. It is not realistic to expect domain experts to construct the global model manually because the domain is too broad, nor is it feasible to relearn the model, since the dataset may have been discarded or the whole domain may be distributed. Thus, constructing the global model by combining local models is a natural alternative. This paper concentrates on finding a method of combination that loses no information and requires no datasets, by capturing the graphical characterizations of global models. From the graphical perspective, this paper first captures two graphical characterizations that determine the skeleton and v-structures of global models. A simple algorithm is then derived from these characterizations to recover the underlying global model from multiple local models. A preliminary experiment demonstrates empirically that our algorithm is feasible.
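The two graphical characterizations center on the skeleton and v-structures of a DAG. A minimal sketch of extracting both from a single local model is shown below; this is the generic definition (a v-structure is A → C ← B with A and B non-adjacent), not the paper's combination algorithm.

```python
def skeleton_and_vstructures(nodes, arcs):
    """Return the skeleton (undirected edges) and v-structures of a DAG.

    arcs is a list of directed edges (u, v) meaning u -> v.
    A v-structure is a pair of parents of a common child that are
    themselves non-adjacent.
    """
    skeleton = {frozenset(arc) for arc in arcs}
    parents = {n: [] for n in nodes}
    for u, v in arcs:
        parents[v].append(u)
    vstructures = set()
    for child in nodes:
        ps = parents[child]
        for i in range(len(ps)):
            for j in range(i + 1, len(ps)):
                a, b = ps[i], ps[j]
                if frozenset((a, b)) not in skeleton:   # parents non-adjacent
                    vstructures.add((frozenset((a, b)), child))
    return skeleton, vstructures
```

Since two DAGs are Markov equivalent exactly when they share the same skeleton and v-structures, agreeing on these two objects is what allows local models to be merged without losing distributional information.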
Interval data are widely used in real applications to represent the values of quantities in uncertain situations. However, the implied probabilistic causal relationships among interval-valued variables cannot be represented and inferred by general Bayesian networks with point-based probability parameters. It is therefore desirable to extend the general Bayesian network with effective mechanisms for representing, learning and inferring the probabilistic causal relationships implied in interval data. In this paper, we define interval probabilities, bound-limited weak conditional interval probabilities and the probabilistic description, as well as the corresponding multiplication rules. Furthermore, we propose a method for learning the Bayesian network structure from interval data and an algorithm for the corresponding approximate inference. Experimental results show that our methods are feasible, and we conclude that the Bayesian network with interval probability parameters is an extension of the general Bayesian network.
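As a toy illustration of interval-valued probability arithmetic, a product of probability intervals can be written as below. This is plain interval arithmetic for independent factors, offered only to show the flavor of such multiplication rules; it is not the paper's bound-limited weak conditional rule.

```python
def interval_mul(p, q):
    """Product of two probability intervals (lo, hi), clipped to [0, 1].

    Assumes the point probabilities vary independently within their
    intervals; illustrative only.
    """
    return (p[0] * q[0], min(1.0, p[1] * q[1]))

def chain_product(intervals):
    """Chain-rule product over a sequence of interval-valued conditionals."""
    result = (1.0, 1.0)
    for iv in intervals:
        result = interval_mul(result, iv)
    return result
```

Note how uncertainty compounds along a chain: each factor widens the gap between the lower and upper bounds, which is why inference with interval parameters is typically approximate.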
To effectively intervene when cells are trapped in pathological modes of operation it is necessary to build models that capture relevant network structure and include characterization of dynamical changes within the system. The model must be of sufficient detail that it facilitates the selection of intervention points where pathological cell behavior arising from improper regulation can be stopped. What is known about this type of cellular decision-making is consistent with the general expectations associated with any kind of decision-making operation. If the result of a decision at one node is serially transmitted to other nodes, resetting their states, then the process may suffer from mechanistic inefficiencies of transmission or from blockage or activation of transmission through the action of other nodes acting on the same node. A standard signal-processing network model, Bayesian networks, can model these properties. This paper employs a Bayesian tree model to characterize conditional pathway logic and quantify the effects of different branching patterns, signal transmission efficiencies and levels of alternate or redundant inputs. In particular, it characterizes master genes and canalizing genes within the quantitative framework. The model is also used to examine what inferences about the network structure can be made when perturbations are applied to various points in the network.
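The effect of transmission efficiency along a signaling tree can be illustrated with a simple binary-state model in which each edge copies the parent's state with a fixed fidelity. All names and the fidelity value here are illustrative, not from the paper:

```python
def tree_signal_prob(children, fidelity, root='root'):
    """P(node matches the root's binary state) under noisy serial transmission.

    children maps a node to its child list; each edge relays the parent
    state correctly with probability `fidelity` and flips it otherwise.
    """
    probs = {}

    def visit(node, p_same):
        probs[node] = p_same
        for child in children.get(node, []):
            # child agrees with the root if the relay preserves an agreeing
            # state, or flips a disagreeing one
            visit(child, p_same * fidelity + (1.0 - p_same) * (1.0 - fidelity))

    visit(root, 1.0)
    return probs
```

Agreement with the root decays with depth, so a decision made at a master node is transmitted less reliably to distant downstream nodes, matching the mechanistic inefficiencies of serial transmission described above.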
Bayesian networks are known as a powerful model for knowledge representation and a computational architecture for reasoning under uncertainty. However, some obstacles remain in applying them to practical applications. Among them is learning the parameters, i.e., the conditional probability tables, from data. Much work in the field focuses on handling incomplete data, i.e. data with missing values. In this paper, we propose an approach for learning the parameters of Bayesian networks from soft data, whose values may be imprecise or uncertain and are given by beliefs or confidence scales. Theoretical justification of the approach is given, and experimental results demonstrate the effectiveness of our approach.
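Parameter learning from soft data can be sketched by replacing hard counts with belief-weighted fractional counts. The observation format below (a weight per parent/child value pair) is an illustrative assumption, not the paper's representation:

```python
from collections import defaultdict

def soft_cpt(soft_obs):
    """Estimate P(child | parent) from soft evidence.

    Each observation is a dict {(parent_val, child_val): weight} giving the
    observer's belief over value pairs instead of a single hard value.
    Fractional counts are accumulated and normalized per parent value.
    """
    counts = defaultdict(float)
    totals = defaultdict(float)
    for obs in soft_obs:
        for (parent_val, child_val), weight in obs.items():
            counts[(parent_val, child_val)] += weight
            totals[parent_val] += weight
    return {pair: counts[pair] / totals[pair[0]] for pair in counts}
```

With fully confident observations (weight 1.0 on one pair) this reduces to ordinary maximum-likelihood counting, so hard data is the special case of the soft-data estimator.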
In one procedure for finding the maximal prime decomposition of a Bayesian network or undirected graphical model, the first step is to create a minimal triangulation of the network. A common and straightforward way to do this is to create a triangulation that is not necessarily minimal and then thin it by removing excess edges. We show that the algorithm for thinning proposed in several previous publications is incorrect. A different version of this algorithm is available in the R package gRbase, but its correctness has not previously been proved. We prove that this version is correct and provide a simpler version, also with a proof. We compare the speed of the two corrected algorithms in three ways and find that asymptotically their speeds are the same; neither algorithm is consistently faster than the other, and in a computer experiment the algorithm used by gRbase is faster when the original graph is large, dense, and undirected, but usually slightly slower when it is directed.
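The thinning step can be illustrated with a naive brute-force version: repeatedly drop any fill-in edge whose removal leaves the graph chordal, using a maximum-cardinality-search chordality test. This sketch is for illustration only; it is neither the gRbase algorithm nor the corrected algorithms discussed here.

```python
def is_chordal(nodes, edges):
    """Chordality test via maximum cardinality search (Tarjan & Yannakakis)."""
    adj = {n: set() for n in nodes}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    weight = {n: 0 for n in nodes}
    pos, order = {}, []
    remaining = set(nodes)
    while remaining:                      # MCS: pick the most-connected-to-numbered node
        u = max(remaining, key=lambda n: weight[n])
        pos[u] = len(order)
        order.append(u)
        remaining.remove(u)
        for w in adj[u] & remaining:
            weight[w] += 1
    for v in order:                       # verify the reverse order is a PEO
        earlier = [u for u in adj[v] if pos[u] < pos[v]]
        if earlier:
            last = max(earlier, key=lambda u: pos[u])
            if any(w != last and w not in adj[last] for w in earlier):
                return False
    return True

def thin_triangulation(nodes, edges, fill_edges):
    """Greedy thinning: drop fill-in edges whose removal keeps the graph chordal."""
    edges = list(edges)
    changed = True
    while changed:
        changed = False
        for e in [f for f in fill_edges if f in edges]:
            trial = [x for x in edges if x != e]
            if is_chordal(nodes, trial):
                edges = trial
                changed = True
    return edges
```

On a 4-cycle triangulated with both diagonals, exactly one diagonal is removable: dropping either keeps the graph chordal, but dropping both would recreate the chordless cycle, which is precisely the kind of interaction a correct thinning algorithm must track.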