
  • Article (No Access)

    Bayesian Learning of Optimal Utility Functions for Message Routing in Vehicular Ad Hoc Networks

    This paper explores the application of machine learning techniques for acquiring utility functions in message routing within Vehicular Ad Hoc Networks (VANETs), aiming to enhance routing performance. Message routing in VANETs employs a store-carry-and-forward protocol, where vehicles (nodes) holding messages traverse suitable trajectories and opportunistically forward messages upon entering the communication range of other nodes. The core of these protocols is the context-dependent decision of whether to forward messages to an encountered node and, if so, which message to transmit. To improve message delivery to the destination with low delay, we introduce a method that builds on Wu et al.’s prior work on acquiring utility functions through the structure learning of Bayesian networks. Our proposed method incorporates two key innovations: (1) integrating the destination information of forwarded messages as an attribute of the Bayesian network, and (2) extending the crossover operation from genetic algorithms to optimize the attribute order used in structure learning. In extensive experiments on the ONE simulator, our method achieves higher utility than existing baselines, demonstrating its effectiveness in improving VANET message routing.
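
The abstract does not specify which crossover variant is extended, so the following is only a sketch of a standard order crossover (OX), a common way to recombine attribute orderings while keeping each child a valid permutation; the function name and parameters are illustrative, not the paper's API:

```python
import random

def order_crossover(parent_a, parent_b, rng=None):
    """Order crossover (OX): copy a contiguous slice from parent_a, then
    fill the remaining positions with parent_b's attributes in the order
    they appear in parent_b. The child is always a valid permutation."""
    rng = rng or random.Random()
    n = len(parent_a)
    i, j = sorted(rng.sample(range(n), 2))
    child = [None] * n
    child[i:j + 1] = parent_a[i:j + 1]             # keep a slice of parent A
    used = set(child[i:j + 1])
    fill = [x for x in parent_b if x not in used]  # B's order for the rest
    k = 0
    for pos in range(n):
        if child[pos] is None:
            child[pos] = fill[k]
            k += 1
    return child
```

Because the child is guaranteed to be a permutation of the attributes, it can be scored directly by running order-constrained structure learning on it.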

  • Article (No Access)

    LEARNING AND EVALUATING BAYESIAN NETWORK EQUIVALENCE CLASSES FROM INCOMPLETE DATA

    In this paper, we propose a new method, named Greedy Equivalence Search-Expectation Maximization (GES-EM), for learning Bayesian networks from incomplete data. Our method extends the recently proposed Greedy Equivalence Search (GES) algorithm [10] to handle incomplete data. To evaluate the quality of the learned networks, we use the expected Bayesian Information Criterion (BIC) scoring function. In addition, we propose a new structural evaluation criterion, SEC, which is more suitable than existing structural evaluation criteria because it compares learned networks to the generating ones through Completed Partially Directed Acyclic Graphs (CPDAGs). Experimental results show that the GES-EM algorithm yields more accurate structures than the standard Alternating Model Selection-Expectation Maximization (AMS-EM) algorithm [15].
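
The expected BIC used by GES-EM averages the score over completions of the missing data; as a minimal sketch of the underlying ingredient, the plain BIC score of a discrete network structure on complete data can be computed as below. The data encoding and function name are illustrative, not the paper's implementation:

```python
import math
from collections import Counter

def bic_score(data, parents):
    """BIC score of a discrete Bayesian network structure on complete data.
    `data` is a list of dicts mapping variable -> value; `parents` maps
    each variable to a tuple of its parent variables."""
    n = len(data)
    score = 0.0
    for var, pa in parents.items():
        # joint counts of (parent configuration, value) and of configurations
        joint = Counter((tuple(r[p] for p in pa), r[var]) for r in data)
        cfg = Counter(tuple(r[p] for p in pa) for r in data)
        # maximized log-likelihood term
        for (pa_cfg, _), c in joint.items():
            score += c * math.log(c / cfg[pa_cfg])
        # penalty: free parameters = (#states - 1) * #parent configurations
        states = len({r[var] for r in data})
        score -= 0.5 * math.log(n) * (states - 1) * max(len(cfg), 1)
    return score
```

On data where Y copies X, the structure with the edge X → Y scores higher than the empty structure, as expected.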

  • Article (No Access)

    BAYESIAN NETWORK WITH INTERVAL PROBABILITY PARAMETERS

    Interval data are widely used in real applications to represent the values of quantities in uncertain situations. However, the probabilistic causal relationships implied among interval-valued variables cannot be represented or inferred by general Bayesian networks with point-valued probability parameters. It is therefore desirable to extend the general Bayesian network with effective mechanisms for representing, learning, and inferring the probabilistic causal relationships implied in interval data. In this paper, we define interval probabilities, bound-limited weak conditional interval probabilities, and their probabilistic description, as well as the corresponding multiplication rules. Furthermore, we propose a method for learning the Bayesian network structure from interval data and an algorithm for the corresponding approximate inference. Experimental results show that our methods are feasible, and we conclude that the Bayesian network with interval probability parameters is an extension of the general Bayesian network.
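
The paper's bound-limited weak conditional interval probabilities and their multiplication rules are not reproduced in the abstract; a minimal sketch of ordinary interval multiplication for probabilities, which such rules build on, might look like this (the class and its interface are hypothetical):

```python
class IntervalProb:
    """A probability known only to lie in [lo, hi] within [0, 1]."""

    def __init__(self, lo, hi):
        if not (0.0 <= lo <= hi <= 1.0):
            raise ValueError("need 0 <= lo <= hi <= 1")
        self.lo, self.hi = lo, hi

    def __mul__(self, other):
        # For nonnegative intervals the product bounds are simply the
        # products of the bounds: [a, b] * [c, d] = [a*c, b*d].
        return IntervalProb(self.lo * other.lo, self.hi * other.hi)

    def __repr__(self):
        return f"[{self.lo:.3f}, {self.hi:.3f}]"
```

Chaining such multiplications along a path in the network yields interval-valued joint probabilities, which is where additional bounding rules like those in the paper become necessary to keep the intervals tight.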

  • Article (No Access)

    Learning Markov Network Structures Constrained by Context-Specific Independences

    This work focuses on learning the structure of Markov networks from data. Markov networks are parametric models for compactly representing complex probability distributions. These models consist of a structure and numerical weights, where the structure describes the independences that hold in the distribution. Depending on the goal of structure learning, algorithms can be divided into density estimation algorithms, which learn a structure for answering inference queries, and knowledge discovery algorithms, which learn a structure for describing independences qualitatively. The latter algorithms have an important limitation in describing independences because they use a single graph, a coarse-grained representation that cannot capture all independences; for instance, context-specific independences cannot be described by a single graph. To overcome this limitation, this work proposes an alternative representation, the canonical model, together with CSPC, a novel knowledge discovery algorithm that learns canonical models using context-specific independences as constraints. In an extensive empirical evaluation, CSPC learns more accurate structures than state-of-the-art density estimation and knowledge discovery algorithms. Moreover, for answering inference queries, our approach obtains results competitive with density estimation algorithms, significantly outperforming knowledge discovery algorithms.
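
A context-specific independence holds only for particular values of the conditioning variables, which is why a single graph cannot express it. A minimal check of one such independence in a conditional probability table might look like this (the CPT encoding and function name are assumptions for illustration, not CSPC's internals):

```python
def has_csi(cpt, context):
    """Check one context-specific independence: for the given value of the
    context variable Z, is P(Y | X, Z=context) the same for every X?
    `cpt` maps (x, z) -> a distribution over Y (a dict of probabilities)."""
    dists = [dist for (x, z), dist in cpt.items() if z == context]
    return all(d == dists[0] for d in dists)
```

In the test below, Y is independent of X in the context Z = 0 but depends on X when Z = 1, so no single graph over {X, Y, Z} can record both facts.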

  • Chapter (No Access)

    Learning Dynamic Bayesian Networks for Analyzing Causal Relationship Between Macro-Economic Index

    Dynamic Bayesian networks are a powerful tool for modeling multivariate stochastic processes. Existing methods for learning dynamic Bayesian network structure have low efficiency and reliability and cannot determine the causal direction of every edge. In this paper, an effective, reliable, and practical method for learning dynamic Bayesian network structure is presented to discover dynamic causal knowledge from data. First, a maximal likelihood tree is built from the data. Then a causal tree is obtained by orienting the edges of the maximal likelihood tree, and the variables are sorted according to the causal tree. Finally, a dynamic Bayesian network structure is established from the causal order of the variables using a local search-and-score method that finds the parent nodes of each node.
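
The abstract does not define its maximal likelihood tree construction; the standard Chow-Liu approach, a maximum-weight spanning tree under empirical mutual information, is a plausible reading and can be sketched as follows. Function names are illustrative, and the edge-orientation and dynamic-network steps are omitted:

```python
import math
from collections import Counter
from itertools import combinations

def mutual_information(data, a, b):
    """Empirical mutual information between variables a and b,
    where `data` is a list of dicts mapping variable -> value."""
    n = len(data)
    pa = Counter(r[a] for r in data)
    pb = Counter(r[b] for r in data)
    pab = Counter((r[a], r[b]) for r in data)
    mi = 0.0
    for (x, y), c in pab.items():
        mi += (c / n) * math.log(c * n / (pa[x] * pb[y]))
    return mi

def max_likelihood_tree(data, variables):
    """Chow-Liu style: maximum-weight spanning tree under mutual
    information, built with Kruskal's algorithm and union-find."""
    edges = sorted(
        ((mutual_information(data, a, b), a, b)
         for a, b in combinations(variables, 2)),
        reverse=True)
    parent = {v: v for v in variables}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path compression
            v = parent[v]
        return v

    tree = []
    for _, a, b in edges:
        ra, rb = find(a), find(b)
        if ra != rb:                       # keep edge only if it joins trees
            parent[ra] = rb
            tree.append((a, b))
    return tree
```

The resulting undirected tree is exactly the input the paper's next step would orient into a causal tree before deriving a variable order.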