Maximising Entropy to Deduce an Initial Probability Distribution for a Causal Network

DOI: https://doi.org/10.1142/S0218488599000052
Cited by: 2 (Source: Crossref)

The desire to use causal networks as expert systems, even when the causal information is incomplete and/or when non-causal information is available, has led researchers to look into the possibility of utilising maximum entropy. Under this approach, the known information is supplemented by maximising entropy to provide the unique initial probability distribution that, had the information been complete, would have followed from it and the independence relationships implied by the network. Traditional maximising techniques can be used if the constraints are linear, but the independence relationships give rise to non-linear constraints. This paper extends traditional maximising techniques to incorporate the types of non-linear constraint that arise from the independence relationships, and presents an algorithm for implementing the extended method. Maximising entropy does not involve the concept of "causal" information. Consequently, the extended method will accept any mutually consistent set of conditional probabilities and expressions of independence. The paper provides a small example of how this property can be used to provide complete causal information, for use in a causal network, when the known information is incomplete and not in causal form.
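As a rough illustration of the kind of problem the abstract describes (and not the authors' algorithm, which extends Lagrange-multiplier techniques to handle such constraints directly), the following sketch uses a general-purpose constrained optimiser to maximise the entropy of a joint distribution over two binary variables A and B. It imposes one linear constraint (a fixed marginal P(A=1)) alongside one non-linear constraint (the independence relationship P(A=1, B=1) = P(A=1)P(B=1), the sort of constraint a causal network's structure implies). The use of numpy/scipy and the specific numbers are assumptions made purely for illustration.

```python
# A minimal sketch (assumed setup, not the paper's method): maximise entropy
# over the joint distribution p = [P(00), P(01), P(10), P(11)] of two binary
# variables (A, B), subject to linear and non-linear equality constraints.
import numpy as np
from scipy.optimize import minimize

def neg_entropy(p):
    # Shannon entropy is H(p) = -sum p log p; we minimise its negative.
    p = np.clip(p, 1e-12, 1.0)  # guard against log(0)
    return np.sum(p * np.log(p))

constraints = [
    # Normalisation (linear): probabilities sum to 1.
    {"type": "eq", "fun": lambda p: np.sum(p) - 1.0},
    # Known marginal (linear): P(A=1) = 0.7, an illustrative value.
    {"type": "eq", "fun": lambda p: p[2] + p[3] - 0.7},
    # Independence of A and B (non-linear): P(A=1,B=1) = P(A=1) * P(B=1).
    {"type": "eq", "fun": lambda p: p[3] - (p[2] + p[3]) * (p[1] + p[3])},
]

res = minimize(neg_entropy, x0=np.full(4, 0.25),
               bounds=[(0.0, 1.0)] * 4,
               constraints=constraints, method="SLSQP")
print(res.x)  # approx [0.15, 0.15, 0.35, 0.35]: B comes out uniform and independent of A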