pgmpy Bayesian Network with discrete inputs and continuous variable outputs


I have a Bayesian network that is supposed to compute marginal posterior probabilities of 'fault variables', which are continuous values between 0 and 1, given discrete 'observable variable' data (this discrete data does not consist of probabilities). The conditional probability distributions (CPDs) should be learned in pgmpy. What should the training data consist of so that, once the CPDs are learned, feeding in discrete observable data yields continuous values for the marginal posterior probabilities? The labeled training data I currently have consists of discretized observable-variable data and the corresponding fault-variable probabilities.

I want to train the Bayesian network in pgmpy and then do inference with my evidence data, which is new discrete observable data. I already know the structure of my directed acyclic graph (DAG), and I assume the continuous variables are Gaussian. I found "Parameterizing with Continuous Variables" in the pgmpy documentation, but I am looking for a worked example of training a BN on mixed discrete and continuous data and computing continuous posterior values through probabilistic inference.
