How do I translate this Edward code to TFP code?


I have coded a Probabilistic Matrix Factorization model in Edward. I am trying to port it to TFP, but I am not sure how to define the log-likelihood and KL divergence terms. Here is the Edward code:

import tensorflow as tf
import edward as ed
from edward.models import Bernoulli, Normal

# MODEL
U = Normal(
    loc=0.0,
    scale=1.0,
    sample_shape=[n_latent_dims, batch_size])
V = Normal(
    loc=0.0,
    scale=1.0,
    sample_shape=[n_latent_dims, n_features])
R = Bernoulli(logits=tf.matmul(tf.transpose(U), V))
R_ph = tf.placeholder(tf.int32, [None, n_features])

# INFERENCE
qU = Normal(
    loc=tf.get_variable("qU/loc",
                        [n_latent_dims, batch_size]),
    scale=tf.nn.softplus(
        tf.get_variable("qU/scale",
                        [n_latent_dims, batch_size])))
qV = Normal(
    loc=tf.get_variable("qV/loc",
                        [n_latent_dims, n_features]),
    scale=tf.nn.softplus(
        tf.get_variable("qV/scale",
                        [n_latent_dims, n_features])))
qR = Bernoulli(logits=tf.matmul(tf.transpose(qU), qV))
qR_avg = Bernoulli(
    logits=tf.reduce_mean(qR.parameters['logits'], 0))
log_likli = tf.reduce_mean(qR_avg.log_prob(R_ph), 1)
inference = ed.KLqp(
    {
        U: qU,
        V: qV
    }, data={R: R_ph})
inference_init = inference.initialize()
init = tf.global_variables_initializer()

There is 1 answer below:


It looks like you're getting an answer at https://github.com/tensorflow/probability/issues/364
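
For reference, here is a minimal TF2-style sketch of how the log-likelihood and KL terms could be written directly against tfp.distributions, assuming the same shapes as the Edward model above. The variable names, sizes, and the single-sample ELBO estimate are illustrative choices, not the code from the linked issue:

import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

# Illustrative sizes; in practice these would match the Edward model above.
n_latent_dims, batch_size, n_features = 10, 32, 50

# Priors, playing the same role as U and V in the Edward model.
p_U = tfd.Normal(loc=tf.zeros([n_latent_dims, batch_size]), scale=1.0)
p_V = tfd.Normal(loc=tf.zeros([n_latent_dims, n_features]), scale=1.0)

# Trainable variational parameters, analogous to qU/loc, qU/scale, etc.
qU_loc = tf.Variable(tf.random.normal([n_latent_dims, batch_size]))
qU_scale_raw = tf.Variable(tf.random.normal([n_latent_dims, batch_size]))
qV_loc = tf.Variable(tf.random.normal([n_latent_dims, n_features]))
qV_scale_raw = tf.Variable(tf.random.normal([n_latent_dims, n_features]))

def negative_elbo(R_batch):
    # Variational posteriors with softplus-constrained scales.
    qU = tfd.Normal(loc=qU_loc, scale=tf.nn.softplus(qU_scale_raw))
    qV = tfd.Normal(loc=qV_loc, scale=tf.nn.softplus(qV_scale_raw))

    # Single-sample (reparameterized) Monte Carlo estimate of the
    # expected log-likelihood E_q[log p(R | U, V)].
    U_sample = qU.sample()
    V_sample = qV.sample()
    likelihood = tfd.Bernoulli(
        logits=tf.matmul(U_sample, V_sample, transpose_a=True))
    expected_log_lik = tf.reduce_sum(likelihood.log_prob(R_batch))

    # Analytic KL(q || p) for the Normal factors, summed over all entries.
    kl = (tf.reduce_sum(tfd.kl_divergence(qU, p_U)) +
          tf.reduce_sum(tfd.kl_divergence(qV, p_V)))

    # Minimizing (KL - expected log-likelihood) maximizes the ELBO,
    # which is what ed.KLqp optimizes.
    return kl - expected_log_lik

optimizer = tf.optimizers.Adam(learning_rate=0.05)
trainable = [qU_loc, qU_scale_raw, qV_loc, qV_scale_raw]

@tf.function
def train_step(R_batch):
    with tf.GradientTape() as tape:
        loss = negative_elbo(R_batch)
    grads = tape.gradient(loss, trainable)
    optimizer.apply_gradients(zip(grads, trainable))
    return loss

A training loop would then call train_step on integer batches of R, e.g. train_step(tf.cast(R_numpy_batch, tf.int32)).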