Compute an effect size analogue for a GPyTorch MultivariateNormal distribution


Please have a look at this paper: https://projecteuclid.org/journals/annals-of-applied-statistics/volume-13/issue-2/Variable-prioritization-in-nonlinear-black-box-methods--A-genetic/10.1214/18-AOAS1222.full

I want to compute the KLD for an effect size analogue beta_j that represents the importance of each feature for the overall distribution. The calculations in the paper are very general, but I know that (G)PyTorch does a lot of this internally (including the precision matrix and a lower triangular matrix), so it may come down to using the framework properly. If I understand correctly, each marker j corresponds to one coordinate of the MVN (and beta_-j to the remaining coordinates). I think I also lack some basic understanding of the MVN here, because I don't know how to construct P(beta_-j | beta_j), E(beta_-j | beta_j), and V(beta_-j | beta_j), though I suspect this is fundamental stuff.
It would already help me a lot to know how to partition the precision matrix and the lower triangular matrix in order to calculate the alpha_j's according to the supplementary material.
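To make the conditioning concrete, here is my current understanding of the standard formulas for a partitioned MVN, written against the precision matrix. The helper name conditional_given_j and all variable names are mine, and I have not verified that this is exactly the partitioning behind the alpha_j's in the supplementary material:

import torch

def conditional_given_j(mu, Lambda, j, beta_j):
    # Standard Gaussian conditioning via the precision matrix:
    #   E[beta_-j | beta_j] = mu_-j - Lambda_{-j,-j}^{-1} Lambda_{-j,j} (beta_j - mu_j)
    #   V[beta_-j | beta_j] = Lambda_{-j,-j}^{-1}
    p = mu.shape[-1]
    rest = [k for k in range(p) if k != j]          # indices belonging to beta_-j
    Lam_rr = Lambda[rest][:, rest]                  # Lambda_{-j,-j}, shape (p-1, p-1)
    Lam_rj = Lambda[rest][:, [j]]                   # Lambda_{-j,j},  shape (p-1, 1)
    cond_cov = torch.linalg.inv(Lam_rr)             # V[beta_-j | beta_j]
    shift = (cond_cov @ Lam_rj).squeeze(-1) * (beta_j - mu[j])
    cond_mean = mu[rest] - shift                    # E[beta_-j | beta_j]
    return cond_mean, cond_cov

If this is right, what remains unclear to me is how alpha_j is assembled from these blocks (and where the lower triangular matrix enters).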

I'm totally stuck on this, but I think I could learn a lot about Gaussian processes and the MVN if I understood the results of the paper.

I'm using the fully Bayesian SAASBO model from BoTorch and fitted it on my categorical training data (nothing special here, just binary X, p=10, a single output dimension). I put the model into eval mode and retrieve the MVN properties:

model.eval()
posterior = model.posterior(X_test)
mvn = posterior.mvn                    # GPyTorch MultivariateNormal
Sigma = mvn.covariance_matrix          # posterior covariance matrix
Lambda = mvn.precision_matrix          # posterior precision matrix (inverse of Sigma)
L = mvn.scale_tril                     # lower triangular (Cholesky) factor of Sigma
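For reference, here is my naive attempt at turning these quantities into per-marker importances. It is only a sketch under my own assumptions: I treat mu and Sigma as the mean and covariance of the effect size analogue beta (the paper actually derives beta from the GP posterior via a projection onto X, which I have skipped here), condition on beta_j = 0 as in the paper, and use plain torch.distributions instead of any paper-specific simplification:

import torch
from torch.distributions import MultivariateNormal, kl_divergence

mu = mvn.mean.detach()                 # assumed mean of beta, shape (p,)
S = Sigma.detach()                     # assumed covariance of beta, shape (p, p)
Prec = Lambda.detach()                 # corresponding precision matrix
p = mu.shape[-1]

klds = []
for j in range(p):
    rest = [k for k in range(p) if k != j]
    # conditional distribution of beta_-j given beta_j = 0 (standard Gaussian conditioning)
    cond_cov = torch.linalg.inv(Prec[rest][:, rest])
    cond_mean = mu[rest] - (cond_cov @ Prec[rest][:, [j]]).squeeze(-1) * (0.0 - mu[j])
    cond = MultivariateNormal(cond_mean, covariance_matrix=cond_cov)
    # marginal distribution of beta_-j
    marg = MultivariateNormal(mu[rest], covariance_matrix=S[rest][:, rest])
    klds.append(kl_divergence(cond, marg))

kld = torch.stack(klds)    # one KLD per marker j
rate = kld / kld.sum()     # normalized importances (RATE, if I read the paper correctly)

Does this go in the right direction, or am I conditioning on the wrong object entirely?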