I'm trying to fit a fairly complex model in OpenBUGS, called from R via R2OpenBUGS. Short runs of the model work, but longer runs fail. When I set debug = TRUE in the bugs() call, the log says: "sorry something went wrong in module MathRandnum". Obviously, to get converged estimates I need a large number of iterations, but the model typically doesn't make it to 40K.
One thing I've tried is running the model for a short run, stopping, saving the chains, and then restarting; this at least tells me roughly where it's failing. On this run the model fails somewhere between 44K and 48K iterations.
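For reference, my restart procedure looks roughly like this (a sketch only: the data/file/parameter names are placeholders, and as far as I understand you can only rebuild inits from nodes you actually monitored):

```r
library(R2OpenBUGS)

# Short run, saving the chains
fit <- bugs(data = bugs.data, inits = inits,
            parameters.to.save = params,  # must include every node needed to restart, e.g. "z"
            model.file = "model.txt",
            n.chains = 3, n.iter = 4000, n.burnin = 0, n.thin = 1,
            debug = TRUE)

# Rebuild inits from the last saved iteration of each chain
last <- dim(fit$sims.array)[1]
new.inits <- lapply(1:3, function(ch) {
  list(phi = fit$sims.array[last, ch, "phi"],
       sigma2 = fit$sims.array[last, ch, "sigma2"])  # ...and so on for the other monitored nodes
})

# Restart from where the previous run stopped
fit2 <- bugs(data = bugs.data, inits = new.inits,
             parameters.to.save = params, model.file = "model.txt",
             n.chains = 3, n.iter = 4000, n.burnin = 0, n.thin = 1)
```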
I don't really understand what the MathRandnum module does, but it seems to be part of the MCMC updating process.
The way I'm thinking about this is that once OpenBUGS' random seed is set and the initial values are given, the chains' values and their updating process are deterministic, so for any given seed and initial values the fault will occur at a fixed iteration. Therefore, if there were a way to change the seed before the fault happens, the fault might be avoided. In the OpenBUGS GUI you can't change the seed at that point; it has to be done once the model is compiled and before it updates. So that's my question: is it possible to change the seed when restarting? And if it is possible, would it be wrong to do so?
Alternatively, what might be going wrong with this model? Is my interpretation of what MathRandnum does correct? Given that the model seems to fail at different points across runs, but for a fixed initialisation it always fails at the same point, does this mean changing the seed preset (it can take values 1-14) might fix it?
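Concretely, what I have in mind is something like the following (a sketch only: bugs.seed is, as I understand it, R2OpenBUGS's argument for the 1-14 seed preset, and the other names are placeholders from my restart code above):

```r
# Try a different seed preset on each restart until one survives
for (seed in 1:14) {
  fit <- try(bugs(data = bugs.data, inits = new.inits,
                  parameters.to.save = params, model.file = "model.txt",
                  n.chains = 3, n.iter = 4000, n.burnin = 0, n.thin = 1,
                  bugs.seed = seed))
  if (!inherits(fit, "try-error")) break  # keep the first run that doesn't crash
}
```

My worry is whether restarting with a different seed partway through biases the chain in some way, or whether it's statistically harmless.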
My BUGS model is below.
model {
  # Priors
  # State model priors
  shape.c ~ dunif(0.0001, 3)      # dispersal kernel shape/scale parameters
  scale.alpha ~ dunif(0.0001, 3)  #
  tau2 <- 1 / (sigma2 * sigma2)
  sigma2 ~ dunif(0, 5)
  phi ~ dunif(0, 1)
  alpha.theta ~ dnorm(0, 0.01)
  # alpha.theta ~ dflat()
  beta.urb ~ dnorm(0, 0.01)
  beta.wood ~ dnorm(0, 0.01)
  beta.int ~ dnorm(0, 0.01)
  init.occ ~ dnorm(0, 1000)T(0, 1)  # Strong prior! Assumes the species is absent from most sites in 2001
  for (t in 1:nyear) {
    alpha.p[t] ~ dnorm(mu.lp, tau.lp)
  }
  # Observation model priors
  mu.lp ~ dnorm(0, 0.01)
  tau.lp <- 1 / (sd.lp * sd.lp)
  sd.lp ~ dunif(0, 5)
  dtype2.p ~ dunif(dtype2p_min, dtype2p_max)  # List length factor effects
  dtype3.p ~ dunif(dtype3p_min, dtype3p_max)  #
  # State model
  for (i in 1:nsite) {
    z[i, 1] ~ dbern(init.occ)  # initial occupancy in 2001, using the init.occ prior above
    theta[i] <- min(max(0.00000000001, thetalim[i]), 0.999999999999)  # overflow constraint
    logit(thetalim[i]) <- alpha.theta + beta.urb*urban[i] + beta.wood*wood[i] + beta.int*wood[i]*urban[i]  # + RE[i]  # site-level environment relation
    for (t in 2:nyear) {  # this loop starts at year 2002
      gamma[i, t] <- min(max(0.00000000001, gammalim[i, t]), 0.999999999999)  # overflow constraint
      gammalim[i, t] <- (shape.c / (2 * scale.alpha * (exp(loggam(1 / shape.c))))) * (exp(-1 * (pow((d[i, t] / scale.alpha), shape.c))))  # dispersal kernel from Clark 1998
      muZ[i, t] <- z[i, t-1] * phi + (1 - z[i, t-1]) * gamma[i, t] * theta[i]  # occupancy dynamics
      z[i, t] ~ dbern(muZ[i, t])  # true occupancy z at site i
    }
  }
  # Observation model
  for (j in 1:nvisit) {
    Py[j] <- z[Site[j], Year[j]] * p[j]
    logit(p[j]) <- alpha.p[Year[j]] + dtype2.p*DATATYPE2[j] + dtype3.p*DATATYPE3[j]
    y[j] ~ dbern(Py[j])
    Presi[j] <- abs(y[j] - p[j])
    y.new[j] ~ dbern(Py[j])
    Presi.new[j] <- abs(y.new[j] - p[j])
  }
  # Bayesian goodness-of-fit
  fit <- sum(Presi[])
  fit.new <- sum(Presi.new[])
  # Finite sample occupancy
  for (t in 2:nyear) {
    psi.fs[t] <- sum(z[1:nsite, t]) / nsite
  }
}