I am trying to understand how Hidden Markov Models work. Using the hmmlearn library, I want to see how likely a time series is under a learned distribution.
I start with a single-state Markov model. I generate a Gaussian sequence with mean 0 and standard deviation 3, and try to learn its characteristics with .fit(). Then I compute .score() for series generated with different standard deviations.
import numpy as np
from hmmlearn.hmm import GaussianHMM
import matplotlib.pyplot as plt

x = np.random.normal(0, 3, 1000).reshape(-1, 1)
model = GaussianHMM(n_components=1, n_iter=50, random_state=42)
model.fit(x)
print(model.means_, np.sqrt(model.covars_))  # [[-0.11165377]] [[[2.87575159]]]

covs = np.linspace(0, 6, 100)
for c in covs:
    # c is used as the standard deviation of the test series
    y = np.random.normal(0, c, 1000).reshape(-1, 1)
    score = model.score(y)
    plt.scatter(c, score, color='black')
plt.show()
I get the following result (the black points).
I was expecting something along the blue shape, where the log-likelihood is maximal near the learned standard deviation.
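For context on what I think .score() is reporting: with a single state there are no transitions to sum over, so my understanding is that it should reduce to the sum of per-sample Gaussian log-densities under the learned parameters. Here is a minimal self-contained check of that assumption (it uses scipy.stats.norm, which my snippets above do not import):

import numpy as np
from hmmlearn.hmm import GaussianHMM
from scipy.stats import norm

x = np.random.normal(0, 3, 1000).reshape(-1, 1)
model = GaussianHMM(n_components=1, n_iter=50, random_state=42)
model.fit(x)

mu = model.means_[0, 0]
sigma = np.sqrt(model.covars_[0, 0, 0])  # covars_ has shape (1, 1, 1) here

y = np.random.normal(0, 3, 1000).reshape(-1, 1)
# If my assumption is right, these two numbers should agree (up to float error):
print(model.score(y))
print(norm.logpdf(y, loc=mu, scale=sigma).sum())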
When I do the same with the mean, I get the expected result:
import numpy as np
from hmmlearn.hmm import GaussianHMM
import matplotlib.pyplot as plt

x = np.random.normal(2, 1, 1000).reshape(-1, 1)
model = GaussianHMM(n_components=1, n_iter=50, random_state=42)
model.fit(x)
print(model.means_, np.sqrt(model.covars_))  # learned mean is close to 2, learned std close to 1

ns = np.linspace(-3, 8, 100)
for n in ns:
    # n is the mean of the test series; its standard deviation is fixed at 1
    y = np.random.normal(n, 1, 1000).reshape(-1, 1)
    score = model.score(y)
    plt.scatter(n, score, color='black')
plt.show()
Why does it work for the mean but not for the variance? Thank you very much.
