Multivariate Hidden Markov Model implementation question


I have to categorise a signal from an eye tracker. I have a single vector representing the velocity of the eye at each point in time. The idea is that when the velocity is low there is a high chance it is a fixation, and when the velocity is high it is a saccade. Each point also depends on the previous one, which is what leads me to a Hidden Markov Model (HMM) to classify whether a sample belongs to a saccade. The model is a two-state system: one state for fixations and one for saccades, each with a Gaussian emission. I have a total of 8 parameters to learn: the mean and variance of each Gaussian, and the two transition probabilities out of each state. To estimate the parameters I am using MATLAB with the PMTK3 toolbox; I have not found any other MATLAB toolbox that supports HMMs with Gaussian emissions. My code looks like this:

% example chunk of eye velocity samples (one value per time step)
exampleData = [25.2015 24.1496 33.0422 21.9321 15.5897 9.1592 19.9374 15.2868 9.6767 39.8610 22.2483 31.6508];
prior.mu = [10 10];
prior.Sigma = [0.5; 0.5];
prior.k = 2;
prior.dof = prior.k + 1;
model = hmmFit(exampleData, 2, 'gauss', 'verbose', true, 'piPrior', [3 2], ...
    'emissionPrior', prior, 'nRandomRestarts', 2, 'maxIter', 10);
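
For concreteness, the two-state parameterisation I am trying to learn looks roughly like this (illustrative numbers only, and these variables are my own notation, not PMTK3 fields):

A     = [0.95 0.05;   % P(fixation -> fixation), P(fixation -> saccade)
         0.10 0.90];  % P(saccade  -> fixation), P(saccade  -> saccade)
mu    = [12 30];      % mean velocity in the fixation and saccade states
sigma = [5 10];       % velocity variance in the fixation and saccade states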

My understanding is that prior.k is the number of clusters it should find, which should be two: saccades and fixations. When I run it I get this error:

Error using chol
Matrix must be positive definite.
Error in gaussSample (line 20)
A = chol(Sigma, 'lower');
Error in kmeansFit (line 42)
    noise = gaussSample(zeros(1, length(v)), 0.01*diag(v), K);
Error in kmeansInitMixGauss (line 7)
[mu, assign] = kmeansFit(data, K);
Error in mixGaussFit>initGauss (line 38)
        [mu, Sigma, model.mixWeight] = kmeansInitMixGauss(X, nmix);
Error in mixGaussFit>@(m,X,r)initGauss(m,X,r,initParams,prior) (line 24)
initFn = @(m, X, r)initGauss(m, X, r, initParams, prior);
Error in emAlgo (line 56)
model = init(model, data, restartNum);
Error in mixGaussFit (line 25)
[model, loglikHist] = emAlgo(model, data, initFn, @estep, @mstep , ...
Error in hmmFitEm>initWithMixModel (line 244)
    mixModel    = mixGaussFit(stackedData, nstates,  'verbose', false, 'maxIter', 10);
Error in hmmFitEm>initGauss (line 146)
        model = initWithMixModel(model, data);
Error in hmmFitEm>@(m,X,r)initFn(m,X,r,emissionPrior) (line 45)
initFn = @(m, X, r)initFn(m, X, r, emissionPrior);
Error in emAlgo (line 56)
model = init(model, data, restartNum);
Error in emAlgo (line 38)
        [models{i}, llhists{i}] = emAlgo(model, data, init, estep,...
Error in hmmFitEm (line 46)
[model, loglikHist] = emAlgo(model, data, initFn, @estep, @mstep, EMargs{:});
Error in hmmFit (line 69)
[model, loglikHist] = hmmFitEm(data, nstates, type, varargin{:}); 

When I run the toolbox's sample code it works, and I can't figure out why mine doesn't:

data = [train4'; train5'];   % train4/train5: cell arrays of 13-dimensional sequences from the PMTK3 demo data
data = data{2};              % use a single 13-dimensional demo sequence
d = 13;

% test with a bogus prior
if 1
    prior.mu = ones(1, d);
    prior.Sigma = 0.1*eye(d);
    prior.k = d;
    prior.dof = prior.k + 2;
else 
    prior.mu = [1 3 5 2 9 7 0 0 0 0 0 0 1];
    prior.Sigma = randpd(d) + eye(d);
    prior.k = 12;
    prior.dof = 15;
end

model = hmmFit(data, 2, 'gauss', 'verbose', true, 'piPrior', [1 1], ...
    'emissionPrior', prior, 'nRandomRestarts', 2, 'maxIter', 10);
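
For comparison, this is how I read the shapes of the two inputs (my own assumption, based on d = 13 in the prior above):

size(data)          % the demo sequence is 13-dimensional (matches prior.mu above)
size(exampleData)   % 1 x 12 -- my entire signal is a single 1-D sequence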

Please explain what I am misunderstanding about the HMM setup.


There is 1 answer below.


I tried a lot of things until I decided to shorten my data so the chunks were all the same size. That worked, and it made me discover that some chunks ran fine while others caused errors. On closer inspection it was because there were a few NaNs in the data, and the HMM did not know what to do with them.
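
In case it is useful to anyone: the fix I ended up with was to split the velocity vector into contiguous NaN-free chunks and pass those as separate sequences. A minimal sketch, assuming hmmFit accepts a cell array of 1-by-T sequences for 1-D Gaussian emissions (velocities is my own variable name; the prior and options are the ones from the question):

% split the eye-velocity signal into contiguous NaN-free chunks
v = velocities(:)';                        % 1 x N row vector from the eye tracker
valid = ~isnan(v);
edges = diff([0 valid 0]);                 % +1 where a valid run starts, -1 just after it ends
starts = find(edges == 1);
stops  = find(edges == -1) - 1;
seqs = arrayfun(@(s, e) v(s:e), starts, stops, 'UniformOutput', false);
seqs = seqs(cellfun(@numel, seqs) > 1);    % drop runs too short to be useful
model = hmmFit(seqs, 2, 'gauss', 'verbose', true, 'piPrior', [3 2], ...
    'emissionPrior', prior, 'nRandomRestarts', 2, 'maxIter', 10);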