Crashing RAM using memmap in Oja rule


I am applying Oja's rule to a dataset of size 400x156300, and it seems to crash my RAM. I am not sure what is causing this. I have 12 GB of RAM and tried using memmap, but it still crashes. Please help.

import pdb
from os import path
from tempfile import mkdtemp

import numpy as np
from neupy import algorithms

# Convert the training data to a float32 memmap to reduce precision and RAM usage
[num_sample, num_feat] = train_data.shape
filename = path.join(mkdtemp(), 'train_data.dat')
memmap_train = np.memmap(filename, dtype='float32', mode='w+',
                         shape=(num_sample, num_feat))
memmap_train[:] = train_data[:]
del train_data, test_data

# Apply Oja's rule to reduce the feature dimension to 1250
ojanet = algorithms.Oja(minimized_data_size=1250, step=1e-10,
                        verbose=True, show_epoch=1)
ojanet.train(memmap_train, epsilon=1e-3, epochs=10000)
red_train_data = ojanet.predict(memmap_train)
ojanet.plot_errors(logx=False)
pdb.set_trace()
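For context, a back-of-the-envelope estimate of the sizes involved (the figures come from the shapes in the question; the assumption that the library materializes float64 intermediates such as the weight matrix and a reconstruction is mine):

```python
# Rough memory estimate for this workload. Shapes are from the question;
# the assumption of float64 intermediates inside the library is mine.
num_sample, num_feat, reduced = 400, 156300, 1250
bytes_f64 = 8

data_gb = num_sample * num_feat * bytes_f64 / 1024**3     # input as dense float64
weights_gb = num_feat * reduced * bytes_f64 / 1024**3     # 156300x1250 weight matrix
recon_gb = num_sample * num_feat * bytes_f64 / 1024**3    # a reconstruction of the input

print(round(data_gb, 2))     # ~0.47 GB
print(round(weights_gb, 2))  # ~1.46 GB
print(round(recon_gb, 2))    # ~0.47 GB
```

The point is that a memmap only caps the memory of the input array itself; if the training loop makes dense in-memory copies of the data or allocates several arrays of these sizes per epoch, 12 GB can still be exhausted.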

I also raised an issue: https://github.com/itdxer/neupy/issues/27. I don't know whether the package is under active development.

By "crashing RAM" I mean that RAM utilization goes above 100% and my computer stops responding.

1 Answer

This issue was caused by inefficient memory usage in the Oja algorithm. It was fixed in NeuPy version 0.1.4. You can find the closed ticket here: https://github.com/itdxer/neupy/issues/27
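For reference, Oja's rule itself can run in bounded memory by updating the weights one sample at a time. Below is a minimal NumPy sketch of the single-component rule, written by me as an illustration; it is not NeuPy's implementation, and the function name and parameters are my own:

```python
import numpy as np

def oja_first_component(data, step=0.01, epochs=20, seed=0):
    """Estimate the first principal direction with Oja's rule.

    Iterates over one sample at a time, so peak memory beyond the
    input itself is O(num_feat) -- this also works on a np.memmap,
    since each row is loaded on demand.
    """
    rng = np.random.default_rng(seed)
    w = rng.normal(size=data.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(epochs):
        for x in data:
            y = w @ x                    # projection onto current weights
            w += step * y * (x - y * w)  # Hebbian term plus Oja's decay term
    return w

# Toy usage: data whose variance is dominated by the first axis
rng = np.random.default_rng(1)
data = rng.normal(size=(200, 5)) * np.array([3.0, 1, 1, 1, 1])
w = oja_first_component(data)
print(abs(w[0]))  # close to 1: the dominant direction is recovered
```

The decay term `- step * y * y * w` keeps the weight vector's norm near 1, which is why no explicit normalization step is needed inside the loop.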