MemoryError in the IPython convnet notebook: is it normal?


I'm working through the assignments of Stanford's cs231n course on my own; I'm not a student of the course. I asked the same question in their subreddit, but it seems no one is active there. I'm hoping to find one or two veterans here.

I'm using 32-bit Ubuntu 14.04 in VMware Player. I gave the VM 4 GB of RAM (though, being 32-bit, it probably can't use all of it). My host PC has 16 GB of RAM and runs Windows 8.1 Pro.

I was running the convnet IPython notebook and was at the "overfit small data" section. This is the error I'm getting:

---------------------------------------------------------------------------
MemoryError                               Traceback (most recent call last)
<ipython-input-7-5c1aed72acc3> in <module>()
      6           X_train[:50], y_train[:50], X_val, y_val, model, two_layer_convnet,
      7           reg=0.001, momentum=0.9, learning_rate=0.0001, batch_size=10, num_epochs=10,
----> 8           verbose=True)

.../classifier_trainer.pyc in train(self, X, y, X_val, y_val, model, loss_function, reg, learning_rate, momentum, learning_rate_decay, update, sample_batches, num_epochs, batch_size, acc_frequency, verbose, decay_rate)
    132 
    133         # evaluate val accuracy
--> 134         scores_val = loss_function(X_val, model)
    135         y_pred_val = np.argmax(scores_val, axis=1)
    136         val_acc = np.mean(y_pred_val ==  y_val)

.../convnet.pyc in two_layer_convnet(X, model, y, reg)
     49 
     50   # Compute the forward pass
---> 51   a1, cache1 = conv_relu_pool_forward(X, W1, b1, conv_param, pool_param)
     52   scores, cache2 = affine_forward(a1, W2, b2)
     53 

.../layer_utils.pyc in conv_relu_pool_forward(x, w, b, conv_param, pool_param)
     43   - cache: Object to give to the backward pass
     44   """
---> 45   a, conv_cache = conv_forward_fast(x, w, b, conv_param)
     46   s, relu_cache = relu_forward(a)
     47   out, pool_cache = max_pool_forward_fast(s, pool_param)

.../fast_layers.pyc in conv_forward_fast(x, w, b, conv_param)
     30   # x_cols = im2col_indices(x, w.shape[2], w.shape[3], pad, stride)
     31   x_cols = im2col_cython(x, w.shape[2], w.shape[3], pad, stride)
---> 32   res = w.reshape((w.shape[0], -1)).dot(x_cols) + b.reshape(-1, 1)
     33 
     34   out = res.reshape(w.shape[0], out.shape[2], out.shape[3], x.shape[0])

MemoryError: 

And, as expected, the python process had reached about 1.8 GB, so the MemoryError is due to the process running out of memory.
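For scale, a rough back-of-the-envelope calculation shows why the im2col step in the traceback is so memory-hungry. This is a sketch under assumed shapes (1000 CIFAR-10-sized validation images of 3×32×32, and hypothetically 7×7 filters with stride 1 and padding that preserves the 32×32 output; the exact filter size in the notebook may differ):

```python
import numpy as np

# Assumed shapes: 1000 validation images of shape 3x32x32,
# convolved with 7x7 filters, stride 1, "same"-style padding.
N, C, H, W = 1000, 3, 32, 32
HH = WW = 7
H_out = W_out = 32

# im2col materializes one column per output location:
# shape (C * HH * WW, N * H_out * W_out), float64 by default.
rows = C * HH * WW
cols = N * H_out * W_out
bytes_needed = rows * cols * np.dtype(np.float64).itemsize

print(f"x_cols: {rows} x {cols} -> {bytes_needed / 1e9:.2f} GB")
# -> x_cols: 147 x 1024000 -> 1.20 GB
```

That single temporary, plus the result of the subsequent dot product and everything else the process holds, can easily overrun a ~2 GB 32-bit address space.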

However, I'm wondering whether it's normal to go beyond 1.8 GB for such a small dataset. Should I try 64-bit Ubuntu and give the virtual machine 8 GB of RAM?
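One workaround, regardless of bitness, is to evaluate validation accuracy in chunks rather than pushing all of X_val through the network at once (the traceback shows the whole validation set going through `loss_function` in one call). A minimal sketch, where `predict_in_batches` is a hypothetical helper and `loss_function`/`model` follow the cs231n calling convention seen in the traceback:

```python
import numpy as np

def predict_in_batches(X, model, loss_function, batch_size=100):
    """Run the test-time forward pass in small chunks to cap peak memory."""
    preds = []
    for i in range(0, X.shape[0], batch_size):
        # At test time, cs231n loss functions return scores when y is omitted.
        scores = loss_function(X[i:i + batch_size], model)
        preds.append(np.argmax(scores, axis=1))
    return np.concatenate(preds)
```

The peak memory of the forward pass then scales with `batch_size` instead of with the full validation-set size.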

Thanks in advance.

1 Answer

If you are running 32-bit Ubuntu then, if I remember correctly, the addressable space is around 2 GB per process, so your process can only use about 2 GB no matter how much RAM you give the VM. Using 64-bit Ubuntu with 4 GB of RAM should be enough. It's the 32-bit address space that's blocking you, not the memory of the VM.
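As a quick sanity check, you can confirm which interpreter you're actually running using only the standard library:

```python
import struct
import sys

bits = struct.calcsize("P") * 8  # size of a C pointer, in bits
print(f"{bits}-bit Python, sys.maxsize = {sys.maxsize}")
# On 32-bit builds sys.maxsize is 2**31 - 1; on 64-bit builds, 2**63 - 1.
```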