skflow allocates memory on GPU0 even when another GPU is specified


I'm running into this problem on a 4-GPU Amazon instance, using a simple example script:

import skflow
import tensorflow as tf
from sklearn import datasets, cross_validation

iris = datasets.load_iris()
X_train, X_test, y_train, y_test = cross_validation.train_test_split(iris.data, iris.target,
    test_size=0.2, random_state=42)

def my_model(X, y):

    with tf.device('/gpu:1'):
        layers = skflow.ops.dnn(X, [1000, 500, 150], keep_prob=0.5) # many neurons to see the impact on memory
    with tf.device('/cpu:0'):
        return skflow.models.logistic_regression(layers, y)

classifier = skflow.TensorFlowEstimator(model_fn=my_model, n_classes=3)
classifier.fit(X_train, y_train)

The result of nvidia-smi before launching the script is:

Fri Feb 19 11:30:22 2016       
+------------------------------------------------------+                       
| NVIDIA-SMI 346.46     Driver Version: 346.46         |                       
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GRID K520           Off  | 0000:00:03.0     Off |                  N/A |
| N/A   40C    P0    41W / 125W |   2247MiB /  4095MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   1  GRID K520           Off  | 0000:00:04.0     Off |                  N/A |
| N/A   36C    P0    40W / 125W |   2113MiB /  4095MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   2  GRID K520           Off  | 0000:00:05.0     Off |                  N/A |
| N/A   41C    P0    43W / 125W |     53MiB /  4095MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   3  GRID K520           Off  | 0000:00:06.0     Off |                  N/A |
| N/A   39C    P0    41W / 125W |   1816MiB /  4095MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

and while the script is running:

Fri Feb 19 11:30:53 2016       
+------------------------------------------------------+                       
| NVIDIA-SMI 346.46     Driver Version: 346.46         |                       
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GRID K520           Off  | 0000:00:03.0     Off |                  N/A |
| N/A   40C    P0    46W / 125W |   3926MiB /  4095MiB |     26%      Default |
+-------------------------------+----------------------+----------------------+
|   1  GRID K520           Off  | 0000:00:04.0     Off |                  N/A |
| N/A   37C    P0    42W / 125W |   3926MiB /  4095MiB |     17%      Default |
+-------------------------------+----------------------+----------------------+
|   2  GRID K520           Off  | 0000:00:05.0     Off |                  N/A |
| N/A   41C    P0    44W / 125W |     92MiB /  4095MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   3  GRID K520           Off  | 0000:00:06.0     Off |                  N/A |
| N/A   39C    P0    42W / 125W |   1856MiB /  4095MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

So memory is allocated on GPU0, even though no part of the code mentions it. Do you know where this behavior comes from? This causes an issue because there are multiple users on this instance, and GPU0 gets saturated even when nobody intends to use it.

There are 2 answers below.

Answer 1:

If you are interested in using only GPU1, I'd consider wrapping the script in something that sets CUDA_VISIBLE_DEVICES (see https://devblogs.nvidia.com/parallelforall/cuda-pro-tip-control-gpu-visibility-cuda_visible_devices/) to 1. That way, only that GPU will be visible to the script (and it will appear with id 0). If you set it to 2,3, you would get those GPUs with ids 0 and 1, respectively.
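For example, a minimal sketch of the in-script variant (the variable must be set before TensorFlow enumerates the GPUs; the same effect can be had from the shell with CUDA_VISIBLE_DEVICES=1 python your_script.py, where your_script.py is a placeholder name):

import os

# Hide every GPU except physical GPU 1 from this process. TensorFlow will
# then see that card as '/gpu:0'. Set this before TensorFlow touches CUDA.
os.environ['CUDA_VISIBLE_DEVICES'] = '1'

import tensorflow as tf
import skflow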

Answer 2:

A workaround we have found is to modify skflow.TensorFlowEstimator.

The culprit is

with self._graph.as_default():
    tf.set_random_seed(self.tf_random_seed)
    self._global_step = tf.Variable(
        0, name="global_step", trainable=False)

in skflow.TensorFlowEstimator._setup_training(), which we've modified as follows:

with self._graph.as_default(), tf.device("/gpu:{0}".format(self.gpu_number)):
    tf.set_random_seed(self.tf_random_seed)
    self._global_step = tf.get_variable('global_step', [],
                                        initializer=tf.constant_initializer(0),
                                        trainable=False)

adding a gpu_number attribute to the class, and initializing the session with allow_soft_placement=True in skflow.TensorFlowEstimator._setup_training().
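For reference, allow_soft_placement is passed via a tf.ConfigProto when the session is created. A minimal sketch of that last change (the self._session name is an assumption about skflow's internals, not shown in the question):

# Rough sketch of the session setup inside _setup_training(): with
# allow_soft_placement=True, ops that cannot be placed on the requested GPU
# (e.g. ops without a GPU kernel) fall back to another device instead of
# raising an error.
config = tf.ConfigProto(allow_soft_placement=True)
self._session = tf.Session(config=config)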