TensorFlow taking too much time on GPU

import tensorflow as tf

# List the GPUs TensorFlow can see
gpus = tf.config.experimental.list_physical_devices('GPU')
for gpu in gpus:
    print("Name:", gpu.name, "  Type:", gpu.device_type)

# Print all local devices (print() is needed when run as a script)
from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())

print(tf.test.is_gpu_available())

The output is True (i.e. TensorFlow detects the GPU), but it takes 5 to 10 minutes to show that output and continuously consumes system memory. I am using an RTX 3060 Ti with Python 3.8, CUDA 10.1, cuDNN 7.6, tensorflow 2.3.1 and tensorflow-gpu 2.3.1.

Best answer:

It's because TensorFlow 2.3 doesn't support CUDA 11, while all Ampere cards (such as your RTX 3060 Ti) require at least CUDA 11.0 and cuDNN 8. With an older toolkit, the long startup delay you are seeing is most likely CUDA JIT-compiling PTX kernels for the unsupported compute capability on every launch.
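That compatibility rule can be sketched as a simple version comparison (the function name and the version parsing are my own illustration, not a TensorFlow API):

```python
def meets_ampere_minimum(cuda_version, cudnn_version):
    # Hypothetical check: Ampere GPUs need CUDA >= 11.0 and cuDNN >= 8.0.
    def parse(version):
        return tuple(int(part) for part in version.split("."))
    return parse(cuda_version) >= (11, 0) and parse(cudnn_version) >= (8, 0)

print(meets_ampere_minimum("10.1", "7.6"))   # the asker's setup -> False
print(meets_ampere_minimum("11.0", "8.0"))   # minimum for Ampere -> True
```

This is why the CUDA 10.1 / cuDNN 7.6 stack in the question cannot drive an RTX 3060 Ti properly, even though the GPU is detected.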

Luckily, TensorFlow 2.4 was released recently. It is compatible with Ampere, although it is built against the slightly older CUDA 11.0.

Please update your installation to CUDA 11.0 from the archive section of the NVIDIA website. It won't be as performant as 11.1 (the first version with official support for the RTX 3000 series), but at least it will support Ampere GPUs.

You can check the CUDA and cuDNN versions each TensorFlow release was built against in the tested build configurations table of the TensorFlow install documentation.