I am trying to use 2 GPUs, but TensorFlow does not recognise the 2nd one. The 2nd GPU is working fine (in the Windows environment).
When I set CUDA_VISIBLE_DEVICES=0 and run the program, I see the RTX2070 as GPU 0.
When I set CUDA_VISIBLE_DEVICES=1 and run the program, I see the GTX1050 as GPU 0.
When I set CUDA_VISIBLE_DEVICES=0,1 and run the program, I see the RTX2070 as GPU 0.
So basically, TF does not recognise GPU 1; it only sees one GPU at a time (GPU 0). Is there any command to manually define GPU 1?
I uninstalled and re-installed CUDA/cuDNN, Python 3.7, TensorFlow and Keras (GPU versions). I am using Anaconda on Windows 10. I tried changing CUDA_VISIBLE_DEVICES to 0,1. I don't see any error, but the 2nd GPU does not appear anywhere in Python.
The main GPU is an RTX2070 (8GB) and the 2nd GPU is a GTX1050 (2GB). Before posting this, I spent some time searching for a solution and tried whatever I could find on the internet. The drivers are up to date, and the 64-bit, latest versions of the software are installed. I don't see any other issue, apart from the 2nd GPU not appearing.
The code works fine on the first GPU, and both GPUs have compute capability > 3.5.
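For reference, a check along these lines (a minimal sketch, assuming a TF 2.x install; the exact calls may differ from what I run) is how the visible devices are listed, and only the RTX2070 ever appears:

```python
import tensorflow as tf
from tensorflow.python.client import device_lib

# Every device TensorFlow has registered; only the RTX2070 shows up, as GPU:0
print(device_lib.list_local_devices())

# Physical GPUs visible to TF 2.x; two entries would be expected here
print(tf.config.experimental.list_physical_devices('GPU'))
```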
Providing the solution here (Answer Section), even though it is present in the Comment Section (thanks to M Student for sharing the solution), for the benefit of the community.
Adding this at the beginning of the code resolved the issue.
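The exact snippet is not quoted in this excerpt; as a sketch of the kind of fix being described (setting the standard CUDA environment variables before TensorFlow initializes, so device enumeration follows the PCI bus order and both cards remain visible), it would look roughly like this:

```python
import os

# Enumerate GPUs by PCI bus ID instead of "fastest first", so the
# numbering matches nvidia-smi and stays stable across runs.
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
# Expose both cards. This must be set before TensorFlow initializes CUDA.
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"

import tensorflow as tf

# Expect two entries now: /physical_device:GPU:0 and /physical_device:GPU:1
print(tf.config.experimental.list_physical_devices('GPU'))
```

Once both GPUs are visible, a particular model or op can be pinned to the second card with `with tf.device('/GPU:1'):`.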