I am running a transformer model, and it would be great if I could use multiple GPUs to parallelise the load. So I wanted to know whether an M1 Mac has multiple MPS devices. Currently my code only uses mps:0, which is what I get from torch.device(). I am wondering whether I can have mps:0, mps:1, and so on, up to however many MPS devices my Mac has, to run the PyTorch code.
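For reference, this is roughly how I am probing the MPS backend right now (a minimal sketch, assuming PyTorch 1.12+ where the MPS backend was introduced):

```python
import torch

# Check whether the MPS (Metal) backend is compiled into this PyTorch
# build, and whether an MPS device is actually usable on this machine
print(torch.backends.mps.is_built())
print(torch.backends.mps.is_available())

if torch.backends.mps.is_available():
    # mps:0 is the only device index I have been able to use so far
    x = torch.ones(3, device=torch.device("mps:0"))
    print(x.device)  # prints "mps:0"
```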
I want a Metal/PyTorch equivalent of the following code for CUDA:
```python
import os
import torch

# Inside the repo's experiment class; self.args holds the parsed CLI arguments
os.environ["CUDA_VISIBLE_DEVICES"] = str(self.args.gpu) if not self.args.use_multi_gpu else self.args.devices
device = torch.device('cuda:{}'.format(self.args.gpu))
print('Use GPU: cuda:{}'.format(self.args.gpu))
```
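The closest I have got is the device-selection part only (a sketch, since I could not find an MPS counterpart of CUDA_VISIBLE_DEVICES):

```python
import torch

# Device-selection sketch for Apple Silicon: pick the single visible
# MPS device if it is usable, otherwise fall back to the CPU.
if torch.backends.mps.is_available():
    device = torch.device("mps:0")
    print("Use GPU: mps:0")
else:
    device = torch.device("cpu")
    print("MPS not available, falling back to CPU")
```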
My understanding of the above code is that setting os.environ["CUDA_VISIBLE_DEVICES"] automatically parallelises the work among all visible GPUs. I want to do the same with Metal on an M1 Mac.
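To be concrete about what I mean by parallelising, this is the usual CUDA multi-GPU pattern I am hoping to mirror (the nn.Linear is just a stand-in for the actual model, not the repo's code):

```python
import torch
import torch.nn as nn

# Stand-in model; in my case this would be the Crossformer model
model = nn.Linear(8, 2)

if torch.cuda.is_available():
    if torch.cuda.device_count() > 1:
        # DataParallel replicates the model on every visible GPU
        # and splits each input batch across them
        model = nn.DataParallel(
            model, device_ids=list(range(torch.cuda.device_count()))
        )
    model = model.to("cuda:0")
```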
I am trying to reproduce https://github.com/Thinklab-SJTU/Crossformer with multiple GPUs on an M1 Mac.