Can I speed up inference in PyTorch using autocast (automatic mixed precision)?


The PyTorch docs for autocast (see also this) only discuss training. Does using autocast also speed things up for inference?


BEST ANSWER

Yes, it can (though it may not in every case).

You are processing data at lower precision (e.g. float16 instead of float32), so your program has to read, move, and process half as many bytes.

This can improve cache locality and lets specialized hardware do the work (e.g. Tensor Cores on CUDA GPUs).
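A minimal sketch of autocast at inference time (the model and shapes here are made up for illustration; the dtype choice assumes float16 on CUDA and bfloat16 on CPU):

```python
import torch

# Pick a device and a matching low-precision dtype.
device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.bfloat16

# Hypothetical model and input batch.
model = torch.nn.Linear(128, 64).to(device).eval()
x = torch.randn(32, 128, device=device)

with torch.no_grad():  # inference: no gradients needed
    # autocast works outside training too; eligible ops
    # (like the matmul inside Linear) run in the low-precision dtype.
    with torch.autocast(device_type=device, dtype=dtype):
        out = model(x)

print(out.dtype)  # the low-precision dtype chosen above
```

Wrapping the forward pass in both `torch.no_grad()` and `torch.autocast` is the usual combination for inference; whether it actually runs faster depends on your hardware and model.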