Switching entire network from float32 to float64 on condition

Since lower precision can yield significant savings in computation time, I would like to be able to switch (mid-run) all variables in my partially trained network from float32 to float64 when an error condition is met.

For example: I initialize all variables as float32, run several hundred thousand batches through the network, and observe that the loss reaches a tolerance on the order of 1e-8. At this point, to continue converging the model, I would like to switch all model variables to double precision.

Is there a simple way to do this in Python?
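
For context, here is the kind of switch I have in mind, as a minimal sketch assuming PyTorch (the network, tolerance, and tensors below are hypothetical placeholders, not my actual code):

```python
import torch
import torch.nn as nn

# Hypothetical network standing in for the partially trained model.
model = nn.Sequential(nn.Linear(10, 64), nn.Tanh(), nn.Linear(64, 1))
tolerance = 1e-8  # assumed loss threshold that triggers the switch

def promote_if_converged(model, loss_value):
    """Switch the whole network to float64 once the loss is small enough."""
    if loss_value < tolerance:
        model.double()  # casts every parameter and buffer to float64, in place

promote_if_converged(model, 5e-9)

# Inputs fed to the network afterwards must match the new dtype:
x = torch.randn(32, 10, dtype=torch.float64)
out = model(x)  # now runs entirely in double precision
```

(If the framework is TensorFlow instead, my understanding is that a variable's dtype is fixed at creation, so the switch would mean creating float64 variables and copying the float32 values into them.)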

EDIT: Also, will switching the dtype of all of the network variables (weights, biases, inputs, etc.) cause issues with the optimizer I was previously using? For example, if Adam is being used and computes its moment estimates in single precision, will switching to double precision cause a problem?
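
If the moment estimates do need to be cast as well, I imagine something like this sketch (again assuming PyTorch, with a hypothetical stand-in model):

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 1)                         # hypothetical stand-in model
optimizer = torch.optim.Adam(model.parameters())

# One float32 step so Adam allocates its moment buffers:
loss = model(torch.randn(8, 10)).pow(2).mean()
loss.backward()
optimizer.step()

# The precision switch: model.double() casts the parameters in place, so
# the optimizer's state keys still refer to the right Parameter objects,
# but Adam's exp_avg / exp_avg_sq buffers are still float32 and would
# clash with float64 gradients on the next step. Cast the state too:
model.double()
for state in optimizer.state.values():
    for key, value in state.items():
        if torch.is_tensor(value) and value.is_floating_point():
            state[key] = value.double()
```

I have also seen `optimizer.load_state_dict(optimizer.state_dict())` suggested for this, since loading is supposed to recast state tensors to the parameters' current dtype, but I am not certain that behavior is guaranteed.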
