Hi all, I am training skorch models locally in a CUDA-enabled torch environment, and, if possible, I would like to transfer the entire model to CPU so that it can be registered and used for inference in a CPU-only environment. Is there a recommended way to accomplish this?
I'm pretty new to skorch and deep learning, so I'm not sure whether this is even possible, but if so, a skorch helper method for converting a model to CPU would be a nice-to-have feature.
Edit: I just noticed a very similar (old) issue that is still open at the time of posting (#553). The conversation there didn't seem to fully resolve. Let me know if I should post there or if reviving the topic here would be preferable.
If you train a model on GPU, save it, then load it on a machine without GPU, it should already work and be automatically transferred to CPU. Please give this a try and tell us if you encounter problems.
The thread you cited is a bit different, as it is about changing the device within the same process.