This section covers how to set up modeltime.gluonts to use GPUs.
You must have:

- One or more NVIDIA GPUs
- NVIDIA CUDA software installed

Refer to MXNet's Official GPU Documentation on using GPUs.
Create a Custom GluonTS Python Environment. You will need to install a version of
mxnet that is compatible with your CUDA software.
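A sketch of one way to do this with reticulate; the environment name and the CUDA-specific mxnet build (`mxnet-cu112` here, for CUDA 11.2) are examples, so substitute the build that matches your installed CUDA version:

```r
library(reticulate)

# Create a dedicated conda environment for GluonTS (name is an example)
conda_create("gluonts_gpu", python_version = "3.7")

# Install GluonTS plus a CUDA-matched mxnet build via pip.
# "mxnet-cu112" assumes CUDA 11.2 -- use the build for your CUDA version.
py_install(
    packages = c("mxnet-cu112", "gluonts", "numpy", "pandas"),
    envname  = "gluonts_gpu",
    pip      = TRUE
)
```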
Follow the instructions to set the path and check your custom GluonTS environment. You will need to verify that modeltime.gluonts is connecting to your GPU-enabled GluonTS Python Environment.
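As an illustration, assuming the custom environment created above, you can point modeltime.gluonts at it and confirm that mxnet sees your GPU(s). The path is an example, and the environment variable must be set before the package is loaded:

```r
# Point modeltime.gluonts at the custom environment's python.
# Set this BEFORE loading modeltime.gluonts; the path is an example.
Sys.setenv(GLUONTS_PYTHON = "~/miniconda3/envs/gluonts_gpu/bin/python")

library(modeltime.gluonts)

# Confirm which python environment is active
reticulate::py_config()

# Confirm mxnet can see your GPU(s)
mxnet <- reticulate::import("mxnet")
mxnet$context$num_gpus()
```

If `num_gpus()` reports zero, the installed mxnet build does not match your CUDA software.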
You’re now ready to start using GPUs. Just start training as normal.
model_fit_deepar <- deep_ar(
    id                    = "id",
    freq                  = "M",
    prediction_length     = 24,
    lookback_length       = 36,
    epochs                = 10,
    num_batches_per_epoch = 500,
    learn_rate            = 0.001,
    num_layers            = 3,
    num_cells             = 80,
    dropout               = 0.10
) %>%
    set_engine("gluonts_deepar") %>%
    fit(value ~ date + id, m750)
One final point is that if you have multiple GPUs, you can configure how the work is distributed using the MXNet Context (ctx). For example, if you have two GPUs, you can specify to use both of them by adding ctx to your set_engine() call:
mxnet <- reticulate::import("mxnet")

# Modify your set_engine()
... %>%
    set_engine("gluonts_deepar", ctx = list(mxnet$gpu(0), mxnet$gpu(1)))