# Create a retraining job

> Create a retraining job - How to add a job, manually or from a template, implementing a code-based
> retraining policy.

This Markdown file sits beside the HTML page at the same path (with a `.md` suffix). It summarizes the topic and lists links for tools and LLM context.

Companion generated at `2026-05-06T18:17:10.042659+00:00` (UTC).

## Primary page

- [Create a retraining job](https://docs.datarobot.com/en/docs/workbench/nxt-registry/nxt-jobs-workshop/nxt-create-jobs/nxt-create-retraining-job.html): Full documentation for this topic (HTML).

## Sections on this page

- [Add a new retraining job](https://docs.datarobot.com/en/docs/workbench/nxt-registry/nxt-jobs-workshop/nxt-create-jobs/nxt-create-retraining-job.html#add-a-new-retraining-job): In-page section heading.
- [Create a retraining job from a template](https://docs.datarobot.com/en/docs/workbench/nxt-registry/nxt-jobs-workshop/nxt-create-jobs/nxt-create-retraining-job.html#create-a-retraining-job-from-a-template): In-page section heading.

## Related documentation

- [NextGen UI documentation](https://docs.datarobot.com/en/docs/workbench/index.html): Linked from this page.
- [Registry](https://docs.datarobot.com/en/docs/workbench/nxt-registry/index.html): Linked from this page.
- [Jobs](https://docs.datarobot.com/en/docs/workbench/nxt-registry/nxt-jobs-workshop/index.html): Linked from this page.
- [Create custom jobs](https://docs.datarobot.com/en/docs/workbench/nxt-registry/nxt-jobs-workshop/nxt-create-jobs/index.html): Linked from this page.
- [Define runtime parameters in a metadata.yaml file](https://docs.datarobot.com/en/docs/workbench/nxt-registry/nxt-jobs-workshop/nxt-runtime-parameters-custom-jobs.html): Linked from this page.
- [Key values](https://docs.datarobot.com/en/docs/workbench/nxt-registry/nxt-jobs-workshop/nxt-key-values-custom-jobs.html): Linked from this page.
- [retraining policy](https://docs.datarobot.com/en/docs/workbench/nxt-console/nxt-mitigation/nxt-retraining.html): Linked from this page.

## Documentation content

Add a job, manually or from a template, that implements a code-based retraining policy. To view and add retraining jobs, navigate to the **Jobs > Retraining** tab, and then:

- To add a new retraining job manually, click **+ Add new retraining job** (or the minimized add button when the job panel is open).
- To create a retraining job from a template, open the menu next to the add button and then, under **Retraining**, click **Create new from template**.

The new job opens to the **Assemble** tab. Depending on the creation option you selected, proceed to the corresponding configuration steps below.

| Retraining job type | Description |
| --- | --- |
| Add new retraining job | Manually add a job implementing a code-based retraining policy. |
| Create new from template | Add a job, from a template provided by DataRobot, implementing a code-based retraining policy. |

> [!NOTE] Retraining jobs require metadata
> All retraining jobs require a `metadata.yaml` file to associate the job with a deployment and a retraining policy.
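For illustration only, such a file might look like the sketch below. The field names and parameter types are assumptions, not the authoritative schema; generate the real template from the **Files > Create metadata.yaml** option described later on this page.

```yaml
# Hypothetical sketch of a retraining job's metadata.yaml; field names
# are illustrative assumptions -- use the generated template as the
# authoritative format.
name: churn-model-retraining

runtimeParameterDefinitions:
  - fieldName: DEPLOYMENT_ID
    type: deployment
    description: The deployment the retraining policy is associated with.
  - fieldName: RETRAINING_POLICY_ID
    type: string
    description: The retraining policy this job implements.
```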

## Add a new retraining job

To manually add a job for code-based retraining:

1. On the **Assemble** tab for the new job, click the job name (or the edit icon) to enter a new job name, and then click confirm.
2. In the **Environment** section, select a **Base environment** for the job. The available drop-in environments depend on your DataRobot installation; however, the table below lists commonly available public drop-in environments with templates in the DRUM repository. Depending on your DataRobot installation, the Python version of these environments may vary, and additional non-public environments may be available for use.

    > [!NOTE] Drop-in environment security
    > Starting with the March 2025 Managed AI Platform release and the 11.0 Self-Managed AI Platform release, most general-purpose DataRobot custom model drop-in environments are security-hardened container images. When you require a security-hardened environment for running custom jobs, only shell code following the POSIX-shell standard is supported. Security-hardened environments following the POSIX-shell standard support a limited set of shell utilities.

    | Environment name | Compatibility & artifact file extension |
    | --- | --- |
    | Python 3.X | Python-based custom models and jobs. You are responsible for installing all required dependencies through the inclusion of a `requirements.txt` file in your model files. |
    | Python 3.X GenAI Agents | Generative AI models (**Text Generation** or **Vector Database** target type). |
    | Python 3.X ONNX Drop-In | ONNX models and jobs (`.onnx`). |
    | Python 3.X PMML Drop-In | PMML models and jobs (`.pmml`). |
    | Python 3.X PyTorch Drop-In | PyTorch models and jobs (`.pth`). |
    | Python 3.X Scikit-Learn Drop-In | Scikit-Learn models and jobs (`.pkl`). |
    | Python 3.X XGBoost Drop-In | Native XGBoost models and jobs (`.pkl`). |
    | Python 3.X Keras Drop-In | Keras models and jobs backed by TensorFlow (`.h5`). |
    | Java Drop-In | DataRobot Scoring Code models (`.jar`). |
    | R Drop-in Environment | R models trained using CARET (`.rds`). Due to the time required to install all libraries recommended by CARET, only model types that are also package names are installed (e.g., `brnn`, `glmnet`). To install additional required packages, make a copy of this environment and modify the Dockerfile. To decrease build times when you customize this environment, you can also remove unnecessary lines in the `# Install caret models` section, installing only what you need. Review the CARET documentation to check if your model's method matches its package name. (Log in to GitHub before clicking this link.) |

    > [!NOTE] scikit-learn
    > All Python environments contain scikit-learn to help with preprocessing (if necessary), but only scikit-learn can make predictions on `sklearn` models.
3. In the **Files** section, assemble the custom job. Drag files into the box, or use the options in this section to create or upload the files required to assemble a custom job:

    - **Choose from source / Upload**: Upload existing custom job files (`run.sh`, `metadata.yaml`, etc.) as **Local Files** or a **Local Folder**.
    - **Create**: Create a new file, empty or containing a template, and save it to the custom job:
        - **Create run.sh**: Creates a basic, editable example of an entry point file.
        - **Create metadata.yaml**: Creates a basic, editable example of a runtime parameters file.
        - **Create README.md**: Creates a basic, editable README file.
        - **Create job.py**: Creates a basic, editable Python job file to print runtime parameters and deployments.
        - **Create example job**: Combines all template files to create a basic, editable custom job. You can quickly configure the runtime parameters and run this example job.
        - **Create blank file**: Creates an empty file. Click the edit icon next to **Untitled** to provide a file name and extension, then add your custom contents. In the next step, you can identify files created this way, with a custom name and content, as the entry point. After you configure the new file, click **Save**.

    > [!NOTE] File replacement
    > If you add a new file with the same name as an existing file, when you click **Save**, the old file is replaced in the **Files** section.
4. In the **Settings** section, configure the **Entry point** shell (`.sh`) file for the job. If you've added a `run.sh` file, that file is the entry point; otherwise, you must select the entry point shell file from the dropdown list. The entry point file allows you to orchestrate multiple job files.
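As a minimal sketch of what an entry point can look like, the POSIX-sh script below reads a value from the environment and hands off to other job files. It is illustrative only: the `DEPLOYMENT_ID` variable and the `job.py` filename are assumptions, not values this page confirms, and security-hardened environments restrict you to a limited set of POSIX shell utilities.

```shell
#!/bin/sh
# Hypothetical run.sh entry point sketch (POSIX sh).
# Assumption: DEPLOYMENT_ID is supplied to the job as an environment
# variable (for example, via a runtime parameter); adjust to your setup.
set -e  # stop at the first failing command

DEPLOYMENT_ID="${DEPLOYMENT_ID:-unset}"
echo "Starting retraining job for deployment: $DEPLOYMENT_ID"

# Orchestrate additional job files from here, for example:
# python3 job.py
```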
5. In the **Resources** section, next to the section header, click **Edit** and configure the following:

    > [!NOTE] Preview
    > Custom job resource bundles are off by default. Contact your DataRobot representative or administrator for information on enabling this feature. Feature flag: Enable Resource Bundles.

    | Setting | Description |
    | --- | --- |
    | Resource bundle | Preview feature. Configure the resources the custom job uses to run. |
    | Network access | Configure the egress traffic of the custom job. Under **Network access**, select one of the following: **Public** (the default setting; the custom job can access any fully qualified domain name (FQDN) in a public network to leverage third-party services) or **None** (the custom job is isolated from the public network and cannot access third-party services). |

    > [!NOTE] Default network access
    > For the Managed AI Platform, the **Network access** setting is set to **Public** by default and the setting is configurable. For the Self-Managed AI Platform, the **Network access** setting is set to **None** by default and the setting is restricted; however, an administrator can change this behavior during DataRobot platform configuration. Contact your DataRobot representative or administrator for more information.
6. (Optional) Define **Runtime parameters**. Click **+ Add runtime parameter** to define a new runtime parameter by providing a **Name**, **Type**, **Value**, and, optionally, a **Description**. Alternatively, define runtime parameters in a `metadata.yaml` file; a template for this file is available from the **Files > Create** dropdown. For existing runtime parameters, click **Edit** to edit parameter values, remove parameters, or reset parameter values.
7. (Optional) Configure additional **Key values** for **Tags**, **Metrics**, **Training parameters**, and **Artifacts**.

## Create a retraining job from a template

To add a pre-made retraining job from a template:

> [!NOTE] Preview
> The jobs template gallery is on by default.
> 
> Feature flag: Enable Custom Jobs Template Gallery

1. In the **Add custom job from gallery** panel, click the job template you want to create a job from.
2. Review the job description, **Execution environment**, **Metadata**, and **Files**, and then click **Create custom job**. The job opens to the **Assemble** tab.
3. On the **Assemble** tab for the new job, click the job name (or the edit icon) to enter a new job name, and then click confirm.
4. In the **Environment** section, review the **Base environment** for the job, set by the template.
5. In the **Files** section, review the files added to the job by the template.
6. If you need to add new files, use the options in this section to create or upload the files required to assemble a custom job:

    - **Upload**: Upload existing custom job files (`run.sh`, `metadata.yaml`, etc.) as **Local Files** or a **Local Folder**.
    - **Create**: Create a new file, empty or containing a template, and save it to the custom job:
        - **Create run.sh**: Creates a basic, editable example of an entry point file.
        - **Create metadata.yaml**: Creates a basic, editable example of a runtime parameters file.
        - **Create README.md**: Creates a basic, editable README file.
        - **Create job.py**: Creates a basic, editable Python job file to print runtime parameters and deployments.
        - **Create example job**: Combines all template files to create a basic, editable custom job. You can quickly configure the runtime parameters and run this example job.
        - **Create blank file**: Creates an empty file. Click the edit icon next to **Untitled** to provide a file name and extension, then add your custom contents. In the next step, you can identify files created this way, with a custom name and content, as the entry point. After you configure the new file, click **Save**.

    > [!NOTE] File replacement
    > If you add a new file with the same name as an existing file, when you click **Save**, the old file is replaced in the **Files** section.

7. In the **Settings** section, review the **Entry point** shell (`.sh`) file for the job, added by the template (usually `run.sh`). The entry point file allows you to orchestrate multiple job files.
8. In the **Resources** section, review the default resource settings for the job. To modify the settings, next to the section header, click **Edit** and configure the following:

    > [!NOTE] Availability information
    > Custom job resource bundles are off by default. Contact your DataRobot representative or administrator for information on enabling this feature. Feature flag: Enable Resource Bundles.

    | Setting | Description |
    | --- | --- |
    | Resource bundle | Preview feature. Configure the resources the custom job uses to run. |
    | Network access | Configure the egress traffic of the custom job. Under **Network access**, select one of the following: **Public** (the default setting; the custom job can access any fully qualified domain name (FQDN) in a public network to leverage third-party services) or **None** (the custom job is isolated from the public network and cannot access third-party services). |

    > [!NOTE] Default network access
    > For the Managed AI Platform, the **Network access** setting is set to **Public** by default and the setting is configurable. For the Self-Managed AI Platform, the **Network access** setting is set to **None** by default and the setting is restricted; however, an administrator can change this behavior during DataRobot platform configuration. Contact your DataRobot representative or administrator for more information.

9. (Optional) Define **Runtime parameters**. Click **+ Add runtime parameter** to define a new runtime parameter by providing a **Name**, **Type**, **Value**, and, optionally, a **Description**. Alternatively, define runtime parameters in a `metadata.yaml` file; a template for this file is available from the **Files > Create** dropdown. For existing runtime parameters, click **Edit** to edit parameter values, remove parameters, or reset parameter values.
10. Configure additional **Key values** for **Tags**, **Metrics**, **Training parameters**, and **Artifacts**.

After you create a retraining job, you can add it to a deployment as a [retraining policy](https://docs.datarobot.com/en/docs/workbench/nxt-console/nxt-mitigation/nxt-retraining.html).
