On-premise users: click in-app to access the full platform documentation for your version of DataRobot.

Model workshop

The model workshop allows you to upload model artifacts to create, test, and deploy custom models from a centralized model management and deployment hub. Custom models are pre-trained, user-defined models that support most of DataRobot's MLOps features. DataRobot supports custom models built in a variety of languages, including Python, R, and Java. If you've created a model outside of DataRobot, define the model's content and environment in the model workshop to bring it into DataRobot.
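As a sketch, the content of a custom model is typically a folder containing the model artifact, an optional hook file, and any extra dependencies. The file names below are illustrative, not required:

```
model/
├── custom.py          # optional hook file (load_model, score, ...)
├── model.pkl          # serialized model artifact
└── requirements.txt   # additional dependencies for the model environment
```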

What are custom models?

Custom models are not custom DataRobot models; they are user-defined models created outside of DataRobot and assembled in the model workshop for access to deployment, monitoring, and governance. To support local development of the models you want to bring into DataRobot through the model workshop, the DataRobot Model Runner (DRUM) provides tools to assemble, debug, test, and run a model locally before you assemble it in DataRobot. Before adding a custom model to the workshop, DataRobot recommends reviewing the custom model assembly guidelines.
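As a minimal sketch of what such a model might contain, the hook file below defines the two DRUM hooks commonly used for an unstructured-free regression model: `load_model` to deserialize the artifact and `score` to produce predictions. The artifact name `model.pkl` is an assumption for illustration:

```python
# custom.py -- a minimal sketch of DRUM hooks for a regression model.
# The artifact file name "model.pkl" is illustrative; use whatever
# name your serialized model actually has.
import os
import pickle

import pandas as pd


def load_model(code_dir):
    """Called once at startup to deserialize the model artifact."""
    with open(os.path.join(code_dir, "model.pkl"), "rb") as f:
        return pickle.load(f)


def score(data, model, **kwargs):
    """Called for each prediction request; must return a DataFrame.

    For regression, DRUM expects a single "Predictions" column.
    """
    return pd.DataFrame({"Predictions": model.predict(data)})
```

With the folder assembled, a command along the lines of `drum score --code-dir ./model --input test.csv --target-type regression` can be used to test predictions locally before uploading to the workshop.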

The following topics describe how to manage custom model artifacts in DataRobot:

| Topic | Describes how to |
| ----- | ---------------- |
| Create custom models | Create custom models in the model workshop. |
| View and manage custom models | View, share, and delete custom models in the model workshop. |
| Test custom models in DataRobot | Test custom models in the model workshop. |
| Add custom model versions | Create a new version of a model after updating its file contents or settings. |
| Register a custom model | Add a custom model from the model workshop to the Registry. |
| Configure evaluation and moderation (Premium feature) | Configure evaluation and moderation guardrails for a custom text generation model in the model workshop. |
| Deploy LLMs from the Hugging Face Hub (Premium feature) | Create and deploy open-source LLMs from the Hugging Face Hub using a vLLM environment. |
| NVIDIA and NeMo Guardrails (Premium feature) | Build out end-to-end generative AI capabilities quickly by unlocking accelerated performance and leveraging NVIDIA open-source models and guardrails. |

Once a custom model is deployed to a prediction server managed by DataRobot, you can make predictions via the API and monitor the deployment.
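A prediction request can be sketched as below. The endpoint path follows the documented `predApi` pattern; the host, deployment ID, and credentials are placeholders you would replace with your deployment's values:

```python
# Sketch of building a CSV prediction request for a DataRobot deployment.
# All identifiers below are placeholders, not real credentials.
def build_prediction_request(base_url, deployment_id, token, datarobot_key):
    """Return the URL and headers for a CSV scoring request."""
    url = f"{base_url}/predApi/v1.0/deployments/{deployment_id}/predictions"
    headers = {
        "Content-Type": "text/csv; charset=UTF-8",
        "Authorization": f"Bearer {token}",
        # DataRobot-Key is required on the managed AI Platform.
        "DataRobot-Key": datarobot_key,
    }
    return url, headers


# Example usage (requires the `requests` package and a live prediction server):
# import requests
# url, headers = build_prediction_request(
#     "https://example.orm.datarobot.com",  # placeholder host
#     "DEPLOYMENT_ID", "API_TOKEN", "DATAROBOT_KEY",
# )
# with open("scoring_data.csv", "rb") as f:
#     response = requests.post(url, headers=headers, data=f)
# print(response.json())
```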


Updated January 3, 2025