Create external LLMs with code¶
The following notebook, designed for use with DataRobot Notebooks, outlines how you can build and validate an external LLM using the DataRobot Python client. DataRobot recommends downloading this notebook and uploading it for use in the platform.
Note: For self-managed users, code samples that reference app.datarobot.com
need to be changed to the appropriate URL for your instance.
Setup¶
The following steps outline the configuration necessary for integrating an external LLM with the DataRobot platform.
Verify that the following feature flags are enabled. Contact your DataRobot representative or administrator for information on enabling these features.
- Enable Notebooks Filesystem Management
- Enable Proxy models
- Enable Public Network Access for all Custom Models
- Enable the Injection of Runtime Parameters for Custom Models
- Enable Monitoring Support for Generative Models
- Enable Custom Inference Models
Create a new credential in the DataRobot Credentials Management tool:
- Set it as an "API Token" type credential.
- Set the display name to `OPENAI_API_KEY`.
- Place your OpenAI API key in the Token field.

Add the notebook environment variables `OPENAI_API_BASE`, `OPENAI_API_KEY`, `OPENAI_API_VERSION`, and `OPENAI_DEPLOYMENT_NAME`; set the values with your Azure OpenAI credentials.

Set the notebook session timeout to 180 minutes.
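Before continuing, you can confirm that the environment variables are visible to the notebook session with a quick check like the one below (a minimal sketch; the variable names match those configured above):

import os

# Report any of the Azure OpenAI variables configured above that are missing
# from the notebook session so they can be fixed before proceeding.
required_vars = [
    "OPENAI_API_BASE",
    "OPENAI_API_KEY",
    "OPENAI_API_VERSION",
    "OPENAI_DEPLOYMENT_NAME",
]
missing = [name for name in required_vars if not os.environ.get(name)]
assert not missing, f"Missing environment variables: {missing}"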
Install libraries¶
Install the following libraries:
!pip install "openai==1.35.4" "datarobot-drum==1.11.5" "datarobot-early-access" "datarobot-predict==1.8.3"
import datarobot as dr
from datarobot.models.genai.custom_model_llm_validation import CustomModelLLMValidation
Connect to DataRobot¶
Read more about different options for connecting to DataRobot from the Python client.
# endpoint = "https://app.datarobot.com/api/v2"
# token="<ADD_VALUE_HERE>"
# dr.Client(endpoint=endpoint, token=token)
dr.Client()
<datarobot.rest.RESTClientObject at 0x7fba503b3c70>
Create a directory for custom code¶
Create a directory called `custom_model` that will hold your OpenAI wrapper code.
!mkdir custom_model
Define hooks¶
The following cell defines the methods used to deploy a text generation custom model. These include loading the custom model and using the model for scoring.
import os

import pandas as pd
from openai import OpenAI

OPENAI_API_KEY = os.environ.get("OPENAI_API_KEY", "<ADD_VALUE_HERE>")
PROMPT_COLUMN_NAME = "promptText"
COMPLETION_COLUMN_NAME = "resultText"
ERROR_COLUMN_NAME = "error"


def load_model(*args, **kwargs):
    # Prefer the runtime parameter injected by DataRobot; fall back to the
    # environment variable when running locally.
    try:
        from datarobot_drum import RuntimeParameters

        api_key = RuntimeParameters.get("OPENAI_API_KEY")["apiToken"]
    except Exception:
        api_key = OPENAI_API_KEY
    return OpenAI(api_key=api_key)
def score(data, model, **kwargs):
    prompts = data[PROMPT_COLUMN_NAME].tolist()
    responses = []
    errors = []
    for prompt in prompts:
        try:
            response = model.chat.completions.create(
                model="gpt-4",
                messages=[
                    {"role": "user", "content": f"{prompt}"},
                ],
                temperature=0,
            )
            responses.append(response.choices[0].message.content)
            errors.append(None)
        except Exception as e:
            # Append a placeholder so all three output columns stay the same length.
            responses.append(None)
            errors.append(f"{e.__class__.__name__}: {str(e)}")
    return pd.DataFrame(
        {
            PROMPT_COLUMN_NAME: prompts,
            COMPLETION_COLUMN_NAME: responses,
            ERROR_COLUMN_NAME: errors,
        }
    )
Test hooks locally¶
Before proceeding with the deployment, use the cells below to test that the custom model hooks function correctly.
import pandas as pd

# Test the hooks locally
score(
    pd.DataFrame(
        {
            PROMPT_COLUMN_NAME: ["What is a large language model (LLM)?"],
        }
    ),
    load_model(),
)
Save model code¶
Save the hooks above as `custom_model/custom.py`. DataRobot executes this Python file using your credentials. It is a copy of the cell where you previously defined the hooks.
%%writefile custom_model/custom.py
import os

import pandas as pd
from openai import OpenAI

OPENAI_API_KEY = os.environ.get("OPENAI_API_KEY", "")
PROMPT_COLUMN_NAME = "promptText"
COMPLETION_COLUMN_NAME = "resultText"
ERROR_COLUMN_NAME = "error"


def load_model(*args, **kwargs):
    # Prefer the runtime parameter injected by DataRobot; fall back to the
    # environment variable when running locally.
    try:
        from datarobot_drum import RuntimeParameters

        api_key = RuntimeParameters.get("OPENAI_API_KEY")["apiToken"]
    except Exception:
        api_key = OPENAI_API_KEY
    return OpenAI(api_key=api_key)


def score(data, model, **kwargs):
    prompts = data[PROMPT_COLUMN_NAME].tolist()
    responses = []
    errors = []
    for prompt in prompts:
        try:
            response = model.chat.completions.create(
                model="gpt-4",
                messages=[
                    {"role": "user", "content": f"{prompt}"},
                ],
                temperature=0,
            )
            responses.append(response.choices[0].message.content)
            errors.append(None)
        except Exception as e:
            # Append a placeholder so all three output columns stay the same length.
            responses.append(None)
            errors.append(f"{e.__class__.__name__}: {str(e)}")
    return pd.DataFrame(
        {
            PROMPT_COLUMN_NAME: prompts,
            COMPLETION_COLUMN_NAME: responses,
            ERROR_COLUMN_NAME: errors,
        }
    )
Overwriting custom_model/custom.py
Run the hooks¶
Run the hooks directly to make sure they are working.
test_data = pd.DataFrame(
    {PROMPT_COLUMN_NAME: ["What is the weather like where you live?"]}
)
score(test_data, model=load_model())
|   | promptText | resultText | error |
|---|---|---|---|
| 0 | What is the weather like where you live? | As an artificial intelligence, I don't have a ... | None |
Save a requirements file and a model metadata file to describe the model's environment and usage.
%%writefile custom_model/requirements.txt
openai==1.35.4
datarobot-drum==1.11.5
Overwriting custom_model/requirements.txt
%%writefile custom_model/model-metadata.yaml
---
name: OpenAI gpt-4
type: inference
targetType: textgeneration
runtimeParameterDefinitions:
  - fieldName: OPENAI_API_KEY
    type: credential
    credentialType: api_token
    description: OpenAI API key
Writing custom_model/model-metadata.yaml
Test the code locally¶
The DataRobot `DRUM` library allows you to test the code as if DataRobot were running it, via a simple CLI. To do this, supply a test file and then run the command below.
# Create the test file
test_data.to_csv("custom_model/test_data.csv", index=False)
os.environ["TARGET_NAME"] = COMPLETION_COLUMN_NAME
!drum score --code-dir custom_model/ --target-type textgeneration --input custom_model/test_data.csv
                                         Predictions  ...  error
0  As an artificial intelligence, I don't have a ...  ...    NaN

[1 rows x 3 columns]
Create a custom model¶
The following code performs a few steps to register your code with DataRobot:
- Creates a custom model that will contain the code.
- Creates a custom model version with the code in the `custom_model` folder.
- Builds the environment to hold the model by installing `requirements.txt`.
- Tests the entire setup.
# List all existing base environments
execution_environments = dr.ExecutionEnvironment.list()

for execution_environment in execution_environments:
    if execution_environment.name == "[DataRobot] Python 3.11 GenAI":
        BASE_ENVIRONMENT = execution_environment
        environment_versions = dr.ExecutionEnvironmentVersion.list(
            execution_environment.id
        )
        break

BASE_ENVIRONMENT_VERSION = environment_versions[0]

print(BASE_ENVIRONMENT)
print(BASE_ENVIRONMENT_VERSION)
print(BASE_ENVIRONMENT.id)
ExecutionEnvironment('[DataRobot] Python 3.11 GenAI')
ExecutionEnvironmentVersion('v9')
64d2ba178dd3f0b1fa2162f0
CUSTOM_MODEL_NAME = "OpenAI Wrapper Model"

if CUSTOM_MODEL_NAME not in [c.name for c in dr.CustomInferenceModel.list()]:
    # Create a new custom model
    print("Creating new custom model")
    custom_model = dr.CustomInferenceModel.create(
        name=CUSTOM_MODEL_NAME,
        target_type=dr.TARGET_TYPE.TEXT_GENERATION,
        target_name=COMPLETION_COLUMN_NAME,
        description="Wrapper for OpenAI completion",
        language="Python",
        is_training_data_for_versions_permanently_enabled=True,  # for the latest updates to the Model Registry in 9.0
    )
else:
    print("Custom model exists")
    custom_model = [
        c for c in dr.CustomInferenceModel.list() if c.name == CUSTOM_MODEL_NAME
    ].pop()
Custom model exists
# Create a new custom model version in DataRobot
print("Upload new version of model to DataRobot")
model_version = dr.CustomModelVersion.create_clean(
    custom_model_id=custom_model.id,
    base_environment_id=BASE_ENVIRONMENT.id,
    files=[
        "./custom_model/custom.py",
        "./custom_model/requirements.txt",
        "./custom_model/model-metadata.yaml",
    ],
    network_egress_policy=dr.NETWORK_EGRESS_POLICY.PUBLIC,
    runtime_parameter_values=[
        dr.models.custom_model_version.RuntimeParameterValue(
            field_name="OPENAI_API_KEY",
            type="credential",
            # The ID of the credential created during setup
            value="65baa1e82e6c8bb16561f72d",
        )
    ],
)
Upload new version of model to DataRobot
build_info = dr.CustomModelVersionDependencyBuild.start_build(
    custom_model_id=custom_model.id,
    custom_model_version_id=model_version.id,
    max_wait=3600,  # Set a long timeout
)
Upload your test dataset to DataRobot for prediction testing. You can add more tests for specific responses if needed, as sketched after the upload below.
pred_test_dataset = dr.Dataset.create_from_in_memory_data(test_data)
pred_test_dataset.modify(name="LLM Test Data")
pred_test_dataset.update()
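To check for specific responses, you can score additional prompts through the local hooks and assert on the output before running the hosted test. The prompt and expected substring below are illustrative assumptions, not part of the original walkthrough:

# Illustrative extra check: score a prompt locally and assert that the call
# succeeded and the completion mentions an expected phrase.
extra_tests = pd.DataFrame({PROMPT_COLUMN_NAME: ["In one sentence, what is DataRobot?"]})
extra_results = score(extra_tests, model=load_model())

assert extra_results[ERROR_COLUMN_NAME].isna().all(), "LLM call returned an error"
assert "DataRobot" in extra_results[COMPLETION_COLUMN_NAME].iloc[0]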
Test the custom inference model in DataRobot¶
Next, use the environment to run the model with prediction test data to verify that the custom model is functional before deployment. To do this, upload the inference dataset for testing predictions.
After uploading the inference dataset, you can test the custom inference model. View a sample outcome of testing below.
# Test the new version in DataRobot
print("Run test of new version in DataRobot")
custom_model_test = dr.CustomModelTest.create(
    custom_model_id=custom_model.id,
    custom_model_version_id=model_version.id,
    dataset_id=pred_test_dataset.id,
    max_wait=3600,  # 1 hour timeout
)
custom_model_test.overall_status
HOST = "https://app.datarobot.com/"
for name, test in custom_model_test.detailed_status.items():
    print("Test: {}".format(name))
    print("Status: {}".format(test["status"]))
    print("Message: {}".format(test["message"]))
print(
    "Finished testing: "
    + HOST
    + "model-registry/custom-models/"
    + custom_model.id
    + "/assemble"
)
Run test of new version in DataRobot
Test: error_check
Status: succeeded
Message:
Test: null_value_imputation
Status: skipped
Message:
Test: long_running_service
Status: succeeded
Message:
Test: side_effects
Status: skipped
Message:
Test: prediction_verification_check
Status: skipped
Message:
Test: performance_check
Status: skipped
Message:
Test: stability_check
Status: skipped
Message:
Finished testing: https://app.datarobot.com/model-registry/custom-models/667c389cbdb621563b57cfe1/assemble
Register and deploy the LLM¶
Next, register the model with the DataRobot Model Registry. The Model Registry contains entries for all models: predictive or generative, built in DataRobot or hosted externally.
if CUSTOM_MODEL_NAME not in [m.name for m in dr.RegisteredModel.list()]:
    print("Creating New Registered Model")
    registered_model_version = dr.RegisteredModelVersion.create_for_custom_model_version(
        model_version.id,
        name=CUSTOM_MODEL_NAME,
        description="LLM Wrapper Example from DataRobot Docs",
        registered_model_name=CUSTOM_MODEL_NAME,
    )
else:
    print("Using Existing Model")
    registered_model = [
        m for m in dr.RegisteredModel.list() if m.name == CUSTOM_MODEL_NAME
    ].pop()
    registered_model_version = dr.RegisteredModelVersion.create_for_custom_model_version(
        model_version.id,
        name=CUSTOM_MODEL_NAME,
        description="LLM Wrapper Example from DataRobot Docs",
        registered_model_id=registered_model.id,
    )
Creating New Registered Model
Now, deploy the model. If you are using the DataRobot multi-tenant SaaS platform, you must also select a prediction server.
pred_server = dr.PredictionServer.list()[0]

MODEL_DEPLOYMENT_NAME = "LLM Wrapper Deployment"

if MODEL_DEPLOYMENT_NAME not in [d.label for d in dr.Deployment.list()]:
    deployment = dr.Deployment.create_from_registered_model_version(
        registered_model_version.id,
        label=MODEL_DEPLOYMENT_NAME,
        description="Your new deployment",
        max_wait=1000,
        # Only needed for the DataRobot multi-tenant SaaS platform
        default_prediction_server_id=pred_server.id,
    )
else:
    deployment = [
        d for d in dr.Deployment.list() if d.label == MODEL_DEPLOYMENT_NAME
    ].pop()
Test the deployment¶
Test that the deployment can successfully provide responses to prompts.
from datarobot_predict.deployment import predict

input_df = pd.DataFrame(
    {
        PROMPT_COLUMN_NAME: [
            "Give me some context on large language models and their applications?",
            "What is AutoML?",
        ],
    }
)

result_df, response_headers = predict(deployment, input_df)
result_df
|   | resultText_PREDICTION | DEPLOYMENT_APPROVAL_STATUS | promptText_OUTPUT | error_OUTPUT |
|---|---|---|---|---|
| 0 | Large language models are a type of artificial... | APPROVED | Give me some context on large language models ... | NaN |
| 1 | AutoML, or Automated Machine Learning, is the ... | APPROVED | What is AutoML? | NaN |
Validate the external LLM¶
The following methods execute and validate the external LLM. This example associates a Use Case with the validation and creates the vector database within that Use Case.

Set `use_case_id` to specify an existing Use Case, or create a new one.
use_case_id = "<ADD_VALUE_HERE>"
use_case = dr.UseCase.get(use_case_id)

# Uncomment to create a new Use Case instead
# use_case = dr.UseCase.create(name="External LLM Use Case")
`CustomModelLLMValidation.create` executes the validation of the external LLM. Be sure to provide the deployment ID.
external_llm_validation = CustomModelLLMValidation.create(
    prompt_column_name=PROMPT_COLUMN_NAME,
    target_column_name=COMPLETION_COLUMN_NAME,
    deployment_id=deployment.id,
    name="My External LLM",
    use_case=use_case,
    wait_for_completion=True,
)

assert external_llm_validation.validation_status == "PASSED"
print(f"External LLM Validation ID: {external_llm_validation.id}")
This external LLM can now be used in the GenAI E2E walkthrough, for example, to create an LLM blueprint.
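For reference, attaching the validated LLM to an LLM blueprint typically looks like the sketch below. This is a minimal sketch, not part of the original walkthrough: it assumes a playground is created in the same Use Case, and the "custom-model" LLM ID and "validation_id" settings key are assumptions based on GenAI client conventions; check the documentation for your release.

from datarobot.models.genai.llm_blueprint import LLMBlueprint
from datarobot.models.genai.playground import Playground

# Assumed workflow: create a playground in the Use Case, then reference the
# external LLM validation from an LLM blueprint via its validation ID.
playground = Playground.create(name="External LLM Playground", use_case=use_case)
llm_blueprint = LLMBlueprint.create(
    playground=playground,
    name="External LLM Blueprint",
    llm="custom-model",  # assumed ID for the custom model LLM type
    llm_settings={"validation_id": external_llm_validation.id},
)
print(llm_blueprint.id)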