The Portable Prediction Server is a premium feature exclusive to DataRobot MLOps. Contact your DataRobot representative or administrator for information on enabling this feature.
There are two model modes supported by the server: single-model (SM) and multi-model (MM). Use SM mode when only a single model package has been mounted into the Docker container inside the /opt/ml/model directory; use MM mode in all other cases. Although the two modes are compatible where predictions are concerned, SM mode provides a simplified HTTP API that does not require identifying a model package on disk and preloads the model into memory on startup.
The Docker container's filesystem should match one of the following layouts, depending on the mode.
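For illustration, the two layouts look like the sketch below; model.mlpkg stands in for whatever package file you exported, and the ID-named directories are examples:

```
# Single-model (SM) mode: exactly one package at the repository root
/opt/ml/model/
└── model.mlpkg

# Multi-model (MM) mode: one directory per package; the directory name
# becomes the package's :id in the /deployments routes described below
/opt/ml/model/
├── 5fc92906ad764dde6c3264fa/
│   ├── model.mlpkg
│   └── config.yml        # optional per-package configuration (see below)
└── 5fc92906ad764dde6c3264fb/
    └── model.mlpkg
```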
In multi-model mode, the Docker image exposes the following endpoints:
POST /deployments/:id/predictions scores a given dataset.
GET /deployments/:id/info returns information about the loaded model.
POST /deployments/:id uploads a model package to the container.
DELETE /deployments/:id deletes a model package from the container.
GET /deployments returns a list of model packages that are in the container.
GET /ping ensures the tech stack is up and running.
The :id included in the /deployments routes above refers to the unique identifier of a model package on disk: the ID is the name of the directory containing the model package. For example, given the MM layout sketched above, the package in /opt/ml/model/5fc92906ad764dde6c3264fa/ is addressed as /deployments/5fc92906ad764dde6c3264fa.
To connect your model package to a particular deployment, provide the ID of the deployment that should receive your prediction statistics.
In single-model (SM) mode, provide the deployment ID via the MLOPS_DEPLOYMENT_ID environment variable. In multi-model (MM) mode, prepare a config.yml file and place it alongside the model package with the desired deployment_id value:
deployment_id: 5fc92906ad764dde6c3264fa
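With the container running and its insecure listener published on the default port 8080, a prediction request in MM mode is a plain HTTP POST. The sketch below uses Python's requests library; the host, port, input file name, and package ID are illustrative, and it assumes a CSV payload sent with a text/csv content type:

```python
import requests

PPS_URL = "http://localhost:8080"        # default insecure listener (illustrative host)
PACKAGE_ID = "5fc92906ad764dde6c3264fa"  # directory name under /opt/ml/model

# Optional: confirm the tech stack is up before scoring.
requests.get(f"{PPS_URL}/ping").raise_for_status()

# Score a CSV dataset against the package (multi-model route).
with open("scoring_data.csv", "rb") as f:  # hypothetical input file
    response = requests.post(
        f"{PPS_URL}/deployments/{PACKAGE_ID}/predictions",
        data=f,
        headers={"Content-Type": "text/csv"},
    )

response.raise_for_status()
print(response.json())
```

In SM mode, the same request goes to POST /predictions with no package ID in the path.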
If you want to track accuracy, configure it for the deployment, and then provide extra settings for the running model:
For SM mode, set the following environment variables:
If you are running PPS images that were downloaded previously, these parameters are not available until the PPS image is manually updated. They are available starting from:
Managed AI Platform (SaaS): August 2021
Self-Managed AI Platform: v7.2
By default, PPS serves predictions over an insecure listener on port 8080 (clear-text HTTP over TCP). You can instead serve predictions over a secure listener on port 8443 (HTTP over TLS/SSL, or simply HTTPS). When the secure listener is enabled, the insecure listener becomes unavailable.
Note
You cannot configure PPS to be available on both ports simultaneously; it is either HTTP on 8080 or HTTPS on 8443.
The configuration is accomplished using the environment variables described below:
PREDICTION_API_TLS_ENABLED: The master flag that enables the HTTPS listener on port 8443 and disables the HTTP listener on port 8080.
Default: false (HTTPS disabled)
Valid values (case-insensitive): true, yes, y, and 1 are interpreted as true; false, no, n, and 0 are interpreted as false.
Note
The flag value must be interpreted as true to enable TLS. All other PREDICTION_API_TLS_* environment variables (if passed) are ignored if this setting is not enabled.
PREDICTION_API_TLS_CERTIFICATE: PEM-formatted content of the TLS/SSL certificate.
Required: Yes if PREDICTION_API_TLS_ENABLED is true, otherwise no.
PREDICTION_API_TLS_CERTIFICATE_KEY_PASSWORD: Passphrase for the secret certificate key passed in PREDICTION_API_TLS_CERTIFICATE_KEY.
Required: Only if the certificate key was created with a passphrase.
PREDICTION_API_TLS_PROTOCOLS: Encryption protocol implementation(s) to use.
Default: TLSv1.2 TLSv1.3
Valid values: SSLv2|SSLv3|TLSv1|TLSv1.1|TLSv1.2|TLSv1.3, or any space-separated combination of these values.
Warning
As of August 2021, all implementations except TLSv1.2 and TLSv1.3 are considered deprecated and/or insecure. DataRobot highly recommends using only these implementations. New installations may consider using TLSv1.3 exclusively as it is the most recent and secure TLS version.
PREDICTION_API_TLS_CIPHERS: List of cipher suites to use.
TLS support is an advanced feature. The cipher suites list has been carefully selected to follow the latest recommendations and current best practices. DataRobot does not recommend overriding it.
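As a sketch, enabling the secure listener amounts to settings along these lines (placeholders stand in for the PEM material; PREDICTION_API_TLS_CERTIFICATE_KEY carries the private key whose passphrase, if any, goes in PREDICTION_API_TLS_CERTIFICATE_KEY_PASSWORD):

```
PREDICTION_API_TLS_ENABLED=true
PREDICTION_API_TLS_CERTIFICATE=<PEM-formatted server certificate>
PREDICTION_API_TLS_CERTIFICATE_KEY=<PEM-formatted private key>
PREDICTION_API_TLS_PROTOCOLS=TLSv1.2 TLSv1.3
```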
The following environment variables provide additional PPS configuration:

PREDICTION_API_WORKERS_NUMBER: Sets the number of workers to spin up. This option controls the number of HTTP requests the Prediction API can process simultaneously. Typically, set this to the number of CPU cores available to the container.
Default: 1

PREDICTION_API_MODEL_REPOSITORY_PATH: Sets the path to the directory where DataRobot should look for model packages. If PREDICTION_API_MODEL_REPOSITORY_PATH points to a directory containing a single model package at its root, PPS assumes single-model mode; otherwise, it assumes multi-model mode.
Default: /opt/ml/model/

PREDICTION_API_PRELOAD_MODELS_ENABLED: Requires every worker to proactively preload all mounted models on start. This helps eliminate cache misses on the first requests after the server starts, while the cache is still "cold." See also PREDICTION_API_SCORING_MODEL_CACHE_MAXSIZE to eliminate cache misses completely.
Default: false in multi-model mode; true in single-model mode

PREDICTION_API_SCORING_MODEL_CACHE_MAXSIZE: The maximum number of scoring models to keep in each worker's RAM cache, so they do not have to be loaded on demand for each request. In practice, the default setting is low. If the server running PPS has enough RAM, set this to a value greater than the total number of premounted models to fully leverage caching and avoid cache misses. Note that each worker's cache is independent, so each model is copied into each worker's cache. Also consider enabling PREDICTION_API_PRELOAD_MODELS_ENABLED in multi-model mode to avoid cache misses.

By default, the PPS periodically attempts to re-read deployment information from an mlpkg in case the package was re-uploaded via HTTP or its associated configuration changed. If you do not plan to update the mlpkg or its configuration after the PPS starts, consider setting this interval to a very high value (e.g., 1000000) to reduce the number of read attempts; this helps reduce latency for some requests.
Default: 60

PREDICTION_API_MONITORING_ENABLED: Sets whether DataRobot offloads data monitoring. If true, the Prediction API offloads monitoring data to the monitoring agent.
Default: false

PREDICTION_API_RPC_DUAL_COMPUTE_ENABLED: For self-managed 8.x installations, this setting requires the PPS to run both Python 2 and Python 3 interpreters; the PPS then automatically determines which version a model requires based on the Python version it was trained on. When this setting is enabled, PYTHON3_SERVICES is redundant and ignored. Note that running both interpreters requires additional RAM.
Default: false

PYTHON3_SERVICES: Only enable this setting when PREDICTION_API_RPC_DUAL_COMPUTE_ENABLED is disabled and every model was trained on Python 3. You can save approximately 400MB of RAM by excluding the Python 2 interpreter service from the container.
Default: None
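As an illustrative example, a host with four CPU cores premounting four models might combine these settings to keep every model hot in every worker (values are assumptions, not recommendations):

```
PREDICTION_API_WORKERS_NUMBER=4
PREDICTION_API_PRELOAD_MODELS_ENABLED=true
PREDICTION_API_SCORING_MODEL_CACHE_MAXSIZE=8
```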
Python support for self-managed installations
For Self-Managed installations before 9.0, the PPS does not support Python 3 models by default; therefore, setting PYTHON3_SERVICES to true is required to use Python 3 models in those installations.
If you are running an 8.x version of DataRobot, you can enable dual-compute mode (PREDICTION_API_RPC_DUAL_COMPUTE_ENABLED='true') to support both Python 2 and Python 3 models; however, this configuration requires an extra 400MB of RAM. To reduce the RAM footprint when all models use the same Python version, avoid enabling dual-compute mode: if all models are trained on Python 3, enable Python 3 services (PYTHON3_SERVICES='true'); if all models are trained on Python 2, no additional environment variable is needed, as the default interpreter is Python 2.
The prediction routes (POST /predictions in single-model mode and POST /deployments/:id/predictions in multi-model mode) support the same query arguments and HTTP headers as their standard route counterparts, with a few exceptions. As with the regular Dedicated Prediction API, the exact list of supported arguments depends on the deployed model. Below is the list of general query arguments supported by every deployment.
passthroughColumns (list of strings): (Optional) Controls which columns from the scoring dataset are exposed (copied over) in the prediction response. The request may contain zero, one, or more columns; there is no limit on how many column names you can pass. Column names must be passed as UTF-8 bytes and must be percent-encoded (see the HTTP standard for this requirement). Make sure to use the exact name of a column as the value.
Example: ?passthroughColumns=colA&passthroughColumns=colB

passthroughColumnsSet (string): (Optional) Controls which columns from the scoring dataset are exposed (copied over) in the prediction response. The only possible value is all; if passed, all columns from the scoring dataset are exposed.
Example: ?passthroughColumnsSet=all

decimalsNumber (int): (Optional) Configures the precision of floats in prediction results by setting the number of digits after the decimal point. Trailing zeros are not added, so if a value has fewer digits after the decimal point, its precision will be less than decimalsNumber.
Example: ?decimalsNumber=15
Note the following:
You can't pass the passthroughColumns and passthroughColumnsSet parameters in the same request.
While there is no limit on the number of column names you can pass with the passthroughColumns query parameter, there is a limit on the size of the HTTP request line (currently 8192 bytes).
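As a sketch of how these arguments are sent (reusing the hypothetical server and package from the earlier example; the column names are made up), note that requests percent-encodes values and repeats list-valued keys automatically:

```python
import requests

PPS_URL = "http://localhost:8080"        # illustrative host and port
PACKAGE_ID = "5fc92906ad764dde6c3264fa"  # illustrative package ID

# Repeated passthroughColumns keys plus a float-precision setting.
params = {
    "passthroughColumns": ["row_id", "customer_name"],  # hypothetical columns
    "decimalsNumber": 4,
}

with open("scoring_data.csv", "rb") as f:  # hypothetical input file
    response = requests.post(
        f"{PPS_URL}/deployments/{PACKAGE_ID}/predictions",
        params=params,
        data=f,
        headers={"Content-Type": "text/csv"},
    )
print(response.json())
```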
You can parametrize the Prediction Explanations prediction request with the following query parameters:
Note
To trigger Prediction Explanations, the request must include maxExplanations=N, where N is greater than 0.
maxExplanations (int or string): (Optional) Limits the number of explanations returned by the server. Previously called maxCodes (deprecated). For SHAP explanations only, the special constant all is also accepted.
Examples: ?maxExplanations=5, ?maxExplanations=all

thresholdLow (float): (Optional) The Prediction Explanation low threshold. Predictions must be below this value (or above the thresholdHigh value) for Prediction Explanations to be computed.
Example: ?thresholdLow=0.678

thresholdHigh (float): (Optional) The Prediction Explanation high threshold. Predictions must be above this value (or below the thresholdLow value) for Prediction Explanations to be computed.
Example: ?thresholdHigh=0.345

excludeAdjustedPredictions (bool): (Optional) Includes or excludes exposure-adjusted predictions in prediction responses if exposure was used during model building. The default value is true (exclude exposure-adjusted predictions).
Example: ?excludeAdjustedPredictions=true

explanationNumTopClasses (int): (Optional) Multiclass explanations only. The number of top predicted classes for each row that will be explained. Defaults to 1. Mutually exclusive with explanationClassNames.
Example: ?explanationNumTopClasses=5

explanationClassNames (list of strings): (Optional) Multiclass explanations only. A list of class names to explain for each row. Class names must be passed as UTF-8 bytes and must be percent-encoded (see the HTTP standard for this requirement). Mutually exclusive with explanationNumTopClasses. By default, explanationNumTopClasses=1 is assumed.
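For instance, a request that triggers Prediction Explanations might look like the sketch below (hypothetical server, package, and threshold values):

```python
import requests

PPS_URL = "http://localhost:8080"        # illustrative host and port
PACKAGE_ID = "5fc92906ad764dde6c3264fa"  # illustrative package ID

# maxExplanations > 0 is what triggers Prediction Explanations.
params = {
    "maxExplanations": 5,
    "thresholdLow": 0.2,
    "thresholdHigh": 0.8,
}

with open("scoring_data.csv", "rb") as f:  # hypothetical input file
    response = requests.post(
        f"{PPS_URL}/deployments/{PACKAGE_ID}/predictions",
        params=params,
        data=f,
        headers={"Content-Type": "text/csv"},
    )
print(response.json())
```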
You can parametrize the time series prediction request using the following query parameters:
forecastPoint (ISO-8601 string): An ISO-8601 formatted DateTime string, without timezone, representing the forecast point. This parameter cannot be used if predictionsStartDate and predictionsEndDate are passed.
Example: ?forecastPoint=2013-12-20T01:30:00

relaxKnownInAdvanceFeaturesCheck (bool): true or false. When true, missing values for known-in-advance features are allowed in the forecast window at prediction time. The default value is false. Note that the absence of known-in-advance values can negatively impact prediction quality.
Example: ?relaxKnownInAdvanceFeaturesCheck=true

predictionsStartDate (ISO-8601 string): The time in the dataset at which bulk predictions begin generating. This parameter must be defined together with predictionsEndDate. The forecastPoint parameter cannot be used if predictionsStartDate and predictionsEndDate are passed.
Example: ?predictionsStartDate=2013-12-20T01:30:00Z

predictionsEndDate (ISO-8601 string): The time in the dataset at which bulk predictions stop generating. This parameter must be defined together with predictionsStartDate. The forecastPoint parameter cannot be used if predictionsStartDate and predictionsEndDate are passed.
Example: ?predictionsEndDate=2014-01-20T01:30:00Z
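A bulk time series request using a prediction date range could look like this sketch (hypothetical server, package, and dates; remember that forecastPoint must be omitted when the start/end dates are passed):

```python
import requests

PPS_URL = "http://localhost:8080"        # illustrative host and port
PACKAGE_ID = "5fc92906ad764dde6c3264fa"  # illustrative package ID

# predictionsStartDate and predictionsEndDate must be passed together.
params = {
    "predictionsStartDate": "2013-12-20T01:30:00Z",
    "predictionsEndDate": "2014-01-20T01:30:00Z",
}

with open("scoring_data.csv", "rb") as f:  # hypothetical input file
    response = requests.post(
        f"{PPS_URL}/deployments/{PACKAGE_ID}/predictions",
        params=params,
        data=f,
        headers={"Content-Type": "text/csv"},
    )
print(response.json())
```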
The Docker image can also read the configuration options listed above from a file mounted at /opt/ml/config. The file must contain <key>=<value> pairs, where each key is the name of the corresponding environment variable.
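A sketch of such a file, with illustrative values drawn from the options above:

```
PREDICTION_API_MONITORING_ENABLED=true
PREDICTION_API_PRELOAD_MODELS_ENABLED=true
PREDICTION_API_MODEL_REPOSITORY_PATH=/opt/ml/model/
```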