
Upgrade from DataRobot 10.x to the latest release

This section provides a checklist of tasks that are required when upgrading from any DataRobot 10.x version to the latest release version.

Pre-upgrade actions

The following steps are required in addition to the standard upgrade procedures for the PCS and DataRobot application charts.

Set DataRobot namespace

export NAMESPACE="DATAROBOT_NAMESPACE" 

Note

Replace DATAROBOT_NAMESPACE with the DataRobot namespace.

Manual CRD upgrade procedure

Update CRD labels and annotations:

kubectl label crd/notebooks.notebook.datarobot.com app.kubernetes.io/managed-by=Helm --overwrite
kubectl annotate crd/notebooks.notebook.datarobot.com meta.helm.sh/release-name=dr --overwrite
kubectl annotate crd/notebooks.notebook.datarobot.com meta.helm.sh/release-namespace=${NAMESPACE} --overwrite

kubectl label crd/notebookvolumes.notebook.datarobot.com app.kubernetes.io/managed-by=Helm --overwrite
kubectl annotate crd/notebookvolumes.notebook.datarobot.com meta.helm.sh/release-name=dr --overwrite
kubectl annotate crd/notebookvolumes.notebook.datarobot.com meta.helm.sh/release-namespace=${NAMESPACE} --overwrite

kubectl label crd/notebookvolumesnapshots.notebook.datarobot.com app.kubernetes.io/managed-by=Helm --overwrite
kubectl annotate crd/notebookvolumesnapshots.notebook.datarobot.com meta.helm.sh/release-name=dr --overwrite
kubectl annotate crd/notebookvolumesnapshots.notebook.datarobot.com meta.helm.sh/release-namespace=${NAMESPACE} --overwrite

kubectl label crd/lrs.lrs.datarobot.com app.kubernetes.io/managed-by=Helm --overwrite
kubectl annotate crd/lrs.lrs.datarobot.com meta.helm.sh/release-name=dr    --overwrite
kubectl annotate crd/lrs.lrs.datarobot.com meta.helm.sh/release-namespace=${NAMESPACE} --overwrite

kubectl label crd/executionenvironments.predictions.datarobot.com app.kubernetes.io/managed-by=Helm --overwrite
kubectl annotate crd/executionenvironments.predictions.datarobot.com meta.helm.sh/release-name=dr --overwrite
kubectl annotate crd/executionenvironments.predictions.datarobot.com meta.helm.sh/release-namespace=${NAMESPACE} --overwrite

kubectl label crd/inferenceservers.predictions.datarobot.com app.kubernetes.io/managed-by=Helm --overwrite
kubectl annotate crd/inferenceservers.predictions.datarobot.com meta.helm.sh/release-name=dr --overwrite
kubectl annotate crd/inferenceservers.predictions.datarobot.com meta.helm.sh/release-namespace=${NAMESPACE} --overwrite 

Note

If you upgrade from an earlier 10.x version, some CRDs may be missing; this does not affect the upgrade.
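
To see which of the DataRobot CRDs are present before applying the labels, you can list them first (a quick check; adjust the grep pattern to your environment if needed):

kubectl get crd | grep datarobot.com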

Power off DataRobot application pods

kubectl scale statefulset -l app.kubernetes.io/instance=dr --replicas=0 -n ${NAMESPACE} 
kubectl scale deployment -l app.kubernetes.io/instance=dr --replicas=0 -n ${NAMESPACE} 
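
To confirm that the application pods have terminated before continuing, you can list the remaining pods for the dr release (an optional check; the list should eventually be empty):

kubectl -n ${NAMESPACE} get pods -l app.kubernetes.io/instance=dr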

Review Custom CA bundle configuration

If you are using Trust Manager and deploying custom CA bundles to manage root certificates across the DataRobot platform, you must verify the structure of the ca-cert-bundle.yaml file. For details, see the "upgrade notes" section of Configuring Custom CA.

Persistent Critical Services (PCS) upgrade

This step is required for installations where PCS is deployed within Kubernetes. It is not applicable for external PCS.
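
If you are unsure whether PCS is deployed within the cluster, one way to check is to look for PCS-labeled StatefulSets in the DataRobot namespace (an empty result suggests external PCS):

kubectl -n ${NAMESPACE} get statefulset -l app.kubernetes.io/instance=pcs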

Update Password length

The DataRobot application has moved to Chainguard images, which requires the password length to be updated.

MongoDB

To update password length for MongoDB:

export MONGODB_ROOT_USER="pcs-mongodb"
export MONGODB_OLD_ROOT_PASSWORD=$(kubectl get secret --namespace ${NAMESPACE} pcs-mongo -o jsonpath="{.data.mongodb-root-password}" | base64 -d)
export MONGODB_NEW_ROOT_PASSWORD=$(openssl rand -base64 24 | tr -dc 'A-Za-z0-9' | head -c 18 )
echo "New MongoDB password: ${MONGODB_NEW_ROOT_PASSWORD}"

# update the password in mongodb
kubectl exec -i -t -n ${NAMESPACE} pcs-mongo-0 -- bash -c "mongosh --username $MONGODB_ROOT_USER --password $MONGODB_OLD_ROOT_PASSWORD --host pcs-mongo-headless --authenticationDatabase admin  --eval \"use admin;\" --eval \"db.changeUserPassword('$MONGODB_ROOT_USER', '$MONGODB_NEW_ROOT_PASSWORD')\" "
kubectl patch secret pcs-mongo -n ${NAMESPACE} -p "{\"stringData\":{\"mongodb-root-password\":\"$MONGODB_NEW_ROOT_PASSWORD\"}}" 
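
Optionally, confirm that the new password works before continuing. This is a hedged check that reuses the connection parameters from the command above:

kubectl exec -i -t -n ${NAMESPACE} pcs-mongo-0 -- bash -c "mongosh --username $MONGODB_ROOT_USER --password $MONGODB_NEW_ROOT_PASSWORD --host pcs-mongo-headless --authenticationDatabase admin --eval \"db.adminCommand({ ping: 1 })\""
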
RabbitMQ

To update password length for RabbitMQ:

export RABBITMQ_NEWSECRET=$(openssl rand -base64 24 | tr -dc 'A-Za-z0-9' | head -c 18 | base64)
echo "New RabbitMQ password: ${RABBITMQ_NEWSECRET}"
kubectl patch secret pcs-rabbitmq -n ${NAMESPACE} -p "{\"data\":{\"rabbitmq-password\":\"$RABBITMQ_NEWSECRET\"}}" 
Redis

To update password length for Redis:

export REDIS_NEWSECRET=$(openssl rand -base64 24 | tr -dc 'A-Za-z0-9' | head -c 18 | base64)
echo "New Redis password: ${REDIS_NEWSECRET}"
kubectl patch secret pcs-redis -n ${NAMESPACE} -p "{\"data\":{\"redis-password\":\"$REDIS_NEWSECRET\"}}" 
Elasticsearch

To update password length for Elasticsearch:

export ELASTICSEARCH_NEWSECRET=$(openssl rand -base64 24 | tr -dc 'A-Za-z0-9' | head -c 18 | base64)
echo "New Elasticsearch password: ${ELASTICSEARCH_NEWSECRET}"
kubectl patch secret pcs-elasticsearch -n ${NAMESPACE} -p "{\"data\":{\"elasticsearch-password\":\"$ELASTICSEARCH_NEWSECRET\"}}" 
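
To confirm that a patched secret holds the expected value, you can decode it back (shown here for Redis; the same pattern applies to the other secrets):

kubectl get secret pcs-redis -n ${NAMESPACE} -o jsonpath='{.data.redis-password}' | base64 -d
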
Restart DB pods

To propagate the updated passwords, scale the StatefulSet (STS) for each affected database down to 0, then back up to its original replica count:

export MONGODB_REPLICAS=$(kubectl -n ${NAMESPACE} get statefulset pcs-mongo -o=jsonpath='{.spec.replicas}')
export RABBITMQ_REPLICAS=$(kubectl -n ${NAMESPACE} get statefulset pcs-rabbitmq -o=jsonpath='{.spec.replicas}')
export REDIS_REPLICAS=$(kubectl -n ${NAMESPACE} get statefulset pcs-redis-node -o=jsonpath='{.spec.replicas}')
export ES_REPLICAS=$(kubectl -n ${NAMESPACE} get statefulset pcs-elasticsearch-master -o=jsonpath='{.spec.replicas}')

kubectl -n ${NAMESPACE} scale statefulset pcs-mongo --replicas=0
kubectl -n ${NAMESPACE} scale statefulset pcs-rabbitmq --replicas=0
kubectl -n ${NAMESPACE} scale statefulset pcs-redis-node --replicas=0
kubectl -n ${NAMESPACE} scale statefulset pcs-elasticsearch-master --replicas=0

# wait for all of the database pods to stop
sleep 120

kubectl -n ${NAMESPACE} scale statefulset pcs-mongo --replicas=${MONGODB_REPLICAS}
kubectl -n ${NAMESPACE} scale statefulset pcs-rabbitmq --replicas=${RABBITMQ_REPLICAS}
kubectl -n ${NAMESPACE} scale statefulset pcs-redis-node --replicas=${REDIS_REPLICAS}
kubectl -n ${NAMESPACE} scale statefulset pcs-elasticsearch-master --replicas=${ES_REPLICAS} 
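
To wait until the databases are back up, you can check the rollout status of each StatefulSet (an optional convenience; the names match the scale commands above):

kubectl -n ${NAMESPACE} rollout status statefulset pcs-mongo
kubectl -n ${NAMESPACE} rollout status statefulset pcs-rabbitmq
kubectl -n ${NAMESPACE} rollout status statefulset pcs-redis-node
kubectl -n ${NAMESPACE} rollout status statefulset pcs-elasticsearch-master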

MongoDB

Remove Arbiter

Check the MongoDB replicaset configuration:

kubectl -n ${NAMESPACE} exec -it pcs-mongo-0 -- bash -c 'mongosh -u ${MONGODB_ROOT_USER} -p ${MONGODB_ROOT_PASSWORD} --eval "rs.status();"'|grep -E '^\s*name:|^\s*state|^\s*syncSourceHost' 

Identify the Arbiter node hostname, which appears among the members of the replica set, and remove it.

If the Arbiter node is present, copy its hostname, including the port, and execute the following command:

export ARBITER="pcs-mongo-arbiter-0.pcs-mongo-arbiter-headless.${NAMESPACE}.svc.cluster.local:27017"
kubectl -n ${NAMESPACE} exec -it pcs-mongo-0 -- bash -c "mongosh admin -u \${MONGODB_ROOT_USER} -p \${MONGODB_ROOT_PASSWORD} --authenticationDatabase admin --eval \"rs.remove(\\\"${ARBITER}\\\");\"" 

Note

  • Verify that ARBITER matches the Arbiter node hostname.
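
After removing the Arbiter, you can re-run the status check from above to confirm that it no longer appears in the member list:

kubectl -n ${NAMESPACE} exec -it pcs-mongo-0 -- bash -c 'mongosh -u ${MONGODB_ROOT_USER} -p ${MONGODB_ROOT_PASSWORD} --eval "rs.status();"'|grep -E '^\s*name:|^\s*state'
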
Upgrade MongoDB from 5 to 6

To check which MongoDB version you are currently using, execute the following command:

kubectl -n ${NAMESPACE} get sts pcs-mongo -o yaml|grep image: 

If the returned image version is 5.x, proceed with the upgrade to MongoDB version 6.x.

Scale down and edit the MongoDB pcs-mongo StatefulSet, setting the image to mirror_chainguard_datarobot.com_mongodb-bitnami-fips:6.0. You must specify the full image URL, including the image repository.

If you are planning to pull mirror_chainguard_datarobot.com_mongodb-bitnami-fips:6.0 from the docker.io registry:

export MONGODB_REPLICAS=$(kubectl -n ${NAMESPACE} get statefulset pcs-mongo -o=jsonpath='{.spec.replicas}')

kubectl -n ${NAMESPACE} scale statefulset pcs-mongo --replicas=0
kubectl -n ${NAMESPACE} patch statefulset pcs-mongo --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/image", "value":"docker.io/datarobot/mirror_chainguard_datarobot.com_mongodb-bitnami-fips:6.0"}]'

export DOCKERHUB_PASSWORD="MY_DOCKERHUB_PASSWORD"
kubectl create secret docker-registry dockerhub \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=datarobotprovisional \
  --docker-password="${DOCKERHUB_PASSWORD}" \
  --namespace=${NAMESPACE}
kubectl patch statefulset pcs-mongo -n ${NAMESPACE} --type='json' \
  -p='[{"op": "add", "path": "/spec/template/spec/imagePullSecrets", "value":[{"name":"dockerhub"}]}]'

kubectl -n ${NAMESPACE} scale statefulset pcs-mongo --replicas=${MONGODB_REPLICAS} 

Note

Replace MY_DOCKERHUB_PASSWORD with your Docker Hub password.

If you are planning to pull mirror_chainguard_datarobot.com_mongodb-bitnami-fips:6.0 from a custom OCI registry:

export MONGODB_REPLICAS=$(kubectl -n ${NAMESPACE} get statefulset pcs-mongo -o=jsonpath='{.spec.replicas}')

kubectl -n ${NAMESPACE} scale statefulset pcs-mongo --replicas=0
kubectl -n ${NAMESPACE} patch statefulset pcs-mongo --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/image", "value":"DOCKER_REGISTRY_URL/datarobot/mirror_chainguard_datarobot.com_mongodb-bitnami-fips:6.0"}]'

kubectl -n ${NAMESPACE} scale statefulset pcs-mongo --replicas=${MONGODB_REPLICAS} 

Note

Replace DOCKER_REGISTRY_URL with the URL of your OCI registry.

Configure Feature Compatibility if needed:

kubectl -n ${NAMESPACE} exec -it pcs-mongo-0 -- bash -c 'mongosh admin -u ${MONGODB_ROOT_USER} -p ${MONGODB_ROOT_PASSWORD} --authenticationDatabase admin --eval "db.adminCommand({ setFeatureCompatibilityVersion: \"6.0\" })"' 

Check Feature Compatibility:

kubectl -n ${NAMESPACE} exec -it pcs-mongo-0 -- bash -c 'mongosh admin -u ${MONGODB_ROOT_USER} -p ${MONGODB_ROOT_PASSWORD} --eval "db.adminCommand({ getParameter: 1, featureCompatibilityVersion: 1 })"' 

PostgreSQL

Remove wal_keep_segments from the ConfigMap

Update the PCS ConfigMap for PostgreSQL to remove the wal_keep_segments parameter if present.

kubectl -n ${NAMESPACE} get configmap pcs-postgresql-configuration && kubectl -n ${NAMESPACE} get configmap pcs-postgresql-configuration -o yaml | sed '/wal_keep_segments/d' | kubectl apply -f - 

Scale pcs-postgresql replicas as follows:

kubectl -n ${NAMESPACE} scale sts pcs-postgresql --replicas=0
kubectl -n ${NAMESPACE} scale sts pcs-postgresql --replicas=1 

Check again to make sure pcs-postgresql-0 is the primary.
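
One way to check which node is currently the primary is to reuse the repmgr cluster status command (the same command appears later in this guide):

kubectl exec -i -t -n ${NAMESPACE} pcs-postgresql-0 -c postgresql -- bash -c "/opt/bitnami/scripts/postgresql-repmgr/entrypoint.sh repmgr cluster show -f /opt/bitnami/repmgr/conf/repmgr.conf --compact"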

Remove unsupported functions

Before upgrading, clean up functions and aggregates in the modmon database to support the PostgreSQL upgrade from 12 to 14.

Make a note of the current PostgreSQL primary node name:

kubectl exec -i -t -n ${NAMESPACE}  pcs-postgresql-0 -c postgresql -- bash -c "/opt/bitnami/scripts/postgresql-repmgr/entrypoint.sh repmgr cluster show -f /opt/bitnami/repmgr/conf/repmgr.conf --compact" 

Run the command below on the primary node (assumed to be pcs-postgresql-0):

kubectl exec -i -t -n ${NAMESPACE} pcs-postgresql-0 -- sh -c 'export PGPASSWORD="$POSTGRES_POSTGRES_PASSWORD"; psql -U postgres -d modmon <<EOF
DROP FUNCTION IF EXISTS validate_bh_tt_count_less_than(
        actual_value_count integer,
        unique_value_count integer,
        aggregate_record_count integer,
        min_value double precision,
        max_value double precision,
        thresholds double precision[],
        random_seed double precision );
DROP FUNCTION IF EXISTS validate_bh_tt_percentiles(
        actual_value_count integer,
        unique_value_count integer,
        aggregate_record_count integer,
        min_value double precision,
        max_value double precision,
        percentiles double precision[],
        random_seed double precision);
DROP AGGREGATE IF EXISTS array_cat_agg(anyarray);
EOF' 

Verify that the functions and aggregates have been removed:

kubectl exec -i -t -n ${NAMESPACE} pcs-postgresql-0 -- sh -c 'export PGPASSWORD="$POSTGRES_POSTGRES_PASSWORD"; psql -U postgres -d modmon -t <<EOF
SELECT '\''validate_bh_tt_count_less_than'\'' AS object,
        CASE WHEN EXISTS (
            SELECT 1 FROM pg_proc p JOIN pg_namespace n ON p.pronamespace = n.oid
            WHERE p.proname = '\''validate_bh_tt_count_less_than'\''
        ) THEN '\''EXISTS'\'' ELSE '\''DOES NOT EXIST'\'' END;
SELECT '\''validate_bh_tt_percentiles'\'' AS object,
        CASE WHEN EXISTS (
            SELECT 1 FROM pg_proc p JOIN pg_namespace n ON p.pronamespace = n.oid
            WHERE p.proname = '\''validate_bh_tt_percentiles'\''
        ) THEN '\''EXISTS'\'' ELSE '\''DOES NOT EXIST'\'' END;
SELECT '\''array_cat_agg'\'' AS object,
        CASE WHEN EXISTS (
            SELECT 1 FROM pg_aggregate a
            JOIN pg_proc p ON a.aggfnoid = p.oid
            WHERE p.proname = '\''array_cat_agg'\''
        ) THEN '\''EXISTS'\'' ELSE '\''DOES NOT EXIST'\'' END;
EOF' 
Verify and convert repmgr database user passwords from md5 to scram-sha-256

Starting from version 10.2.4, the DataRobot platform adopted the more secure scram-sha-256 mechanism. This upgrade improves security by offering stronger password hashing and more robust challenge-response protection.

Verify and convert repmgr database user passwords from md5 to scram-sha-256:

kubectl exec -i -t -n ${NAMESPACE} pcs-postgresql-0 -c postgresql -- psql -U postgres -h localhost -c "ALTER SYSTEM SET password_encryption = 'scram-sha-256';"
kubectl exec -i -t -n ${NAMESPACE} pcs-postgresql-0 -c postgresql -- psql -U postgres -h localhost -c "select pg_reload_conf(); show password_encryption;"
kubectl exec -i -t -n ${NAMESPACE} pcs-postgresql-0 -c postgresql -- sh -c "psql -U postgres -h localhost -c \"ALTER USER repmgr WITH PASSWORD '\$REPMGR_PASSWORD'\""
kubectl exec -i -t -n ${NAMESPACE} pcs-postgresql-0 -c postgresql -- sh -c "psql -U postgres -h localhost -c \"SELECT rolname, rolpassword FROM pg_authid WHERE rolname = 'repmgr';\"" 
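
After the conversion, the stored hash for the repmgr role should begin with SCRAM-SHA-256$ rather than an md5 prefix. A minimal check, reusing the query above:

kubectl exec -i -n ${NAMESPACE} pcs-postgresql-0 -c postgresql -- sh -c "psql -U postgres -h localhost -t -c \"SELECT rolpassword FROM pg_authid WHERE rolname = 'repmgr';\"" | grep SCRAM-SHA-256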

Upgrade to latest release

In the latest release, the PCS and DataRobot charts have been consolidated into a single chart. This change eliminates code duplication and simplifies the installation process.

Update values

The latest DataRobot release introduces minor changes to the YAML configuration schema; therefore, you must adjust your existing configuration to match the new schema. The table below shows the old and new configuration setting names. If your values files override any of the deprecated settings, make the necessary adjustments:

Old path | New path | Default value
core.services.postgresql.hostname | global.postgresql.hostname | pcs-postgresql
core.services.postgresql.port | global.postgresql.port | 5432
core.services.mongodb.hostname | global.mongodb.hosts | pcs-mongo-headless
core.services.redis.hostname | global.redis.hostname | pcs-redis
core.services.redis.port | global.redis.port | 6379
core.services.sentinel.port | global.redis.sentinel.port | 26379
core.services.sentinel.hostname | global.redis.sentinel.hosts | pcs-redis
core.services.mongodb.port | global.mongodb.port | 27017
core.config_env_vars.REDIS_USE_TLS | global.redis.tls | false
core.config_env_vars.USE_REDIS_SENTINEL | global.redis.sentinel.enabled | true
core.config_env_vars.MONGO_DEFAULT_DATABASE | global.mongodb.default_database | admin
core.config_env_vars.MONGO_RS_NAME | global.mongodb.replicaset_name | rs0
core.config_env_vars.MONGO_CONNECT_METHOD | global.mongodb.connect_method | mongodb
redis.auth.password | global.redis.auth.password | empty string
mongodb.auth.rootUser | global.mongodb.auth.username | pcs-mongodb
mongodb.auth.rootPassword | global.mongodb.auth.password | empty
mongodb.enabled | global.mongodb.internal | true
postgresql-ha.enabled | global.postgresql.internal | true
rabbitmq.enabled | global.rabbitmq.internal | true
redis.enabled | global.redis.internal | true

Additionally, the filestore configuration has been refactored. The table below outlines the old and new configuration setting names. If your values files override any of the deprecated settings, make the necessary adjustments:

Old path | New path
core.config_env_vars.FILE_STORAGE_TYPE | global.filestore.type
core.config_env_vars.S3_* | global.filestore.environment.S3_*
core.config_env_vars.AZURE_* | global.filestore.environment.AZURE_*
core.config_env_vars.GOOGLE_* | global.filestore.environment.GOOGLE_*
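
To locate any of the deprecated settings in your existing values files, a rough search such as the following can help (a sketch; adjust the file names and patterns to your configuration):

grep -nE 'config_env_vars|FILE_STORAGE_TYPE|rootUser|rootPassword' values_dr.yaml values_pcs.yaml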

Minimal values files

For platform-specific configuration examples, refer to the minimal values files located within the datarobot-prime/override Helm chart artifact directory.

tar xzf datarobot-prime-X.X.X.tgz
cd datarobot-prime/override 

Note

Replace X.X.X with the latest release chart version.

Use the current values_dr.yaml and values_pcs.yaml files as a reference when populating the minimal values files.

External PCS

Updating the values is required if DataRobot was already configured to use External PCS.

PostgreSQL

If you are running DataRobot on the OpenShift platform, update the postgresql-ha section of your values_pcs.yaml file to include these settings:

postgresql-ha:
  serviceAccount:
    create: false
    name: pcs-postgresql-sa
  global:
    compatibility:
      openshift:
        adaptSecurityContext: disabled 

Review custom CA bundle configuration

If you are using Trust Manager and deploying custom CA bundles to manage root certificates across the DataRobot platform, you must verify the structure of the ca-cert-bundle.yaml file. For details, see Configuring custom CA.

Review notebook configuration

The naming of the notebook chart values was updated; this change may impact your upgrade process. For detailed instructions, see the Notebooks Upgrade Guide.

Upgrade GenAI

Enable the LLM Gateway service in values_dr.yaml. For details, see the Generative AI documentation.

Application upgrade steps

Power off PCS

kubectl scale statefulset -l app.kubernetes.io/instance=pcs --replicas=0 -n ${NAMESPACE} 
kubectl scale deployment -l app.kubernetes.io/instance=pcs --replicas=0 -n ${NAMESPACE} 

Delete RabbitMQ and Redis PVCs

kubectl -n ${NAMESPACE} delete pvc data-pcs-rabbitmq-0

kubectl -n ${NAMESPACE} delete pvc redis-data-pcs-redis-node-0
kubectl -n ${NAMESPACE} delete pvc redis-data-pcs-redis-node-1
kubectl -n ${NAMESPACE} delete pvc redis-data-pcs-redis-node-2 
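
To confirm that the claims were deleted (and to spot any stuck in the Terminating state), list the remaining RabbitMQ and Redis PVCs:

kubectl -n ${NAMESPACE} get pvc | grep -E 'rabbitmq|redis'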

Delete obsolete resources

kubectl -n ${NAMESPACE} delete sts pcs-rabbitmq tileservergl-app prediction-server-app --cascade=orphan
kubectl -n ${NAMESPACE} delete deployment auth-server-hydra 

Note

If you upgrade from an earlier 10.x version, some of these resources may be missing; this does not affect the upgrade.

Delete DataRobot namespace ingresses

kubectl -n ${NAMESPACE} get ingress --no-headers |awk '{print $1}'|xargs kubectl -n ${NAMESPACE} delete ingress 

Delete Elasticsearch certificate

kubectl -n ${NAMESPACE} delete secret pcs-elasticsearch-master-crt 

Change resource labels

for kind in secret pvc networkpolicy serviceaccount configmap service role rolebinding pdb; do
    for sts_name in $(kubectl get $kind -l app.kubernetes.io/instance=pcs -n ${NAMESPACE}  -o jsonpath='{.items[*].metadata.name}'); do
        echo "Retagging $kind/$sts_name"
        kubectl label $kind $sts_name app.kubernetes.io/instance=dr -n ${NAMESPACE}  --overwrite
        kubectl annotate $kind $sts_name meta.helm.sh/release-name=dr -n ${NAMESPACE}  --overwrite
    done
done 

Preserve PersistentVolumes (PVs) during the upgrade

kubectl get pvc -n ${NAMESPACE}  -o jsonpath='{.items[*].metadata.name}' | tr ' ' '\n' | while read pvc_name; do
    pv_name=$(kubectl get pvc $pvc_name -n ${NAMESPACE}  -o jsonpath='{.spec.volumeName}')
    if [ -z "$pv_name" ]; then continue; fi
    reclaimPolicy=$(kubectl get pv $pv_name -o jsonpath='{.spec.persistentVolumeReclaimPolicy}')
    if [[ $pvc_name == *"pcs"* && $reclaimPolicy != "Retain" ]]; then
        echo "Patching PV: $pv_name to Retain"
        kubectl patch pv $pv_name -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
    fi
done 
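
You can then verify that the PCS volumes carry the Retain reclaim policy before deleting any resources (an optional check; the CLAIM column identifies the PCS volumes):

kubectl get pv -o custom-columns='NAME:.metadata.name,CLAIM:.spec.claimRef.name,RECLAIM:.spec.persistentVolumeReclaimPolicy' | grep pcs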

Delete old PCS resources

for sec_name in pcs-db-buildservice pcs-db-cspspark pcs-db-identityresourceservice pcs-db-messagequeue pcs-db-modmon pcs-db-predenv pcs-db-sushihydra pcs-pgpool pcs-pgpool-custom-users pcs-pgpool-userdb pcs-postgresql-initdb pcs-postgresql-initdb-cfg pcs-redis; do
    kubectl delete secret $sec_name -n ${NAMESPACE}  --ignore-not-found=true
done
kubectl delete statefulset -l app.kubernetes.io/instance=pcs -n ${NAMESPACE}  --ignore-not-found=true
kubectl delete deployment -l app.kubernetes.io/instance=pcs -n ${NAMESPACE}  --ignore-not-found=true 

Temporarily disable PostgreSQL HA

If High Availability (HA) is configured for PostgreSQL, reduce the PostgreSQL replicaCount to prevent potential conflicts during the upgrade procedure:

postgresql-ha:
    postgresql:
        replicaCount: 1 

Run Helm upgrade to latest version

helm upgrade --install dr datarobot-prime-X.X.X.tgz \
--namespace ${NAMESPACE} \
--values values_dr.yaml \
--set pg-upgrade.enabled=true \
--debug \
--timeout 20m 

Note

Replace X.X.X with the latest release chart version.
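
While the upgrade runs, you can follow the application pods coming up in a second terminal (an optional convenience, not a required step):

kubectl -n ${NAMESPACE} get pods -w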

Post-upgrade tasks

After your helm upgrade command has completed successfully and the application pods are running, perform any applicable final steps from the sections below to complete the upgrade process.

Auth Server Hydra

If you notice that your auth-server-hydra pods fail to start, you may need to perform the following steps:

kubectl exec -i -t -n ${NAMESPACE} pcs-postgresql-0 -- sh -c 'psql postgres://postgres:$POSTGRES_POSTGRES_PASSWORD@localhost:$POSTGRESQL_PORT_NUMBER << SQL
  CREATE USER $POSTGRES_USER WITH ENCRYPTED PASSWORD '\''$POSTGRES_PASSWORD'\'';
  CREATE DATABASE $POSTGRES_DB WITH OWNER $POSTGRES_USER;
  GRANT ALL PRIVILEGES ON DATABASE $POSTGRES_DB TO $POSTGRES_USER;
SQL'
kubectl exec -i -t -n ${NAMESPACE} pcs-postgresql-0 -- sh -c 'for f in /docker-entrypoint-initdb.d/secret/*.sql; do
  psql postgres://postgres:$POSTGRES_POSTGRES_PASSWORD@localhost:$POSTGRESQL_PORT_NUMBER -f $f
done' 

Then, re-run the Helm upgrade command without --set pg-upgrade.enabled=true. For example:

helm upgrade --install dr datarobot-prime-X.X.X.tgz \
  --namespace ${NAMESPACE} \
  --values values_dr.yaml \
  --debug \
  --timeout 20m 

Note

Replace X.X.X with the latest release chart version.

Scale PostgreSQL replica

If High Availability (HA) for PostgreSQL is required, scale the PostgreSQL replica count back up to the desired value to restore HA functionality.

kubectl scale statefulset pcs-postgresql --replicas=3 -n ${NAMESPACE} 

Note

To preserve HA for PostgreSQL, make sure to update the replica count in the values_dr.yaml file:

postgresql-ha:
    postgresql:
        replicaCount: 3 

Clean up the old PCS Helm release metadata

for secret_name in $(kubectl get secret -l owner=helm,name=pcs -n ${NAMESPACE} -o jsonpath='{.items[*].metadata.name}'); do
    kubectl delete secret $secret_name -n ${NAMESPACE} 
done 
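
After the cleanup, only the dr release should remain in the namespace (a quick check):

helm list -n ${NAMESPACE}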

Remove deprecated OAuth2Client objects

Execute the following command to remove old OAuth2Client objects, which are no longer needed.

kubectl get oauth2client -n ${NAMESPACE} -o name | xargs -I {} kubectl patch {} -n ${NAMESPACE} -p '{"metadata":{"finalizers":[]}}' --type=merge 

Verify that all OAuth2Client objects have been removed:

kubectl get oauth2client -n ${NAMESPACE} 

Remove deprecated deployments

After a successful upgrade, you can remove deployments that were deprecated in previous versions.

The following deployments were deprecated:

  • auth-server-app
  • auth-server-hydra-app
  • identity-resource-svc-app
  • service-registrion-controller

To remove a deprecated deployment, run the following command:

kubectl delete deployment auth-server-app -n ${NAMESPACE}
kubectl delete deployment auth-server-hydra-app -n ${NAMESPACE}
kubectl delete deployment identity-resource-svc-app -n ${NAMESPACE}
kubectl delete deployment service-registrion-controller -n ${NAMESPACE} 

Note

If you upgrade from an earlier 10.x version, some of these deployments may be missing; this does not affect the upgrade.

Upgrade GenAI

For every installation that includes a GenAI subsystem, you must run the static files migration and the execution environment migration. This happens automatically for installations with internet access; however, offline clusters must perform these migrations manually: