Scale down the DataRobot Application services¶
Overview¶
Taking periodic backups does not require stopping the DataRobot application. However, stopping the application during the backup and restoration process is recommended when absolute data consistency is crucial (for example, during a migration), as it prevents conflicting transactions while the database is being restored.
- Graceful shutdown: Coordinate with stakeholders to schedule a maintenance window for stopping the application gracefully. Notify users in advance to minimize disruptions.
- Application connection termination: Ensure that all active connections to the PostgreSQL database from the application are terminated before initiating the restoration.
- Database locks: Consider applying locks or restrictions to prevent any write operations on the database during the restoration process.
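One possible way to block writes during restoration is to switch the application database to read-only mode. The sketch below uses psql; the host, user, and database names are placeholders, not DataRobot defaults:

```shell
# Sketch: put the application database into read-only mode before restoration.
# <pg-host>, <admin-user>, and <app-db> are placeholders for your environment.
psql -h <pg-host> -U <admin-user> -d postgres \
  -c "ALTER DATABASE <app-db> SET default_transaction_read_only = on;"

# Revert once the restoration completes:
psql -h <pg-host> -U <admin-user> -d postgres \
  -c "ALTER DATABASE <app-db> RESET default_transaction_read_only;"
```

Note that `default_transaction_read_only` blocks new write transactions but does not terminate sessions that are already connected, so it complements rather than replaces terminating application connections.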
Prerequisites (if scaling the application down for absolute data consistency)¶
- The kubectl utility, version 1.23, is installed on the host where the backup will be created.
- kubectl is configured to access the Kubernetes cluster where the DataRobot application is running; verify this with the kubectl cluster-info command.
Export the name of the DataRobot application's Kubernetes namespace in the DR_CORE_NAMESPACE variable:
export DR_CORE_NAMESPACE=<namespace>
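To confirm the variable points at an existing namespace before proceeding, you can run a quick check (a verification sketch, not part of the required procedure):

```shell
# Fails with an error if the exported namespace does not exist
kubectl get namespace "$DR_CORE_NAMESPACE"
```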
Before scaling the application down, annotate each deployment with the number of replicas it is currently running, so the original replica counts can be restored later:
for d in $(kubectl -n $DR_CORE_NAMESPACE get deploy -o name -l release=dr); do
r=$(kubectl -n $DR_CORE_NAMESPACE get $d -o jsonpath='{.spec.replicas}')
kubectl -n $DR_CORE_NAMESPACE annotate --overwrite $d replicas=$r
done
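To confirm the replica counts were recorded, the saved annotation can be listed alongside each deployment (a verification sketch, not part of the required procedure):

```shell
# List each deployment with the replica count saved in its "replicas" annotation
kubectl -n $DR_CORE_NAMESPACE get deploy -l release=dr \
  -o custom-columns='NAME:.metadata.name,SAVED_REPLICAS:.metadata.annotations.replicas'
```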
Scale the application services down:
kubectl -n $DR_CORE_NAMESPACE scale deploy -l release=dr --replicas=0
kubectl -n $DR_CORE_NAMESPACE scale deployments/pcs-pgpool --replicas=0
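After scaling down, you can wait for the pods to actually terminate, and later use the saved annotations to restore each deployment to its original size. This is a sketch of both steps; the timeout value and the pcs-pgpool replica count are placeholders you should adjust for your environment:

```shell
# Wait until all application pods have terminated (adjust the timeout as needed)
kubectl -n $DR_CORE_NAMESPACE wait --for=delete pod -l release=dr --timeout=300s

# After the backup or restoration completes, scale each deployment back to the
# replica count saved earlier in its "replicas" annotation:
for d in $(kubectl -n $DR_CORE_NAMESPACE get deploy -o name -l release=dr); do
  r=$(kubectl -n $DR_CORE_NAMESPACE get $d -o jsonpath='{.metadata.annotations.replicas}')
  kubectl -n $DR_CORE_NAMESPACE scale $d --replicas=$r
done

# pcs-pgpool was scaled down separately and is not covered by the annotation
# loop above, so scale it back to its original count manually:
kubectl -n $DR_CORE_NAMESPACE scale deployments/pcs-pgpool --replicas=<original-count>
```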