
Mistral 7B on Google GCP and DataRobot

Access this AI accelerator on GitHub

There is a wide variety of open source large language models (LLMs). For example, there has been a lot of interest in Llama and its derivatives such as Alpaca and Vicuna, as well as other open models like Falcon and Mistral. Because these LLMs require expensive GPUs, users often want to compare cloud providers to find the best hosting option. In this accelerator, you will work with Google Cloud Platform to host Mistral 7B.

You may also want to integrate with the cloud provider that hosts your Virtual Private Cloud (VPC) so that you can enforce proper authentication and restrict access to traffic from within the VPC. While this accelerator authenticates over the public internet, you can adapt Google's cloud infrastructure to suit your architectural needs, including provisioning scale-out policies.
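To make the authentication path concrete, the sketch below shows how a Vertex AI online prediction endpoint can be reached over the public internet with Google credentials. This is a minimal illustration, not code from the accelerator: the project, region, and endpoint IDs are placeholders, and the request-body shape assumes a simple `prompt` field.

```python
# Minimal sketch: calling a Vertex AI online prediction endpoint over
# the public internet. Project, region, and endpoint IDs below are
# placeholders, not values from this accelerator.
from typing import List


def predict_url(project: str, region: str, endpoint_id: str) -> str:
    """Build the REST URL for a Vertex AI online prediction endpoint."""
    return (
        f"https://{region}-aiplatform.googleapis.com/v1/"
        f"projects/{project}/locations/{region}/"
        f"endpoints/{endpoint_id}:predict"
    )


def build_request_body(prompts: List[str]) -> dict:
    """Wrap prompts in the JSON body expected by the :predict method.

    The `prompt` key is an assumption here; the exact instance schema
    depends on how the model was deployed.
    """
    return {"instances": [{"prompt": p} for p in prompts]}


if __name__ == "__main__":
    url = predict_url("my-project", "us-central1", "1234567890")
    body = build_request_body(["What is MLOps?"])
    # An authenticated call would attach a bearer token obtained via
    # google-auth, e.g.:
    #   import google.auth, google.auth.transport.requests, requests
    #   creds, _ = google.auth.default()
    #   creds.refresh(google.auth.transport.requests.Request())
    #   requests.post(url, json=body,
    #                 headers={"Authorization": f"Bearer {creds.token}"})
    print(url)
```

In a VPC-only deployment, the same request shape applies, but the endpoint would be exposed through Private Service Connect or a private endpoint rather than the public `aiplatform.googleapis.com` hostname.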

Finally, by using Vertex AI as a managed service, you can integrate this infrastructure into your existing stack to meet monitoring needs, from service health, CPU usage, and low-level alerting to billing, cost attribution, and account management, and you can use GCP's tools to route information into BigQuery for ad hoc analytics, log exploration, and more.
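As one hypothetical example of routing information into BigQuery, the commands below sketch a Cloud Logging sink that exports Vertex AI endpoint log entries to a BigQuery dataset. The project ID, dataset name, and sink name are placeholders, and you would still need to grant the sink's service account write access to the dataset.

```shell
# Hypothetical sketch: route Vertex AI endpoint logs into BigQuery for
# ad hoc analysis. Project, dataset, and sink names are placeholders.
PROJECT_ID="my-project"
DATASET="llm_logs"

# Create a BigQuery dataset to receive the exported log entries.
bq mk --dataset "${PROJECT_ID}:${DATASET}"

# Create a Cloud Logging sink that filters Vertex AI endpoint entries
# and writes them to the dataset.
gcloud logging sinks create vertex-endpoint-sink \
  "bigquery.googleapis.com/projects/${PROJECT_ID}/datasets/${DATASET}" \
  --log-filter='resource.type="aiplatform.googleapis.com/Endpoint"'
```

Once the sink is active, endpoint logs land in BigQuery tables that you can query directly or join with billing export data for cost attribution.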

For more information about Mistral, see the model card on Hugging Face, the arXiv paper, and the release announcement. The model is available under the Apache 2.0 license.

Updated May 20, 2024