
RFPBot use case

The following example illustrates how DataRobot uses its own technology-agnostic, extensible, and repeatable framework to build an end-to-end generative AI solution at scale.

This article and the embedded video showcase a Request for Proposal (RFP) assistant named RFPBot. RFPBot has both a predictive and a generative component and was built entirely within DataRobot in the course of a single afternoon.

In the image below, notice the content that follows the paragraph of generated text: four links to references, five subscores from the audit model, and a prompt to upvote or downvote the response.

RFPBot uses an organization’s internal data to help salespeople generate RFP responses in a fraction of the usual time. The speed increase is attributable to three sources:

  1. The custom knowledge base underpinning the solution. This stands in for the experts who would otherwise be tapped to answer the RFP.

  2. The use of Generative AI to write the prose.

  3. Integration with the organization’s preferred consumption environment (Slack, in this case).

RFPBot integrates best-of-breed components during development. Post-development, the entire solution is monitored in real time. RFPBot showcases both the framework itself and, more generally, the power of combining generative and predictive AI to deliver business results.

Note that the concepts and processes are transferable to any other use case that requires accurate and complete written answers to detailed questions.

Applying the framework

Within each major framework component, there are many choices of tools and technology. When implementing the framework, any choice is possible at each stage. Because organizations want to use best-of-breed—and which technology is best-of-breed will change over time—what really matters is flexibility and interoperability in a rapidly changing tech landscape. The icons shown are among the current possibilities.

RFPBot uses the following components. Each choice at each stage of the framework is independent. The role of the DataRobot AI Platform is to orchestrate, govern, and monitor the whole solution.

  • Word, Excel, and Markdown files as source content
  • An embedding model from Hugging Face (all-MiniLM-L6-v2)
  • A Facebook AI Similarity Search (FAISS) vector database
  • OpenAI's GPT-3.5 Turbo (the model behind ChatGPT)
  • A logistic regression audit model
  • A Streamlit application
  • A Slack integration

Generative and predictive models work together. Users interact with two models each time they type a question—a Query Response Model and an Audit Model.

  • The Query Response Model is generative: It creates the answer to the query.
  • The Audit Model is predictive: It evaluates the correctness of the answer, returned as a predicted probability.

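The interplay between the two models can be sketched as follows. This is a minimal, illustrative stand-in: the `generate_answer` function templates text in place of an LLM call, and `audit_score` uses hand-set logistic-regression-style weights rather than a trained model; all names and weights here are invented for illustration.

```python
import math

def generate_answer(question, context):
    # Stand-in for the generative Query Response Model;
    # the real solution calls an LLM with retrieved context.
    return f"Based on our documentation: {context}"

def audit_score(question, answer):
    # Stand-in for the predictive Audit Model. The real model is a
    # trained logistic regression; these features and weights are
    # purely illustrative.
    overlap = len(set(question.lower().split()) & set(answer.lower().split()))
    length = len(answer.split())
    z = -2.0 + 0.9 * overlap + 0.02 * length
    # Logistic function: predicted probability that the answer is correct.
    return 1.0 / (1.0 + math.exp(-z))

question = "What is the uptime commitment?"
answer = generate_answer(question, "Uptime commitment is 99.9 percent.")
score = audit_score(question, answer)
```

Because the audit model outputs a probability, its subscores can be surfaced next to each generated answer, as in the RFPBot screenshot above.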
The citations listed as resources in the RFPBot example are citations of internal documents drawn from the knowledge base. The knowledge base was created by applying an embedding model to a set of documents and files and storing the result in a vector database. This step solves the problem of LLMs being stuck in time and lacking context from private data. When a user queries RFPBot, context-specific information drawn from the knowledge base is made available to the LLM and shown to the user as a source for the generation.
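The retrieval step can be sketched as below. This is a toy version: a bag-of-words embedding and an in-memory index stand in for the all-MiniLM-L6-v2 model and the FAISS vector database, and the document names and contents are invented for illustration.

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words embedding, normalized to unit length.
    # A real system would use a model such as all-MiniLM-L6-v2.
    counts = Counter(text.lower().split())
    norm = math.sqrt(sum(c * c for c in counts.values()))
    return {w: c / norm for w, c in counts.items()}

def cosine(a, b):
    return sum(a[w] * b.get(w, 0.0) for w in a)

class VectorStore:
    """In-memory stand-in for a FAISS index: store document
    embeddings and return the top-k most similar to a query."""
    def __init__(self):
        self.docs = []

    def add(self, doc_id, text):
        self.docs.append((doc_id, text, embed(text)))

    def search(self, query, k=2):
        q = embed(query)
        ranked = sorted(self.docs, key=lambda d: cosine(q, d[2]), reverse=True)
        return [(doc_id, text) for doc_id, text, _ in ranked[:k]]

store = VectorStore()
store.add("security.md", "Customer data is encrypted at rest and in transit.")
store.add("pricing.xlsx", "Pricing is tiered by number of deployed models.")
store.add("sla.docx", "Uptime commitment is 99.9 percent with one hour support response.")

hits = store.search("How is customer data encrypted?", k=1)
```

The retrieved documents are passed to the LLM as context and returned to the user as citations, which is how RFPBot grounds each generated answer in internal content.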

Orchestration and monitoring

The entirety of the end-to-end solution integrating best-of-breed components is built in a DataRobot-hosted notebook, which has enterprise security, sharing, and version control.

Once built, the solution is monitored using standard and custom-defined metrics. In the image below, notice the metrics specific to LLMOps such as Informative Response, Truthful Response, Prompt Toxicity Score, and LLM Cost.
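A custom metric such as LLM Cost can be as simple as aggregating token counts. The sketch below is illustrative only: the per-1K-token prices and request figures are placeholders, not actual rates or production data.

```python
def llm_cost(prompt_tokens, completion_tokens,
             prompt_price_per_1k=0.0015, completion_price_per_1k=0.002):
    # Illustrative custom metric: per-request LLM cost in dollars.
    # Prices are placeholders, not actual provider rates.
    return (prompt_tokens / 1000) * prompt_price_per_1k \
         + (completion_tokens / 1000) * completion_price_per_1k

# Hypothetical (prompt_tokens, completion_tokens) per request.
requests = [(512, 240), (1024, 300), (256, 128)]
total = sum(llm_cost(p, c) for p, c in requests)
```

Tracking a metric like this alongside response-quality scores lets operators watch cost and quality drift on the same monitoring dashboard.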

By abstracting away infrastructure and environment management tasks, a single person can create an application such as RFPBot in hours or days, not weeks or months. By using an open, extensible platform for developing GenAI applications and following a repeatable framework, organizations avoid vendor lock-in and the accumulation of technical debt. They also vastly simplify model lifecycle management, because individual components within the framework can be upgraded and replaced over time.




Updated August 28, 2024