| Product feedback automation | Use predictive AI models in tandem with generative AI models to overcome the limitations of guardrails when automating the summarization and segmentation of sentiment text. |
| Teams/Slack chatbots | Build collaborative app plug-ins, such as bots for Teams and Slack (see the sketch below). |
| AI cluster labeling | Combine cluster insights from DataRobot with ChatGPT, via the DataRobot and OpenAI APIs, to generate business- or domain-specific labels for clusters (see the sketch below). |
| Customer communication AI | Use generative AI models, such as GPT-3, to augment predictions and provide customer-friendly, subject-matter-expert responses. |
| Support workflow optimization | Use generative AI models to handle level-one support requests, allowing support teams to focus on more pressing and high-visibility requests. |
| Data annotator app | Use the data annotator app to label both new data and predicted data in an active learning workflow after training a model with DataRobot. |
| AI data prep assistant | Use the AI data preparation assistant to streamline and automate the data preparation process. |
| JITR bot responses | Create a deployment that provides context-aware answers on the fly using Just In Time Retrieval (JITR); see the sketch below. |
| PDF RAG with LLM | Use an LLM as an OCR tool to extract text, table, and graph data from a PDF, then build a RAG pipeline and playground chat on DataRobot. |
| Healthcare conversation agent | Use Retrieval-Augmented Generation to build a conversational agent for healthcare professionals. |
| Teams GenAI integration | With DataRobot's generative AI offerings, organizations can deploy chatbots without the need for an additional front-end or consumption layer. |
| Vector chunk visualization | Implement a Streamlit application to gain insights from a vector database of chunks (see the sketch below). |
| XoT implementation | Implement and evaluate Everything of Thoughts (XoT) in DataRobot, an approach to make generative AI "think like humans." |
| Zero-shot error analysis | Use zero-shot text classification with large language models (LLMs), focusing on its application in error analysis of supervised text classification models (see the sketch below). |
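
A minimal sketch of what a Teams/Slack-style plug-in could look like for Slack, using the `slack_bolt` library in Socket Mode. The DataRobot deployment endpoint, environment variable names, request payload shape, and the `ask_deployment` helper are illustrative assumptions, not part of the catalog entry.

```python
# Sketch of a Slack bot that forwards mentions to a text-generation deployment.
import os
import requests
from slack_bolt import App
from slack_bolt.adapter.socket_mode import SocketModeHandler

app = App(token=os.environ["SLACK_BOT_TOKEN"])

DR_PREDICTION_URL = os.environ["DR_PREDICTION_URL"]   # hypothetical deployment endpoint
DR_API_TOKEN = os.environ["DATAROBOT_API_TOKEN"]

def ask_deployment(question: str) -> str:
    """Send the user's question to the deployment and return its answer."""
    resp = requests.post(
        DR_PREDICTION_URL,
        headers={"Authorization": f"Bearer {DR_API_TOKEN}"},
        json={"promptText": question},   # payload shape is an assumption
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json().get("answer", "Sorry, no answer was returned.")

@app.event("app_mention")
def handle_mention(event, say):
    # Strip the bot mention from the message text before forwarding it.
    question = event["text"].split(">", 1)[-1].strip()
    say(ask_deployment(question))

if __name__ == "__main__":
    SocketModeHandler(app, os.environ["SLACK_APP_TOKEN"]).start()
```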
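
The cluster-labeling pattern can be sketched roughly as follows. The `cluster_keywords` dictionary stands in for cluster insights exported from a DataRobot clustering model, and the OpenAI model name and prompt wording are assumptions.

```python
# Sketch of LLM-based labeling of keyword clusters.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Assumed input: top keywords per cluster, e.g. from a clustering model's insights.
cluster_keywords = {
    0: ["refund", "late delivery", "tracking", "missing package"],
    1: ["password reset", "login failed", "two-factor", "account locked"],
}

def label_cluster(keywords: list[str]) -> str:
    """Ask the LLM for a short, business-friendly name for a keyword cluster."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "You name clusters of customer-support topics."},
            {"role": "user", "content": f"Give a 2-4 word label for a cluster with these keywords: {', '.join(keywords)}"},
        ],
    )
    return response.choices[0].message.content.strip()

labels = {cid: label_cluster(kw) for cid, kw in cluster_keywords.items()}
print(labels)  # e.g. {0: "Shipping and refunds", 1: "Account access issues"}
```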
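
A rough sketch of the JITR idea: embed candidate documents once, then, for each incoming question, retrieve the closest documents at request time and pass them to the LLM as context. The document texts, embedding model, and chat model are illustrative assumptions; the catalog entry wraps this logic in a DataRobot deployment, which is omitted here.

```python
# Sketch of "Just In Time Retrieval": retrieve context per question, then answer.
import numpy as np
from openai import OpenAI

client = OpenAI()

documents = [
    "Refunds are processed within 5 business days of receiving the returned item.",
    "Enterprise plans include single sign-on and a dedicated support channel.",
    "API rate limits reset every 60 seconds for all plan tiers.",
]

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

doc_vectors = embed(documents)

def answer(question: str, k: int = 2) -> str:
    """Retrieve the k most similar documents just in time and ground the answer in them."""
    q_vec = embed([question])[0]
    scores = doc_vectors @ q_vec / (
        np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q_vec)
    )
    context = "\n".join(documents[i] for i in np.argsort(scores)[::-1][:k])
    chat = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": f"Answer using only this context:\n{context}"},
            {"role": "user", "content": question},
        ],
    )
    return chat.choices[0].message.content

print(answer("How long do refunds take?"))
```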
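
A minimal sketch of the chunk-visualization Streamlit app, using a synthetic DataFrame in place of a real vector database and precomputed 2-D coordinates in place of a dimensionality-reduction step (e.g. PCA or UMAP). All column names and values are placeholders.

```python
# Sketch of a Streamlit app for inspecting a vector database of chunks.
import pandas as pd
import streamlit as st

# Assumed input: one row per chunk with its text, token length, and 2-D projection.
chunks = pd.DataFrame(
    {
        "chunk": ["Refund policy ...", "SSO setup ...", "Rate limits ..."],
        "tokens": [182, 240, 95],
        "x": [0.1, 0.8, 0.4],
        "y": [0.3, 0.7, 0.9],
    }
)

st.title("Vector chunk explorer")
st.metric("Total chunks", len(chunks))
st.subheader("Chunk length distribution")
st.bar_chart(chunks["tokens"])
st.subheader("2-D projection of chunk embeddings")
st.scatter_chart(chunks, x="x", y="y")
st.subheader("Browse chunks")
st.dataframe(chunks[["chunk", "tokens"]])
```

Saved as `app.py`, this would be launched with `streamlit run app.py`.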
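
One way the zero-shot error-analysis loop could look: ask an LLM to classify the records the supervised model got wrong and compare its answer to the true label, to help separate ambiguous examples from genuine model weaknesses. The label set, example record, and model name are assumptions.

```python
# Sketch of zero-shot error analysis for a supervised text classifier.
from openai import OpenAI

client = OpenAI()
LABELS = ["billing", "technical issue", "account access"]

# Assumed input: rows the supervised model misclassified (text, true label, model prediction).
errors = [
    {"text": "I was charged twice after resetting my password.", "true": "billing", "pred": "account access"},
]

def zero_shot(text: str) -> str:
    """Classify a text with the LLM, without any task-specific training examples."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": f"Classify the text into exactly one of {LABELS}. "
                       f"Reply with the label only.\nText: {text}",
        }],
    )
    return resp.choices[0].message.content.strip().lower()

for row in errors:
    llm_label = zero_shot(row["text"])
    # If the LLM also misses the true label, the example may be ambiguous or mislabeled;
    # if it agrees with the truth, the error points to a weakness in the supervised model.
    verdict = "likely model weakness" if llm_label == row["true"] else "possibly ambiguous"
    print(row["text"][:60], "| true:", row["true"], "| model:", row["pred"],
          "| LLM:", llm_label, "|", verdict)
```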