# XoT implementation

> XoT implementation - Implement and evaluate Everything of Thoughts (XoT) in DataRobot, an approach
> to make generative AI "think like humans."

This Markdown file sits beside the HTML page at the same path (with a `.md` suffix). It summarizes the topic and lists links for tools and LLM context.

Companion generated at `2026-05-06T18:17:09.580093+00:00` (UTC).

## Primary page

- [XoT implementation](https://docs.datarobot.com/en/docs/api/dev-learning/accelerators/llm-and-genai-apps/xot-implementation.html): Full documentation for this topic (HTML).

## Related documentation

- [Developer documentation](https://docs.datarobot.com/en/docs/api/index.html): Linked from this page.
- [Developer learning](https://docs.datarobot.com/en/docs/api/dev-learning/index.html): Linked from this page.
- [AI accelerators](https://docs.datarobot.com/en/docs/api/dev-learning/accelerators/index.html): Linked from this page.
- [LLM and GenAI applications](https://docs.datarobot.com/en/docs/api/dev-learning/accelerators/llm-and-genai-apps/index.html): Linked from this page.

## Documentation content

[Access this AI accelerator on GitHub](https://github.com/datarobot-community/ai-accelerators/tree/main/generative_ai/XoT%20Evaluation)

Implement and evaluate Everything of Thoughts (XoT) in DataRobot, an approach to make generative AI "think like humans." In generative AI, a family of methods collectively called thought generation is being researched to help models acquire more human-like "thinking patterns." XoT in particular aims to produce more accurate answers by teaching generative AI a "thinking process." There are two main methods to achieve XoT:

1. Chain-of-Thought (CoT): A method of reasoning by connecting multiple thoughts like links in a chain and working through them step by step.
2. Retrieval Augmented Thought Tree (RATT): A method of reasoning by expanding multiple possibilities like tree branches and retrieving relevant information from an external knowledge base.
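The two methods above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration, not the accelerator's actual code: `build_cot_prompt` shows the step-by-step framing that characterizes CoT, and `expand_thought_tree` shows a RATT-style expansion where each candidate thought branches into children and is annotated with text retrieved from a knowledge base (here a plain dictionary standing in for a real retrieval system).

```python
# Hypothetical sketch of CoT prompting and RATT-style tree expansion.
# The branching function stands in for LLM-generated candidate thoughts.

def build_cot_prompt(question: str) -> str:
    """Frame the question so the model reasons step by step (CoT)."""
    return (
        f"Question: {question}\n"
        "Let's think step by step, then state the final answer."
    )

def retrieve(thought: str, knowledge_base: dict) -> str:
    """Stub retrieval: look up supporting text for a candidate thought."""
    return knowledge_base.get(thought, "")

def expand_thought_tree(question, knowledge_base, branch_fn, depth=2, width=2):
    """RATT-style expansion: branch into candidate thoughts, attach
    retrieved context to each node, and recurse up to `depth` levels."""
    def expand(thought, d):
        node = {
            "thought": thought,
            "context": retrieve(thought, knowledge_base),
            "children": [],
        }
        if d < depth:
            for child in branch_fn(thought)[:width]:
                node["children"].append(expand(child, d + 1))
        return node
    return expand(question, 0)
```

In a real implementation, `branch_fn` would call an LLM to propose candidate next thoughts and `retrieve` would query a vector store; the tree is then scored to pick the most promising reasoning path.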

This accelerator explains how to implement these methods. Specifically, it shows how to set up and compare three types of LLM prompts: direct, Chain-of-Thought, and RATT, where "direct" refers to the well-known "you are a helpful assistant" style of prompt. The accelerator also explains how to conduct performance evaluations on sample datasets, comparing the accuracy and efficiency of each method and analyzing the results with multiple evaluation metrics.
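The comparison described above can be sketched as a small evaluation harness. This is an illustrative assumption, not the accelerator's code: `ask_llm` is a placeholder for a real model call (for example, a deployed DataRobot LLM endpoint), the prompt templates are simplified stand-ins for the three styles, and accuracy is scored by a naive substring match against the expected answer.

```python
# Hypothetical evaluation harness: compare answer accuracy across the three
# prompt styles on a labeled sample dataset of (question, expected) pairs.
from collections import defaultdict

PROMPT_STYLES = {
    "direct": "You are a helpful assistant. Answer the question: {q}",
    "cot": "Think step by step, then answer. Question: {q}",
    "ratt": "Using the retrieved context and a tree of candidate thoughts, answer: {q}",
}

def evaluate(dataset, ask_llm):
    """Return accuracy per prompt style over the labeled dataset."""
    correct = defaultdict(int)
    for question, expected in dataset:
        for style, template in PROMPT_STYLES.items():
            answer = ask_llm(template.format(q=question))
            # Naive scoring: count a hit if the expected answer appears verbatim.
            if expected.lower() in answer.lower():
                correct[style] += 1
    return {style: correct[style] / len(dataset) for style in PROMPT_STYLES}
```

In practice the accelerator goes further than a single accuracy number, comparing efficiency (for example, token usage or latency) and several evaluation metrics across the three styles.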
