
Note

This document contains terms that pertain to agents or agentic workflows. For the full glossary, see the DataRobot glossary.

Agentic glossary

This document contains all terms from the DataRobot glossary that pertain to agents or agentic workflows.

A


Agent

An AI-powered component within DataRobot designed to execute complex, multi-step tasks autonomously. An agent can be configured with specific goals, LLMs, and a set of tools, allowing it to perform actions like orchestrating a data preparation workflow, running a modeling experiment, or generating an analysis without direct human intervention. Agents exhibit autonomous behavior, can reason about their environment, make decisions, and adapt their strategies based on feedback. Multiple agents can be combined in an agentic workflow to solve more sophisticated business problems through collaboration and coordination.

Agent-based modeling

Computational modeling approaches that simulate complex systems by modeling individual agents and their interactions. Agent-based modeling enables the study of emergent behaviors and system-level properties that arise from individual agent behaviors. In DataRobot's platform, agent-based modeling capabilities allow users to simulate business processes, test agent strategies, and understand how different agent configurations affect overall system performance.

Agentic AI

A paradigm of artificial intelligence where AI systems are designed to act as autonomous agents that can perceive their environment, reason about goals, plan actions, and execute tasks with minimal human oversight. Agentic AI systems are characterized by their ability to make independent decisions, learn from experience, and adapt their behavior to achieve objectives. In DataRobot's platform, agentic AI enables sophisticated automation of complex data science workflows, allowing AI systems to handle end-to-end processes from data preparation to model deployment and monitoring.

Agentic workflow

Systems that leverage AI agents to perform tasks and make decisions within a workflow, often with minimal human intervention. Agentic workflows can be built in a local IDE using DataRobot templates and a CLI, and managed with real-time LLM intervention and moderation using out-of-the-box and custom guards, including integration with NVIDIA NeMo for content safety and topical rails, in the UI or with code.

Agent Framework (AF) components

Agent Framework (AF) components provide modular building blocks for constructing sophisticated AI agents. AF components include reasoning engines, memory systems, action planners, and communication modules that can be combined to create custom agent architectures. In DataRobot's platform, AF components enable rapid development of specialized agents with specific capabilities while maintaining consistency and interoperability across different agent implementations.

Agent-to-Agent (A2A)

Agent-to-Agent (A2A) refers to communication protocols and frameworks that enable direct interaction and coordination between AI agents. A2A systems facilitate information sharing, task delegation, and collaborative problem-solving among multiple agents. In DataRobot's agentic workflows, A2A capabilities enable agents to work together seamlessly, share context and knowledge, and coordinate complex multi-agent operations while maintaining security and governance controls.

Alignment

The critical process of steering an AI model's outputs and behavior to conform to an organization's specific ethical guidelines, safety requirements, and business objectives. In DataRobot, alignment is practically applied through features like guardrails, custom system prompts, and content moderation policies. This practice helps to mitigate risks from biased, unsafe, or off-topic model responses, ensuring the AI remains a trustworthy and reliable tool for the enterprise.

App

See AI app.

Autonomy

The ability of an AI agent to operate independently and make decisions without constant human oversight. Autonomous agents can plan, execute, and adapt their behavior based on changing conditions and feedback. In DataRobot's agentic workflows, autonomous capabilities are balanced with human oversight through guardrails and monitoring to ensure safe and effective operation. Autonomy enables agents to handle complex, multi-step processes while maintaining alignment with business objectives and safety requirements.

C


Chain-of-thought

A prompting technique that encourages language models to break down complex problems into step-by-step reasoning processes. In DataRobot's agentic workflows, chain-of-thought prompting enhances agent reasoning capabilities by requiring explicit intermediate steps in decision-making, leading to more transparent and reliable outcomes. This technique improves problem-solving accuracy and enables better debugging and validation of agent behavior in multi-step tasks.
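
As an illustration (the wording below is a hypothetical prompt, not a DataRobot-specific API), a chain-of-thought instruction can be as simple as asking for explicit intermediate steps:

```python
# A hypothetical chain-of-thought prompt: the "think step by step" scaffold
# asks the model to show intermediate reasoning before the final answer.
prompt = (
    "A warehouse ships 240 units per day, and 3% of shipments are returned. "
    "How many net units ship over 30 days?\n"
    "Think step by step: compute daily returns, then daily net units, "
    "then the 30-day total, and finish with 'Answer: <number>'."
)
```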

Chat

Sending a prompt to the LLM endpoint based on a single LLM blueprint (and, as a result, sending an LLM payload) and receiving a response from the LLM. In this case, the context of previous prompts/responses is sent along with the payload.

Chunking

The action of taking a body of unstructured text and splitting it into smaller pieces of unstructured text (chunks).
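
A minimal sketch, assuming simple character-based splitting (production chunkers often split on tokens or sentence boundaries instead):

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split a body of unstructured text into overlapping chunks."""
    step = chunk_size - overlap
    # Each chunk starts `step` characters after the previous one, so
    # consecutive chunks share `overlap` characters of context.
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]
```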

Citation

Chunks of text from the vector database that were used during generation of the LLM response.

CLI

Command Line Interface (CLI) tools that enable programmatic interaction with DataRobot's agentic workflows and platform services. CLI tools provide scriptable access to agent configuration, workflow execution, and platform management functions. In DataRobot's agentic ecosystem, CLI tools support automation of agent deployment, monitoring, and maintenance tasks, enabling integration with CI/CD pipelines and automated workflows.

Cognitive architecture

The underlying structural framework that defines how AI agents process information, make decisions, and interact with their environment. Cognitive architectures specify the components, processes, and relationships that enable intelligent behavior in agents. In DataRobot's agentic workflows, cognitive architectures provide the foundation for agent reasoning, memory management, learning, and decision-making capabilities, enabling sophisticated autonomous behavior.

Connected vector database

An external vector database accessed through a direct connection to a supported provider in order to create a vector database. The data source is stored locally in the Data Registry, configuration settings are applied, and the resulting vector database is written back to the provider. Connected vector databases maintain real-time synchronization with the platform and provide seamless access to embeddings and text chunks for grounding LLM responses.

Context window

The limited amount of information, measured in tokens, that a large language model can hold in its active memory for a single chat conversation turn. This 'memory' includes the user's prompt, any recent conversation history provided, and data retrieved via Retrieval Augmented Generation (RAG). The size of the context window is a critical parameter in an LLM blueprint, as it dictates the model's ability to handle long documents or maintain coherence over extended dialogues; any information outside this window is not considered when generating the next response.

Conversation memory

The ability of an AI system to remember and reference previous interactions within a conversation session (meaning that the session contains one or more chat conversation turns). Conversation memory enables contextual continuity, allowing the AI to maintain awareness of earlier exchanges and build upon previous responses. In DataRobot's chat interfaces, conversation memory helps maintain coherent, contextually relevant dialogues.

D


Deploy (from playground)

The LLM blueprint and all of its associated settings are registered in the Registry and can be deployed across the DataRobot product suite.

Directed acyclic graph (DAG)

A mathematical structure used to represent workflows where nodes represent tasks or operations and edges represent dependencies between them. In AI workflows, DAGs ensure that tasks are executed in the correct order without circular dependencies, enabling efficient orchestration of complex multi-step processes like data preprocessing, model training, and deployment pipelines.
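
A minimal sketch using Python's standard-library graphlib, showing how a DAG yields a valid execution order (the task names are hypothetical):

```python
from graphlib import TopologicalSorter

# Map each task to the set of tasks it depends on.
dag = {
    "ingest": set(),
    "preprocess": {"ingest"},
    "train_model": {"preprocess"},
    "deploy": {"train_model"},
}
# static_order() yields tasks so that every dependency runs first.
print(list(TopologicalSorter(dag).static_order()))
# ['ingest', 'preprocess', 'train_model', 'deploy']
```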

E


Embedding

A numeric (vector) representation of text, or a collection of numeric representations of text. The action of generating embeddings means taking chunks of unstructured text and converting them into numeric representations using a text embedding model. The chunk is the input to the embedding model and the embedding is the "prediction" or output of the model.
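
An illustrative sketch using the open-source sentence-transformers library (an assumption here; in a DataRobot workflow the embedding model is configured in the platform):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # hypothetical model choice
chunks = ["Quarterly revenue grew 12%.", "Support tickets fell in March."]
embeddings = model.encode(chunks)   # chunks in, one vector per chunk out
print(embeddings.shape)             # e.g. (2, 384)
```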

Episodic memory

Memory systems that store specific experiences, events, and contextual information about past interactions and situations. Episodic memory enables AI agents to recall specific instances, learn from particular experiences, and apply contextual knowledge to similar situations. In DataRobot's agentic workflows, episodic memory allows agents to remember specific user interactions, successful task executions, and contextual details that inform future decision-making.

F


Few-shot learning

A capability of a model to learn to perform a task from a small number of examples provided in the prompt.

Few-shot prompting

A technique where a few examples are provided in the prompt (either in an input or system prompt) to guide the model's behavior and improve its performance on specific tasks. Few-shot prompting helps models understand the desired output format and style without requiring fine-tuning, making it useful for quick adaptation to new tasks or domains.
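
A hypothetical few-shot prompt; the two labeled examples establish the desired output format before the real input is appended:

```python
few_shot_prompt = """Classify the sentiment as positive or negative.

Review: The dashboard is intuitive and fast.
Sentiment: positive

Review: Setup took hours and the docs were unclear.
Sentiment: negative

Review: Exactly the tool our analysts needed.
Sentiment:"""
# The model completes the pattern, answering in the examples' format.
```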

Fine-tuning

The process of adapting pre-trained foundation models to specific tasks or domains by continuing training on targeted datasets. In DataRobot's platform, fine-tuning enables users to customize large language models for particular use cases, improving performance on domain-specific tasks while preserving general capabilities. Unlike prompt engineering which works with existing model weights, fine-tuning modifies the model's internal parameters to create specialized versions optimized for particular applications, industries, or data types.

Foundation model

A powerful, large-scale AI model, like GPT or Claude, that provides broad, general-purpose capabilities learned from massive datasets. In the DataRobot platform, these models act as the core component or 'foundation' of an LLM blueprint. Rather than being a ready-made solution, a foundation model is the versatile starting point that can be customized for specific business needs through techniques like prompting, RAG, or fine-tuning.

Function calling

The capability of large language models to invoke external functions, tools, or APIs based on user requests and conversation context. In DataRobot's agentic workflows, function calling enables agents to perform actions beyond text generation, such as data retrieval, mathematical computations, API interactions, and system operations. This allows agents to execute complex tasks, integrate with enterprise systems, and provide dynamic responses based on real-time information. Function calling transforms conversational AI into actionable systems that can manipulate data and interact with external services.
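
As a hedged sketch, a tool is typically described to the model as a JSON schema; the field names below follow a common provider convention, and `get_weather` is a hypothetical function:

```python
get_weather_tool = {
    "name": "get_weather",  # hypothetical function exposed to the model
    "description": "Look up the current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name"},
        },
        "required": ["city"],
    },
}
# The model responds with a structured call such as
# {"name": "get_weather", "arguments": {"city": "Boston"}}; your code runs
# the function and feeds the result back into the conversation.
```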

G


Generative AI (GenAI)

A type of artificial intelligence that generates new content based on learned patterns from training data. In DataRobot's platform, GenAI capabilities include text generation, content creation, and intelligent responses through LLM blueprints. Unlike traditional predictive models that analyze existing data, GenAI creates novel outputs through prompting and can be integrated into DataRobot workflows for content generation, analysis, and automated decision-making processes.

Guardrails

Safety mechanisms that prevent AI systems from generating harmful or inappropriate content. Guardrails include content filtering, output validation, and behavioral constraints that ensure AI responses align with safety guidelines and organizational policies. In DataRobot, guardrails can be configured and help maintain responsible AI practices and prevent the generation of unsafe or unethical content.

Grounding

The process of ensuring that language model responses are based on specific, verifiable data sources rather than relying solely on training data. In DataRobot's platform, grounding is achieved through Retrieval Augmented Generation (RAG) workflows that connect LLMs to vector databases containing relevant documents, knowledge bases, or enterprise data. This technique improves response accuracy, reduces hallucinations, and ensures that AI outputs are contextualized with current, relevant information from trusted sources.

H


Hallucination

When a language model generates information that is plausible-sounding but factually incorrect or not grounded in the provided data.

High code

A development approach that emphasizes custom programming and fine-grained control over application behavior. High-code solutions provide maximum flexibility and customization capabilities for complex requirements. In DataRobot's agentic workflows, high-code capabilities enable advanced users to create highly specialized agents with custom logic, integrate with complex enterprise systems, and implement sophisticated decision-making algorithms.

Human in the loop (HITL)

Integration patterns that incorporate human oversight, validation, and intervention into AI agent workflows. Human-in-the-loop systems enable humans to review agent decisions, provide feedback, correct errors, and guide agent behavior at critical decision points. In DataRobot's agentic workflows, human-in-the-loop capabilities ensure quality control, enable learning from human expertise, and maintain human authority over sensitive or high-stakes decisions.

I


In-context learning

The ability of LLMs to learn from examples provided in the prompt without requiring fine-tuning. In-context learning allows models to adapt their behavior based on the context and examples given in the current conversation, enabling them to perform new tasks or follow specific instructions without additional training.

Instruction tuning

Training LLMs to follow specific instructions or commands by fine-tuning them on instruction-response pairs. Instruction tuning improves a model's ability to understand and execute user requests, making it more useful for practical applications where following directions is important.

K


Knowledge cutoff

The date after which an LLM's training data ends, limiting its knowledge of historical events, information, and developments that occurred after that point. Knowledge cutoff dates are important for understanding the temporal scope of a model's information and determining when additional context or real-time data sources may be needed.

L


Large language model (LLM)

A deep learning model trained on extensive text datasets that can understand, generate, and process human language. In DataRobot's platform, LLMs form the core of LLM blueprints and can be configured with various settings, system prompts, and vector databases to create customized AI applications. These models enable DataRobot users to build intelligent chatbots, content generators, and analysis tools that can understand context and provide relevant responses.

LLM blueprint

A saved blueprint that can be used for deployment. An LLM blueprint represents the full context of everything needed to generate a response from an LLM; the resulting output can then be compared within the playground. This information is captured in the LLM blueprint settings.

LLM blueprint components

The entities that make up the LLM blueprint settings: the vector database, the embedding model used to generate the vector database, the LLM settings, the system prompt, and so on. These components can be provided natively within DataRobot or brought in from external sources.

LLM blueprint settings

The parameters sent to the LLM, in conjunction with the user-entered prompt, to generate a response. These include a single LLM, LLM settings, optionally a system prompt, and optionally a vector database. If no vector database is assigned, the LLM generates responses using what it learned during training. LLM blueprint settings can be modified, allowing you to experiment with different configurations.

LLM gateway

A centralized service in DataRobot that manages access to multiple large language models from external providers with support for unified authentication, rate limiting, and request routing. The LLM gateway enables organizations to standardize their interactions with various LLM providers while maintaining security, monitoring, and cost controls across all model usage.

LLM payload

The bundle of content sent to the LLM endpoint to generate a response. It includes the user prompt, LLM settings, the system prompt, and information retrieved from the vector database.
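
Schematically (an illustrative shape, not DataRobot's literal wire format), a payload bundles:

```python
payload = {
    "system_prompt": "You are a concise analytics assistant.",
    "prompt": "Summarize last quarter's churn drivers.",
    "llm_settings": {"temperature": 0.2, "top_p": 0.9, "max_tokens": 512},
    "context": ["<chunks retrieved from the vector database>"],
    "history": [],  # prior prompt/response turns, if any
}
```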

LLM response

The text generated by the LLM based on the payload sent to the LLM endpoint.

LLM settings

The parameters that define how the LLM takes a user prompt as input and generates a response. These can be tuned within an LLM blueprint to change the responses. Currently, these parameters are represented by the Temperature, Top P, and Max output tokens settings.

Low code

A development approach that minimizes the amount of manual coding required to build applications and workflows. Low-code platforms provide visual interfaces, drag-and-drop components, and pre-built templates that enable rapid development. In DataRobot's agentic workflows, low-code capabilities allow users to create sophisticated AI agents and workflows through configuration interfaces rather than extensive programming, making agentic AI accessible to non-technical users.

M


Multi-agent flow

A workflow pattern where multiple AI agents collaborate to solve complex problems by dividing tasks among specialized agents. Each agent has specific capabilities and responsibilities, and they communicate and coordinate to achieve the overall objective. Multi-agent flows enable more sophisticated problem-solving by leveraging the strengths of different specialized agents. See also Agentic workflow.

Model Context Protocol (MCP) server

A Model Context Protocol (MCP) server provides standardized interfaces for AI agents to interact with external systems and data sources. MCP servers enable secure, controlled access to tools, databases, APIs, and other resources that agents need to accomplish their tasks. In DataRobot's agentic workflows, MCP servers facilitate seamless integration between agents and enterprise systems while maintaining security and governance controls.

Model alignment

Techniques to ensure AI models behave according to human values and intentions. Model alignment involves training and fine-tuning processes that help models produce outputs that are helpful, honest, and harmless, reducing risks of harmful or unintended behaviors in production environments.

Moderation

The process of monitoring and filtering model outputs to ensure they comply with safety, ethical, and policy guidelines.

N


NAT

Neural Architecture Transfer (NAT) enables efficient transfer of learned representations and architectures between different AI models and tasks. NAT techniques allow agents to leverage pre-trained components and adapt them for specific use cases without full retraining. In DataRobot's agentic workflows, NAT capabilities enable rapid deployment of specialized agents by transferring knowledge from general-purpose models to domain-specific applications.

NIM

NVIDIA Inference Microservice (NIM) is a containerized AI model that provides optimized, high-performance inference with low latency and efficient resource utilization. In DataRobot's platform, NIMs can be integrated into agentic workflows to provide advanced AI capabilities, allowing agents to leverage state-of-the-art models for specific tasks while maintaining optimal performance and scalability.

O


One-shot learning

A capability of a model to learn to perform a task from only a single example.

Orchestration

The coordination of multiple AI components, tools, and workflows to achieve complex objectives. Orchestration involves managing the flow of data and control between different AI services, ensuring proper sequencing, error handling, and resource allocation. In DataRobot, orchestration enables the creation of sophisticated multi-step AI workflows that combine various capabilities and tools.

P


Parameter efficient fine-tuning (PEFT)

Methods to fine-tune large models using fewer parameters than full fine-tuning. PEFT techniques, such as LoRA (Low-Rank Adaptation) and adapter layers, allow for efficient model customization while maintaining most of the original model's performance and reducing computational requirements.

Playground

Where you create and work with LLM blueprints (LLMs and their associated settings), comparing the responses of each to determine which to use in production. Many LLM blueprints can exist within a playground. A playground is an asset of a Use Case; a single Use Case can have multiple playgrounds.

Playground comparison

Where you add LLM blueprints to a playground for comparison, send prompts to those LLM blueprints, and evaluate the rendered responses. In RAG, a single prompt is sent to the LLM and a single response is generated, without reference to previous prompts. This lets users compare responses from multiple LLM blueprints.

Prompt

The input you provide during a chat, used to generate the LLM's response.

Prompt engineering

The practice of designing and refining input prompts to guide a language model toward producing desired outputs.

Prompt injection

A security vulnerability where malicious prompts can override system instructions or safety measures. Prompt injection attacks attempt to manipulate AI systems into generating inappropriate content or performing unintended actions by crafting inputs that bypass the model's intended constraints and guidelines.

Prompt template

See System prompt.

Pulumi

Infrastructure as Code (IaC) platform that enables developers to define and manage cloud infrastructure using familiar programming languages. Pulumi supports multiple cloud providers and provides a unified approach to infrastructure management. In DataRobot's agentic workflows, Pulumi enables automated provisioning and management of infrastructure resources needed for agent deployment, scaling, and monitoring across different environments.

R


Reinforcement learning from human feedback (RLHF)

A training method that uses human feedback to improve model behavior. RLHF involves collecting human preferences on model outputs and using reinforcement learning techniques to fine-tune the model to produce responses that align with human values and preferences, improving safety and usefulness.

ReAct

The Reasoning and Acting (ReAct) framework combines reasoning capabilities with action execution in AI agents. ReAct enables agents to think through problems step-by-step, plan actions, execute them, and observe results to inform subsequent reasoning. In DataRobot's agentic workflows, ReAct capabilities allow agents to perform complex problem-solving by iteratively reasoning about situations, taking actions, and learning from outcomes to achieve their goals.
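
A skeletal ReAct loop under simplifying assumptions; `llm` and `tools` are hypothetical stand-ins for a model client and a tool registry:

```python
def react_loop(llm, tools, task, max_steps=5):
    transcript = f"Task: {task}\n"
    for _ in range(max_steps):
        step = llm(transcript)                  # reason: model proposes a step
        transcript += f"Thought: {step.thought}\n"
        if step.final_answer is not None:       # model decided it is done
            return step.final_answer
        observation = tools[step.action](step.action_input)   # act
        transcript += f"Action: {step.action}\nObservation: {observation}\n"
    return None                                 # gave up within the step budget
```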

Retrieval

The process of finding relevant information from a knowledge base or database. In the context of RAG workflows, retrieval involves searching through vector databases or other knowledge sources to find the most relevant content that can be used to ground and inform AI responses, improving accuracy and reducing hallucination.

Retrieval Augmented Generation (RAG)

The process of sending the LLM a payload that includes the prompt, system prompt, LLM settings, and a vector database (or a subset of one), with the LLM returning text that corresponds to this payload. It involves retrieving relevant information from the vector database and sending it to the LLM endpoint along with the prompt, system prompt, and LLM settings to generate a response grounded in the data in the vector database. The operation can optionally incorporate orchestration to run chains of multiple prompts.

Retrieval Augmented Generation (RAG) workflow

An AI system that runs RAG, which includes data preparation, vector database creation, LLM configuration, and response generation. RAG workflows typically involve steps such as document chunking, embedding generation, similarity search, and context-aware response generation, all orchestrated to provide accurate, grounded responses to user queries. See also Retrieval Augmented Generation (RAG).
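
A high-level sketch of the flow, where `embed`, `vector_db`, and `llm` are hypothetical stand-ins for components the platform provides:

```python
def answer_with_rag(question, embed, vector_db, llm, k=3):
    query_vec = embed(question)                    # embed the user question
    chunks = vector_db.search(query_vec, top_k=k)  # similarity search
    context = "\n\n".join(chunks)                  # citations used for grounding
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return llm(prompt)                             # grounded response
```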

S


Semantic memory

Memory systems that store general knowledge, facts, concepts, and relationships that are not tied to specific experiences. Semantic memory enables AI agents to maintain domain knowledge, understand concepts, and apply general principles to new situations. In DataRobot's agentic workflows, semantic memory allows agents to maintain knowledge about business processes, domain expertise, and general problem-solving strategies.

Semantic search

A search method that finds content based on meaning rather than exact keyword matches. Semantic search uses vector embeddings to understand the intent and context of queries, enabling more accurate and relevant results even when the exact words don't match. This approach is particularly useful in RAG systems for finding the most relevant information to ground AI responses.

Short-term memory

Temporary storage systems that AI agents use to maintain context and information during active task execution. Short-term memory enables agents to remember recent interactions, maintain conversation context, and track progress on current tasks. In DataRobot's agentic workflows, short-term memory allows agents to maintain coherence across multi-step processes and provides continuity in user interactions.

Long-term memory

Persistent storage systems that AI agents use to retain knowledge, experiences, and learned patterns across multiple sessions and tasks. Long-term memory enables agents to build upon previous experiences, maintain learned behaviors, and accumulate domain knowledge over time. In DataRobot's agentic workflows, long-term memory allows agents to improve performance through experience and maintain consistency across different use cases.

Streaming

Real-time generation of text where output is displayed as it's being generated. Streaming provides immediate feedback to users by showing AI responses as they are produced, rather than waiting for the complete response. This approach improves user experience by reducing perceived latency and allowing users to see progress in real-time.

Single agent flow

A workflow pattern where a single AI agent handles all aspects of a task from start to finish. The agent receives input, processes it through its capabilities, and produces output without requiring coordination with other agents. Single agent flows are suitable for straightforward tasks that can be completed by one specialized agent.

Sidecar model

A structural component that supports the LLM returning answers, helping to determine, for example, whether a prompt is toxic or an injection attack. In DataRobot, monitoring is performed using hosted custom metrics.

Stop sequence

A specific token or set of tokens that signals a language model to stop generating further output.

Syftr

A specialized agent framework component that provides secure, privacy-preserving data processing capabilities for AI agents. Syftr enables agents to work with sensitive data while maintaining confidentiality and compliance with privacy regulations. In DataRobot's agentic workflows, Syftr components allow agents to process encrypted or anonymized data, perform federated learning, and maintain data privacy throughout the agent lifecycle.

System prompt

An optional field, the system prompt is a "universal" prompt prepended to every individual prompt. It directs and formats the LLM's responses. The system prompt can influence the structure, tone, format, and content created during response generation.

T


Temperature

A parameter that controls the creativity and randomness of LLM responses. Lower temperature values (0.1-0.3) produce more focused, consistent outputs suitable for factual responses, while higher values (0.7-1.0) generate more creative and diverse content. DataRobot's playground interface allows you to experiment with different temperature values in LLM blueprint settings to find the optimal balance for your specific use case.
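
Mechanically, temperature rescales the model's logits before they are turned into probabilities; a minimal illustration:

```python
import math

def softmax_with_temperature(logits, temperature):
    # Dividing logits by the temperature sharpens (t < 1) or flattens
    # (t > 1) the resulting probability distribution.
    scaled = [x / temperature for x in logits]
    peak = max(scaled)
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

print(softmax_with_temperature([2.0, 1.0, 0.1], 0.3))  # peaked, near-greedy
print(softmax_with_temperature([2.0, 1.0, 0.1], 1.5))  # flatter, more diverse
```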

Template

Pre-configured frameworks or structures that provide a starting point for creating agentic workflows, applications, or configurations. Templates in DataRobot include predefined agent configurations, workflow patterns, and code structures that accelerate development and ensure best practices. Templates can include agent goals, tool configurations, guardrails, and integration patterns, allowing users to quickly deploy sophisticated agentic systems without starting from scratch.

Token

The smallest unit of text that LLMs process when parsing prompts/generating responses. In DataRobot's platform, tokens are used to measure input/output size of chats and calculate usage costs for LLM operations. When you send prompts to LLM blueprints, the system tokenizes your text and tracks consumption for billing and performance monitoring. Token usage is displayed in DataRobot's playground and deployment interfaces to help you optimize costs and stay within platform limits.

Tokenization

The process of breaking text into smaller units called tokens, which can be words, subwords, or characters, for processing by a language model.
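
For illustration with the open-source tiktoken library (an assumption; the actual tokenizer depends on the underlying model):

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
tokens = enc.encode("Agents orchestrate multi-step workflows.")
print(len(tokens), tokens[:5])   # token count and the first few token IDs
print(enc.decode(tokens))        # decoding round-trips to the original text
```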

Token usage

The number of tokens consumed by an LLM for input and output, often used for billing and cost management. Token usage is a key metric for understanding the computational cost of AI operations, as most LLM providers charge based on the number of tokens processed. Monitoring token usage helps optimize costs and resource allocation in AI applications.

Tool

A software component or service that provides specific functionality to AI agents or workflows. Tools can perform various tasks such as data retrieval, computation, API calls, or specialized processing. In DataRobot's agentic workflows, tools are modular components that agents can invoke to extend their capabilities and perform complex operations beyond their core functionality.

Toolkit

A collection of tools, utilities, and resources designed to support the development and deployment of agentic AI systems. Toolkits provide standardized interfaces, common functionality, and best practices for building AI agents. In DataRobot's platform, toolkits include pre-built tools for data processing, model training, API integration, and workflow orchestration, enabling rapid development of sophisticated agentic applications.

Top-k

A decoding parameter that limits the model's next-token choices to the k most likely options, sampling from only those candidates to generate more focused or creative responses.

Top-p (nucleus sampling)

A decoding parameter that limits the model's next-token choices to the smallest set whose cumulative probability exceeds a threshold p, allowing for dynamic selection of likely tokens.
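
A minimal sketch of nucleus sampling over a toy next-token distribution, assuming the model exposes per-token probabilities (top-k is the same idea with a fixed cutoff):

```python
import random

def sample_top_p(token_probs, p=0.9):
    """Nucleus sampling: keep the smallest most-likely set whose cumulative
    probability reaches p, renormalize, and sample from it."""
    ranked = sorted(token_probs.items(), key=lambda kv: kv[1], reverse=True)
    nucleus, cumulative = [], 0.0
    for token, prob in ranked:
        nucleus.append((token, prob))
        cumulative += prob
        if cumulative >= p:
            break
    tokens, probs = zip(*nucleus)
    total = sum(probs)
    return random.choices(tokens, weights=[pr / total for pr in probs])[0]

# Top-k would simply take nucleus = ranked[:k] instead.
print(sample_top_p({"the": 0.5, "a": 0.3, "an": 0.15, "of": 0.05}, p=0.9))
```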

Toxicity

The presence of harmful, offensive, or inappropriate language in model outputs, which safety and moderation systems aim to detect and prevent.

U


Unstructured text

Text that does not fit neatly into a table. The most common example is a large block of text, typically from a document or form of some kind.

V


Vector database

A specialized database that stores text chunks alongside their numerical representations (embeddings) for efficient similarity search. In DataRobot's platform, vector databases enable RAG operations by allowing LLM blueprints to retrieve relevant information from large document collections. When you upload documents to DataRobot, the system automatically chunks the text, generates embeddings, and stores them in a vector database that can be connected to LLM blueprints for grounded, accurate responses based on your specific content.
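
A toy retrieval sketch (real vector databases use approximate-nearest-neighbor indexes at scale); `embeddings` is assumed to be a NumPy matrix with one row per stored chunk, and `chunks` a parallel list of texts:

```python
import numpy as np

def top_chunks(query_vec, embeddings, chunks, k=2):
    # Cosine similarity: normalize, then a dot product scores each chunk.
    q = query_vec / np.linalg.norm(query_vec)
    m = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    scores = m @ q
    best = np.argsort(scores)[::-1][:k]   # indexes of the k highest scores
    return [chunks[i] for i in best]
```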

Z


Zero-shot learning

A capability of a model to perform a task without having seen any examples of that task during training, relying on generalization from related knowledge.