Pipelines are generated by the selected AI agent whenever an application calls the RAG framework. The agent examines the incoming request, determines the most effective approach, and constructs a pipeline by selecting the appropriate tools from the tool repository. Each pipeline breaks the main task down into a sequence of smaller, manageable subtasks. These subtasks are executed in the order specified by the agent, with each subtask invoking a specific tool. The RAG framework uses OpenAI’s function-calling capabilities to produce structured outputs, keeping the data passed between steps well organized.
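
As an illustration, the sketch below models a pipeline as an ordered list of subtasks, each naming a tool from the tool repository together with its arguments. The `Pipeline` and `Subtask` classes, the tool names, and the `$step_0.output` placeholder convention are assumptions made for this example, not the framework's actual API.

```python
from dataclasses import dataclass, field


@dataclass
class Subtask:
    """One step of an agent-built pipeline: a single call to a tool."""
    tool: str             # name of a tool in the tool repository (hypothetical)
    arguments: dict       # arguments, possibly referencing earlier outputs
    output: object = None # filled in by the agent after the step executes


@dataclass
class Pipeline:
    """An ordered sequence of subtasks produced by the agent for one request."""
    request: str
    subtasks: list = field(default_factory=list)


# Example: a pipeline the agent might build for a retrieve-and-summarize request.
pipeline = Pipeline(
    request="Summarize our Q3 sales figures",
    subtasks=[
        Subtask(tool="retrieve_documents", arguments={"query": "Q3 sales figures"}),
        Subtask(tool="summarize", arguments={"documents": "$step_0.output"}),
    ],
)
```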

Structure of a Pipeline

Each step in the pipeline follows an Input-Process-Output (IPO) loop with built-in error handling to ensure reliable execution (see the sketch after this list):
  • Input: Each step starts with input arguments, defined by the tool being used. These arguments, derived from previous steps or initial data, serve as the input for the IPO loop.
  • Process: The tool processes the input according to its function, whether retrieving data, performing calculations, or generating a response. A process can be a simple computation, an API call, or any other conventional programmatic task.
  • Output: The tool produces a return value as output. This return value is collected by the AI agent, which stores it for use as an argument in the next step of the pipeline, maintaining continuity and data flow across steps.
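
The sketch below walks one possible IPO loop over the pipeline structure from the earlier example: arguments are resolved from earlier return values (Input), the named tool is invoked (Process), and its result is stored for later steps (Output). The `resolve_arguments` helper, the dict-based `tool_registry`, and the `$step_N.output` placeholder syntax are assumptions for illustration.

```python
def resolve_arguments(arguments: dict, outputs: list) -> dict:
    """Input: replace "$step_N.output" placeholders with earlier return values."""
    resolved = {}
    for name, value in arguments.items():
        if isinstance(value, str) and value.startswith("$step_"):
            index = int(value.removeprefix("$step_").split(".")[0])
            resolved[name] = outputs[index]
        else:
            resolved[name] = value
    return resolved


def run_pipeline(pipeline, tool_registry: dict) -> list:
    """Execute each subtask in order, feeding earlier outputs into later steps."""
    outputs = []
    for subtask in pipeline.subtasks:
        args = resolve_arguments(subtask.arguments, outputs)   # Input
        subtask.output = tool_registry[subtask.tool](**args)   # Process
        outputs.append(subtask.output)                         # Output, kept for later steps
    return outputs
```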
(Figure: RAG Framework)

Error Handling and Iterative Self-Correction

If a step fails, built-in error handling lets the AI agent address the issue rather than abandoning the pipeline. The agent interprets the error message and iteratively rebuilds or adjusts the pipeline, using the error feedback to select alternative tools or change parameters.
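
A minimal sketch of such a self-correction loop is shown below. It assumes the hypothetical `run_pipeline` from the earlier sketch and hypothetical `agent.build_pipeline` / `agent.revise_pipeline` methods; the real framework may cap retries and route error feedback differently.

```python
MAX_ATTEMPTS = 3  # assumed retry budget, not a framework setting


def run_with_self_correction(agent, request: str, tool_registry: dict) -> list:
    """Build a pipeline, run it, and let the agent revise it on failure."""
    pipeline = agent.build_pipeline(request)
    for _ in range(MAX_ATTEMPTS):
        try:
            return run_pipeline(pipeline, tool_registry)
        except Exception as error:
            # The agent interprets the error text and may swap tools or adjust parameters.
            pipeline = agent.revise_pipeline(pipeline, error_message=str(error))
    raise RuntimeError(f"Pipeline still failing after {MAX_ATTEMPTS} attempts")
```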