How to create Reusable AI Agents as Frends Subprocesses

Reusable AI Agents as part of your Processes.

This guide explains how to design and use Frends Subprocesses as reusable, autonomous AI Agents. By encapsulating AI-driven logic within a Subprocess, you can create modular, intelligent components that can be called from any Process.

In Frends, an AI Agent is a design pattern, not a specific component. It can be implemented as a specialized Subprocess that performs a specific business function autonomously using the Intelligent AI Connector to reason and act. For example, you can build an "AI Invoice Agent" to handle invoice processing or an "AI Support Agent" to classify customer tickets.

This approach allows you to treat AI logic as a reusable tool, promoting modularity and simplifying complex workflows. You will also learn how to execute these AI Agents on different Agent Groups, such as an on-premises Agent, to run local AI models for enhanced security and data privacy.

The AI Agent Design Pattern

An AI Agent as a Frends Subprocess is designed to function as a self-contained, intelligent unit. It uses Large Language Models (LLMs) within the Frends Platform to perform specific business functions autonomously.

Think of an AI Agent Subprocess as being built for a single, well-defined business task, such as extracting data from a document or classifying an email. The AI agent uses one or more AI Connectors to perform analysis, make decisions, or generate content. It can also leverage other Frends Tasks and Connectors as tools to interact with external systems, such as calling an API to fetch customer data from a CRM.

Because it's a Subprocess, the AI Agent is fully reusable and can be called from any parent Process. This means you maintain the logic in one place. Every action, prompt, and reasoning step taken by the AI is logged within the Frends Process Instance, providing full auditability and transparency.

By building agentic logic into a Subprocess, you create a clear distinction between the core business process orchestration and the specialized AI action. This makes your integrations easier to manage and scale.

How to build a Reusable AI Agent

In this example, we will create an AI Invoice Agent. This agent will be a Subprocess that receives an invoice as a PDF file, extracts key information using AI, and returns a structured JSON object.

Create the Subprocess and Define Inputs

Start by creating a new Subprocess in the Frends UI. Navigate to Subprocesses and click Create New. When you set up the Subprocess, select the Manual Trigger to define the input parameters your agent will accept.

For our invoice agent, you'll need to accept a file as input. In the trigger settings, add a parameter with Key set to InvoiceFilePath, and a clear Description like "File path to the invoice file to be processed (e.g., PDF, PNG)."

This trigger configuration creates a clear interface for the agent, specifying that it expects a file path as input.
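
Downstream elements can then reference this parameter wherever the file path is needed. As a sketch only, the reference might look like the line below; the exact reference path for Manual Trigger parameters depends on your Frends setup and naming:

{{#trigger.data.InvoiceFilePath}}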

Implement the AI Logic

Now it's time to add the core AI logic. Add an AI Connector to the canvas and connect it to the Trigger. When you configure the connector, you can pass the file path from the Trigger directly as a parameter if you're using a multimodal model like OpenAI's GPT-4o.

In the User Prompt field, provide clear instructions for what the AI should do. Here's an example prompt that tells the AI to extract invoice data:

Extract the following fields from the provided invoice document:
- Invoice Number
- Invoice Date
- Due Date
- Vendor Name
- Total Amount
- A list of line items, where each item includes a description, quantity, and price.

Return the result as a single, clean JSON object. Do not include any explanatory text outside of the JSON.

This prompt instructs the AI to perform optical character recognition (OCR), understand the document's structure, and return the data in a predictable format. The AI Connector already includes a system prompt created by Frends that further optimizes the result for use within the Frends Process.

Enrich Data Using Tools (Optional)

Your AI agent can use other Frends Tasks as tools to perform actions or gather more information. For instance, after extracting the vendor name, the AI agent could look up the vendor's ID in an ERP system. To do this, add an HTTP Request Task after the AI Connector and configure it to call your internal ERP API. You can use the vendor name extracted by the AI as a query parameter like this: https://my-erp.api/vendors?name={{#result[Intelligent AI Connector].Response.VendorName}}.

The result of this Task—such as the vendor ID—can then be merged with the invoice data extracted by the AI when you construct the final return object.
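
For illustration, a hypothetical response from such an ERP lookup might look like the following; the field names are assumptions about your ERP API, not something Frends defines:

{
  "id": "VNDR-5678",
  "name": "Example Corp"
}

The "id" value is what becomes "vendorId" in the agent's final output below.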

Define the Return Value

To complete your agent, add a Return shape at the end of the Subprocess. Configure the Return shape to output a structured JSON object containing all the processed and enriched data. You can use an expression to construct the final object from the results of the previous steps.
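
As a sketch, the Return value could be assembled from a JSON template that references the earlier results. The element name "Get Vendor" for the HTTP Request Task and the response property names below are illustrative assumptions, and non-string values are inserted without surrounding quotes; adapt the references to the actual element names and response structure in your own Process:

{
  "invoiceNumber": "{{#result[Intelligent AI Connector].Response.InvoiceNumber}}",
  "invoiceDate": "{{#result[Intelligent AI Connector].Response.InvoiceDate}}",
  "dueDate": "{{#result[Intelligent AI Connector].Response.DueDate}}",
  "vendorName": "{{#result[Intelligent AI Connector].Response.VendorName}}",
  "vendorId": "{{#result[Get Vendor].Body.id}}",
  "totalAmount": {{#result[Intelligent AI Connector].Response.TotalAmount}},
  "lineItems": {{#result[Intelligent AI Connector].Response.LineItems}}
}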

Here's what the final JSON output might look like:

{
  "invoiceNumber": "INV-2024-101",
  "invoiceDate": "2024-10-15",
  "dueDate": "2024-11-14",
  "vendorName": "Example Corp",
  "vendorId": "VNDR-5678",
  "totalAmount": 1500.00,
  "lineItems": [
    {
      "description": "Product A",
      "quantity": 2,
      "price": 500.00
    },
    {
      "description": "Service B",
      "quantity": 1,
      "price": 500.00
    }
  ]
}

After saving and deploying this Subprocess, your "AI Invoice Agent" is ready to be called from any Frends Process.

Executing AI Agents on Different Agent Groups

A powerful feature of Frends is the ability to execute Subprocesses on different Agent Groups. This is useful for AI use cases where data cannot leave the premises for security or compliance reasons. You can create an AI Agent that runs on an on-premises Frends Agent and uses a locally hosted LLM, such as one served via Ollama.

Set Up On-Premises Environment

Begin by installing a Frends Agent on a server within your local network. Deploy a local LLM server, such as Ollama, on the same network. Create a dedicated Agent Group for this on-premises Agent. You might name it something like "OnPrem-AI-Agents" to clearly identify its purpose.
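
If you use Ollama, pull the model you plan to use onto that server before wiring up the agent; the model name here is only an example:

ollama pull llama3.1

By default, Ollama listens on port 11434 on the host where it runs, which is the address the AI Agent will call in the next step.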

Develop and Deploy the AI Agent Subprocess

Create your Subprocess following the same pattern described in the previous example. The key difference is in the AI Connector configuration—instead of pointing to a cloud-based LLM, provide the URL of your local LLM instance. Deploy this Subprocess and assign it to the "OnPrem-AI-Agents" Agent Group you just created.
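
As a rough illustration, assuming the connector accepts an OpenAI-compatible endpoint and Ollama is running on the same host as the Frends Agent with default settings, the connection details might look like this; the URL, model name, and exact setting names are assumptions to adapt to your connector's actual configuration fields:

Endpoint URL: http://localhost:11434/v1
Model: llama3.1
API key: not required for a default local Ollama installation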

Call the Subprocess Remotely

In your main Process, which might be running in the Frends cloud, add a Call Subprocess Task and select the AI Agent Subprocess you created. In the Subprocess settings, expand Show advanced settings and enable Remote call. Configure the remote call to execute in the "OnPrem-AI-Agents" group for the relevant Environment.

When the main Process runs, the Call Subprocess Task will securely send the request to the on-premises Agent over Azure Service Bus. The AI Agent will execute locally, process the data using the local LLM, and return the result over the same service bus connection. This ensures that sensitive data is processed entirely within your network, and no data is transferred directly from the on-premises environment to the cloud.

Best Practices for Designing AI Agents

  • Single Responsibility Principle: Design each AI Agent to perform one specific business function. This makes them easier to test, maintain, and reuse.

  • Stateless Design: Whenever possible, design agents to be stateless. They should receive input, perform their task, and return an output without retaining memory of past executions. For stateful operations, use Frends' long-running process capabilities or an external data store.

  • Define Clear Interfaces: Use well-defined parameters in the Manual Trigger and return a consistent, predictable data structure (like a JSON object) from the Return shape.

  • Robust Error Handling: Use Scope shapes with Catch blocks to handle potential failures, such as the AI model being unavailable or returning an invalid response. You can also define a global error-handling Subprocess for unhandled exceptions.
