# How to visualize Frends telemetry with OpenTelemetry

OpenTelemetry is a powerful, open-source observability framework for collecting telemetry data like metrics and traces. By enabling OpenTelemetry in your Frends Agent, you can send operational data about Agent health, infrastructure operations, and external dependencies to various observability platforms—whether that's Grafana, Datadog, Splunk, or Honeycomb. This guide walks you through setting up the complete pipeline from your Frends Agent to your chosen monitoring platform.

{% hint style="info" %}
OpenTelemetry support was introduced in Frends version 6.1, with significant improvements to the feature in version 6.2. We recommend upgrading to the latest Frends version if you wish to use OpenTelemetry.
{% endhint %}

## Prerequisites

Before you begin, make sure you have a running Frends Agent on which you can modify configuration files, along with an account on your chosen observability platform (Grafana, Datadog, Splunk, Honeycomb, or similar).

The key piece you'll need is a running instance of the [OpenTelemetry Collector](https://opentelemetry.io/docs/collector/getting-started/). The Collector is a separate piece of software that sits between your Frends Agent and your monitoring platform, acting as a processing pipeline for your telemetry data. Think of it as a smart relay that receives data from Frends, processes it, and forwards it wherever you need it to go.

The OpenTelemetry Collector is commonly run as a Docker container, in which case Docker must be installed on the Agent machine; standalone binaries and OS packages are also available. Please refer to the [OpenTelemetry Collector instructions](https://opentelemetry.io/docs/collector/getting-started/) for installation options.

### Enabling OpenTelemetry in your Frends Agent

The first thing you'll need to do is configure your Frends Agent to start collecting and exporting telemetry data. This happens through the [Agent's configuration file](/reference/architecture/agent-application-settings.md), and specifically you'll want to use the `appsettings.production.json` file. Using this specific file ensures your changes persist through Agent updates, which saves you from having to reconfigure things later.

Locate the `appsettings.production.json` file in your Frends Agent installation directory and open it in a text editor. Add the following two settings at the top level of the JSON object, merging them with any settings already present.

```json
{
  "EnableOTPLMetrics": true,
  "EnableOTPLTracing": true
}
```

The `EnableOTPLMetrics` setting tells the Frends Agent to start collecting and publishing OpenTelemetry metrics, including Agent health indicators and performance data from infrastructure operations. The `EnableOTPLTracing` setting enables trace collection, which currently instruments the Agent's health check endpoints (`/FrendsStatusInfo` and `/FrendsStatusInfoLiveness`) along with SQL client operations and HTTP client requests that your Processes initiate.

Once you've added these settings, save the file and restart the Frends Agent service for the changes to take effect. The Agent will now start collecting telemetry, but it needs to know where to send it.

### Configuring the OpenTelemetry endpoint

The Frends Agent sends its telemetry data to an endpoint specified by an **operating system environment variable**. You'll need to set this variable on the server where your Frends Agent is running.

Create a new environment variable named `OTEL_EXPORTER_OTLP_ENDPOINT` and set its value to the URL of your OpenTelemetry Collector's receiver. The Collector typically listens on port `4317` for gRPC connections or port `4318` for HTTP connections. If you're running the Collector on the same machine as your Frends Agent, the endpoint would be something like `http://localhost:4317`.

The beauty of using an environment variable is that the Frends Agent will automatically pick it up and use it to push telemetry data. No hardcoding, no config file changes beyond what you've already done—just set the variable and the Agent handles the rest.
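On Linux, for example, you could set the variable for the current session like this (the URL is an example; point it at wherever your Collector actually listens):

```shell
# Example only: adjust the URL to your Collector's receiver endpoint.
# On Windows, set the variable via System Properties or `setx` instead.
export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4317"
```

Keep in mind that the variable must be visible to the Frends Agent service process, so a session-level `export` is only suitable for quick testing; set it system-wide (or in the service definition) for production use.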

{% hint style="info" %}
On Frends version 6.1, you will also need to specify the `HttpStatusinfoPort` value in [Frends application settings](/reference/architecture/agent-application-settings.md#httpstatusinfoport) to enable OpenTelemetry. The OpenTelemetry Collector will need to connect to this port.
{% endhint %}

### Setting up the OpenTelemetry Collector

Next, configure the OpenTelemetry Collector itself. The Collector acts as a smart pipeline: it receives data from sources like your Frends Agent, optionally processes or transforms that data, and then exports it to one or more destinations. You might send metrics to Datadog while simultaneously logging traces to Splunk, or forward everything to Grafana—the Collector gives you that flexibility.

Create a configuration file for your Collector. You can name it something like `otel-collector-config.yaml`. The configuration has a straightforward structure with three main sections: receivers (where data comes from), exporters (where data goes to), and processors (what happens to the data in between). All of these come together in a service pipeline definition.

Let's start with the receivers section. Since your Frends Agent will be sending OpenTelemetry data, you'll use the OTLP receiver, which supports both gRPC and HTTP protocols.

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
```

This tells the Collector to listen for incoming data on both ports, making it flexible enough to accept data regardless of which protocol the Frends Agent uses.

Next up is the exporters section, where you configure where your telemetry data should go. The configuration here depends entirely on which platform you're using. Here are some examples for different platforms—you'll want to configure the one that matches your setup.

For Datadog, you'll need your API key and the configuration looks something like this:

```yaml
exporters:
  datadog:
    api:
      key: ${DATADOG_API_KEY}
    metrics:
      summaries:
        mode: "distribution"
```

If you're using Splunk, you'll configure the HTTP Event Collector (HEC) endpoint:

```yaml
exporters:
  splunk_hec:
    token: ${SPLUNK_HEC_TOKEN}
    endpoint: ${SPLUNK_ENDPOINT}
```

For Honeycomb, you'll use the OTLP HTTP exporter with your API key in the headers:

```yaml
exporters:
  otlphttp/honeycomb:
    endpoint: "https://api.honeycomb.io"
    headers:
      "x-honeycomb-team": ${HONEYCOMB_API_KEY}
      "x-honeycomb-dataset": "frends-telemetry"
```

And if you're using Grafana with a backend like Prometheus or Tempo, you might use exporters like `prometheus` or `otlp` depending on your Grafana stack configuration.
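For example, a Grafana-oriented setup might expose metrics for Prometheus to scrape and push traces to a Tempo instance over OTLP. The endpoints below are illustrative; adjust them to match your own stack:

```yaml
exporters:
  # Prometheus scrapes the Collector itself on this port
  prometheus:
    endpoint: 0.0.0.0:8889
  # Push traces to a Tempo instance over OTLP/gRPC
  otlp/tempo:
    endpoint: tempo:4317
    tls:
      insecure: true
```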

While you're testing and setting things up, it's incredibly useful to include a logging exporter that simply writes telemetry data to the console:

```yaml
exporters:
  logging:
    loglevel: detailed
```

This lets you see exactly what data is flowing through your pipeline, which makes troubleshooting much easier. Note that recent Collector releases deprecate the `logging` exporter in favor of the `debug` exporter, which takes a `verbosity: detailed` setting instead of `loglevel`.

The processors section is optional but recommended. At minimum, you'll want to include a batch processor, which groups telemetry data into batches before sending it to your exporters. This reduces network overhead and improves performance.

```yaml
processors:
  batch:
```
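The batch processor works fine with its defaults, but it can be tuned if needed; the values below are illustrative, not recommendations:

```yaml
processors:
  batch:
    timeout: 5s           # flush a batch at least this often...
    send_batch_size: 512  # ...or once this many items have accumulated
```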

Finally, you tie everything together in the service section, where you define your pipelines. You'll typically want separate pipelines for traces and metrics since they're different types of data:

```yaml
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [logging, datadog]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [logging, datadog]
```

In this example, both traces and metrics come in through the OTLP receiver, get batched for efficiency, and then get exported to both the console (for debugging) and Datadog (for visualization). You can adjust the exporters list to match whichever platforms you're using—just replace `datadog` with `splunk_hec`, `otlphttp/honeycomb`, or whatever exporter you configured.

Here's what a complete configuration might look like:

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318

exporters:
  datadog:
    api:
      key: ${DATADOG_API_KEY}
    metrics:
      summaries:
        mode: "distribution"
  
  logging:
    loglevel: detailed

processors:
  batch:

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [logging, datadog]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [logging, datadog]
```

For specific details about configuring exporters for your chosen platform—including API key formats, endpoint URLs, and additional options—refer to the [OpenTelemetry Collector Contrib documentation](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/exporter). Each exporter has its own configuration reference with examples and best practices.

### Running the Collector and visualizing your data

With everything configured, it's time to start the OpenTelemetry Collector using your configuration file. The exact command depends on how you installed the Collector, but it typically looks something like `otelcol-contrib --config=otel-collector-config.yaml`. Note that vendor-specific exporters such as `datadog` and `splunk_hec` ship in the Collector *Contrib* distribution, not the core one.
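If you installed the Collector with Docker, a run command might look like the following; the image tag, paths, and ports are examples, so adjust them to your environment:

```shell
# Run the Contrib distribution (it includes the datadog and splunk_hec exporters),
# mounting the config file and publishing the OTLP receiver ports.
docker run --rm \
  -v "$(pwd)/otel-collector-config.yaml:/etc/otelcol-contrib/config.yaml" \
  -p 4317:4317 -p 4318:4318 \
  otel/opentelemetry-collector-contrib:latest
```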

Once the Collector is running and your Frends Agent is up, the telemetry data will start flowing automatically. The Agent sends health check telemetry along with traces from SQL and HTTP client operations that your Processes initiate to the Collector, which processes them and forwards them to your chosen destination.
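If data doesn't seem to arrive, you can sanity-check that the Collector's HTTP receiver is listening with a quick `curl` (this assumes the Collector is running locally with the receiver config shown earlier):

```shell
# Send an empty OTLP/HTTP trace export to the local Collector.
# A 200 status code means the receiver is up and accepting data.
curl -s -o /dev/null -w "%{http_code}\n" \
  -X POST http://localhost:4318/v1/traces \
  -H "Content-Type: application/json" \
  -d '{"resourceSpans":[]}'
```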

Log in to your observability platform and navigate to wherever metrics and traces are displayed. In Datadog, you'll find them under the APM and Infrastructure monitoring sections. In Grafana, you'll create dashboards that query your Prometheus or Tempo backend. In Splunk, you'll use the Observability Cloud interface, and in Honeycomb, you'll see your data in the datasets view.

You can now build dashboards to visualize Agent health and availability through health check endpoint monitoring, track external dependencies through SQL and HTTP operation traces, and observe basic resource utilization at the Agent infrastructure level. This telemetry pipeline helps you verify that your Agents are responsive, monitor their connectivity to external systems, and understand the performance characteristics of database and HTTP operations that your integration Processes depend on.

### What telemetry data to expect

Once everything is up and running, you'll receive telemetry data focused on Agent infrastructure health and external dependency performance. The metrics include Agent health indicators, along with HTTP and SQL client operation data from your running Processes. The trace data covers the health check endpoints (`/FrendsStatusInfo` and `/FrendsStatusInfoLiveness`), SQL database operations, and outbound HTTP requests, with standard OpenTelemetry span attributes such as status codes, durations, and error messages. Together, this gives you visibility into Agent availability, external system connectivity, and the performance characteristics of infrastructure-level operations.

For detailed Process execution monitoring—tracking which integration workflows ran, individual Task performance within Processes, or execution counts by Process name—continue using Frends' built-in monitoring through the Frends UI or the [Grafana Dashboard for Frends](https://docs.frends.com/guides/integration-management/how-to-create-grafana-dashboard-for-frends) guide. The OpenTelemetry integration complements these tools by providing standardized infrastructure telemetry that you can aggregate across multiple Agents and correlate with other systems in your observability platform, while Frends native monitoring handles Process-level execution analytics with success rates, failure tracking, and Task-by-Task performance data.

