How to visualize Frends telemetry with OpenTelemetry

The Frends Agent can publish OpenTelemetry data for your use.

OpenTelemetry is a powerful, open-source observability framework for collecting telemetry data like metrics and traces. By enabling OpenTelemetry in your Frends Agent, you can send operational data about Agent health, infrastructure operations, and external dependencies to various observability platforms—whether that's Grafana, Datadog, Splunk, or Honeycomb. This guide walks you through setting up the complete pipeline from your Frends Agent to your chosen monitoring platform.

Prerequisites

Before you begin, make sure you have a running Frends Agent where you have access to modify configuration files. You'll also need an account with your chosen observability platform—whether that's Grafana, Datadog, Splunk, or Honeycomb.

The key piece you'll need is a running instance of the OpenTelemetry Collector. The Collector is a separate piece of software that sits between your Frends Agent and your monitoring platform, acting as a processing pipeline for your telemetry data. Think of it as a smart relay that receives data from Frends, processes it, and forwards it wherever you need it to go.

Enabling OpenTelemetry in your Frends Agent

The first thing you'll need to do is configure your Frends Agent to start collecting and exporting telemetry data. This happens through the Agent's configuration files; specifically, you'll edit the appsettings.production.json file. Using this file ensures your changes persist through Agent updates, which saves you from having to reconfigure things later.

Locate the appsettings.production.json file in your Frends Agent installation directory and open it with a text editor. You'll need to add two settings that control what telemetry data gets collected and sent.

{
  "EnableOTPLMetrics": true,
  "EnableOTPLTracing": true
}

The EnableOTPLMetrics setting tells the Frends Agent to start collecting and publishing OpenTelemetry metrics, including Agent health indicators and performance data from infrastructure operations. The EnableOTPLTracing setting enables trace collection, which currently instruments the Agent's health check endpoints (/FrendsStatusInfo and /FrendsStatusInfoLiveness) along with SQL client operations and HTTP client requests that your Processes initiate.

Once you've added these settings, save the file and restart the Frends Agent service for the changes to take effect. The Agent will now start collecting telemetry, but it needs to know where to send it.

Configuring the OpenTelemetry endpoint

The Frends Agent sends its telemetry data to an endpoint specified by an operating system environment variable. You'll need to set this variable on the server where your Frends Agent is running.

Create a new environment variable named OTEL_EXPORTER_OTLP_ENDPOINT and set its value to the URL of your OpenTelemetry Collector's receiver. The Collector typically listens on port 4317 for gRPC connections or port 4318 for HTTP connections. If you're running the Collector on the same machine as your Frends Agent, the endpoint would be something like http://localhost:4317.
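
How you set the variable depends on the operating system. As an illustration (the localhost endpoint below is just an example), on Windows you might set it machine-wide with PowerShell so the Agent service can see it, while on Linux you could export it in the service's environment:

# Windows (run in an elevated PowerShell; machine scope so the Agent service sees it)
[System.Environment]::SetEnvironmentVariable("OTEL_EXPORTER_OTLP_ENDPOINT", "http://localhost:4317", "Machine")

# Linux (for example in the shell or a systemd unit's environment)
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317

Restart the Frends Agent service after setting the variable so it picks up the new value.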

The beauty of using an environment variable is that the Frends Agent will automatically pick it up and use it to push telemetry data. No hardcoding, no config file changes beyond what you've already done—just set the variable and the Agent handles the rest.

Setting up the OpenTelemetry Collector

Now comes configuring your OpenTelemetry Collector. The Collector acts as a smart pipeline: it receives data from sources like your Frends Agent, optionally processes or transforms that data, and then exports it to one or more destinations. You might send metrics to Datadog while simultaneously logging traces to Splunk, or forward everything to Grafana—the Collector gives you that flexibility.

Create a configuration file for your Collector. You can name it something like otel-collector-config.yaml. The configuration has a straightforward structure with three main sections: receivers (where data comes from), exporters (where data goes to), and processors (what happens to the data in between). All of these come together in a service pipeline definition.

Let's start with the receivers section. Since your Frends Agent will be sending OpenTelemetry data, you'll use the OTLP receiver which supports both gRPC and HTTP protocols.
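
A typical receivers section looks like this:

receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318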

This tells the Collector to listen for incoming data on both ports, making it flexible enough to accept data regardless of which protocol the Frends Agent uses.

Next up is the exporters section, where you configure where your telemetry data should go. The configuration here depends entirely on which platform you're using. Here are some examples for different platforms—you'll want to configure the one that matches your setup.

For Datadog, you'll need your API key and the configuration looks something like this:
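
exporters:
  datadog:
    api:
      site: datadoghq.com        # or datadoghq.eu for EU accounts
      key: ${env:DD_API_KEY}     # reads your API key from an environment variable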

If you're using Splunk, you'll configure the HTTP Event Collector (HEC) endpoint:
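
exporters:
  splunk_hec:
    token: "YOUR-HEC-TOKEN"                                         # your HEC token
    endpoint: "https://splunk.example.com:8088/services/collector"  # example host, replace with yours
    source: "otel"
    sourcetype: "otel"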

For Honeycomb, you'll use the OTLP HTTP exporter with your API key in the headers:
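
exporters:
  otlphttp/honeycomb:
    endpoint: "https://api.honeycomb.io"
    headers:
      "x-honeycomb-team": "YOUR-API-KEY"   # your Honeycomb API key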

And if you're using Grafana with a backend like Prometheus or Tempo, you might use exporters like prometheus or otlp depending on your Grafana stack configuration.
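
As a rough sketch, assuming a Tempo instance (at a hypothetical tempo.example.com) receives traces over OTLP and Prometheus scrapes metrics from the Collector, the exporters might look like this:

exporters:
  prometheus:
    endpoint: "0.0.0.0:8889"            # Prometheus scrapes metrics from this port
  otlp/tempo:
    endpoint: "tempo.example.com:4317"  # hypothetical Tempo address, replace with yours
    tls:
      insecure: true                    # only for unencrypted test setups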

While you're testing and setting things up, it's incredibly useful to include a logging exporter that simply writes telemetry data to the console:
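
exporters:
  logging:
    verbosity: detailed

Note that in recent Collector releases this exporter has been renamed to debug, so check which name your Collector version expects.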

This lets you see exactly what data is flowing through your pipeline, which makes troubleshooting much easier.

The processors section is optional but recommended. At minimum, you'll want to include a batch processor, which groups telemetry data into batches before sending it to your exporters. This reduces network overhead and improves performance.
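
processors:
  batch:
    timeout: 10s           # flush a batch at least every 10 seconds
    send_batch_size: 1024  # or as soon as 1024 items have accumulated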

Finally, you tie everything together in the service section, where you define your pipelines. You'll typically want separate pipelines for traces and metrics since they're different types of data:
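
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [logging, datadog]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [logging, datadog]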

In this example, both traces and metrics come in through the OTLP receiver, get batched for efficiency, and then get exported to both the console (for debugging) and Datadog (for visualization). You can adjust the exporters list to match whichever platforms you're using—just replace datadog with splunk_hec, otlphttp/honeycomb, or whatever exporter you configured.

Here's what a complete configuration might look like:
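
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318

processors:
  batch:
    timeout: 10s
    send_batch_size: 1024

exporters:
  logging:
    verbosity: detailed
  datadog:
    api:
      site: datadoghq.com
      key: ${env:DD_API_KEY}

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [logging, datadog]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [logging, datadog]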

For specific details about configuring exporters for your chosen platform—including API key formats, endpoint URLs, and additional options—refer to the OpenTelemetry Collector Contrib documentation. Each exporter has its own configuration reference with examples and best practices.

Running the Collector and visualizing your data

With everything configured, it's time to start the OpenTelemetry Collector using your configuration file. The exact command depends on how you installed the Collector, but it typically looks something like otelcol --config=otel-collector-config.yaml.
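
Keep in mind that exporters like datadog and splunk_hec ship only with the Collector's contrib distribution, so you'll want otelcol-contrib or the corresponding container image. One possible Docker invocation, assuming the configuration file sits in your current directory, might look like:

docker run -v $(pwd)/otel-collector-config.yaml:/etc/otelcol-contrib/config.yaml \
  -p 4317:4317 -p 4318:4318 \
  otel/opentelemetry-collector-contrib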

Once the Collector is running and your Frends Agent is up, telemetry data will start flowing automatically. The Agent sends the Collector its health check telemetry, along with traces from the SQL and HTTP client operations that your Processes initiate; the Collector processes this data and forwards it to your chosen destination.

Log in to your observability platform and navigate to wherever metrics and traces are displayed. In Datadog, you'll find them under the APM and Infrastructure monitoring sections. In Grafana, you'll create dashboards that query your Prometheus or Tempo backend. In Splunk, you'll use the Observability Cloud interface, and in Honeycomb, you'll see your data in the datasets view.

You can now build dashboards to visualize Agent health and availability through health check endpoint monitoring, track external dependencies through SQL and HTTP operation traces, and observe basic resource utilization at the Agent infrastructure level. This telemetry pipeline helps you verify that your Agents are responsive, monitor their connectivity to external systems, and understand the performance characteristics of database and HTTP operations that your integration Processes depend on.

What telemetry data to expect

Once everything is up and running, you'll receive telemetry data focused on Agent infrastructure health and external dependency performance. The metrics include Agent health indicators, along with HTTP and SQL client operation data from your running Processes. The trace data covers the health check endpoints (/FrendsStatusInfo and /FrendsStatusInfoLiveness), SQL database operations, and outbound HTTP requests, each carrying standard OpenTelemetry span attributes like status codes, durations, and error messages. Together, this gives you visibility into Agent availability, external system connectivity, and the performance characteristics of infrastructure-level operations.

For detailed Process execution monitoring—tracking which integration workflows ran, individual Task performance within Processes, or execution counts by Process name—continue using Frends' built-in monitoring through the Frends UI or the Grafana Dashboard for Frends guide. The OpenTelemetry integration complements these tools by providing standardized infrastructure telemetry that you can aggregate across multiple Agents and correlate with other systems in your observability platform. Frends' native monitoring, in turn, handles Process-level execution analytics with success rates, failure tracking, and Task-by-Task performance data.

Summary

You now have a complete observability pipeline sending Frends Agent health and infrastructure telemetry to your chosen platform. This gives you visibility into Agent availability, external dependency performance, and infrastructure-level operations. For detailed Process execution monitoring and Task-level performance analysis, use Frends' built-in monitoring features through the Frends UI or the native Grafana integration.
