Supported Monitoring Systems
This section briefly describes each of the supported monitoring systems.
AppOptics
By default, the AppOptics registry periodically pushes metrics to api.appoptics.com/v1/measurements.
To export metrics to SaaS AppOptics, your API token must be provided:
Properties:
management.appoptics.metrics.export.api-token=YOUR_TOKEN
YAML:
management:
  appoptics:
    metrics:
      export:
        api-token: "YOUR_TOKEN"
Atlas
By default, metrics are exported to Atlas running on your local machine. You can provide the location of the Atlas server:
Properties:
management.atlas.metrics.export.uri=https://atlas.example.com:7101/api/v1/publish
YAML:
management:
  atlas:
    metrics:
      export:
        uri: "https://atlas.example.com:7101/api/v1/publish"
Datadog
The Datadog registry periodically pushes metrics to Datadog. To export metrics to Datadog, you must provide your API key:
Properties:
management.datadog.metrics.export.api-key=YOUR_KEY
YAML:
management:
  datadog:
    metrics:
      export:
        api-key: "YOUR_KEY"
If you additionally provide an application key (optional), then metadata such as meter descriptions, types, and base units will also be exported:
Properties:
management.datadog.metrics.export.api-key=YOUR_API_KEY
management.datadog.metrics.export.application-key=YOUR_APPLICATION_KEY
YAML:
management:
  datadog:
    metrics:
      export:
        api-key: "YOUR_API_KEY"
        application-key: "YOUR_APPLICATION_KEY"
By default, metrics are sent to the Datadog US site (api.datadoghq.com).
If your Datadog project is hosted on one of the other sites, or you need to send metrics through a proxy, configure the URI accordingly:
Properties:
management.datadog.metrics.export.uri=https://api.datadoghq.eu
YAML:
management:
  datadog:
    metrics:
      export:
        uri: "https://api.datadoghq.eu"
You can also change the interval at which metrics are sent to Datadog:
Properties:
management.datadog.metrics.export.step=30s
YAML:
management:
  datadog:
    metrics:
      export:
        step: "30s"
Dynatrace
Dynatrace offers two metrics ingest APIs, both of which are implemented for Micrometer.
You can find the Dynatrace documentation on Micrometer metrics ingest here.
Configuration properties in the v1 namespace apply only when exporting to the Timeseries v1 API.
Configuration properties in the v2 namespace apply only when exporting to the Metrics v2 API.
Note that this integration can export only to either the v1 or v2 version of the API at a time, with v2 being preferred.
If the device-id (required for v1 but not used in v2) is set in the v1 namespace, metrics are exported to the v1 endpoint.
Otherwise, v2 is assumed.
v2 API
You can use the v2 API in two ways.
Auto-configuration
Dynatrace auto-configuration is available for hosts that are monitored by the OneAgent or by the Dynatrace Operator for Kubernetes.
Local OneAgent: If a OneAgent is running on the host, metrics are automatically exported to the local OneAgent ingest endpoint. The ingest endpoint forwards the metrics to the Dynatrace backend.
Dynatrace Kubernetes Operator: When running in Kubernetes with the Dynatrace Operator installed, the registry will automatically pick up your endpoint URI and API token from the operator instead.
This is the default behavior and requires no special setup beyond a dependency on io.micrometer:micrometer-registry-dynatrace.
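For example, with Maven (this assumes the version is supplied by Spring Boot's dependency management):
<dependency>
    <groupId>io.micrometer</groupId>
    <artifactId>micrometer-registry-dynatrace</artifactId>
</dependency>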
Manual configuration
If no auto-configuration is available, the endpoint of the Metrics v2 API and an API token are required.
The API token must have the “Ingest metrics” (metrics.ingest) permission set.
We recommend limiting the scope of the token to this one permission.
You must ensure that the endpoint URI contains the path (for example, /api/v2/metrics/ingest).
The URL of the Metrics v2 ingest endpoint differs according to your deployment option:
- SaaS: https://{your-environment-id}.live.dynatrace.com/api/v2/metrics/ingest
- Managed deployments: https://{your-domain}/e/{your-environment-id}/api/v2/metrics/ingest
The following example configures metrics export using the example environment ID:
Properties:
management.dynatrace.metrics.export.uri=https://example.live.dynatrace.com/api/v2/metrics/ingest
management.dynatrace.metrics.export.api-token=YOUR_TOKEN
YAML:
management:
  dynatrace:
    metrics:
      export:
        uri: "https://example.live.dynatrace.com/api/v2/metrics/ingest"
        api-token: "YOUR_TOKEN"
When using the Dynatrace v2 API, the following optional features are available (more details can be found in the Dynatrace documentation):
- Metric key prefix: Sets a prefix that is prepended to all exported metric keys.
- Enrich with Dynatrace metadata: If a OneAgent or Dynatrace Operator is running, enrich metrics with additional metadata (for example, about the host, process, or pod).
- Default dimensions: Specify key-value pairs that are added to all exported metrics. If tags with the same key are specified with Micrometer, they overwrite the default dimensions.
- Use Dynatrace Summary instruments: In some cases, the Micrometer Dynatrace registry created metrics that were rejected. In Micrometer 1.9.x, this was fixed by introducing Dynatrace-specific summary instruments. Setting this toggle to false forces Micrometer to fall back to the behavior that was the default before 1.9.x. It should only be used when encountering problems while migrating from Micrometer 1.8.x to 1.9.x.
You can also omit the URI and API token, as shown in the following example. In this scenario, the automatically configured endpoint is used:
Properties:
management.dynatrace.metrics.export.v2.metric-key-prefix=your.key.prefix
management.dynatrace.metrics.export.v2.enrich-with-dynatrace-metadata=true
management.dynatrace.metrics.export.v2.default-dimensions.key1=value1
management.dynatrace.metrics.export.v2.default-dimensions.key2=value2
management.dynatrace.metrics.export.v2.use-dynatrace-summary-instruments=true
YAML:
management:
  dynatrace:
    metrics:
      export:
        # Specify uri and api-token here if not using the local OneAgent endpoint.
        v2:
          metric-key-prefix: "your.key.prefix"
          enrich-with-dynatrace-metadata: true
          default-dimensions:
            key1: "value1"
            key2: "value2"
          use-dynatrace-summary-instruments: true # (default: true)
v1 API (Legacy)
The Dynatrace v1 API metrics registry pushes metrics to the configured URI periodically by using the Timeseries v1 API.
For backwards-compatibility with existing setups, when device-id is set (required for v1, but not used in v2), metrics are exported to the Timeseries v1 endpoint.
To export metrics to Dynatrace, your API token, device ID, and URI must be provided:
Properties:
management.dynatrace.metrics.export.uri=https://{your-environment-id}.live.dynatrace.com
management.dynatrace.metrics.export.api-token=YOUR_TOKEN
management.dynatrace.metrics.export.v1.device-id=YOUR_DEVICE_ID
YAML:
management:
  dynatrace:
    metrics:
      export:
        uri: "https://{your-environment-id}.live.dynatrace.com"
        api-token: "YOUR_TOKEN"
        v1:
          device-id: "YOUR_DEVICE_ID"
For the v1 API, you must specify the base environment URI without a path, as the v1 endpoint path is added automatically.
Version-independent Settings
In addition to the API endpoint and token, you can also change the interval at which metrics are sent to Dynatrace.
The default export interval is 60s.
The following example sets the export interval to 30 seconds:
Properties:
management.dynatrace.metrics.export.step=30s
YAML:
management:
  dynatrace:
    metrics:
      export:
        step: "30s"
You can find more information on how to set up the Dynatrace exporter for Micrometer in the Micrometer documentation and the Dynatrace documentation.
Elastic
By default, metrics are exported to Elastic running on your local machine. You can provide the location of the Elastic server with the following property:
Properties:
management.elastic.metrics.export.host=https://elastic.example.com:8086
YAML:
management:
  elastic:
    metrics:
      export:
        host: "https://elastic.example.com:8086"
Ganglia
By default, metrics are exported to Ganglia running on your local machine. You can provide the Ganglia server host and port, as the following example shows:
Properties:
management.ganglia.metrics.export.host=ganglia.example.com
management.ganglia.metrics.export.port=9649
YAML:
management:
  ganglia:
    metrics:
      export:
        host: "ganglia.example.com"
        port: 9649
Graphite
By default, metrics are exported to Graphite running on your local machine. You can provide the Graphite server host and port, as the following example shows:
Properties:
management.graphite.metrics.export.host=graphite.example.com
management.graphite.metrics.export.port=9004
YAML:
management:
  graphite:
    metrics:
      export:
        host: "graphite.example.com"
        port: 9004
Micrometer provides a default HierarchicalNameMapper that governs how a dimensional meter ID is mapped to flat hierarchical names.
To take control over this behavior, define your own GraphiteMeterRegistry and supply your own HierarchicalNameMapper.
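The following is a minimal sketch of such a configuration; the class name and the "myapp." prefix in toHierarchicalName are illustrative only:
import io.micrometer.core.instrument.Clock;
import io.micrometer.core.instrument.Meter;
import io.micrometer.core.instrument.config.NamingConvention;
import io.micrometer.core.instrument.util.HierarchicalNameMapper;
import io.micrometer.graphite.GraphiteConfig;
import io.micrometer.graphite.GraphiteMeterRegistry;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration(proxyBeanMethods = false)
public class MyGraphiteConfiguration {

    @Bean
    public GraphiteMeterRegistry graphiteMeterRegistry(GraphiteConfig config, Clock clock) {
        // Replace the default HierarchicalNameMapper with a custom mapping function.
        return new GraphiteMeterRegistry(config, clock, this::toHierarchicalName);
    }

    private String toHierarchicalName(Meter.Id id, NamingConvention convention) {
        // Illustrative mapping: prepend an application prefix to the default flat name.
        return "myapp." + HierarchicalNameMapper.DEFAULT.toHierarchicalName(id, convention);
    }

}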
Humio
By default, the Humio registry periodically pushes metrics to cloud.humio.com. To export metrics to SaaS Humio, you must provide your API token:
Properties:
management.humio.metrics.export.api-token=YOUR_TOKEN
YAML:
management:
  humio:
    metrics:
      export:
        api-token: "YOUR_TOKEN"
You should also configure one or more tags to identify the data source to which metrics are pushed:
Properties:
management.humio.metrics.export.tags.alpha=a
management.humio.metrics.export.tags.bravo=b
YAML:
management:
  humio:
    metrics:
      export:
        tags:
          alpha: "a"
          bravo: "b"
Influx
By default, metrics are exported to an Influx v1 instance running on your local machine with the default configuration.
To export metrics to InfluxDB v2, configure the org, bucket, and authentication token for writing metrics.
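For example, a minimal v2 configuration might look like the following; the org, bucket, and token values are placeholders:
Properties:
management.influx.metrics.export.org=my-org
management.influx.metrics.export.bucket=my-bucket
management.influx.metrics.export.token=YOUR_TOKEN
YAML:
management:
  influx:
    metrics:
      export:
        org: "my-org"
        bucket: "my-bucket"
        token: "YOUR_TOKEN"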
You can provide the location of the Influx server as follows:
Properties:
management.influx.metrics.export.uri=https://influx.example.com:8086
YAML:
management:
  influx:
    metrics:
      export:
        uri: "https://influx.example.com:8086"
JMX
Micrometer provides a hierarchical mapping to JMX, primarily as a cheap and portable way to view metrics locally.
By default, metrics are exported to the metrics JMX domain.
You can provide the domain to use as follows:
Properties:
management.jmx.metrics.export.domain=com.example.app.metrics
YAML:
management:
  jmx:
    metrics:
      export:
        domain: "com.example.app.metrics"
Micrometer provides a default HierarchicalNameMapper that governs how a dimensional meter ID is mapped to flat hierarchical names.
To take control over this behavior, define your own JmxMeterRegistry and supply your own HierarchicalNameMapper.
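A minimal sketch, analogous to the Graphite example above; the class name is illustrative, and the default mapper is used as a stand-in for your own:
import io.micrometer.core.instrument.Clock;
import io.micrometer.core.instrument.util.HierarchicalNameMapper;
import io.micrometer.jmx.JmxConfig;
import io.micrometer.jmx.JmxMeterRegistry;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration(proxyBeanMethods = false)
public class MyJmxConfiguration {

    @Bean
    public JmxMeterRegistry jmxMeterRegistry(JmxConfig config, Clock clock) {
        // HierarchicalNameMapper.DEFAULT is used here; swap in your own lambda to customize.
        return new JmxMeterRegistry(config, clock, HierarchicalNameMapper.DEFAULT);
    }

}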
KairosDB
By default, metrics are exported to KairosDB running on your local machine. You can provide the location of the KairosDB server as follows:
Properties:
management.kairos.metrics.export.uri=https://kairosdb.example.com:8080/api/v1/datapoints
YAML:
management:
  kairos:
    metrics:
      export:
        uri: "https://kairosdb.example.com:8080/api/v1/datapoints"
New Relic
A New Relic registry periodically pushes metrics to New Relic. To export metrics to New Relic, you must provide your API key and account ID:
Properties:
management.newrelic.metrics.export.api-key=YOUR_KEY
management.newrelic.metrics.export.account-id=YOUR_ACCOUNT_ID
YAML:
management:
  newrelic:
    metrics:
      export:
        api-key: "YOUR_KEY"
        account-id: "YOUR_ACCOUNT_ID"
You can also change the interval at which metrics are sent to New Relic:
Properties:
management.newrelic.metrics.export.step=30s
YAML:
management:
  newrelic:
    metrics:
      export:
        step: "30s"
By default, metrics are published through REST calls, but you can also use the Java Agent API if you have it on the classpath:
Properties:
management.newrelic.metrics.export.client-provider-type=insights-agent
YAML:
management:
  newrelic:
    metrics:
      export:
        client-provider-type: "insights-agent"
Finally, you can take full control by defining your own NewRelicClientProvider bean.
OpenTelemetry
By default, metrics are exported to OpenTelemetry running on your local machine. You can provide the location of the OpenTelemetry metric endpoint as follows:
Properties:
management.otlp.metrics.export.url=https://otlp.example.com:4318/v1/metrics
YAML:
management:
  otlp:
    metrics:
      export:
        url: "https://otlp.example.com:4318/v1/metrics"
Prometheus
Prometheus expects to scrape or poll individual application instances for metrics.
Spring Boot provides an actuator endpoint at /actuator/prometheus to present a Prometheus scrape with the appropriate format.
By default, the endpoint is not available and must be exposed. See exposing endpoints for more details.
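For example, you can expose it over HTTP with the management.endpoints.web.exposure.include property (note that this overrides the default list of exposed web endpoints):
Properties:
management.endpoints.web.exposure.include=prometheus
YAML:
management:
  endpoints:
    web:
      exposure:
        include: "prometheus"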
The following example shows a scrape_config to add to prometheus.yml:
scrape_configs:
  - job_name: "spring"
    metrics_path: "/actuator/prometheus"
    static_configs:
      - targets: ["HOST:PORT"]
Prometheus Exemplars are also supported.
To enable this feature, a SpanContextSupplier bean should be present.
If you use Micrometer Tracing, this is auto-configured for you, but you can always create your own if you want.
Please check the Prometheus Docs, since this feature needs to be explicitly enabled on Prometheus' side, and it is only supported using the OpenMetrics format.
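If you are not using Micrometer Tracing, a custom bean might look like the following sketch. The method names assume the SpanContextSupplier interface from the Prometheus simpleclient, and the method bodies are placeholders you would wire to your own tracer:
import io.prometheus.client.exemplars.tracer.common.SpanContextSupplier;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration(proxyBeanMethods = false)
public class MySpanContextSupplierConfiguration {

    @Bean
    public SpanContextSupplier spanContextSupplier() {
        return new SpanContextSupplier() {

            @Override
            public String getTraceId() {
                // Placeholder: return the current trace ID from your tracer, or null.
                return null;
            }

            @Override
            public String getSpanId() {
                // Placeholder: return the current span ID from your tracer, or null.
                return null;
            }

            @Override
            public boolean isSampled() {
                // Placeholder: report whether the current span is sampled.
                return false;
            }

        };
    }

}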
For ephemeral or batch jobs that may not exist long enough to be scraped, you can use Prometheus Pushgateway support to expose the metrics to Prometheus. To enable Prometheus Pushgateway support, add the following dependency to your project:
<dependency>
    <groupId>io.prometheus</groupId>
    <artifactId>simpleclient_pushgateway</artifactId>
</dependency>
When the Prometheus Pushgateway dependency is present on the classpath and the management.prometheus.metrics.export.pushgateway.enabled property is set to true, a PrometheusPushGatewayManager bean is auto-configured.
This manages the pushing of metrics to a Prometheus Pushgateway.
You can tune the PrometheusPushGatewayManager by using properties under management.prometheus.metrics.export.pushgateway.
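For example, the following sketch sets the Pushgateway location, the job name, and the push interval; the values shown are assumptions:
Properties:
management.prometheus.metrics.export.pushgateway.base-url=https://pushgateway.example.com:9091
management.prometheus.metrics.export.pushgateway.job=my-batch-job
management.prometheus.metrics.export.pushgateway.push-rate=1m
YAML:
management:
  prometheus:
    metrics:
      export:
        pushgateway:
          base-url: "https://pushgateway.example.com:9091"
          job: "my-batch-job"
          push-rate: "1m"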
For advanced configuration, you can also provide your own PrometheusPushGatewayManager bean.
SignalFx
The SignalFx registry periodically pushes metrics to SignalFx. To export metrics to SignalFx, you must provide your access token:
Properties:
management.signalfx.metrics.export.access-token=YOUR_ACCESS_TOKEN
YAML:
management:
  signalfx:
    metrics:
      export:
        access-token: "YOUR_ACCESS_TOKEN"
You can also change the interval at which metrics are sent to SignalFx:
Properties:
management.signalfx.metrics.export.step=30s
YAML:
management:
  signalfx:
    metrics:
      export:
        step: "30s"
Simple
Micrometer ships with a simple, in-memory backend that is automatically used as a fallback if no other registry is configured. This lets you see what metrics are collected in the metrics endpoint.
The in-memory backend disables itself as soon as you use any other available backend. You can also disable it explicitly:
Properties:
management.simple.metrics.export.enabled=false
YAML:
management:
  simple:
    metrics:
      export:
        enabled: false
Stackdriver
The Stackdriver registry periodically pushes metrics to Stackdriver. To export metrics to SaaS Stackdriver, you must provide your Google Cloud project ID:
Properties:
management.stackdriver.metrics.export.project-id=my-project
YAML:
management:
  stackdriver:
    metrics:
      export:
        project-id: "my-project"
You can also change the interval at which metrics are sent to Stackdriver:
Properties:
management.stackdriver.metrics.export.step=30s
YAML:
management:
  stackdriver:
    metrics:
      export:
        step: "30s"
StatsD
The StatsD registry eagerly pushes metrics over UDP to a StatsD agent. By default, metrics are exported to a StatsD agent running on your local machine. You can provide the StatsD agent host, port, and protocol as follows:
Properties:
management.statsd.metrics.export.host=statsd.example.com
management.statsd.metrics.export.port=9125
management.statsd.metrics.export.protocol=udp
YAML:
management:
  statsd:
    metrics:
      export:
        host: "statsd.example.com"
        port: 9125
        protocol: "udp"
You can also change the StatsD line protocol to use (it defaults to Datadog):
Properties:
management.statsd.metrics.export.flavor=etsy
YAML:
management:
  statsd:
    metrics:
      export:
        flavor: "etsy"
Wavefront
The Wavefront registry periodically pushes metrics to Wavefront. If you are exporting metrics to Wavefront directly, you must provide your API token:
Properties:
management.wavefront.api-token=YOUR_API_TOKEN
YAML:
management:
  wavefront:
    api-token: "YOUR_API_TOKEN"
Alternatively, you can use a Wavefront sidecar or an internal proxy in your environment to forward metrics data to the Wavefront API host:
Properties:
management.wavefront.uri=proxy://localhost:2878
YAML:
management:
  wavefront:
    uri: "proxy://localhost:2878"
If you publish metrics to a Wavefront proxy (as described in the Wavefront documentation), the host must be in the proxy://HOST:PORT format.
You can also change the interval at which metrics are sent to Wavefront:
Properties:
management.wavefront.metrics.export.step=30s
YAML:
management:
  wavefront:
    metrics:
      export:
        step: "30s"