Vespa provides metrics integration with CloudWatch, Datadog and Prometheus / Grafana, as well as a JSON HTTP API.
There are two main approaches to transferring metrics to an external system: pulling them from Vespa's metrics APIs, or pushing them (as in the CloudWatch consumer configuration below).
Refer to the example overview of two nodes running Vespa to see where the APIs are set up and how they interact.
See the metrics guide for how to get a metric using /metrics/v1/values and /prometheus/v1/values.
This guide also documents use of custom metrics and histograms.
Each Vespa node has a metrics-proxy process running for this API, default port 19092. It aggregates metrics from all processes on the node, and across nodes:
The metrics-proxy normally listens on port 19092 - use vespa-model-inspect to validate.
See the metrics guide for the metrics interfaces hosted by the metrics proxy.
Metrics-proxies intercommunicate to build a metrics cache, served on the internal applicationmetrics/v1/ API. This is replicated on the container at /metrics/v2/values for easy access to all metrics for an application.
The metrics-proxy is started by the config-sentinel and is not configurable. The metrics-proxy process looks like:
$ ps ax | grep admin/metrics/vespa-container
  703 ?  Sl  0:10 /usr/bin/java -Dconfig.id=admin/metrics/vespa-container ...
    -cp /opt/vespa/lib/jars/jdisc_core-jar-with-dependencies.jar com.yahoo.jdisc.core.StandaloneMain file:/opt/vespa/lib/jars/container-disc-jar-with-dependencies.jar
Per-process health status is found at http://host:port/state/v1/health.
/state/v1/health is most commonly used for heartbeating; see the reference for details. Example:
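A hedged sketch of such a heartbeat check; the sample response below is illustrative, following the status.code field, and is not captured from a live node:

```python
import json

# Illustrative /state/v1/health response - the field names follow the
# documented layout, but the values here are made up.
sample_response = '''
{
    "status": {"code": "up"},
    "metrics": {"snapshot": {"from": 1623843280, "to": 1623843340}}
}
'''

def is_healthy(body):
    """Heartbeat check: a process is considered up when status.code equals "up"."""
    return json.loads(body).get("status", {}).get("code") == "up"

print(is_healthy(sample_response))  # True
```

A monitoring agent would fetch the body from http://host:port/state/v1/health for each process and alert when the check fails.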
Per-process metrics are found at http://host:port/state/v1/metrics
Internally, Vespa aggregates the metrics in the APIs above from these per-process metrics and health APIs. While most users will use the aggregated APIs, the per-process APIs are useful in specific cases.
Metrics are reported in snapshots, where the snapshot specifies the time window the metrics are gathered from. Typically, the service aggregates metrics as they are reported; after each snapshot period, a snapshot is taken of the current values and they are reset. This approach tracks min and max values, and enables statistics such as the 95th percentile for each complete snapshot period.
Refer to the reference for details.
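The snapshot mechanism can be sketched as follows; this is an illustrative model of the behaviour described above, not Vespa's implementation:

```python
class SnapshotGauge:
    """Aggregates values within a snapshot period; taking the snapshot resets the state."""

    def __init__(self):
        self._reset()

    def _reset(self):
        self.count = 0
        self.total = 0.0
        self.min = None
        self.max = None

    def put(self, value):
        """Record one observation in the current snapshot period."""
        self.count += 1
        self.total += value
        self.min = value if self.min is None else min(self.min, value)
        self.max = value if self.max is None else max(self.max, value)

    def snapshot(self):
        """Return the aggregates for the completed period and start a new one."""
        result = {
            "count": self.count,
            "average": self.total / self.count if self.count else 0.0,
            "min": self.min,
            "max": self.max,
        }
        self._reset()
        return result

gauge = SnapshotGauge()
for v in (5.0, 9.0, 2.0):
    gauge.put(v)
s = gauge.snapshot()
print(s["count"], s["min"], s["max"])  # 3 2.0 9.0
```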
Vespa supports custom metrics.
A flat list of metrics is returned. Each metric value reported by a component should be a separate metric.
For related metrics, prefix metric names with common parts and dot-separate the names, e.g. memory.free and memory.virtual.
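As an illustration of why this naming convention helps, a consumer can group a flat metric list by its dot-separated prefixes; the metric names and values below are made up for this sketch:

```python
from collections import defaultdict

# Illustrative flat metric list with dot-separated, prefixed names
flat_metrics = {
    "memory.free": 1024,
    "memory.virtual": 4096,
    "queries.count": 42,
}

def group_by_prefix(metrics):
    """Group related metrics by the part of the name before the first dot."""
    groups = defaultdict(dict)
    for name, value in metrics.items():
        prefix, _, suffix = name.partition(".")
        groups[prefix][suffix] = value
    return dict(groups)

print(group_by_prefix(flat_metrics))
```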
This API can be used for monitoring with products like Prometheus and Datadog. The response contains a selected set of metrics from each service running on the node; see the reference for details. Example:
$ curl http://localhost:19092/metrics/v1/values
$ curl http://localhost:19092/metrics/v2/values
A container service on the same node as the metrics proxy might forward /metrics/v2/values on its own port, normally 8080.
/metrics/v2/values exposes a selected set of metrics for every service on all nodes for the application.
For example, it can be used to pull Vespa metrics to CloudWatch using an AWS lambda function.
The metrics API exposes a selected set of metrics for the whole application, or for a single node, to allow integration with graphing and alerting services.
The response is a nodes list with metrics (see example output below); see the reference for details.
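As a hedged sketch, the nodes list can be flattened on the consumer side like this; the field names follow the response layout described above, but the hostnames, service names and values in the sample are made up:

```python
import json

# Illustrative /metrics/v2/values response fragment - structure only,
# the concrete values are invented for this example.
sample = '''
{
  "nodes": [
    {
      "hostname": "vespa-container",
      "services": [
        {
          "name": "vespa.container",
          "status": {"code": "up"},
          "metrics": [
            {"values": {"queries.rate": 15.2}, "dimensions": {"chain": "default"}}
          ]
        }
      ]
    }
  ]
}
'''

def flatten(body):
    """Yield (hostname, service, metric, value) tuples from a nodes list."""
    for node in json.loads(body)["nodes"]:
        for service in node.get("services", []):
            for metric in service.get("metrics", []):
                for name, value in metric.get("values", {}).items():
                    yield node["hostname"], service["name"], name, value

for row in flatten(sample):
    print(row)
```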
Vespa provides a node metrics API on each node at http://host:port/prometheus/v1/values.
Port and content are the same as for /metrics/v1/values.
The prometheus API on each node exposes metrics in a text-based format that can be scraped by Prometheus. See below for a Prometheus / Grafana example.
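The text format pairs a metric name and optional labels with a value. A minimal parsing sketch for one exposition line; the sample line and its labels are made up, and the parser ignores optional timestamps and label-value escaping:

```python
def parse_line(line):
    """Naive parser for one Prometheus exposition line: name{labels} value.

    Assumes no spaces or commas inside label values and no trailing timestamp.
    """
    head, _, value = line.rpartition(" ")
    name, _, labelpart = head.partition("{")
    labels = {}
    if labelpart:
        for pair in labelpart.rstrip("}").split(","):
            key, _, val = pair.partition("=")
            labels[key] = val.strip('"')
    return name, labels, float(value)

# Made-up sample line in the Prometheus text format
sample = 'queries_rate{metrictype="standard",vespa_service="vespa_container"} 0.6'
print(parse_line(sample))
```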
All pull-based solutions use Vespa's metrics API, which provides metrics in JSON format, either for the full system or for a single node. Limit the polling frequency to at most once every 30 seconds, as more frequent polling does not give increased granularity and only adds unnecessary load on your systems.
Service | Description |
---|---|
CloudWatch | Metrics can be pulled into CloudWatch from both Vespa Cloud and self-hosted Vespa. The recommended solution is to use an AWS lambda function, as described in Pulling Vespa metrics to CloudWatch. |
Datadog | The Vespa team has created a Datadog Agent integration to allow real-time monitoring of Vespa in Datadog. The Datadog Vespa integration is not packaged with the agent, but is included in Datadog's integrations-extras repository. Clone it and follow the steps in the README. Note: The Datadog Agent integration currently works for self-hosted Vespa only. |
Prometheus | Vespa exposes metrics in a text-based format that can be scraped by Prometheus. For Vespa Cloud, append /prometheus/v1/values to your endpoint URL. For self-hosted Vespa the URL is http://<container-host>:<port>/prometheus/v1/values, where the port is the same as for searching, e.g. 8080. Metrics for each individual host can also be retrieved at http://host:19092/prometheus/v1/values. See below for a Prometheus / Grafana example. |
Note: This method currently works for self-hosted Vespa only.
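As an illustration, a minimal Prometheus scrape configuration for self-hosted Vespa could look like the following; the job name and target hostname are assumptions for this sketch, not part of the sample application:

```yaml
# Illustrative scrape config - job name and target host are assumptions
scrape_configs:
  - job_name: "vespa"
    metrics_path: "/prometheus/v1/values"
    scrape_interval: 30s          # poll at most every 30 seconds, per the guidance above
    static_configs:
      - targets: ["vespa-container:8080"]
```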
This is presumably the most convenient way to monitor Vespa in CloudWatch. Steps / requirements:
<metrics>
    <consumer id="my-cloudwatch">
        <metric-set id="default" />
        <cloudwatch region="us-east-1" namespace="my-vespa-metrics">
            <shared-credentials file="/path/to/credentials-file" />
        </cloudwatch>
    </consumer>
</metrics>
This configuration sends the default set of Vespa metrics to the CloudWatch namespace my-vespa-metrics in the us-east-1 region. Refer to the metric list for the default metric set.
Follow these steps to set up monitoring with Grafana for a Vespa instance. This guide builds on the quick start by adding three more Docker containers and connecting these in the Docker monitoring network:
Run the Quick Start:
Complete steps 1-7 (or 1-10), but skip the removal step. Clone repository:
$ git clone --depth 1 https://github.com/vespa-engine/sample-apps.git && \
  cd sample-apps/examples/operations/monitoring/album-recommendation-monitoring
Create a network and add the vespa container to it:
$ docker network create --driver bridge monitoring && \
  docker network connect monitoring vespa
This creates the monitoring network and attaches the vespa container to it. Find details in docker-compose.yml.
Launch Prometheus:
$ docker run --detach --name sample-apps-prometheus --hostname prometheus \
    --network monitoring \
    --publish 9090:9090 \
    --volume `pwd`/prometheus/prometheus-selfhosted.yml:/etc/prometheus/prometheus.yml \
    prom/prometheus
Prometheus is a time-series database, which holds series of values associated with timestamps. Open Prometheus at http://localhost:9090/. It is easy to explore what data Prometheus holds: the input box auto-completes, e.g. enter feed_operations_rate and click Execute. Also explore the Status dropdown.
Launch Grafana:
$ docker run --detach --name sample-apps-grafana \
    --network monitoring \
    --publish 3000:3000 \
    --volume `pwd`/grafana/provisioning:/etc/grafana/provisioning \
    grafana/grafana
This launches Grafana, a visualisation tool that makes it easy to build views of important Vespa metrics. Open http://localhost:3000/ to find the Grafana login screen; log in with admin/admin (skip changing the password). From the list on the left, click Browse under Dashboards (the symbol with 4 blocks), then click the Vespa Detailed Monitoring Dashboard. The dashboard displays detailed Vespa metrics - empty for now.
Build the Random Data Feeder:
$ docker build album-recommendation-random-data --tag random-data-feeder
This builds the Random Data Feeder - it generates random sets of data and puts them into the Vespa instance. Also, it repeatedly runs queries, for Grafana visualisation. Compiling the Random Data Feeder takes a few minutes.
Run the Random Data Feeder:
$ docker run --detach --name sample-apps-random-data-feeder \
    --network monitoring \
    random-data-feeder
Check the updated Grafana metrics:
Graphs will now show up in Grafana and Prometheus - it might take a minute or two. The Grafana dashboard is fully customisable. Change the default modes of Grafana and Prometheus by editing the configuration files in album-recommendation-monitoring.
Remove containers and network:
$ docker rm -f vespa \
    sample-apps-grafana \
    sample-apps-prometheus \
    sample-apps-random-data-feeder
$ docker network rm monitoring
Metric histograms are supported for Gauge metrics. Create the metric as in album-recommendation-java, enabling the histogram:
// Requires java.util.Optional and, assuming the Vespa simple metrics API,
// com.yahoo.metrics.simple.MetricReceiver and com.yahoo.metrics.simple.MetricSettings
public HitCountSearcher(MetricReceiver receiver) {
    this.hitCountMetric = receiver.declareGauge(EXAMPLE_METRIC_NAME, Optional.empty(),
            new MetricSettings.Builder().histogram(true).build());
}
The histograms for the last five minutes of logged data are available as CSV per dimension at /state/v1/metrics/histograms. Example output:
# start of metric hits_per_query, dimensions: { "chain": "metalchain" }
"Value","Percentile","TotalCount","1/(1-Percentile)"
1.00,0.000000000000,1,1.00
1.00,1.000000000000,1,Infinity
# end of metric hits_per_query, dimensions: { "chain": "metalchain" }
# start of metric example_hitcounts, dimensions: { "query_language": "en" }
"Value","Percentile","TotalCount","1/(1-Percentile)"
1.00,0.000000000000,1,1.00
1.00,1.000000000000,1,Infinity
# end of metric example_hitcounts, dimensions: { "query_language": "en" }
# start of metric query_latency, dimensions: { "chain": "metalchain" }
"Value","Percentile","TotalCount","1/(1-Percentile)"
5.69,0.000000000000,1,1.00
5.69,1.000000000000,1,Infinity
# end of metric query_latency, dimensions: { "chain": "metalchain" }
# start of metric totalhits_per_query, dimensions: { "chain": "metalchain" }
"Value","Percentile","TotalCount","1/(1-Percentile)"
1.00,0.000000000000,1,1.00
1.00,1.000000000000,1,Infinity
# end of metric totalhits_per_query, dimensions: { "chain": "metalchain" }
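A sketch of how this CSV output could be consumed downstream; the parser below handles the start/end markers and data rows shown above and is an illustration, not a supported tool:

```python
def parse_histograms(text):
    """Parse /state/v1/metrics/histograms CSV output.

    Returns {metric_name: [(value, percentile, total_count), ...]},
    keeping the first three columns of each data row.
    """
    histograms = {}
    current = None
    for line in text.splitlines():
        if line.startswith("# start of metric "):
            name = line[len("# start of metric "):].split(",")[0]
            current = histograms.setdefault(name, [])
        elif line.startswith("# end of metric"):
            current = None
        elif current is not None and not line.startswith('"'):  # skip CSV header
            value, percentile, count, _ = line.split(",")
            current.append((float(value), float(percentile), int(count)))
    return histograms

# Sample taken from the example output above
sample = '''# start of metric hits_per_query, dimensions: { "chain": "metalchain" }
"Value","Percentile","TotalCount","1/(1-Percentile)"
1.00,0.000000000000,1,1.00
1.00,1.000000000000,1,Infinity
# end of metric hits_per_query, dimensions: { "chain": "metalchain" }'''

print(parse_histograms(sample))
```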