This page's content is applicable to Vespa Cloud.
Benchmarking
This is a step-by-step guide to get started with benchmarking on Vespa Cloud,
based on the Vespa benchmarking guide,
using the sample app.
Overview:

- Set up a performance test instance
- Run queries inside data center
- Run benchmark
- Making changes
- Sizing
Set up a performance test instance
Use an instance in a dev zone for benchmarks.
To deploy an instance there, use the getting started guide,
and make sure to specify the resources using a deploy:environment="dev" attribute:
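A minimal services.xml sketch - the node count and resource numbers are illustrative, and the deploy namespace (xmlns:deploy="vespa") must be declared on the services root element:

    <nodes count="2" deploy:environment="dev">
        <resources vcpu="2" memory="12Gb" disk="75Gb"/>
    </nodes>

Then validate the deployment with a short vespa-fbench run. The endpoint below is hypothetical, a queries.txt file with one query URL per line is assumed, the certificate and key are the data-plane credentials from the getting started guide, and the CA bundle path varies by OS:

    vespa-fbench -C data-plane-public-cert.pem -K data-plane-private-key.pem \
      -T /etc/ssl/certs/ca-bundle.crt \
      -n 1 -s 1 -c 0 -q queries.txt -o output.txt \
      myinstance.myapp.mytenant.aws-us-east-1c.dev.z.vespa-app.cloud 443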
-o output.txt is useful when validating the test - remove this option when load testing.
Make sure there are no SSL_do_handshake errors in the output.
Expect HTTP status code 200:
Starting clients...
Stopping clients
Clients stopped.
.
Clients Joined.
*** HTTP keep-alive statistics ***
connection reuse count -- 4
***************** Benchmark Summary *****************
clients: 1
ran for: 1 seconds
cycle time: 0 ms
lower response limit: 0 bytes
skipped requests: 0
failed requests: 0
successful requests: 5
cycles not held: 5
minimum response time: 128.17 ms
maximum response time: 515.35 ms
average response time: 206.38 ms
25 percentile: 128.70 ms
50 percentile: 129.60 ms
75 percentile: 130.20 ms
90 percentile: 361.32 ms
95 percentile: 438.36 ms
99 percentile: 499.99 ms
actual query rate: 4.80 Q/s
utilization: 99.03 %
zero hit queries: 5
http request status breakdown:
200 : 5
At this point, running queries using vespa-fbench works from a local laptop.
Run queries inside data center
The next step is to run this from the same location (data center) as the dev zone -
in this example, an AWS zone.
Deduce the AWS zone from the Vespa Cloud zone name; e.g., the Vespa Cloud zone aws-us-east-1c maps to the AWS region us-east-1.
Below is an example using a host with the Amazon Linux 2023 AMI (HVM) image:
Create the host - here, assume the key pair is named key.pem.
No need to change anything from the defaults.
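A sketch of the remaining steps, assuming a hypothetical public DNS name for the host and using the vespaengine/vespa container image to get vespa-fbench:

    # log in to the new host
    ssh -i key.pem ec2-user@ec2-xx-xx-xx-xx.compute-1.amazonaws.com

    # install and start Docker, pull the image containing vespa-fbench
    sudo yum install -y docker
    sudo systemctl start docker
    sudo docker pull vespaengine/vespa

    # run vespa-fbench from the image - mount the directory holding
    # queries.txt and the data-plane credentials, same options as before
    sudo docker run --rm -v $(pwd):/files -w /files \
      --entrypoint /opt/vespa/bin/vespa-fbench vespaengine/vespa \
      -C data-plane-public-cert.pem -K data-plane-private-key.pem \
      -T /etc/ssl/certs/ca-bundle.crt \
      -n 1 -s 1 -c 0 -q queries.txt \
      myinstance.myapp.mytenant.aws-us-east-1c.dev.z.vespa-app.cloud 443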
At this point, you are able to benchmark using vespa-fbench in the same zone as the Vespa Cloud dev instance.
Run benchmark
Use the Vespa Benchmarking Guide
to plan and run benchmarks.
Also see sizing below.
Make sure the client running the benchmark tool has sufficient resources.
Periodically dump all metrics using consumer=Vespa.
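For example, with curl against the hypothetical dev endpoint used above, saving the dump to metrics.json for later inspection:

    curl --cert data-plane-public-cert.pem --key data-plane-private-key.pem \
      "https://myinstance.myapp.mytenant.aws-us-east-1c.dev.z.vespa-app.cloud/metrics/v2/values?consumer=Vespa" \
      > metrics.json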
Make sure you will not exhaust the serving threads on the container nodes while in production. Verify this by
checking that the following expression stays well below 100% (typically below 50%) for the traffic you expect:
100 * (jdisc.thread_pool.active_threads.sum / jdisc.thread_pool.active_threads.count) / jdisc.thread_pool.size.max
for each thread pool. Increase the number of threads in the pools by using larger container nodes,
adding more container nodes, or tuning the number of threads as described in
services-search.
If you do exhaust a thread pool and its queue, the container rejects the overflow requests with HTTP 503
responses.
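A sketch of evaluating this expression over a metrics dump (the metrics.json saved from the call above), assuming jq is available; the thread pool dimension name (threadpool) is an assumption, so verify it against your actual /metrics/v2/values output:

    jq '.nodes[].services[].metrics[]
        | select(.values["jdisc.thread_pool.size.max"] != null)
        | {pool: .dimensions.threadpool,
           utilization_pct: (100
             * (.values["jdisc.thread_pool.active_threads.sum"]
                / .values["jdisc.thread_pool.active_threads.count"])
             / .values["jdisc.thread_pool.size.max"])}' metrics.json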
Making changes
Whenever deploying configuration changes, track progress in the Deployment dashboard.
Some changes, like changing
requestthreads,
will restart content nodes; this is done in sequence and takes time.
Wait for successful completion of the Wait for services and endpoints to come online step.
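As a sketch, requestthreads is tuned per content cluster in services.xml - the value below is illustrative, see the reference documentation for details:

    <content version="1.0" id="mycluster">
        <engine>
            <proton>
                <tuning>
                    <searchnode>
                        <requestthreads>
                            <persearch>4</persearch>
                        </requestthreads>
                    </searchnode>
                </tuning>
            </proton>
        </engine>
    </content>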
When changing node type/count, wait for automatic data redistribution to complete,
watching the vds.idealstate.merge_bucket.pending.average metric:
After changing the number of content nodes, this metric will jump, then decrease (not necessarily linearly) -
the speed depends on data volume.
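A sketch of watching this metric with the hypothetical endpoint from above - redistribution is complete when the pending count reaches and stays at 0:

    # assumes compact JSON output from /metrics/v2/values
    while true; do
      curl -s --cert data-plane-public-cert.pem --key data-plane-private-key.pem \
        "https://myinstance.myapp.mytenant.aws-us-east-1c.dev.z.vespa-app.cloud/metrics/v2/values?consumer=Vespa" \
        | grep -o '"vds.idealstate.merge_bucket.pending.average":[0-9.]*'
      sleep 10
    done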
Sizing
Using Vespa Cloud enables the Vespa Team to assist you in optimising the application to reduce resource spend.
Based on the 150 applications running on Vespa Cloud today, savings are typically 50%.
Cost optimization is hard to do without domain knowledge -
but few teams are experts in both their application and its serving platform.
Sizing means finding both the right node size and the right cluster topology:

- Applications use Vespa for their primary business use cases. Availability and performance vs. cost are business decisions.
- The best sized application can handle all expected load situations, and is configured to degrade quality gracefully for the unexpected.
- Even though Vespa is cost-efficient out of the box, Vespa experts can usually spot over/under-allocations in CPU, memory and disk space/IO, and discuss trade-offs with the application team.
- Using automated deployments, applications go live with little risk. After launch, right-size the application based on true load, using Vespa's elasticity features with automated data migration.
Use the Vespa sizing guide
to size the application and find the metrics used there. Pro-tips:

- 60% is a good max memory allocation
- 50% is a good max CPU allocation, although application dependent
- 70% is a good max disk allocation
Rules of thumb:

- Memory and disk scale approximately linearly for indexed fields' data - attributes have a fixed cost for empty fields.