Vespa is optimized to sustain a high feed load while serving - also during planned and unplanned changes to the instance. Vespa supports feed rates at memory speed; this guide covers how to configure, test and size the application for optimal feed performance.
Read reads and writes first - it gives an overview of Vespa, where the key takeaway is the split between the stateless container cluster and the stateful content cluster. The processing of documents PUT to Vespa runs in the container cluster, and includes both Vespa-internal processing like tokenization and application custom code in document processing. The stateless cluster is primarily CPU bound - read indexing for how to separate search and write to different container clusters. Other than that, make sure the container cluster has enough memory to avoid excessive GC - the heap must be big enough. Allocate enough CPU for the indexing load.
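As a minimal sketch of such a split, a services.xml layout along these lines runs feeding and queries in dedicated container clusters - cluster ids, node counts and the music document type are illustrative, not prescribed by this guide:

```xml
<!-- Sketch: dedicated container clusters for feed and query traffic -->
<container id="feed" version="1.0">
    <document-api/>
    <document-processing/>
    <nodes count="2"/>
</container>

<container id="query" version="1.0">
    <search/>
    <nodes count="2"/>
</container>

<content id="music" version="1.0">
    <redundancy>2</redundancy>
    <documents>
        <document type="music" mode="index"/>
        <!-- run document processing in the feed container cluster -->
        <document-processing cluster="feed"/>
    </documents>
    <nodes count="2"/>
</content>
```

This keeps query serving isolated from feed-induced CPU and GC pressure, and lets the two container clusters be sized independently.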
All operations are written and synced to the transaction log. This is sequential (not random) IO, but it might impact overall feed performance when running on network-attached storage, where the sync operation has a much higher cost than on locally attached storage (e.g. SSD). See sync-transactionlog.
The remainder of this document concerns how to configure the application to optimize feed performance on the content node.
The content node runs the proton process - overview:
As there are multiple components and data structures involved, this guide starts with the simplest examples and then adds optimizations and complexity. Flushing of index structures is covered at the end of this guide.
Documents are written to the document store in all indexing modes - this is where the copy of the PUT document is persisted. See Summary Manager + Document store in the illustration above. In short, these are append operations to large files, and (simplified) each PUT is one write.
PUT-ing documents to the document store can be thought of as appending to files using sequential IO, expecting a high write rate, using little memory and CPU. Writing a new version of a document (a PUT to a document that already exists) is the same as writing a new document - in both cases, the index over the document store is updated to point to the latest version.
A partial UPDATE to a document incurs a read from the document store to get the current fields. Then the new field values are applied and the new version of the document is written. Hence, it is like a PUT with an extra read.
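To make the difference concrete, here is a sketch of a full PUT followed by a partial UPDATE of the same document in the JSON document format - the mynamespace namespace and the document id are made up for illustration:

```json
{
    "put" : "id:mynamespace:music::123",
    "fields" : {
        "artist" : "Coldplay"
    }
}
{
    "update" : "id:mynamespace:music::123",
    "fields" : {
        "artist" : { "assign" : "Coldplay & Friends" }
    }
}
```

The PUT appends a new document version to the document store, while the UPDATE first reads the stored document, applies the assignment and then writes the new version.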
```
schema music {
    document music {
        field artist type string {
            indexing: summary | index
        }
    }
}
```
Observe that documents are written to summary (i.e. document store), but there is also an index. See Index Manager + Index in illustration above.
Refer to partial updates for the index write. In short, it updates the memory index, which is flushed regularly. The PUT to the index is a memory-only operation, but uses CPU to update the index.
Some applications have a limited set of documents with a high change-rate to fields in the documents (e.g. stock prices - the number of stocks is almost fixed, while prices change constantly). Such applications are easily write bound.
To update fields in real time at a high volume, use attribute fields:
```
schema ticker {
    document ticker {
        field price type float {
            indexing: summary | attribute
        }
    }
}
```
Attribute fields are not stored in the document store, so there is no IO (except sequential flushing). This enables applications to write at memory speed to Vespa - a 10k update rate per node is possible.
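As an illustration, a partial update to such an attribute field could look like the following - the document id and price value are made up:

```json
{
    "update" : "id:mynamespace:ticker::NVDA",
    "fields" : {
        "price" : { "assign" : 123.45 }
    }
}
```

Since price is an attribute, this update only touches the in-memory attribute, which is why it runs at memory speed.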
To achieve memory-only updates, make sure all attributes to update are ready, meaning the content node has loaded the attribute into memory.
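One way to do this - an assumption based on general Vespa attribute configuration, not something this guide prescribes - is to set the fast-access property on the attributes being updated, so they are kept in memory on all content nodes holding a replica:

```
field price type float {
    indexing: summary | attribute
    attribute: fast-access
}
```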
When debugging update performance, it is useful to know if an update hits the document store or not. Enable spam log level and look for SummaryAdapter::put - then do an update:
```
$ vespa-logctl searchnode:proton.server.summaryadapter spam=on
.proton.server.summaryadapter    ON  ON  ON  ON  ON  ON  OFF  ON

$ vespa-logfmt -l all -f | grep 'SummaryAdapter::put'
[2019-10-10 12:16:47.339] SPAM : searchnode proton.proton.server.summaryadapter summaryadapter.cpp:45 SummaryAdapter::put(serialnum = '12', lid = 1, stream size = '199')
```
Existence of such log messages indicates that the update was accessing the document store.
Multivalued attributes are weightedset, array of struct/map, map of struct/map and tensor. The attributes have different characteristics, which affects write performance. Generally, updates to multivalue fields are more expensive as the field size grows:
Attribute | Description |
---|---|
weightedset | Memory-only operation when updating: read the full set, update, write back. Make the update as inexpensive as possible by using numeric types instead of strings, where possible. Example: a weighted set of string with many (1000+) elements. Adding an element to the set means an enum store lookup/add and an add/sort of the attribute multivalue map - details in attributes. Use a numeric type instead to speed this up - it has no string comparisons. See the sketch after this table. |
array/map of struct/map | Updates to array of struct/map and map of struct/map require a read from the document store and will reduce the update rate - see #10892. |
tensor | Updating tensor cell values is a memory-only operation: copy tensor, update, write back. For large tensors, this implies reading and writing a large chunk of memory for single cell updates. |
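As referenced in the weightedset row above, here is a sketch of adding an element with a numeric key to a weighted set - the tags field name is made up and assumed to be of type weightedset&lt;long&gt;:

```json
{
    "update" : "id:mynamespace:music::123",
    "fields" : {
        "tags" : {
            "add" : {
                "1001" : 1
            }
        }
    }
}
```

With a numeric key type, the insert avoids the string enum store lookups and comparisons described above.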
Parent documents are global, i.e. they have a replica on all nodes. Writing to fields in parent documents often simplifies logic, compared to the de-normalized case where all (child) documents are updated. Write performance depends on the average number of child documents vs. the number of nodes in the cluster. For example, with 10 content nodes and an average of 100 children per parent, updating a field on every child means 100 writes, while updating the same field on the (global) parent means 10 writes - one per node. Hence, the more children, the larger the performance gain from writing to the parent.
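For reference, a minimal parent/child sketch - the artist parent schema, the artist_ref field and the imported artist_name field are illustrative names, not taken from this guide:

```
schema artist {
    document artist {
        field name type string {
            indexing: summary | attribute
        }
    }
}

schema music {
    document music {
        field artist_ref type reference<artist> {
            indexing: attribute
        }
    }
    import field artist_ref.name as artist_name {}
}
```

The parent document type must also be deployed as a global document (global="true" in services.xml) for the reference to resolve on all nodes.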
A conditional update looks like:
{ "update" : "id:namespace:myDoc::1", "condition" : "myDoc.myField == \"abc\"", "fields" : { "myTimestamp" : { "assign" : 1570187817 } } }
If the document store is accessed when evaluating the condition, performance drops. Conditions should be evaluated using attribute values for high performance - in the example above, myField should be an attribute.
Note: If the condition uses struct or map, values are read from the document store:
"condition" : "myDoc.myMap{1} == 3"
This is true even when all struct fields are defined as attribute. Improvements to this are tracked in #10892.
Consider the difference when sending two field assignments to the same document:
{ "update" : "id:namespace:doctype::1", "fields" : { "myMap{1}" : { "assign" : { "timestamp" : 1570187817 } } "myMap{2}" : { "assign" : { "timestamp" : 1570187818 } } } }
vs.
{ "update" : "id:namespace:doctype::1", "fields" : { "myMap{1}" : { "assign" : { "timestamp" : 1570187817 } } } } { "update" : "id:namespace:doctype::1", "fields" : { "myMap{2}" : { "assign" : { "timestamp" : 1570187818 } } } }
In the first case, one update operation is sent from the vespa-feed-client - in the latter, the client will send the second update operation only after receiving an ack for the first. When updating multiple fields, put the updates in as few operations as possible. See ordering details.
A content node normally has a fixed set of resources (CPU, memory, disk). Configure the CPU allocation for feeding vs. searching in concurrency - a value from 0 to 1.0, where a higher value means more CPU resources for feeding.
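As a sketch, the feeding concurrency is set in services.xml under the content cluster's proton tuning - the value below is only an example, and the exact element path should be verified against the content tuning reference:

```xml
<content id="music" version="1.0">
    <engine>
        <proton>
            <tuning>
                <searchnode>
                    <feeding>
                        <concurrency>0.8</concurrency>
                    </feeding>
                </searchnode>
            </tuning>
        </proton>
    </engine>
    <!-- documents, nodes, ... -->
</content>
```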
When testing for feeding capacity:
Other scenarios: test feed capacity for sustained load in a system in steady state, during state changes, and during query load.
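When running such tests, a feed client invocation could look like the following - the endpoint and file name are placeholders, and the flags should be checked against the vespa-feed-client documentation for the version in use:

```
$ vespa-feed-client --endpoint http://localhost:8080/ --file docs.jsonl
```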
What to track | Description |
---|---|
Metrics | Use metrics from content nodes and look at queues - queue wait time and queue size (all metrics in milliseconds): vds.filestor.averagequeuewait.sum, vds.filestor.queuesize. Check content node metrics across all nodes to see if there are any outliers. Also check vds.filestor.allthreads.update.latency. |
Failure rates | Inspect these metrics for failures during load testing: vds.distributor.updates.latency, vds.distributor.updates.ok, vds.distributor.updates.failures.total, vds.distributor.puts.latency, vds.distributor.puts.ok, vds.distributor.puts.failures.total, vds.distributor.removes.latency, vds.distributor.removes.ok, vds.distributor.removes.failures.total. |
Blocked feeding | This metric should be 0 - refer to feed block: content.proton.resource_usage.feeding_blocked. |
Concurrent mutations | Multiple clients updating the same document concurrently will stall writes: vds.distributor.updates.failures.concurrent_mutations. Mutating client operations towards a given document ID are sequenced on the distributors. If an operation is already active towards a document, a subsequently arriving one will be bounced back to the client with a transient failure code. Usually this happens when users send feed from multiple clients concurrently without synchronisation. Note that feed operations sent by a single client are sequenced client-side, so this should not be observed with a single client only. Bounced operations are never sent on to the backends and should not cause elevated latencies there, although the client will observe higher latencies due to automatic retries with back-off. |
Wrong distribution | vds.distributor.updates.failures.wrongdistributor indicates that clients keep sending to the wrong distributor. Normally this happens infrequently (but it does happen on client startup or distributor state transitions), as clients update and cache all state required to route directly to the correct distributor (Vespa uses a deterministic CRUSH-based algorithmic distribution). Some potential reasons for this: |
Cluster out of sync | update_puts/gets indicate "two-phase" updates: vds.distributor.update_puts.latency, vds.distributor.update_puts.ok, vds.distributor.update_gets.latency, vds.distributor.update_gets.ok, vds.distributor.update_gets.failures.total, vds.distributor.update_gets.failures.notfound. If replicas are out of sync, updates cannot be applied directly on the replica nodes, as they risk ending up with diverging state. In this case, Vespa performs an explicit read-consolidate-write (write repair) operation on the distributors. This is usually a lot slower than the regular update path because it does not happen in parallel. It also happens in the write path of other operations, so it risks blocking these if the updates are expensive in terms of CPU. Replicas being out of sync is by definition not the expected steady state of the system. For example, replica divergence can happen if one or more replica nodes are unable to process or persist operations. Track (pending) merges: vds.idealstate.buckets, vds.idealstate.merge_bucket.pending, vds.idealstate.merge_bucket.done_ok, vds.idealstate.merge_bucket.done_failed. |
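To inspect these metrics during a test, one option - assuming the default Vespa metrics proxy on each node, typically serving the node metrics API on port 19092 - is to query it and filter for the metric names above:

```
$ curl -s http://localhost:19092/metrics/v1/values | python3 -m json.tool | grep -A 2 averagequeuewait
```

Port and endpoint are assumptions about a default installation; adjust to the metrics setup in use.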