Proton

Proton is Vespa's search core. Proton maintains disk and memory structures for documents. As the data is dynamic, these structures are periodically optimized by maintenance jobs. These jobs temporarily consume extra resources - make sure the hosts running Vespa have sufficient headroom.

Proton stores its index in $VESPA_HOME/var/db/vespa/search/.
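To see what a node has on disk, list this directory. A minimal sketch in Python, assuming a default installation with VESPA_HOME at /opt/vespa; the nesting depth used here is only illustrative:

    import os

    # Walk the proton data directory a few levels down to see the
    # cluster / node / document-type / sub-database layout.
    root = os.path.join(os.environ.get("VESPA_HOME", "/opt/vespa"),
                        "var/db/vespa/search")

    for dirpath, dirnames, _files in os.walk(root):
        depth = dirpath[len(root):].count(os.sep)
        print("    " * depth + (os.path.basename(dirpath) or dirpath))
        if depth >= 4:          # don't descend into index/attribute internals
            dirnames.clear()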

Internal structure

The internal structure of a proton node:

The search node consists of a bucket management system which sends requests to a set of document databases, each of which consists of three sub-databases.

Bucket management

When the node starts up, it first needs to get an overview of which partitions it has, and which buckets each partition currently stores. Once it knows this, it is in initializing mode: able to handle load, but the distributors do not yet know the bucket metadata for all the buckets, and thus cannot know whether buckets are consistent with copies on other nodes. Once metadata for all buckets is known, the content node transitions from initializing to up state. As the distributors want quick access to bucket metadata, the node keeps an in-memory bucket database to serve these requests efficiently.

Bucket management implements elasticity support in terms of the SPI. Operations are ordered according to priority, and only one operation per bucket can be in-flight at a time. Below bucket management is the persistence engine, which implements the SPI in terms of Vespa search. The persistence engine reads the document type from the document id and dispatches requests to the correct document database.
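As an illustration of the dispatch step: the document type is the third field of a Vespa document id (id:<namespace>:<document-type>:...). A conceptual Python sketch, where the document database map and its values are hypothetical stand-ins for proton's internal structures:

    # Conceptual sketch: the document type is embedded in the document id,
    # and is used to route the request to the right document database.
    # The "document_dbs" map and its values are hypothetical stand-ins.

    def document_type(doc_id: str) -> str:
        # Vespa document ids have the form id:<namespace>:<doctype>:<options>:<user-specified>
        scheme, _namespace, doctype, *_rest = doc_id.split(":")
        assert scheme == "id", f"not a document id: {doc_id}"
        return doctype

    document_dbs = {"music": "music document database", "book": "book document database"}

    def dispatch(doc_id: str) -> str:
        return document_dbs[document_type(doc_id)]

    print(dispatch("id:mynamespace:music::a-head-full-of-dreams"))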

Document database

Each document database is responsible for a single document type. It has a component called FeedHandler which takes care of incoming documents, updates, and remove requests. All requests are first written to a transaction log to make sure they are not lost if the node restarts. They are then handed to the appropriate sub-database, based on the request type.
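A conceptual sketch of this feed path - the class names and the JSON log format below are illustrative stand-ins, not proton's actual classes or transaction log format. The point is the ordering: the operation is durable in the transaction log before it is applied, so a restart can replay operations that were not yet flushed:

    import json

    class SubDB:
        """Illustrative stand-in for a proton sub-database."""
        def __init__(self):
            self.docs = {}

        def apply(self, op):
            if op["type"] == "put":
                self.docs[op["id"]] = op["fields"]
            elif op["type"] == "remove":
                self.docs.pop(op["id"], None)

    class FeedHandler:
        """Persist each operation in the transaction log before applying it."""
        def __init__(self, tls_path, subdb):
            self.tls_path, self.subdb, self.serial = tls_path, subdb, 0

        def feed(self, op):
            self.serial += 1
            with open(self.tls_path, "a") as tls:     # 1. write-ahead to the transaction log
                tls.write(json.dumps({"serial": self.serial, **op}) + "\n")
            self.subdb.apply(op)                      # 2. then hand to the sub-database

        def replay(self):                             # on restart: re-apply logged operations
            with open(self.tls_path) as tls:
                for line in tls:
                    self.subdb.apply(json.loads(line))

    handler = FeedHandler("transactionlog.jsonl", SubDB())
    handler.feed({"type": "put", "id": "id:mynamespace:music::1", "fields": {"title": "x"}})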

Sub-databases

There are three types of sub-databases, each with its own document meta store and document store. The document meta store holds a map from the document id to a local id. This local id is used to address the document in the document store. The document meta store also maintains information on the state of the buckets that are present in the sub-database.
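A conceptual sketch of the lid indirection, with illustrative names only: the meta store hands out a small integer (the lid) per document, and the document store is addressed by that lid:

    class DocumentMetaStore:
        """Illustrative stand-in: map document ids to local ids (lids)."""
        def __init__(self):
            self.lid_by_id = {}
            self.free_lids = []                 # lids released by removed documents
            self.next_lid = 1                   # assume lid 0 is reserved/invalid

        def assign(self, doc_id):
            if doc_id in self.lid_by_id:
                return self.lid_by_id[doc_id]
            lid = self.free_lids.pop() if self.free_lids else self.next_lid
            if lid == self.next_lid:
                self.next_lid += 1
            self.lid_by_id[doc_id] = lid
            return lid

        def remove(self, doc_id):
            self.free_lids.append(self.lid_by_id.pop(doc_id))

    meta = DocumentMetaStore()
    document_store = {}                                   # lid -> stored document
    lid = meta.assign("id:mynamespace:music::1")
    document_store[lid] = {"title": "a"}                  # the document is addressed by its lid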

The sub-databases are maintained by the index maintainer. The document distribution changes as the system is resized. When the number of nodes in the system changes, the index maintainer will move documents between the Ready and Not Ready sub-databases to reflect the new distribution. When an entry in the Removed sub-database gets old it is purged. The sub-databases are:

Not Ready Holds the redundant documents that are not searchable, i.e. the not ready documents. Documents that are not ready are only stored, not indexed. It takes some processing to move from this state to the ready state.
Ready Maintains an index of all ready documents and keeps them searchable. One of the ready copies is active while the rest are not active:
  • Active: There should always be exactly one active copy of each document in the system, though intermittently there may be more. These documents produce results when queries are evaluated.
  • Not Active: The ready copies that are not active are indexed but will not produce results. By being indexed, they are ready to take over immediately if the node holding the active copy becomes unavailable.
Removed Keeps track of documents that have been removed. The id and timestamp of each removed document are kept. This information is used when buckets from two nodes are merged: if the removed document also exists on another node, but with a different timestamp, the most recent entry prevails.
Hence, only active documents in Ready are searchable:
                    Not Ready      Ready                   Removed
    Not searchable  not indexed    indexed - not active    not indexed
    Searchable                     indexed - active

Proton maintenance jobs

Tune the jobs in Proton tuning. Sizing search describes the static proton sizing - this article details the temporary resource usage of the proton maintenance jobs.

There is only one instance of each job running at a time - e.g. attributes are flushed in sequence. When a job is running, its metric is set to 1, otherwise 0. Use this to correlate observed performance with job runs - see Run metric.

For the implementation of the performance metrics, see getSearchNodeMetrics(). The metrics are available through the Metrics API.
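To see which jobs ran recently, poll the metrics and look at the job metrics. A minimal sketch, assuming the node metrics endpoint is available at http://localhost:19092/metrics/v1/values (the default metrics proxy port) and that the response has the services/metrics/values layout - adjust host, port and parsing for your setup:

    import requests

    # Poll the node metrics and print proton job metrics that were active in
    # the last snapshot (the job metric is 1 while the job runs, 0 otherwise,
    # so the snapshot average is > 0 if the job ran during the interval).
    resp = requests.get("http://localhost:19092/metrics/v1/values", timeout=10)
    resp.raise_for_status()

    for service in resp.json().get("services", []):
        for metric in service.get("metrics", []):
            for name, value in metric.get("values", {}).items():
                if ".documentdb.job." in name and value > 0:
                    print(service.get("name"), name, value, metric.get("dimensions", {}))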

attribute flush Flush an attribute vector from memory to disk, based on configuration in the flushstrategy. This controls memory usage and query performance. It also makes proton start up quicker - see flush on shutdown.
CPU Little - one thread flushes to disk
Memory Little - some temporary use
Disk A new file is written to, hence 2x the size of an attribute on disk.
Run metric content.proton.documentdb.job.attribute_flush
Metric prefix content.proton.documentdb.[ready|notready].attribute.memory_usage.
Metrics allocated_bytes.average
used_bytes.average
dead_bytes.average
onhold_bytes.average
memory index flush Flush a memory index to disk, then trigger disk index fusion. The goal is to shrink memory usage by adding to the disk-backed indices. Performance characteristics for this flush are similar to indexing. Note: A high feed rate can cause multiple small flushed indices, like $VESPA_HOME/var/db/vespa/search/cluster.name/n1/documents/doc/0.ready/index/index.flush.102 - note the high index number. Multiple small indices are a symptom of a memory index that is too small compared to the feed rate - to fix, increase flushstrategy > native > component > maxmemorygain. See the sketch after this job's metrics for how to check.
CPU Little - one thread indexes to disk
Memory Little
Disk Creates a new disk index, size of the memory index.
Run metric content.proton.documentdb.job.memory_index_flush
Metric prefix content.proton.documentdb.index.memory_usage.
Metrics allocated_bytes
used_bytes
dead_bytes
onhold_bytes
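To check for the symptom described above, count the index.flush.* directories for a document type. A minimal sketch using the example path above - adjust cluster name, node and document type for your deployment:

    import glob, os

    # Count flushed-but-not-yet-fused indices for one document type.
    # Many index.flush.* directories indicate a memory index that is too small
    # for the feed rate - increase maxmemorygain if this number keeps growing.
    index_dir = os.path.join(
        os.environ.get("VESPA_HOME", "/opt/vespa"),
        "var/db/vespa/search/cluster.name/n1/documents/doc/0.ready/index")

    flushed = sorted(glob.glob(os.path.join(index_dir, "index.flush.*")))
    print(f"{len(flushed)} flushed indices:", [os.path.basename(p) for p in flushed])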
disk index fusion Merge the primary disk index with smaller indices generated by memory index flush - triggered by the memory index flush.
CPU Little - one thread merges indices
Memory Little
Disk Creates a new index while serving from the current - hence 2x temporary disk usage for the given index.
Run metric content.proton.documentdb.job.disk_index_fusion
document store flush Flushes the document store.
CPU Little
Memory Little
Disk Little
Run metric content.proton.documentdb.job.document_store_flush
document store compaction Defragment and sort document store files as documents are updated and deleted, in order to reclaim disk space and speed up streaming search. The file is sorted in bucket order on output. Triggered by diskbloatfactor - see the sketch after this job's metrics for reading the bloat from the metrics.
CPU Little - one thread reads a file, sorts it and writes a new file
Memory Holds a document summary store file in memory plus memory for sorting the file. Note: This is important on hosts with little memory! Reduce maxfilesize to increase the number of files and use less temporary memory for compaction.
Disk A new file is written while the current one is serving, so the max temporary usage is 2x.
Run metric content.proton.documentdb.job.document_store_compact
Metric prefix content.proton.documentdb.[ready|notready|removed].document_store.
Metrics disk_usage.average
disk_bloat.average
max_bucket_spread.average
memory_usage.allocated_bytes.average
memory_usage.used_bytes.average
memory_usage.dead_bytes.average
memory_usage.onhold_bytes.average
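To see how close a sub-database is to triggering compaction, compare disk bloat to disk usage using the metrics listed above. A minimal sketch, assuming the same node metrics endpoint as in the earlier example:

    import requests

    # Read document store disk usage and bloat per sub-database from the metrics
    # listed above; compaction triggers when the bloat exceeds diskbloatfactor.
    # With multiple document types, also group by the documenttype dimension.
    resp = requests.get("http://localhost:19092/metrics/v1/values", timeout=10)
    usage, bloat = {}, {}

    for service in resp.json().get("services", []):
        for metric in service.get("metrics", []):
            for name, value in metric.get("values", {}).items():
                for subdb in ("ready", "notready", "removed"):
                    prefix = f"content.proton.documentdb.{subdb}.document_store."
                    if name == prefix + "disk_usage.average":
                        usage[subdb] = value
                    elif name == prefix + "disk_bloat.average":
                        bloat[subdb] = value

    for subdb, disk_usage in usage.items():
        ratio = bloat.get(subdb, 0.0) / disk_usage if disk_usage else 0.0
        print(f"{subdb}: disk_usage={disk_usage:.0f} bytes, bloat_ratio={ratio:.2f}")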
bucket move Triggered by nodes going up/down - refer to elastic Vespa and searchable-copies. Causes documents to be indexed or de-indexed, similar to feeding. This moves documents between the Ready and Not Ready sub-databases.
CPU CPU similar to feeding. Consumes capacity from the index write thread - hence has feeding impact
Memory As feeding - the memory index will grow
Disk As feeding
Run metric content.proton.documentdb.job.bucket_move
lid-space compaction Like bucket move, but moves documents within a sub-database. Often triggered when a cluster is grown: documents are redistributed to the new nodes, each node ends up with fewer documents, and the resulting holes in the lid space trigger a lid-space compaction. This defragments the document meta store in place - see the conceptual sketch after this job's metrics. Resources are freed on a subsequent attribute flush.
CPU like feeding - add and delete doc
Memory Little
Disk 0
Run metric content.proton.documentdb.job.lid_space_compact
Metric prefix content.proton.documentdb.[ready|notready|removed].lid_space.
Metrics lid_bloat_factor.average
lid_fragmentation_factor.average
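Building on the document meta store sketch earlier, a conceptual illustration (not proton's actual algorithm) of what lid-space compaction achieves: documents at high lids are moved into free lower lids so that the lid space, and the structures sized by it, can shrink:

    def compact_lid_space(docs_by_lid):
        """Move documents at high lids into free lower lids (conceptual only)."""
        used = sorted(docs_by_lid)
        target_limit = len(used)                              # lowest achievable highest-lid
        free = [lid for lid in range(1, target_limit + 1) if lid not in docs_by_lid]
        high = [lid for lid in used if lid > target_limit]
        for free_lid, old_lid in zip(free, high):
            docs_by_lid[free_lid] = docs_by_lid.pop(old_lid)  # one "move document" operation
        return docs_by_lid

    print(compact_lid_space({1: "a", 7: "b", 42: "c"}))       # -> {1: 'a', 2: 'b', 3: 'c'}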
removed documents pruning Prunes the Removed sub-database, which keeps the ids of removed documents. By default, this runs once per hour.
CPU Little
Memory Little
Disk Little
Run metric content.proton.documentdb.job.removed_documents_prune