Proton

Proton is Vespa's search core. Proton maintains disk and memory structures for documents. As the data is dynamic, these structures are periodically optimized by maintenance jobs. These jobs temporarily consume extra resources - make sure the host running Vespa has sufficient headroom.

Proton stores its index in $VESPA_HOME/var/db/vespa/search/.

Internal structure

The internal structure of a proton node:

The search node consists of a bucket management system that sends requests to a set of document databases, each of which consists of three sub-databases.

Bucket management

When the node starts up, it first needs to get an overview of which partitions it has, and which buckets each partition currently stores. Once it knows this, it is in initializing mode: it is able to handle load, but the distributors do not yet know bucket metadata for all the buckets, and thus cannot know whether buckets are consistent with copies on other nodes. Once metadata for all buckets is known, the content node transitions from initializing to up state. As the distributors want quick access to bucket metadata, the node keeps an in-memory bucket database to serve these requests efficiently.

Bucket management implements elasticity support in terms of the SPI. Operations are ordered by priority, and only one operation per bucket can be in flight at a time. Below bucket management sits the persistence engine, which implements the SPI in terms of Vespa search. The persistence engine reads the document type from the document id and dispatches requests to the correct document database.

Document database

Each document database is responsible for a single document type. It has a component called FeedHandler which handles incoming put, update, and remove requests. All requests are first written to a transaction log, then handed to the appropriate sub-database, based on the request type.

Sub-databases

There are three types of sub-databases, each with its own document meta store and document store. The document meta store holds a map from the document id to a local id. This local id is used to address the document in the document store. The document meta store also maintains information on the state of the buckets that are present in the sub-database.

The sub-databases are maintained by the index maintainer. The document distribution changes as the system is resized. When the number of nodes in the system changes, the index maintainer will move documents between the Ready and Not Ready sub-databases to reflect the new distribution. When an entry in the Removed sub-database gets old it is purged. The sub-databases are:

Not Ready: Holds the redundant documents that are not searchable, i.e. the not ready documents. Documents that are not ready are only stored, not indexed. It takes some processing to move from this state to the ready state.
Ready: Maintains an index of all ready documents and keeps them searchable. One of the ready copies is active while the rest are not active:
  • Active: There should always be exactly one active copy of each document in the system, though intermittently there may be more. These documents produce results when queries are evaluated.
  • Not Active: The ready copies that are not active are indexed but will not produce results. By being indexed, they are ready to take over immediately if the node holding the active copy becomes unavailable.
Removed: Keeps track of documents that have been removed. The id and timestamp for each removed document are kept. This information is used when buckets from two nodes are merged: if the removed document exists on another node but with a different timestamp, the most recent entry prevails.
Hence, only active documents in Ready are searchable:
                    Not Ready      Ready                   Removed
    Not searchable  not indexed    indexed - not active    not indexed
    Searchable                     indexed - active
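
How many copies end up in the Ready vs. Not Ready sub-databases follows from the redundancy and searchable-copies settings on the content cluster. A minimal services.xml sketch, with placeholder cluster id, document type and hosts:

    <content id="mycluster" version="1.0">
        <redundancy>2</redundancy>  <!-- keep two copies of every document -->
        <documents>
            <document type="doc" mode="index"/>
        </documents>
        <tuning>
            <!-- one copy per document in Ready (indexed), the other in Not Ready (stored only) -->
            <searchable-copies>1</searchable-copies>
        </tuning>
        <nodes>
            <node hostalias="node1" distribution-key="0"/>
            <node hostalias="node2" distribution-key="1"/>
        </nodes>
    </content>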

Transaction log

Content nodes have a transaction log to persist mutating operations. The transaction log persists operations by file append. Having a transaction log simplifies proton's in-memory index structures and enables steady-state high performance, read more below.

Operations are written immediately, irrespective of visibility-delay. Operations are not sync'ed to the file system - hence, writes are not guaranteed to exist in the transaction log. Flush jobs (below) sync the transaction log, for consistency.

By default, proton flushes components like attribute vectors and the memory index on shutdown, for quicker startup.
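
This behavior is controlled by flush-on-shutdown in the content cluster's proton engine configuration. A minimal sketch, using a placeholder cluster id:

    <content id="mycluster" version="1.0">
        <engine>
            <proton>
                <!-- flush attribute vectors and the memory index on shutdown,
                     so less of the transaction log has to be replayed at startup -->
                <flush-on-shutdown>true</flush-on-shutdown>
            </proton>
        </engine>
        <!-- documents, nodes etc. omitted -->
    </content>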

Index

Index fields are string fields, used for text search. Other field types are attributes.
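
As a sketch, the distinction is made in the schema's indexing statements - field names here are illustrative:

    schema doc {
        document doc {
            # string field searched as free text - gets a memory index and disk indices
            field title type string {
                indexing: summary | index
            }
            # non-string field - kept as an attribute (in-memory structure)
            field year type int {
                indexing: summary | attribute
            }
        }
    }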

Updates

For all indexed fields, proton has a memory index for the recent changes, implemented using B-trees. This is periodically flushed to a disk-based posting list index. Disk-based indexes are subsequently merged.

Updating the in-memory B-trees is lock-free, implemented using copy-on-write semantics. This gives high performance, with predictable steady-state CPU/memory use. The driver for this design is the requirement for a sustained, high change rate with stable, predictable read latencies and only small temporary increases in CPU/memory. Compare this to index hierarchies that merge smaller real-time indices into larger ones, causing temporary hot-spots.

When updating an indexed field, the document is read from the document store, the field is updated, and the full document is written back to the store. At this point, the change is searchable, and an ACK is returned to the client. Use attributes to avoid such document disk accesses and hence increase performance for partial updates. Find more details in writing to Vespa.

To increase write throughput, writes to the memory index can be batched using visibility-delay.
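
A sketch of setting visibility-delay on the content cluster in services.xml - the value is an example:

    <content id="mycluster" version="1.0">
        <search>
            <!-- batch writes to the memory index for up to 1 second
                 before they become visible in query results -->
            <visibility-delay>1.0</visibility-delay>
        </search>
        <!-- documents, nodes etc. omitted -->
    </content>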

Relevance

Proton stores position information in text indices by default, for proximity relevance - posocc (below). All occurrences of a term are stored in the posting list, together with their positions. This provides superior ranking features, but is somewhat more expensive than just storing a single occurrence per document. For most applications it is the correct tradeoff, since most of the cost is usually elsewhere and relevance is valuable.

Applications that only need occurrence information for filtering can use rank: filter to optimize performance, using only boolocc-files (below).
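
A sketch of such a field inside the schema's document definition - the field name is illustrative:

    field tags type string {
        indexing: index
        # only occurrence information is needed for filtering -
        # lets proton use the compact boolocc posting lists for this field
        rank: filter
    }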

The memory index has a dictionary per index field. This contains all unique words in that field with mapping to posting lists with position information. The position information is used during ranking, see nativeRank for details on how a text match score is calculated.

The disk index stores the content of each index field in separate folders. Each folder contains:

  • Dictionary. Files: dictionary.pdat, dictionary.spdat, dictionary.ssdat.
  • Compressed posting lists with position information. File: posocc.dat.compressed.
  • Posting lists with only occurrence information (bitvector). These are generated for common words. Files: boolocc.bdat, boolocc.idx.
Example: $VESPA_HOME/var/db/vespa/search/cluster.name/n1/documents/doc/0.ready/index/index.flush.1/my_index_field

Proton maintenance jobs

Tune the jobs in Proton tuning. Sizing search describes the static proton sizing - this article details the temporary resource usage for the proton jobs.

There is only one instance of each job at a time - e.g. attributes are flushed in sequence. When a job is running, its metric is set to 1 - otherwise 0. Use this to correlate observed performance with job runs - see Run metric.

Refer to getSearchNodeMetrics() for the implementation of the performance metrics. Metrics are available through the Metrics API.

attribute flush: Flush an attribute vector from memory to disk, based on configuration in the flushstrategy. This controls memory usage and query performance. This also makes proton start quicker - see flush on shutdown.
  CPU: Little - one thread flushes to disk
  Memory: Little - some temporary use
  Disk: A new file is written to, hence 2x the size of an attribute on disk
  Run metric: content.proton.documentdb.job.attribute_flush
  Metric prefix: content.proton.documentdb.[ready|notready].attribute.memory_usage.
  Metrics: allocated_bytes.average, used_bytes.average, dead_bytes.average, onhold_bytes.average
memory index flush: Flush a memory index to disk, then trigger disk index fusion. The goal is to shrink memory usage by adding to the disk-backed indices. Performance characteristics for this flush are similar to indexing. Note: A high feed rate can cause multiple smaller flushed indices, like $VESPA_HOME/var/db/vespa/search/cluster.name/n1/documents/doc/0.ready/index/index.flush.102 - see the high index number. Multiple smaller indices are a symptom of memory indices that are too small compared to the feed rate - to fix, increase flushstrategy > native > component > maxmemorygain (see the sketch after this job's metrics).
  CPU: Little - one thread indexes to disk
  Memory: Little
  Disk: Creates a new disk index, size of the memory index
  Run metric: content.proton.documentdb.job.memory_index_flush
  Metric prefix: content.proton.documentdb.index.memory_usage.
  Metrics: allocated_bytes, used_bytes, dead_bytes, onhold_bytes
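
A sketch of raising maxmemorygain as referenced above, in the content cluster's proton tuning in services.xml - the value is an example only:

    <content id="mycluster" version="1.0">
        <engine>
            <proton>
                <tuning>
                    <searchnode>
                        <flushstrategy>
                            <native>
                                <component>
                                    <!-- let each component (e.g. the memory index) grow to ~4 GiB
                                         before it is flushed -->
                                    <maxmemorygain>4294967296</maxmemorygain>
                                </component>
                            </native>
                        </flushstrategy>
                    </searchnode>
                </tuning>
            </proton>
        </engine>
        <!-- documents, nodes etc. omitted -->
    </content>
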
disk index fusion: Merge the primary disk index with the smaller indices generated by memory index flushes - triggered by the memory index flush.
  CPU: Multiple threads merge indices, configured as a function of feeding concurrency - refer to that setting for details
  Memory: Little
  Disk: Creates a new index while serving from the current one - hence 2x temporary disk usage for the given index
  Run metric: content.proton.documentdb.job.disk_index_fusion
document store flush: Flushes the document store.
  CPU: Little
  Memory: Little
  Disk: Little
  Run metric: content.proton.documentdb.job.document_store_flush
document store compaction: Defragments and sorts document store files as documents are updated and deleted, in order to reduce disk space and speed up streaming search. The file is sorted in bucket order on output. Triggered by diskbloatfactor.
  CPU: Little - one thread reads one file, sorts it and writes a new file
  Memory: Holds a document summary store file in memory, plus memory for sorting the file. Note: This is important on hosts with little memory! Reduce maxfilesize to increase the number of files and use less temporary memory for compaction (see the sketch after this job's metrics).
  Disk: A new file is written while the current one is serving, max temporary usage is 2x
  Run metric: content.proton.documentdb.job.document_store_compact
  Metric prefix: content.proton.documentdb.[ready|notready|removed].document_store.
  Metrics: disk_usage.average, disk_bloat.average, max_bucket_spread.average, memory_usage.allocated_bytes.average, memory_usage.used_bytes.average, memory_usage.dead_bytes.average, memory_usage.onhold_bytes.average
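
A sketch of reducing maxfilesize in the document store tuning, as mentioned above - the value is an example only:

    <content id="mycluster" version="1.0">
        <engine>
            <proton>
                <tuning>
                    <searchnode>
                        <summary>
                            <store>
                                <logstore>
                                    <!-- smaller files mean less temporary memory per compaction run -->
                                    <maxfilesize>268435456</maxfilesize>
                                </logstore>
                            </store>
                        </summary>
                    </searchnode>
                </tuning>
            </proton>
        </engine>
        <!-- documents, nodes etc. omitted -->
    </content>
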
bucket move: Triggered by nodes going up/down, refer to elastic Vespa and searchable-copies. Causes documents to be indexed or de-indexed, similar to feeding. This moves documents between the Ready and Not Ready sub-databases.
  CPU: Similar to feeding. Consumes capacity from the index write thread - hence has feeding impact
  Memory: As feeding - the memory index will grow
  Disk: As feeding
  Run metric: content.proton.documentdb.job.bucket_move
lid-space compaction: As bucket move, but moves documents within a sub-database. This is often triggered when a cluster grows with more nodes: documents are redistributed to the new nodes and each node holds fewer documents, so lid-space compaction is triggered. This defragments the document meta store in place. Resources are freed on a subsequent attribute flush.
  CPU: Like feeding - add and delete doc
  Memory: Little
  Disk: 0
  Run metric: content.proton.documentdb.job.lid_space_compact
  Metric prefix: content.proton.documentdb.[ready|notready|removed].lid_space.
  Metrics: lid_bloat_factor.average, lid_fragmentation_factor.average
removed documents pruning: Prunes the Removed sub-database, which keeps the ids of deleted documents. By default, this runs once per hour (see the sketch below for tuning).
  CPU: Little
  Memory: Little
  Disk: Little
  Run metric: content.proton.documentdb.job.removed_documents_prune
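
A sketch of tuning the pruning age and interval, assuming the removed-db prune elements of the content tuning configuration - values are examples only:

    <content id="mycluster" version="1.0">
        <engine>
            <proton>
                <tuning>
                    <searchnode>
                        <removed-db>
                            <prune>
                                <!-- keep remove-entries for two weeks, prune every hour -->
                                <age>1209600</age>
                                <interval>3600</interval>
                            </prune>
                        </removed-db>
                    </searchnode>
                </tuning>
            </proton>
        </engine>
        <!-- documents, nodes etc. omitted -->
    </content>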