This document describes tuning an application for high query serving performance. The main focus is on content cluster search features; see Container tuning for tuning of container clusters. The search sizing guide covers scaling an application deployment.
The attribute documentation summarizes when to use attribute in the indexing statement. Also see the procedure for changing a field from attribute to index and vice versa.
field timestamp type long {
    indexing: summary | attribute
}
If both index and attribute are configured for string type fields, Vespa will do search and matching against the index, with default match mode text.
All numeric type fields and tensor fields are attribute (in-memory) fields in Vespa.
By default, Vespa does not build any posting list index structures over attribute fields. Adding fast-search to the attribute definition as shown below will add an in-memory B-tree posting list structure which enables faster search for some cases (but not all, see next paragraph):
field timestamp type long {
    indexing: summary | attribute
    attribute: fast-search
    rank: filter
}
When Vespa runs a query with multiple query items, it builds a query execution plan and tries to optimize it so the temporary result set is as small as possible. To do this, query tree items that are restrictive (matching few documents) are evaluated early. The query execution plan relies on hit count estimates for each part of the query tree, using the index and B-tree dictionaries, which track the number of documents a given term occurs in.
However, for attribute fields without fast-search there is no hit count estimate, so the estimate becomes equal to the total number of documents (matches all), and the term is thus moved to the end of the query evaluation. A query with only one query term searching an attribute field without fast-search would be a linear scan over all documents and thus expensive:
select * from sources * where range(timestamp, 0, 100);
But if this query term is AND-ed with another term matching fewer documents, that term will determine the cost instead, and fast-search won't be necessary, e.g.:
select * from sources * where range(timestamp, 0, 100) and uuid contains "123e4567-e89b-12d3-a456-426655440000";
As a general rule of thumb, use fast-search for attribute fields that are frequently searched without other, more restrictive query terms. Changing the fast-search aspect of an attribute does not require any re-feeding, so testing the performance with and without it is low effort. Note, however, that adding or removing fast-search requires a restart.
Note that attribute fields with fast-search that are not used in term-based ranking should use rank: filter for optimal performance. See the rank: filter reference.
See optimization for sorting on a single-value numeric attribute with fast-search using sorting.degrading.
Vespa supports hybrid query evaluation over inverted indexes, combining TAAT and DAAT evaluation to get the best of both techniques. Hybrid evaluation is not enabled by default and is triggered by a run-time query parameter.
Generally, Vespa does DAAT (document-at-a-time) query evaluation rather than TAAT (term-at-a-time), for the reasons described below.
Ranking (score calculation) and matching (does the document match the query logic?) are not two fully separate, disjunct phases, where one first finds matches and later calculates the ranking score. With DAAT, matching and first-phase score calculation are interleaved.
The first-phase ranking score is assigned to the hit when it satisfies the query constraints. At that point, the term iterators are positioned at the document id and one can unpack additional data from the term posting lists - e.g. for term proximity scoring used by the nativeRank ranking feature, which also requires unpacking of positions of the term within the document.
With hybrid query evaluation, TAAT is used for the sub-branches of the overall query tree that are not used for term-based ranking.
Using TAAT can speed up query matching significantly (up to 30-50%) in cases where the query tree is large and complex, and where only parts of the query tree are used for term-based ranking. Examples of query tree branches that require DAAT are those using text ranking features like bm25 or nativeRank. The list of ranking features that can handle TAAT is long, but a query using only attribute or tensor features can have the entire tree evaluated using TAAT.
For example, for a query where there is a user text query from an end user, one can use userQuery() YQL syntax and combine it with application level constraints. The application level filter constraints in the query could benefit from using TAAT. Given the following document schema:
search news {
    document news {
        field title type string {}
        field body type string {}
        field popularity type float {}
        field market type string {
            rank: filter
            indexing: attribute
            attribute: fast-search
        }
        field language type string {
            rank: filter
            indexing: attribute
            attribute: fast-search
        }
    }
    fieldset default {
        fields: title, body
    }
    rank-profile text-and-popularity {
        first-phase {
            expression: attribute(popularity) + log10(bm25(title)) + log10(bm25(body))
        }
    }
}
In this case the rank profile only uses two ranking features: the popularity attribute and the bm25 scores of the userQuery() terms against the default fieldset containing title and body. Notice how neither market nor language is used in the ranking expression.
In this query example, there is a language constraint and a market constraint, where both language and market are queried with a long list of valid values using OR, meaning that the document should match any of the market values and any of the language values.
{
    'hits': 10,
    'ranking.profile': 'text-and-popularity',
    'yql': 'select * from sources * where userQuery() and \
        (language contains "en" or language contains "br") and \
        (market contains "us" or market contains "eu" or market contains "apac" or ...)',
    'query': 'cat video',
    'ranking.matching.termwiselimit': 0.1
}
The language and the market constraints in the query tree are not used in the ranking score and that part of the query tree could be evaluated using TAAT. See also multi lookup set filter for how to most efficiently search with large set filters. The subtree result is then passed as a bit vector into the DAAT query evaluation, which could speed up the overall evaluation significantly.
Enabling hybrid TAAT is done by passing ranking.matching.termwiselimit=0.1 as a request parameter. One can evaluate whether hybrid evaluation improves search performance by changing this limit: the limit is compared to the hit fraction estimate of the entire query tree, and if the hit fraction estimate is higher than the limit, termwise evaluation is used for eligible sub-branches of the query. Setting the limit to 0 will force termwise evaluation, which might hurt performance.
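The decision rule described above can be modeled with a small sketch (the function name is illustrative, not a Vespa API):

```python
def uses_termwise(hit_fraction_estimate: float, termwiselimit: float = 1.0) -> bool:
    # Termwise (TAAT) evaluation is used for eligible sub-branches when the
    # estimated hit fraction of the query tree exceeds the configured limit.
    return hit_fraction_estimate > termwiselimit

# With termwiselimit=0.1, a query estimated to match half the corpus
# is evaluated termwise; a very selective query is not.
print(uses_termwise(0.5, 0.1))   # True
print(uses_termwise(0.05, 0.1))  # False
```

With the default limit of 1.0, no hit fraction estimate can exceed it, which matches hybrid evaluation being off by default.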
When configuring string type fields with index, the default match mode is text. This means Vespa will tokenize the content and index the tokens.
The string representation of a Universally Unique Identifier (UUID) is 32 hexadecimal (base 16) digits, in five groups separated by hyphens, in the form 8-4-4-4-12, for a total of 36 characters (32 hexadecimal characters and four hyphens).
Example: Indexing 123e4567-e89b-12d3-a456-426655440000 with the above document definition, Vespa will tokenize this into 5 tokens: [123e4567, e89b, 12d3, a456, 426655440000], each of which could be matched independently, leading to possible incorrect matches. To avoid this, change the match mode to word (match: word) to treat the entire UUID as one token/word:
field uuid type string {
    indexing: summary | index
    match: word
    rank: filter
}
In addition, configure the uuid as a rank: filter field - the field will then be represented as efficiently as possible during search and ranking.
The rank: filter behavior can also be triggered at query time on a per-query-item basis by com.yahoo.prelude.query.Item.setRanked() in a custom searcher.
When searching imported attribute fields (with fast-search) from parent document types, there is an additional indirection that can be reduced significantly if the imported field is defined with rank: filter and visibility-delay is configured to be > 0. The rank: filter setting impacts posting list granularity, and visibility-delay enables a cache for the indirection between the child and parent document.
Vespa scales with the number of hits the query retrieves per node/search thread, which need to be evaluated by the first-phase ranking function. Read more on phased ranking. Using phased ranking enables spending more resources in a second-phase ranking step than in the first phase. The first phase should focus on getting decent recall (retrieving the relevant documents in the top-k), while the second phase is used to tune precision.
For text search applications, consider using the WAND query operator - WAND can efficiently (sublinear) find the top-k documents using an inner scoring function.
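As an illustrative sketch (field names and the targetHits value are hypothetical), a weakAnd query - one of the WAND family of operators - retrieving the top candidates could look like:

select * from sources * where {targetHits: 100}weakAnd(title contains "cat", body contains "video")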
Several real-world search use cases are built around limiting or filtering based on a set filter. If the contents of a field in the document matches any of the values in the query set, it should be retrieved. E.g. searching data for a set of users:
select * from sources * where user_id = 1 or user_id = 2 or user_id = 3 or user_id = 4 or user_id = 5 ...
For OR filters over the same field it is strongly recommended using the in query operator instead. It has considerably better performance than plain OR for set filtering:
select * from sources * where user_id in (1, 2, 3, 4, 5)
Attribute fields used like the above, without other stronger query terms, should have fast-search and rank: filter. If there is a large number of unique values in the field, it is also faster to use a hash dictionary instead of btree, the default dictionary data structure for attribute fields with fast-search:
field user_id type long {
    indexing: summary | attribute
    attribute: fast-search
    dictionary: hash
    rank: filter
}
For string fields, we also need to include match settings when using the hash dictionary:

field user_id_str type string {
    indexing: summary | attribute
    attribute: fast-search
    match: cased
    rank: filter
    dictionary {
        hash
        cased
    }
}
With 10M unique user_ids in the dictionary and 1000 users searched per query, the btree dictionary would need 1000 lookups, each costing O(log 10M), while the hash dictionary needs 1000 lookups at O(1) each. Still, the btree dictionary offers more flexibility in terms of match settings.
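The lookup-cost difference can be made concrete with a back-of-the-envelope calculation (a sketch; real costs depend on constants, caching, and memory layout):

```python
import math

def btree_lookup_cost(num_unique: int, num_terms: int) -> int:
    # Each btree lookup costs roughly log2(n) comparisons (rounded up)
    return num_terms * math.ceil(math.log2(num_unique))

def hash_lookup_cost(num_terms: int) -> int:
    # Hash lookups are O(1) per term
    return num_terms

print(btree_lookup_cost(10_000_000, 1000))  # 24000 comparisons
print(hash_lookup_cost(1000))               # 1000 lookups
```

So for this workload the hash dictionary does roughly 24x less dictionary work per query.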
The in query set filtering approach can be used in combination with hybrid TAAT evaluation to further improve performance - see the hybrid TAAT/DAAT section. Also see the dictionary schema reference.
If queries request many (thousands) of hits from a content cluster with few content nodes, increasing the summary cache might reduce latency and cost.
Using explicit document summaries, Vespa can support memory-only summary fetching if all fields referenced in the document summary are defined with attribute. Dedicated in-memory summaries avoid (potential) disk reads and summary chunk decompression, as Vespa document summaries are stored in compressed chunks.
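As an illustrative sketch (summary name and fields are hypothetical, and both fields are assumed to be attribute fields), such a dedicated document summary could look like:

document-summary attribute-only {
    summary title {
        source: title
    }
    summary popularity {
        source: popularity
    }
}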
See also the practical search performance guide on hits fetching.
When selecting an attribute field type with performance in mind, refer to attributes for details.
The ranking workload can be significant for large tensors - it is important to have an understanding of both the potential memory and computational cost for each query.
Assume the dot product of two tensors with 1000 values of 8 bytes each, as in tensor<double>(x[1000]). With one query tensor and one document tensor, the dot product is sum(query(tensor1) * attribute(tensor2)). Given a Haswell CPU architecture, where the theoretical upper memory bandwidth is 68 GB/sec, this gives 68 GB/sec / 8 KB ≈ 8.5M ranking evaluations/sec. In other words, for an index of 1M documents, roughly 8-9 queries per second before becoming memory bound.
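The arithmetic behind this estimate (a back-of-the-envelope sketch; it counts only the bytes of the document tensor read per evaluation):

```python
# Memory-bandwidth bound for the dot product example above
bytes_per_doc_tensor = 1000 * 8  # tensor<double>(x[1000]) -> 8 KB per document
bandwidth = 68e9                 # Haswell theoretical peak, bytes/sec

evals_per_sec = bandwidth / bytes_per_doc_tensor
docs = 1_000_000

print(evals_per_sec)         # 8500000.0 evaluations/sec
print(evals_per_sec / docs)  # 8.5 queries/sec over a 1M document index
```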
See below for using smaller cell value types, and read more about quantization.
When using tensor types with at least one mapped dimension (sparse or mixed tensors), attribute: fast-rank can be used to optimize the tensor attribute for ranking expression evaluation, at the cost of using more memory. This is a good tradeoff if benchmarking indicates significant latency improvements with fast-rank.
When optimizing ranking functions with tensors, try to avoid temporary objects. Use the Tensor Playground to evaluate what the expressions map to, using the execution details to list the detailed steps - find examples below.
To save both memory and compute resources, use multiphase ranking. In short, use less expensive ranking evaluations to find the most promising candidates, then a high-precision evaluation for the top-k candidates.
The blog post series on Building Billion-Scale Vector Search is a good read on this.
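A hypothetical sketch of such a multiphase rank profile (field names, expressions, and the rerank-count value are illustrative):

rank-profile cheap-then-precise {
    first-phase {
        expression: attribute(popularity)
    }
    second-phase {
        rerank-count: 100
        expression: attribute(popularity) + log10(bm25(title))
    }
}

The cheap first-phase expression ranks all matches; only the top 100 hits per node are re-ranked with the more expensive second-phase expression.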
Type | Description |
---|---|
double | The default 64-bit floating-point format. |
float | The 32-bit floating-point format. |
bfloat16 | This type has the same range as a normal 32-bit float but only 8 bits of precision, and can be thought of as a "float with lossy compression" - see Wikipedia. If memory (or memory bandwidth) is a concern, change the most space-consuming tensors to use the bfloat16 cell type. When doing calculations, bfloat16 cells are converted to float. In some cases, having tensors with bfloat16 cells can therefore be slower than float. |
int8 | If using machine learning to generate a model with data quantization, one can target the int8 cell type. Note: If the input for an int8 cell is not directly representable, the resulting cell value is undefined, so take care to only input numbers in the [-128, 127] range. It's also possible to use int8 to represent binary data, e.g. for hamming distance. |
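To build intuition for bfloat16's lossy precision, one can emulate it by keeping only the top 16 bits of a float32 representation (a truncation approximation; real bfloat16 conversion typically rounds to nearest):

```python
import struct

def to_bfloat16(x: float) -> float:
    # Emulate bfloat16: keep the top 16 bits (sign, 8-bit exponent,
    # 7 mantissa bits) of the float32 encoding and zero the rest.
    packed = struct.pack(">f", x)
    return struct.unpack(">f", packed[:2] + b"\x00\x00")[0]

print(to_bfloat16(1.0))     # 1.0 - exactly representable
print(to_bfloat16(1.2345))  # 1.234375 - low mantissa bits lost
```

The range is unchanged (same exponent width as float32), but the mantissa loss is visible on most values.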
The following is a primer on inner/outer products and execution details:
tensor a | tensor b | product | sum | comment |
---|---|---|---|---|
tensor(x[3]):[1.0, 2.0, 3.0] | tensor(x[3]):[4.0, 5.0, 6.0] | tensor(x[3]):[4.0, 10.0, 18.0] | 32 | Playground example. The dimension name and size are the same in both tensors - this is an inner product, with a scalar result. |
tensor(x[3]):[1.0, 2.0, 3.0] | tensor(y[3]):[4.0, 5.0, 6.0] | tensor(x[3],y[3]):[ [4.0, 5.0, 6.0], [8.0, 10.0, 12.0], [12.0, 15.0, 18.0] ] | 90 | Playground example. The dimension size is the same in both tensors, but the dimensions have different names -> this is an outer product, and the result is a two-dimensional tensor. |
tensor(x[3]):[1.0, 2.0, 3.0] | tensor(x[2]):[4.0, 5.0] | undefined | | Playground example. Two tensors with the same dimension name but different lengths -> undefined. |
tensor(x[3]):[1.0, 2.0, 3.0] | tensor(y[2]):[4.0, 5.0] | tensor(x[3],y[2]):[ [4.0, 5.0], [8.0, 10.0], [12.0, 15.0] ] | 54 | Playground example. Two tensors with different dimension names and sizes -> this is an outer product, and the result is a two-dimensional tensor. |
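The inner- and outer-product sums in the table can be verified with plain Python lists standing in for dense tensors:

```python
a = [1.0, 2.0, 3.0]
b = [4.0, 5.0, 6.0]

# Inner product: same dimension name and size -> elementwise multiply, then sum
inner = sum(x * y for x, y in zip(a, b))
print(inner)  # 32.0

# Outer product: different dimension names -> all pairwise products
outer = [[x * y for y in b] for x in a]
outer_sum = sum(sum(row) for row in outer)
print(outer_sum)  # 90.0 (equals sum(a) * sum(b))
```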
Inner product - observe it is optimized into DenseDotProductFunction with no temporary objects. The outer product is parsed into a tensor multiplication (DenseSimpleExpandFunction), followed by a Reduce operation:
Note that an inner product can also be run on mapped tensors (Playground example):
sum(model_id * models, m_id)
tensor name | tensor type |
---|---|
model_id | tensor(m_id{}) |
models | tensor(m_id{}, x[3]) |
Using a mapped dimension to select an indexed tensor can be considered a mapped lookup. This is similar to creating a slice, but optimized into a single MappedLookup - see the Tensor Playground example.
sum(query(model_id) * model_weights * model_features)
tensor name | tensor type |
---|---|
query(model_id) | tensor(model{}) |
model_weights | tensor(model{}, feature{}) |
model_features | tensor(feature{}) |
Three-way mapped (sparse) dot product: Tensor Playground
sum(query(model_id) * model_weights * model_features)
tensor name | tensor type |
---|---|
query(model_id) | tensor(model{}) |
model_weights | tensor(model{}, feature[2]) |
model_features | tensor(feature[2]) |
Three-way mapped (mixed) dot product: Tensor Playground
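The three-way mapped dot product above can be sketched in plain Python, with dicts standing in for mapped tensors (names and values are illustrative):

```python
def three_way_dot(query_tensor, weights, features):
    # Sum over the model and feature dimensions:
    # query(model_id)[m] * model_weights[m][f] * model_features[f]
    return sum(
        qv * wv * features.get(f, 0.0)
        for m, qv in query_tensor.items()
        for f, wv in weights.get(m, {}).items()
    )

q = {"m1": 1.0}
w = {"m1": {"f1": 2.0, "f2": 3.0}, "m2": {"f1": 10.0}}
f = {"f1": 4.0, "f2": 1.0}
print(three_way_dot(q, w, f))  # 11.0 = 1*2*4 + 1*3*1
```

Note how the sparse query tensor acts as a selector: the "m2" row of the weights never contributes because q has no "m2" cell.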