Vespa provides a tensor data model and computation engine to support advanced computations over data. This guide explains the tensor support in Vespa. See also the tensor reference, and our published paper (pdf).
A tensor in Vespa is a data structure which generalizes scalars, vectors and matrices to any number of dimensions:
Tensors consist of a set of scalar valued cells, with each cell having a unique address. A cell's address is specified by its index or label in all the dimensions of that tensor. The number of dimensions in a tensor is the rank of the tensor. Each dimension can be either mapped or indexed. Mapped dimensions are sparse and allow any label (string identifier) designating their address, while indexed dimensions use dense numeric indices starting at 0.
Example: Using literal form, the tensor:
{
    {user:bob, movie:"Heat"}: 0.1,
    {user:alice, movie:"Frozen"}: 0.9,
    {user:carol, movie:"Top Gun"}: 0.3
}
has two dimensions named user and movie, and three cells with defined values.
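Such a mapped tensor can be pictured as a map from label addresses to scalar cells. A minimal illustrative sketch in Python (a mental model only, not Vespa's actual implementation):

```python
# Illustrative model of a mapped (sparse) rank-2 tensor:
# each cell address is a (user, movie) label pair mapping to a scalar value.
cells = {
    ("bob", "Heat"): 0.1,
    ("alice", "Frozen"): 0.9,
    ("carol", "Top Gun"): 0.3,
}

# Look up a cell by its full address; absent cells are implicitly zero.
def cell(user: str, movie: str) -> float:
    return cells.get((user, movie), 0.0)

print(cell("alice", "Frozen"))  # 0.9
print(cell("alice", "Heat"))    # 0.0 (no such cell)
```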
A tensor has a type, which consists of a set of dimension names, dimension types, and a tensor cell value type. The dimension name can be anything. This defines a 2-dimensional mapped tensor (sparse 2D matrix) of floats, as illustrated above:

tensor<float>(user{},movie{})

This is a 2-dimensional indexed tensor (a dense 1280x720 2D matrix), for example used to represent an image:

tensor<int8>(x[1280],y[720])

This is a 3-dimensional indexed tensor, for example used to represent spatial data:

tensor<float>(x[256], y[256], z[128])

This is a mixed tensor combining a mapped dimension and an indexed dimension, for example used to represent word2vec embeddings:

tensor<bfloat16>(word_id{},vec[300])

Another mixed tensor, used to represent the paragraph embeddings that power multi-vector indexing:

tensor<float>(paragraph{},embedding[768])
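A rough Python analogy for the mixed case: each sparse label addresses a whole dense vector, within which cells are addressed by numeric index starting at 0. This is only a mental model (with a toy vector length of 3 instead of 768):

```python
# Mixed tensor analogy: mapped dimension (paragraph) -> dense indexed vector.
# Toy embedding size of 3 here; the schema example above uses 768.
paragraph_embeddings = {
    "p1": [0.1, 0.2, 0.3],
    "p2": [0.4, 0.5, 0.6],
}

# A single cell is addressed by both the mapped label and the dense index,
# e.g. the cell {paragraph: p2, embedding: 0}:
value = paragraph_embeddings["p2"][0]
print(value)  # 0.4
```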
Vespa uses the tensor type information to optimize tensor expression execution plans at configuration time.
Document fields in schemas can be of any tensor type:
schema product {
    document product {
        field title type string {
            indexing: summary | index
        }
        field price type int {
            indexing: summary | attribute
        }
        field popularity type float {
            indexing: summary | attribute
        }
        field sales_score type tensor<float>(category{}) {
            indexing: summary | attribute
        }
        field embedding type tensor<float>(x[4]) {
            indexing: summary | attribute | index
            attribute {
                distance-metric: dotproduct
            }
        }
    }
}
The above schema exemplifies a product with two tensor fields. The sales_score tensor field represents how popular a product is per unique category, using the mapped dimension category; this information could be used when ranking products for a user query. The embedding tensor field represents a dense embedding vector representation of the product. To perform computations over a document tensor field in ranking, the field must be defined with attribute.
Tensors with the following types can be indexed with HNSW and searched efficiently using the nearestNeighbor query operator:
See nearest neighbor search and approximate nearest neighbor search.
An example product document in Vespa JSON format. This example uses the product category string as the mapped label key. The embedding tensor stores and indexes (HNSW) a dense embedding.
The above JSON feed format example uses short value form. Tensor fields can be represented using different JSON format verbosity. You can use partial updates of tensor fields with add, remove and modify tensor cells, or assign a completely new tensor value. From container components you can create and modify tensor values using the tensor Java API.
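The cell-level update operations can be pictured as simple map operations on a mapped tensor. A sketch of the semantics only (this is not the Vespa JSON update format):

```python
# Mapped tensor as label -> value; illustrative semantics of the
# add / modify / remove partial-update operations on tensor cells.
sales_score = {"Keyboards": 3.0, "Tablet Keyboard Cases": 5.0}

# add: insert a new cell
sales_score["Mice"] = 2.0

# modify (replace): overwrite an existing cell's value
sales_score["Keyboards"] = 4.0

# remove: delete a cell entirely
del sales_score["Mice"]

print(sales_score)  # {'Keyboards': 4.0, 'Tablet Keyboard Cases': 5.0}
```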
One can imagine the per-category sales scores being re-calculated outside of Vespa and updated regularly using partial updates, avoiding re-feeding or re-indexing the other fields.
Query input tensors must be defined in the schema rank-profile using inputs:
rank-profile product_ranking inherits default {
    inputs {
        query(q_category) tensor<float>(category{})
        query(q_embedding) tensor<float>(x[4])
    }
    .....
}
The above defines two query input tensors that we can reference in ranking expressions. With the query tensor name and type defined, you can either set the value programmatically using Query.getRanking().getFeatures().put("query(q_embedding)", myTensorInstance), or set it in the query request using the input.query(q_embedding) parameter, passing a tensor value.
An example query request using the Vespa CLI:
This query request example assumes that the user query has been mapped (classified) to the Tablet Keyboard Cases and Keyboards categories. Similarly, the user query has been mapped to a dense vector representation (query(q_embedding)), which is used as input to the nearestNeighbor query operator, expressed in the YQL query language.
The Vespa CLI uses HTTP GET and you can use the -v flag to see the curl GET equivalent. For POST query requests using JSON, the equivalent JSON is:
If the input query tensor used for the nearestNeighbor operator is not defined in the schema rank-profile, the request will fail:
"Expected 'query(q_embedding)' to be a tensor, but it is the string '[1,2,3,4]'"
Tensors can be used in making inference computations over documents that are matched by a query. These computations are expressed with ranking expressions in schema rank profiles. We can use this support to rank products by both the dense embedding dot product similarity and the category sales score.
rank-profile product_ranking inherits default {
    inputs {
        query(q_category) tensor<float>(category{})
        query(q_embedding) tensor<float>(x[4])
    }
    function p_sales_score() {
        expression: sum(query(q_category) * attribute(sales_score))
    }
    function p_embedding_score() {
        expression: closeness(field, embedding)
    }
    first-phase {
        expression: p_sales_score() + p_embedding_score()
    }
    match-features: p_sales_score() p_embedding_score()
}
The above profile uses a combination of two dot product calculations in the first-phase expression. The first-phase expression is invoked for all documents retrieved by the YQL query.
Since the embedding field uses distance-metric: dotproduct, the closeness score equals the dot product, so p_embedding_score could equivalently be written as:

function p_embedding_score() {
    expression: sum(query(q_embedding) * attribute(embedding))
}
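To make the two scores concrete, here is an illustrative Python sketch of the first-phase computation. The category sales values are invented for illustration; the sparse product only sums cells whose category labels occur in both tensors:

```python
# Illustrative first-phase score: sparse dot product over matching
# category labels, plus dense dot product over the embedding vectors.
# The sales_score values below are made up for this example.
q_category = {"Tablet Keyboard Cases": 1.0, "Keyboards": 1.0}
sales_score = {"Keyboards": 3.0, "Tablet Keyboard Cases": 5.0, "Mice": 7.0}

q_embedding = [1.0, 2.0, 3.0, 4.0]
embedding = [1.0, 2.0, 3.0, 4.0]

# sum(query(q_category) * attribute(sales_score)): only labels present
# in both tensors contribute; "Mice" is absent from the query and drops out.
p_sales_score = sum(v * sales_score.get(k, 0.0) for k, v in q_category.items())

# With distance-metric: dotproduct, the embedding score is a dot product.
p_embedding_score = sum(q * e for q, e in zip(q_embedding, embedding))

print(p_sales_score)                      # 8.0
print(p_embedding_score)                  # 30.0
print(p_sales_score + p_embedding_score)  # 38.0
```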
The full list of tensor functions are listed in the ranking expression reference. Using match-features, developers can debug, or log function outputs in the search result.
"matchfeatures": {
    "p_embedding_score": 30.0,
    "p_sales_score": 8.0
},
"documentid": "id:shopping:product::B0BFW5SXX2",
"title": "Keyboard Case for iPad Pro 12.9 inch"
If you need to make tensor computations from non-tensor single-valued attributes, arrays or weighted sets, you can convert them in a ranking expression:
function to_indexed_tensor() {
    expression: tensor(x[2]):[attribute(price), attribute(popularity)]
}
function to_mapped_tensor() {
    expression: tensor(x{}):{key1: attribute(price), key2: attribute(popularity)}
}
See also the tensorFromLabels and tensorFromWeightedSet rank features for converting arrays and weighted sets.
Converting non-tensor fields to tensors at query runtime has a performance penalty that is linear in the number of elements in the array/weighted set; prefer native tensor fields instead. The benefit of converting is that non-tensor fields like int, float and weightedset can be queried efficiently, whereas only specific tensor types can be searched efficiently using the nearestNeighbor query operator.
In addition to document tensors and query tensors, constant tensors can be put in the application package. This is useful for adding machine learned models. Example:
constants {
    my_tensor_constant tensor<float>(x[4]): file: constants/constant_tensor_file.json
}
This defines a new tensor with the type as defined and the contents distributed with the application package in the file constants/constant_tensor_file.json. The format of this file is the constant tensor JSON format:
To use this constant tensor in a ranking expression, wrap the constant name in constant(...):
rank-profile use_constant_tensor inherits product_ranking {
    constants {
        my_tensor_constant tensor<float>(x[4]): file: constants/constant_tensor_file.json
    }
    first-phase {
        expression: sum(query(q_embedding) * attribute(embedding) * constant(my_tensor_constant))
    }
}
Note that the rank profile inherits the inputs defined in the product_ranking profile. With the example data used, the first-phase expression returns 16.0, since:
"embedding": [1.0, 2.0, 3.0, 4.0],
"query(q_embedding)": [1.0, 2.0, 3.0, 4.0],
"constant(my_tensor_constant)": [0.0, 0.0, 0.0, 1.0]
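The 16.0 result can be checked by expanding the elementwise product and sum in plain Python, mirroring the ranking expression:

```python
# sum(query(q_embedding) * attribute(embedding) * constant(my_tensor_constant))
q = [1.0, 2.0, 3.0, 4.0]   # query(q_embedding)
e = [1.0, 2.0, 3.0, 4.0]   # attribute(embedding)
c = [0.0, 0.0, 0.0, 1.0]   # constant(my_tensor_constant)

# Elementwise product, then sum: only the last cell survives the zero mask,
# contributing 4.0 * 4.0 * 1.0 = 16.0.
score = sum(qi * ei * ci for qi, ei, ci in zip(q, e, c))
print(score)  # 16.0
```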
Tensors in Vespa cannot have strings as values, since the mathematical tensor functions would be undefined for such "tensors". However, you can still represent sets of strings in tensors by using the strings as labels in a mapped tensor dimension, with e.g. 1.0 as cell values. This lets you perform set operations on strings and similar without making those tensors incompatible with other tensors and normal tensor operations.
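For example, intersecting two string sets becomes an elementwise product followed by a sum. A sketch of the idea in Python:

```python
# String sets as mapped tensors: labels are the set members, values are 1.0.
set_a = {"red": 1.0, "green": 1.0, "blue": 1.0}
set_b = {"green": 1.0, "blue": 1.0, "yellow": 1.0}

# The elementwise product keeps only labels present in both tensors,
# so summing the products counts the intersection.
intersection_size = sum(v * set_b.get(k, 0.0) for k, v in set_a.items())
print(intersection_size)  # 2.0
```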
See also: