Embedding

A common technique is to map unstructured data - say, text or images - to points in an abstract vector space and then do computation in that space. For example, retrieve similar data by finding nearby points in the vector space, or use the vectors as input to a neural net. This mapping is referred to as embedding. Read more about embedding and embedding management in this blog post.

Embedding vectors can be sent to Vespa in queries and writes:

document- and query-embeddings

Alternatively, you can use the embed function to generate the embeddings inside Vespa to reduce vector transfer cost and make clients simpler:

Vespa's embedding feature, creating embeddings from text

Configuring embedders

Embedders are components which must be configured in your services.xml. Components are shared and can be used across schemas.

<container id="default" version="1.0">
    <component id="hf-embedder" type="hugging-face-embedder">
        <transformer-model path="my-models/model.onnx"/>
        <tokenizer-model path="my-models/tokenizer.json"/>
    </component>
    ...
</container>

You can write your own, or use embedders provided in Vespa.

Embedding a query text

Where you would otherwise supply a tensor in a query request, you can (with an embedder configured) instead supply any text enclosed in embed():

input.query(q)=embed(myEmbedderId, "Hello%20world")

Both single and double quotes are permitted, and if you have only configured a single embedder, you can skip the embedder id argument and the quotes.
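
For illustration, with an embedder configured under the id myEmbedderId, the following forms supply the same text (the last form assumes it is the only configured embedder):

input.query(q)=embed(myEmbedderId, "Hello world")
input.query(q)=embed(myEmbedderId, 'Hello world')
input.query(q)=embed(Hello world)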

The text argument can be supplied by a referenced parameter instead, using the @parameter syntax:

{
    "yql": "select * from doc where {targetHits: 10}nearestNeighbor(embedding_field, query_embedding)",
    "text": "my text to embed",
    "input.query(query_embedding)": "embed(@text)"
}

Remember that regardless of whether you are using embedders, input tensors must always be defined in the schema's rank-profile.
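
For illustration, a minimal rank profile declaring the query_embedding input used in the request above could look like this; the float cell type and the 384 dimension are assumptions that must match your embedding field:

rank-profile default {
    inputs {
        query(query_embedding) tensor<float>(x[384])
    }
    first-phase {
        expression: closeness(field, embedding_field)
    }
}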

Embedding a document field

Use the embed function of the indexing language to convert strings into embeddings:

schema doc {

    document doc {

        field title type string {
            indexing: summary | index
        }

    }

    field embeddings type tensor<bfloat16>(x[384]) {
        indexing {
            input title | embed embedderId | attribute | index
        }
    }

}

Notice that the embedding field is defined outside the document clause in the schema. If you have only configured a single embedder, you can skip the embedder id argument.
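
For example, with only one embedder configured, the field above can be written without the id (a sketch):

field embeddings type tensor<bfloat16>(x[384]) {
    indexing: input title | embed | attribute | index
}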

The input field can also be an array, where the output becomes a rank 2 tensor, see this blog post:

schema doc {

    document doc {

        field chunks type array<string> {
            indexing: index | summary
        }

    }

    field embeddings type tensor<bfloat16>(p{},x[5]) {
        indexing: input chunks | embed embedderId | attribute | index
    }

}

Provided embedders

Vespa provides several embedders as part of the platform.

Huggingface Embedder

An embedder using any Huggingface tokenizer, including multilingual tokenizers, to produce tokens, which are then input to a supplied transformer model in ONNX model format:

<container id="default" version="1.0">
    <component id="hf-embedder" type="hugging-face-embedder">
        <transformer-model path="my-models/model.onnx"/>
        <tokenizer-model path="my-models/tokenizer.json"/>
    </component>
    ...
</container>

The huggingface-embedder supports all Huggingface tokenizer implementations.

Use path to supply the model files from the application package, url to supply them from a remote server, or model-id to use a model supplied by Vespa Cloud; see the model config reference.

<container id="default" version="1.0">
    <component id="e5" type="hugging-face-embedder">
        <transformer-model url="https://huggingface.co/intfloat/e5-small-v2/resolve/main/model.onnx"/>
        <tokenizer-model url="https://huggingface.co/intfloat/e5-small-v2/raw/main/tokenizer.json"/>
    </component>
    ...
</container>
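
On Vespa Cloud, a model-id can be used instead. The following is a sketch; the ids shown are placeholders to be checked against the model config reference:

<container id="default" version="1.0">
    <component id="e5" type="hugging-face-embedder">
        <!-- placeholder model-id values; see the model config reference for the models actually provided -->
        <transformer-model model-id="e5-small-v2"/>
        <tokenizer-model model-id="e5-small-v2-vocab"/>
    </component>
    ...
</container>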

See the reference for all configuration parameters.

Huggingface embedder models

The following are examples of text embedding models that can be used with the hugging-face-embedder, and their output tensor dimensionality. The resulting tensor type can be float, bfloat16, or int8 using binarized quantization. See the blog post Combining matryoshka with binary-quantization for more examples on using the Huggingface embedder with binary quantization.

The following models use pooling-strategy mean, which is the default pooling-strategy:

The following models are useful for binarization and Matryoshka chomping, where only the first k values are retained. When enabling binarization with int8, use distance-metric hamming (a schema sketch follows the list):

  • mxbai-embed-large-v1 produces tensor<float>(x[1024]). This model is also useful for binarization which can be triggered by using destination tensor<int8>(x[128]). Use pooling-strategy cls and normalize true.
  • nomic-embed-text-v1.5 produces tensor<float>(x[768]). This model is also useful for binarization which can be triggered by using destination tensor<int8>(x[96]). Use normalize true.
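
A schema sketch of the binarized setup; the text input field, the embedder id and the dimensionality are illustrative:

field embedding_binary type tensor<int8>(x[128]) {
    indexing: input text | embed mxbai | attribute | index
    attribute {
        distance-metric: hamming
    }
}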

Snowflake arctic model series:

All of these example text embedding models can be used in combination with Vespa's nearest neighbor search using the appropriate distance-metric. Notice that in order to use the distance-metric: prenormalized-angular, the normalize configuration must be set to true.
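
For illustration, normalization is enabled on the embedder component as sketched below (extending the hf-embedder configuration above); the embedding field's attribute then declares distance-metric: prenormalized-angular:

<component id="hf-embedder" type="hugging-face-embedder">
    <transformer-model path="my-models/model.onnx"/>
    <tokenizer-model path="my-models/tokenizer.json"/>
    <normalize>true</normalize>
</component>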

Check the Massive Text Embedding Benchmark (MTEB) benchmark and MTEB leaderboard for help with choosing an embedding model.

Bert embedder

An embedder using the WordPiece embedder to produce tokens, which are then input to a supplied ONNX model in the format expected by a BERT base model:

<container version="1.0">
  <component id="myBert" type="bert-embedder">
    <transformer-model url="https://huggingface.co/intfloat/e5-small-v2/resolve/main/model.onnx"/>
    <tokenizer-vocab url="https://huggingface.co/intfloat/e5-small-v2/raw/main/vocab.txt"/>
    <max-tokens>128</max-tokens>
  </component>
</container>
  • The transformer-model specifies the embedding model in ONNX format. See exporting models to ONNX for how to export embedding models from Huggingface to a compatible ONNX format.
  • The tokenizer-vocab specifies the Huggingface vocab.txt file, with one valid token per line. Note that the Bert embedder does not support the tokenizer.json formatted tokenizer configuration files. This means that tokenization settings like max tokens should be set explicitly.

The Bert embedder is limited to English (WordPiece) and BERT-styled transformer models with three model inputs (input_ids, attention_mask, token_type_ids). Prefer using the Huggingface Embedder instead of the Bert embedder.

See configuration reference for all configuration options.

ColBERT embedder

An embedder supporting ColBERT models. The ColBERT embedder maps text to token embeddings, representing a text as multiple contextualized embeddings. This produces better quality than reducing all tokens into a single vector.

Read more about ColBERT and the ColBERT embedder in the blog posts Announcing the Vespa ColBERT embedder and Announcing Vespa Long-Context ColBERT.

<container version="1.0">
    <component id="colbert" type="colbert-embedder">
      <transformer-model url="https://huggingface.co/colbert-ir/colbertv2.0/resolve/main/model.onnx"/>
      <tokenizer-model url="https://huggingface.co/colbert-ir/colbertv2.0/raw/main/tokenizer.json"/>
      <max-query-tokens>32</max-query-tokens>
      <max-document-tokens>128</max-document-tokens>
    </component>
</container>
  • The transformer-model specifies the ColBERT embedding model in ONNX format. See exporting models to ONNX for how to export embedding models from Huggingface to a compatible ONNX format. The vespa-engine/col-minilm page on the HF model hub has a detailed example of how to export a ColBERT checkpoint to ONNX format for accelerated inference.
  • The tokenizer-model specifies the Huggingface tokenizer.json formatted file. See HF loading tokenizer from a json file.
  • The max-query-tokens parameter controls the maximum number of query text tokens that are represented as vectors, and similarly, max-document-tokens controls the document side. These parameters can be used to control resource usage.

See configuration reference for all configuration options and defaults.

The ColBERT token embeddings are represented as a mixed tensor: tensor<float>(token{}, x[dim]) where dim is the vector dimensionality of the contextualized token embeddings. The colbert model checkpoint on Hugging Face hub uses 128 dimensions.

The embedder destination tensor is defined in the schema, and depending on the target tensor cell precision, the embedder can compress the representation: if the target tensor cell type is int8, the ColBERT embedder compresses the document token embeddings with binarization to 1 bit per value, reducing the token embedding storage footprint by 32x compared to using float. The query representation is not compressed with binarization. The following demonstrates two ways to use the ColBERT embedder in the document schema to embed a document field.

schema doc {
  document doc {
    field text type string {..}
  }
  field colbert_tokens type tensor<float>(token{}, x[128]) {
    indexing: input text | embed colbert | attribute
  }
  field colbert_tokens_compressed type tensor<int8>(token{}, x[16]) {
    indexing: input text | embed colbert | attribute
  }
}

The first field, colbert_tokens, stores the original representation since the tensor destination cell type is float. The second field, colbert_tokens_compressed, is compressed. When using int8 tensor cell precision, divide the original vector size by 8 (128/8 = 16).

You can also use bfloat16 instead of float to reduce storage by 2x compared to float.

field colbert_tokens type tensor<bfloat16>(token{}, x[128]) {
  indexing: input text | embed colbert | attribute
}

You can also use the ColBERT embedder with an array of strings (representing chunks):

schema doc {
  document doc {
    field chunks type array<string> {..}
  }
  field colbert_tokens_compressed type tensor<int8>(chunk{}, token{}, x[16]) {
    indexing: input chunks | embed colbert chunk | attribute
  }
}

Here, we need a second mapped dimension in the target tensor, and a second argument to embed, telling the ColBERT embedder the name of the tensor dimension to use for the chunks.

Notice that the examples above did not specify the index function for creating an HNSW index. The ColBERT representation is intended to be used as a ranking model, not for retrieval with Vespa's nearestNeighbor query operator; for retrieval you can e.g. use a document-level vector and/or lexical matching.

To reduce memory footprint, use paged attributes.
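
For example, a paged variant of the compressed field above (a sketch):

field colbert_tokens_compressed type tensor<int8>(token{}, x[16]) {
  indexing: input text | embed colbert | attribute
  attribute: paged
}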

ColBERT ranking

See sample-applications for how to use ColBERT in ranking with variants of the MaxSim similarity operator expressed using Vespa tensor computation expressions. See: colbert and colbert-long.
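
As a rough sketch of such a MaxSim expression (the query tensor name qt, the dimension names querytoken and token, and the 128 dimensions are assumptions; see the sample applications for the complete profiles):

rank-profile colbert-maxsim {
  inputs {
    query(qt) tensor<float>(querytoken{}, x[128])
  }
  first-phase {
    expression {
      sum(
        reduce(
          sum(query(qt) * attribute(colbert_tokens), x),
          max, token
        ),
        querytoken
      )
    }
  }
}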

SPLADE embedder

An embedder supporting SPLADE models. The SPLADE embedder maps text to a mapped tensor, representing a text as a sparse vector of unique tokens and their weights.

<container version="1.0">
    <component id="splade" type="splade-embedder">
      <transformer-model path="models/splade_model.onnx"/>
      <tokenizer-model path="models/tokenizer.json"/>
    </component>
</container>

See configuration reference for all configuration options and defaults.

The splade token weights are represented as a mapped tensor: tensor<float>(token{}).

The embedder destination tensor is defined in the schema. The following demonstrates how to use the SPLADE embedder in the document schema to embed a document field.

schema doc {
  document doc {
    field text type string {..}
  }
  field splade_tokens type tensor<float>(token{}) {
    indexing: input text | embed splade | attribute
  }
}

You can also use the SPLADE embedder with an array of strings (representing chunks). Here, also using lower tensor cell precision bfloat16:

schema doc {
  document doc {
    field chunks type array<string> {..}
  }
  field splade_tokens type tensor<bfloat16>(chunk{}, token{}) {
    indexing: input chunks | embed splade chunk | attribute
  }
}

Here, we need a second mapped dimension in the target tensor, and a second argument to embed, telling the splade embedder the name of the tensor dimension to use for the chunks.

To reduce memory footprint, use paged attributes.

SPLADE ranking

See the splade sample application for how to use SPLADE in ranking, including how to use the SPLADE embedder with an array of strings (representing chunks).
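
As a rough sketch, SPLADE scoring is a sparse dot product between the query-side and document-side token weights (the query tensor name qt and the profile name are assumptions; see the sample application for the complete setup):

rank-profile splade {
  inputs {
    query(qt) tensor<float>(token{})
  }
  first-phase {
    expression: sum(query(qt) * attribute(splade_tokens))
  }
}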

Embedder performance

Embedding inference can be resource intensive for larger embedding models. Factors that impact performance:

  • The embedding model parameters. Larger models are more expensive to evaluate than smaller models.
  • The sequence input length. Transformer models scale quadratically with input length. Since queries are typically shorter than documents, embedding queries is less computationally intensive than embedding documents.
  • The number of inputs to the embed call. When encoding arrays, consider how many inputs a single document can have. For CPU inference, increasing feed timeout settings might be required when documents have many embed inputs.

Using GPU, especially for longer sequence lengths (documents), can dramatically improve performance and reduce cost. See the blog post on GPU-accelerated ML inference in Vespa Cloud. With GPU-accelerated instances, using fp16 models instead of fp32 can increase throughput by as much as 3x.

Metrics

Vespa's built-in embedders emit metrics for computation time and token sequence length. These metrics are prefixed with embedder. and listed in the Container Metrics reference documentation. Third-party embedder implementations may inject the ai.vespa.embedding.Embedder.Runtime component to easily emit the same predefined metrics, although emitting custom metrics is perfectly fine.

Sample applications

These sample applications use embedders:

Tricks and tips

Various tricks that are useful with embedders.

Adding a fixed string to a query text

If you need to add a standard wrapper or a prefix instruction around the input text you want to embed, use parameter substitution to supply the text, as in embed(myEmbedderId, @text), and let the parameter (text here) be defined in a query profile, which in turn uses value substitution to place another request-supplied text value within it. The following is a concrete example where queries should get a prefix instruction before being embedded into a vector representation, defining a text input field in search/query-profiles/default.xml:

  <query-profile id="default">
      <field name="text">Represent this sentence for searching relevant passages: %{user_query}</field>
  </query-profile>

Then, at query request time, we can pass user_query as a request parameter; this parameter is used to produce the text value, which is then embedded.

  {
    "yql": "select * from doc where userQuery() or ({targetHits: 100}nearestNeighbor(embedding, e))",
    "input.query(e)": "embed(mxai, @text)",
    "user_query": "space contains many suns"
  }

The text that is embedded by the embedder is then: Represent this sentence for searching relevant passages: space contains many suns.

Concatenating input fields

You can concatenate values in indexing using ".", and handle missing field values using choice (||), to produce a single input for an embedder:

schema doc {

    document doc {

        field title type string {
            indexing: summary | index
        }

        field body type string {
            indexing: summary | index
        }

    }

    field embeddings type tensor<bfloat16>(x[384]) {
        indexing {
            (input title || "") . " " . (input body || "") | embed embedderId | attribute | index
        }
        index: hnsw
    }

}

You can also use concatenation to add a fixed preamble to the string to embed.
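
A sketch of such a fixed preamble, reusing the concatenation example above (the "passage: " prefix is illustrative):

field embeddings type tensor<bfloat16>(x[384]) {
    indexing {
        "passage: " . (input title || "") . " " . (input body || "") | embed embedderId | attribute | index
    }
    index: hnsw
}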

Combining with foreach

The indexing expression can also use for_each and include other document fields. For example, the E5 family of embedding models uses instructions along with the input. The following expression prefixes the input with passage: followed by a concatenation of the title and a text chunk.

schema doc {

    document doc {

        field title type string {
            indexing: summary | index
        }

        field chunks type array<string> {
            indexing: index | summary
        }

    }
    field embedding type tensor<bfloat16>(p{}, x[384]) {
        indexing {
            input chunks |
                for_each {
                    "passage: " . (input title || "") . " " . ( _ || "")
                } | embed e5 | attribute | index
        }
        attribute {
            distance-metric: prenormalized-angular
        }
    }
}

See Indexing language execution value for details.