
Embedding Reference

Reference configuration for embedders.

Model config reference

Embedder models use the model type configuration. The model type configuration accepts the attributes model-id, url, and path. More than one of these can be specified for a single config value, in which case the one used depends on the deployment environment:

  • If a model-id is specified and the application is deployed on Vespa Cloud, the model-id is used.
  • Otherwise, if a url is specified, it is used.
  • Otherwise, path is used.

When using path, the model files must be supplied in the Vespa application package.
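
For example, a transformer-model can specify both a model-id, used when deployed on Vespa Cloud, and a url used in self-hosted deployments. This is a sketch; the model-id and url values are illustrative:

<transformer-model model-id="minilm-l6-v2"
                   url="https://example.com/models/my-model.onnx"/>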

Huggingface Embedder

An embedder using any Huggingface tokenizer, including multilingual tokenizers, to produce tokens which are then input to a supplied transformer model in ONNX format.

The Huggingface embedder is configured in services.xml, within the container tag:

<container id="default" version="1.0">
    <component id="hf-embedder" type="hugging-face-embedder">
        <transformer-model path="my-models/model.onnx"/>
        <tokenizer-model path="my-models/tokenizer.json"/>
        <prepend>
          <query>query:</query>
          <document>passage:</document>
        </prepend>
    </component>
    ...
</container>

Huggingface embedder reference config

In addition to embedder ONNX parameters:

Name Occurrence Description Type Default
transformer-model One Use to point to the transformer ONNX model file model-type N/A
tokenizer-model One Use to point to the tokenizer.json Huggingface tokenizer configuration file model-type N/A
max-tokens One The maximum number of tokens accepted by the transformer model numeric 512
transformer-input-ids One The name or identifier for the transformer input IDs string input_ids
transformer-attention-mask One The name or identifier for the transformer attention mask string attention_mask
transformer-token-type-ids One The name or identifier for the transformer token type IDs. If the model does not use token_type_ids use <transformer-token-type-ids/> string token_type_ids
transformer-output One The name or identifier for the transformer output string last_hidden_state
pooling-strategy One How the output vectors of the ONNX model are pooled to obtain a single vector representation. Valid values are mean and cls string mean
normalize One A boolean indicating whether to normalize the output embedding vector to unit length (length 1). Useful for prenormalized-angular distance-metric boolean false
prepend Optional Instructions that are prepended to the text input before tokenization and inference. Useful for models that have been trained with specific prompt instructions.
  • Element <query> - Optional instruction prepended to query text.
  • Element <document> - Optional instruction prepended to document text.
    <prepend>
      <query>query:</query>
      <document>passage:</document>
    </prepend>
Optional <query> and <document> elements N/A
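
A sketch of a hugging-face-embedder component combining several of the options above; the file paths and values are illustrative:

<component id="hf-embedder" type="hugging-face-embedder">
    <transformer-model path="my-models/model.onnx"/>
    <tokenizer-model path="my-models/tokenizer.json"/>
    <max-tokens>384</max-tokens>
    <pooling-strategy>cls</pooling-strategy>
    <normalize>true</normalize>
</component>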

Bert embedder

The Bert embedder is configured in services.xml, within the container tag:

<container version="1.0">
  <component id="myBert" type="bert-embedder">
    <transformer-model path="models/e5-small-v2.onnx"/>
    <tokenizer-vocab url="https://huggingface.co/intfloat/e5-small-v2/raw/main/vocab.txt"/>
  </component>
</container>

Bert embedder reference config

In addition to embedder ONNX parameters:

Name Occurrence Description Type Default
transformer-model One Use to point to the transformer ONNX model file model-type N/A
tokenizer-vocab One Use to point to the Huggingface vocab.txt tokenizer file with valid wordpiece tokens. Does not support tokenizer.json format. model-type N/A
max-tokens One The maximum number of tokens allowed in the input integer 384
transformer-input-ids One The name or identifier for the transformer input IDs string input_ids
transformer-attention-mask One The name or identifier for the transformer attention mask string attention_mask
transformer-token-type-ids One The name or identifier for the transformer token type IDs. If the model does not use token_type_ids use <transformer-token-type-ids/> string token_type_ids
transformer-output One The name or identifier for the transformer output string output_0
transformer-start-sequence-token One The start of sequence token numeric 101
transformer-end-sequence-token One The end of sequence token numeric 102
pooling-strategy One How the output vectors of the ONNX model are pooled to obtain a single vector representation. Valid values are mean and cls string mean

colbert embedder

The colbert embedder is configured in services.xml, within the container tag:

<container version="1.0">
  <component id="colbert" type="colbert-embedder">
    <transformer-model path="models/colbertv2.onnx"/>
    <tokenizer-model url="https://huggingface.co/colbert-ir/colbertv2.0/raw/main/tokenizer.json"/>
    <max-query-tokens>32</max-query-tokens>
    <max-document-tokens>256</max-document-tokens>
  </component>
</container>

The Vespa colbert implementation works with default configurations for transformer models that use WordPiece tokenization.

colbert embedder reference config

In addition to embedder ONNX parameters:

Name Occurrence Description Type Default
transformer-model One Use to point to the transformer ColBERT ONNX model file model-type N/A
tokenizer-model One Use to point to the tokenizer.json Huggingface tokenizer configuration file model-type N/A
max-tokens One Max length of token sequence the transformer-model can handle numeric 512
max-query-tokens One The maximum number of ColBERT query token embeddings. Queries are padded to this length. Must be lower than max-tokens numeric 32
max-document-tokens One The maximum number of ColBERT document token embeddings. Documents are not padded. Must be lower than max-tokens numeric 512
transformer-input-ids One The name or identifier for the transformer input IDs string input_ids
transformer-attention-mask One The name or identifier for the transformer attention mask string attention_mask
transformer-mask-token One The mask token id used for ColBERT query padding numeric 103
transformer-start-sequence-token One The start of sequence token id numeric 101
transformer-end-sequence-token One The end of sequence token id numeric 102
transformer-pad-token One The pad sequence token id numeric 0
query-token-id One The colbert query token marker id numeric 1
document-token-id One The colbert document token marker id numeric 2
transformer-output One The name or identifier for the transformer output string contextual

The Vespa colbert-embedder uses [unused0] (token id 1) as the query-token-id marker and [unused1] (token id 2) as the document-token-id marker. Document punctuation characters are filtered (not configurable); the following characters are removed: !"#$%&'()*+,-./:;<=>?@[\]^_`{|}~.

splade embedder reference config

In addition to embedder ONNX parameters:

Name Occurrence Description Type Default
transformer-model One Use to point to the transformer ONNX model file model-type N/A
tokenizer-model One Use to point to the tokenizer.json Huggingface tokenizer configuration file model-type N/A
term-score-threshold One An optional threshold to increase sparseness; tokens/terms with a score lower than this are not retained. numeric N/A
max-tokens One The maximum number of tokens accepted by the transformer model numeric 512
transformer-input-ids One The name or identifier for the transformer input IDs string input_ids
transformer-attention-mask One The name or identifier for the transformer attention mask string attention_mask
transformer-token-type-ids One The name or identifier for the transformer token type IDs. If the model does not use token_type_ids use <transformer-token-type-ids/> string token_type_ids
transformer-output One The name or identifier for the transformer output string logits
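
The splade embedder is configured in services.xml like the other embedders. A minimal sketch, assuming the component type splade-embedder and illustrative model file names:

<container version="1.0">
  <component id="splade" type="splade-embedder">
    <transformer-model path="models/splade.onnx"/>
    <tokenizer-model path="models/tokenizer.json"/>
    <term-score-threshold>1.0</term-score-threshold>
  </component>
</container>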

Huggingface tokenizer embedder

The Huggingface tokenizer embedder is configured in services.xml, within the container tag:

  <container version="1.0">
    <component id="tokenizer" type="hugging-face-tokenizer">
      <model url="https://huggingface.co/bert-base-uncased/raw/main/tokenizer.json"/>
    </component>
  </container>

Huggingface tokenizer reference config

Name Occurrence Description Type Default
model One to many Use to point to the tokenizer.json Huggingface tokenizer configuration file. Also supports language, which is only relevant if one wants to tokenize differently based on the document language. Use "unknown" for a model to be used for any language (i.e. by default). model-type N/A
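
A sketch of a tokenizer configured with per-language models, assuming language is given as an attribute on the model element; the second model path is illustrative:

  <container version="1.0">
    <component id="tokenizer" type="hugging-face-tokenizer">
      <model language="unknown" url="https://huggingface.co/bert-base-uncased/raw/main/tokenizer.json"/>
      <model language="ja" path="models/tokenizer-ja.json"/>
    </component>
  </container>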

Embedder ONNX reference config

Vespa uses ONNX Runtime to accelerate inference of embedding models. These parameters are valid for the Bert, Huggingface, colbert, and splade embedders.

Name Occurrence Description Type Default
onnx-execution-mode One Low level ONNX execution mode. Valid values are parallel or sequential. Only relevant for inference on CPU. See ONNX runtime documentation on threading. string sequential
onnx-interop-threads One Low level ONNX setting. Only relevant for inference on CPU. numeric 1
onnx-intraop-threads One Low level ONNX setting. Only relevant for inference on CPU. numeric 4
onnx-gpu-device One The GPU device to run the model on. See configuring GPU for Vespa container image. Use -1 to not use GPU for the model, even if the instance has available GPUs. numeric 0
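
For example, to tune CPU threading and pin a Huggingface embedder to a GPU, these elements go inside the component; the values shown are illustrative:

<component id="hf-embedder" type="hugging-face-embedder">
    <transformer-model path="my-models/model.onnx"/>
    <tokenizer-model path="my-models/tokenizer.json"/>
    <onnx-execution-mode>parallel</onnx-execution-mode>
    <onnx-intraop-threads>8</onnx-intraop-threads>
    <onnx-gpu-device>0</onnx-gpu-device>
</component>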

SentencePiece embedder

A native Java implementation of SentencePiece. SentencePiece breaks text into chunks independent of spaces, which is robust to misspellings and works with CJK languages. Prefer the Huggingface tokenizer embedder over this for better compatibility with Huggingface models.

This is suitable to use in conjunction with custom components, or the resulting tensor can be used in ranking.

To use the SentencePiece embedder, add it to services.xml:

  <container version="1.0">
    <component id="mySentencePiece"
             class="com.yahoo.language.sentencepiece.SentencePieceEmbedder"
             bundle="linguistics-components">
      <config name="language.sentencepiece.sentence-piece">
          <model>
              <item>
                <language>unknown</language>
                <path>model/en.wiki.bpe.vs10000.model</path>
              </item>
          </model>
      </config>
    </component>
  </container>
  

See the options available for configuring SentencePiece in the full configuration definition.

WordPiece embedder

A native Java implementation of WordPiece, which is commonly used with BERT models. Prefer the Huggingface tokenizer embedder over this for better compatibility with Huggingface models.

This is suitable to use in conjunction with custom components, or the resulting tensor can be used in ranking.

To use the WordPiece embedder, add it to services.xml within the container tag:

  <container version="1.0">
    <component id="myWordPiece">
             class="com.yahoo.language.wordpiece.WordPieceEmbedder"
             bundle="linguistics-components">
      <config name="language.wordpiece.word-piece">
        <model>
          <item>
            <language>unknown</language>
            <path>models/bert-base-uncased-vocab.txt</path>
          </item>
        </model>
      </config>
    </component>
  </container>
  

See the options available for configuring WordPiece in the full configuration definition.


Using an embedder from Java

When writing custom Java components (such as Searchers or Document processors), use embedders you have configured by having them injected in the constructor, just like any other component:

import com.yahoo.component.annotation.Inject;
import com.yahoo.component.provider.ComponentRegistry;
import com.yahoo.language.process.Embedder;

class MyComponent {
  @Inject
  public MyComponent(ComponentRegistry<Embedder> embedders) {
    // embedders contains all the embedders configured in your services.xml
  }
}

See a concrete example of using an embedder in a custom searcher in LLMSearcher.

Custom Embedders

Vespa provides a Java interface for defining components which can provide embeddings of text: com.yahoo.language.process.Embedder.

To define a custom embedder in an application and make it usable by Vespa (see embedding a query text), implement this interface and add it as a component to services.xml:

<container version="1.0">
    <component id="myEmbedder"
      class="com.example.MyEmbedder"
      bundle="the name in artifactId in pom.xml">
        <config name='com.example.my-embedder'>
            <model model-id="minilm-l6-v2"/>
            <vocab path="files/vocab.txt"/>
            <myValue>foo</myValue>
        </config>
    </component>
</container>