Vespa supports advanced ranking models through its tensor API. If your model is in the ONNX format, Vespa can import and use the model directly. You can use ONNX models with Vespa embedder functionality or in ranking.
Importing ONNX model files
Add the ONNX model file somewhere under the application
package. For instance, if your model file is my_model.onnx, you could
add it to the application package under a files directory, something like
this:
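├── files
│   └── my_model.onnx
├── schemas
│   └── my_schema.sd
└── services.xml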
An application package can have multiple ONNX models. To download models during deployment,
see deploying remote models.
Ranking with ONNX models
To make the above model available for ranking, you define the model in the
schema, and then you can refer to the model using the onnx (or onnxModel)
ranking feature:
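A sketch of what this can look like in a schema (the field name, input names, and tensor dimensions here are illustrative):

schema my_schema {
    document my_document {
        field my_field type tensor<float>(d0[1],d1[10]) {
            indexing: attribute | summary
        }
    }
    rank-profile my_rank_profile {
        inputs {
            query(my_query_tensor) tensor<float>(d0[1],d1[10])
        }
        onnx-model my_onnx_model {
            file: files/my_model.onnx
            input "model_input_0": attribute(my_field)
            input "model_input_1": query(my_query_tensor)
            output "model_output_0": output_name
        }
        first-phase {
            expression: sum( onnx(my_onnx_model).output_name )
        }
    }
}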
This defines the model called my_onnx_model. It is evaluated using the
onnx ranking feature.
This rank feature specifies which model to evaluate in the ranking expression
and, optionally, which output to use from the model.
The onnx-model section defines three things:
The model's location under the application package
The inputs to use for evaluation and where they should come from
The outputs to use for evaluation
In the example above, the model is located at files/my_model.onnx. This
model has two inputs. For inputs, the first name specifies the input as
named in the ONNX model file. The source is where the input should
come from. This can be one of:
A document field: attribute(field_name)
A query parameter: query(query_param)
A constant: constant(name)
A user-defined function: function_name
For outputs, the output name is the name used in Vespa to specify the output.
If this is omitted, the first output in the ONNX file will be used.
The output of a model is a tensor; however, the rank score must be a
single scalar value. In the example above we use sum to sum all the elements
of the tensor into a single value. You can also slice out parts of
the result using Vespa's tensor API.
For instance, if the output of the example above is a tensor with the two dimensions d0 and d1,
and you want to extract the first value, this can be expressed by:
onnx(my_onnx_model).output_name{d0:0,d1:0}
Note that inputs to the ONNX model must be tensors; scalars are not supported.
The input tensors must have dimension names starting with "d0" for the first
dimension, and increasing for each dimension (i.e. "d1", "d2", etc.). The
result of the evaluation will likewise be a tensor with names "d0", "d1",
etc.
The types of document and input tensors are specified in the schema, as shown above.
You can pass tensors in HTTP requests by using the HTTP parameter
"input.query(myTensor)" (assuming the ranking expression contains "query(myTensor)").
When training your model you will typically have an input that contains a
batch dimension, for instance an input with shape [-1, 784], where -1
denotes the batch dimension. During ONNX inference in ranking, Vespa uses batch size 1.
Limitations on model size and complexity
Note that in the rank profile example above, the ONNX model was evaluated
in the first phase. In general, these types of models are better suited for
the second-phase or global-phase expressions; see phased ranking.
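For instance, a rank profile that defers the model evaluation to the second phase could look like this (a sketch; it assumes my_onnx_model is declared at the schema level so it is visible to the profile, and nativeRank and the rerank count of 100 are illustrative):

rank-profile my_second_phase_profile {
    first-phase {
        expression: nativeRank
    }
    second-phase {
        rerank-count: 100
        expression: sum( onnx(my_onnx_model).output_name )
    }
}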
Vespa can only import ONNX models that are self-contained and below 2 GB in size (a protobuf limitation). Models in which data tensors are split
over multiple files are currently not supported.
Examples
The transformers
sample application uses a cross-encoder model to
re-rank documents.
The simple-semantic-search sample application
uses ONNX models for embedding inference. custom-embeddings
has an example of a PyTorch model exported to ONNX format for use in re-ranking.
Exporting Hugging Face models to ONNX format
Transformer-based models have named inputs and outputs that must be compatible
with the input and output names used by the embedder.
The simple-semantic-search
sample app includes two scripts to export models and vocabulary files using the default input and output names
expected by embedders using ONNX models. The input and output names used by the embedder are tunable via the
transformer-* parameters in the config of the embedder in question.
Using Optimum to export models to ONNX format
We highly recommend using the Optimum library for exporting
models hosted on the Hugging Face model hub.
For example, to export BAAI/bge-small-en from the model hub to ONNX format:
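(a sketch: feature-extraction is the typical task for embedding models, and the output directory name is arbitrary)

$ pip3 install 'optimum[exporters]'
$ optimum-cli export onnx --task feature-extraction -m BAAI/bge-small-en bge-small-en-onnx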
In many cases, ONNX checkpoints are also available, for example mixedbread-ai/mxbai-embed-large-v1; these models can then be downloaded and used in Vespa.
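For example (assuming the repository provides an onnx/model.onnx file; check the repository's file listing for the exact path):

$ curl -L -o model.onnx \
    https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1/resolve/main/onnx/model.onnx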
Vespa defaults to the mean pooling-strategy. Consult the model card for the pooling method used.
Note the URL pattern above: the URL
must point to the actual file, not the model card.
Also, Vespa only supports models that are contained in a single ONNX file; models larger than
2 GB are split over multiple files, which is currently not supported in Vespa.
See cross-encoders documentation for examples on how to
export cross-encoder re-rankers using the Optimum library.
Debugging ONNX models
When loading ONNX models for Vespa native embedders,
the model must have the correct input and output parameters. Vespa offers tools to inspect ONNX model files.
Here, minilm-l6-v2.onnx is in the current working directory:
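One option is to run vespa-analyze-onnx-model from the Vespa container image (a sketch, assuming Docker and the vespaengine/vespa image):

$ docker run -v `pwd`:/w \
    --entrypoint /opt/vespa/bin/vespa-analyze-onnx-model \
    vespaengine/vespa /w/minilm-l6-v2.onnx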
Vespa embedders implement the pooling strategy over the output vectors (one vector per token in the input sequence).
If a model does not have the expected input and output parameter names, the container service will not start
(check vespa.log in the container running Vespa):
WARNING container Container.com.yahoo.container.di.Container
Caused by: java.lang.IllegalArgumentException: Model does not contain required input: 'input_ids'. Model contains: input
When this happens, a deploy looks like:
$ vespa deploy --wait 300
Uploading application package ... done
Success: Deployed .
Waiting up to 5m0s for query service to become available ...
Error: service 'query' is unavailable: services have not converged
Embedders support changing the input and output names; consult the embedding reference
documentation.
Using vespa-analyze-onnx-model
vespa-analyze-onnx-model
is useful for finding model inputs and outputs.
Example run on a config server where an application package with a model has been deployed:
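(a sketch; the path is hypothetical, and the actual location of the distributed model file under the config server's directories varies by deployment)

$ vespa-analyze-onnx-model /opt/vespa/var/db/vespa/filedistribution/<file-reference>/my_model.onnx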