Stateless Model Evaluation

Vespa's speciality is evaluating machine-learned models quickly over large numbers of data points. However, it can also be used to evaluate models once on request in stateless containers. By enabling a feature in services.xml, all machine-learned models - TensorFlow, ONNX, XGBoost, LightGBM and Vespa stateless models - added to the models/ directory of the application package are made available through both a REST API and a Java API, letting you compute inferences from your own code.

An example application package can be found in the model-evaluation system test.

The model evaluation tag

To enable both the REST API and the Java API, add the model-evaluation tag inside the container clusters where it is needed in services.xml:

<container>
    ...
    <model-evaluation/>
    ...
</container>

Model inference using the REST API

The simplest way to evaluate a model is to use the REST API. After enabling it as above, a new API path is made available: /model-evaluation/v1/. To discover the models in your application package and find information about them (including their expected input parameters), follow the links from this root. To evaluate a model, add /eval to the path:

http://host:port/model-evaluation/v1/<model-name>/<function>/eval?<param1=...>&...
Here, <model-name> selects which model to evaluate, as you can deploy multiple models in your application package. <function> specifies which signature and output to evaluate, as a model can have multiple signatures and outputs. If the model has only one function, this part can be omitted. Inputs to the model are specified as query parameters for GET requests, or in the request body for POST requests. The expected format for input parameters is tensors in the literal form.
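
For illustration, a request evaluating a hypothetical model named mymodel with a single input named input (both names are assumptions, not part of any actual application) could look like this, with the tensor given in literal form (in practice the value must be URL-encoded):

http://host:port/model-evaluation/v1/mymodel/eval?input={{d0:0}:0.1,{d0:1}:0.2,{d0:2}:0.3}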

See the model-evaluation sample app for an example of this.

Model inference using Java

While the REST API gives a basic interface for running model inference, the Java interface offers far more control, allowing you to, for instance, implement custom input and output formats.

First, add the following dependency in pom.xml:

<dependency>
    <groupId>com.yahoo.vespa</groupId>
    <artifactId>container</artifactId>
    <scope>provided</scope>
</dependency>
(Or, if you want the minimal dependency, depend on model-evaluation instead of container.)
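
That minimal alternative could look like this (a sketch, assuming the version is managed the same way as for the container dependency above):

<dependency>
    <groupId>com.yahoo.vespa</groupId>
    <artifactId>model-evaluation</artifactId>
    <scope>provided</scope>
</dependency>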

With the above dependency and the model-evaluation element added to services.xml, any Java component that should evaluate models can take an ai.vespa.models.evaluation.ModelsEvaluator (see ModelsEvaluator.java) instance as a constructor argument; Vespa will inject it automatically.
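
For illustration, a minimal sketch of a searcher taking the evaluator as a constructor argument (the class name is hypothetical):

import ai.vespa.models.evaluation.ModelsEvaluator;
import com.yahoo.search.Query;
import com.yahoo.search.Result;
import com.yahoo.search.Searcher;
import com.yahoo.search.searchchain.Execution;

public class MyModelSearcher extends Searcher {

    private final ModelsEvaluator modelsEvaluator;

    public MyModelSearcher(ModelsEvaluator modelsEvaluator) {
        this.modelsEvaluator = modelsEvaluator; // injected by Vespa at construction time
    }

    @Override
    public Result search(Query query, Execution execution) {
        // Use modelsEvaluator here to compute inferences, e.g. per query
        return execution.search(query);
    }
}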

Use the ModelsEvaluator API (from any thread) to make inferences. Sample code:

import ai.vespa.models.evaluation.ModelsEvaluator;
import ai.vespa.models.evaluation.FunctionEvaluator;
import com.yahoo.tensor.Tensor;
import com.yahoo.tensor.TensorType;

...

// Create evaluator
FunctionEvaluator evaluator = modelsEvaluator.evaluatorOf("myModel", "mySignature", "myOutput"); // Unambiguous args may be skipped

// Get model inputs for instance from query (here we just construct a sample tensor)
Tensor.Builder b = Tensor.Builder.of(new TensorType.Builder().indexed("d0", 3).build());
b.cell(0.1, 0);
b.cell(0.2, 1);
b.cell(0.3, 2);
Tensor input = b.build();

// Bind inputs to the evaluator
evaluator.bind("myInput", input);

// Evaluate the model. Note: an evaluator must be discarded after a single use
Tensor result = evaluator.evaluate();

// Do something with the result
The model-evaluation sample app also has an example of this.

Unit testing model evaluation in Java

When developing your application, it can be very helpful to unit test your models, searchers, and document processors. Vespa provides a ModelsEvaluatorTester which can be constructed from the contents of your "models" directory. This allows testing that the model works as expected in the context of Vespa, and that your searcher or document processor gets the correct results from your models.

The following dependency is needed in pom.xml:

<dependency>
    <groupId>com.yahoo.vespa</groupId>
    <artifactId>container-test</artifactId>
    <scope>test</scope>
</dependency>

With this you can construct a testable ModelsEvaluator:

import ai.vespa.models.evaluation.ModelsEvaluator;
import com.yahoo.vespa.model.container.ml.ModelsEvaluatorTester;
import org.junit.Test;

public class ModelsTest {
    @Test
    public void testModels() {
        ModelsEvaluator modelsEvaluator = ModelsEvaluatorTester.create("src/main/application/models");

        // Test the modelsEvaluator directly or construct a searcher and pass it in

    }
}

The ModelsEvaluator object that is returned contains all models found under the directory passed in. Note that this should only be used in unit testing.
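
For example, assuming a model named mymodel with a single input named input (both names are hypothetical), the test body could evaluate it directly and assert on the result:

Tensor input = Tensor.from("tensor(d0[3]):[0.1, 0.2, 0.3]");
Tensor result = modelsEvaluator.evaluatorOf("mymodel").bind("input", input).evaluate();
// assert on the contents of result here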

The model-evaluation sample app uses this for testing handlers, searchers, and document processors.