Vespa supports importing Gradient Boosting Decision Tree (GBDT) models trained with XGBoost.
Exporting models from XGBoost
Vespa supports importing XGBoost's JSON model dump, e.g. produced with the Python API's
xgboost.Booster.dump_model.
When dumping the trained model, XGBoost allows setting dump_format to json
and specifying the feature names to use via fmap.
Here is an example of an XGBoost JSON model dump with 2 trees and maximum depth 1:
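Below is a hedged illustration of what such a dump could look like; the split feature names
and all numeric values are made up for illustration, not taken from an actual trained model:

```json
[
  { "nodeid": 0, "depth": 0, "split": "fieldMatch(title).completeness",
    "split_condition": 0.772, "yes": 1, "no": 2, "missing": 1, "children": [
      { "nodeid": 1, "leaf": 0.673 },
      { "nodeid": 2, "leaf": 0.791 }
  ]},
  { "nodeid": 0, "depth": 0, "split": "fieldMatch(title).importance",
    "split_condition": 0.606, "yes": 1, "no": 2, "missing": 1, "children": [
      { "nodeid": 1, "leaf": 0.381 },
      { "nodeid": 2, "leaf": 0.993 }
  ]}
]
```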
Notice the split attribute, which holds the Vespa feature name. The split feature must resolve to a Vespa
rank feature defined in the document schema; it can also
be a user-defined feature (for example, a function).
The above model JSON was produced using the XGBoost Python API with a regression objective.
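A minimal sketch of how such a model could be trained and dumped; the random training data and
file names are placeholders:

```python
import numpy as np
import xgboost as xgb

# Placeholder training data: two features, which XGBoost will name f0 and f1
X = np.random.rand(100, 2)
y = np.random.rand(100)
dtrain = xgb.DMatrix(X, label=y)

# Two boosting rounds with maximum depth 1, matching the dump above
params = {"objective": "reg:squarederror", "base_score": 0, "max_depth": 1}
model = xgb.train(params, dtrain, num_boost_round=2)

# fmap maps the feature indexes (f0, f1) to Vespa rank feature names
model.dump_model("my_model.json", fmap="feature-map.txt", dump_format="json")
```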
XGBoost is trained on arrays or array-like data structures,
where features are referred to by their index in the array, as in the example above.
To map the XGBoost feature indexes to actual Vespa features
(native features or custom-defined features), provide a feature map:
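A hedged example of such a feature map, consistent with the feature names used in the dump above:

```
0   fieldMatch(title).completeness  q
1   fieldMatch(title).importance    q
```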
Format of feature-map.txt: <featureid> <featurename> <q or i or int>\n, where:
The feature id must run from 0 to the number of features, in sorted order.
"i" means the feature is a binary indicator feature.
"q" means the feature is a quantitative value, such as age or time; it can be missing.
"int" means the feature is an integer value (when int is hinted, the decision boundary will be an integer).
When using pandas DataFrames with column names, one does not need to provide feature mappings, as in the sketch below.
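A minimal sketch, assuming a pandas DataFrame whose column names are the Vespa rank feature names;
the data is random and only for illustration:

```python
import numpy as np
import pandas as pd
import xgboost as xgb

# Column names are Vespa rank features, so no fmap is needed when dumping
X = pd.DataFrame(
    np.random.rand(100, 2),
    columns=["fieldMatch(title).completeness", "fieldMatch(title).importance"],
)
y = np.random.rand(100)

dtrain = xgb.DMatrix(X, label=y)
params = {"objective": "reg:squarederror", "base_score": 0, "max_depth": 1}
model = xgb.train(params, dtrain, num_boost_round=2)
model.dump_model("my_model.json", dump_format="json")
```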
See also a complete example of how to train a ranking function, using learning to rank
with ranking losses, in this
notebook.
Importing XGBoost models
To import the XGBoost model to Vespa, add the model file to your application package
under a directory named models.
For instance, if you would like to call the model above my_model,
you would add it to the application package, resulting in a directory structure like this:
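The layout could look like this; the schema file name is illustrative:

```
├── models
│   └── my_model.json
├── schemas
│   └── mydocument.sd
└── services.xml
```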
In the schema, the model my_model.json is applied to the top-ranking documents by the first-phase ranking expression of a rank profile, as in the sketch below.
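A minimal sketch of such a rank profile, placed in the document schema and using the xgboost
ranking expression function; the profile name prediction matches the query requirement below:

```
rank-profile prediction inherits default {
    first-phase {
        expression: xgboost("my_model.json")
    }
}
```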
The query request must specify prediction as the ranking.profile.
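For example, a query request selecting this profile could look like the following sketch;
the endpoint and the YQL/query values are placeholders:

```python
import requests

# Hypothetical local Vespa endpoint; adjust to the actual deployment
response = requests.get(
    "http://localhost:8080/search/",
    params={
        "yql": "select * from sources * where userQuery()",
        "query": "example query",
        "ranking.profile": "prediction",
    },
)
print(response.json())
```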
See also Phased ranking on how to control the number of documents exposed to the model.
Generally the run time complexity is determined by:
The number of documents evaluated per thread and per node, and the query filter
The complexity of computing features. For example, fieldMatch features are 100x more expensive than nativeFieldMatch/nativeRank.
The number of XGBoost trees and the maximum depth per tree
There are six different objective
types that Vespa supports:
Regression: reg:squarederror / reg:logistic
Classification: binary:logistic
Ranking: rank:pairwise, rank:ndcg and rank:map
For reg:logistic and binary:logistic, the raw margin tree sum (the sum of all tree outputs)
needs to be passed through the sigmoid function to represent the probability of class 1.
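In other words, if the Vespa first-phase score is the raw margin sum, the class-1 probability
can be recovered as in this small sketch:

```python
import math

def probability_of_class_1(raw_margin_sum: float) -> float:
    # Sigmoid of the summed tree outputs (the raw margin)
    return 1.0 / (1.0 + math.exp(-raw_margin_sum))
```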
For regular regression the model can be directly imported,
but the base_score should be set to 0, as the base_score used during the training phase is not dumped with the model.
An example model using the sklearn toy datasets is given below:
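A hedged sketch of what such an example could look like, here using the diabetes toy dataset;
the dataset choice and parameters are illustrative:

```python
import xgboost as xgb
from sklearn.datasets import load_diabetes

# Toy regression data; features default to f0, f1, ... so an fmap would
# still be needed to map them to Vespa rank feature names when dumping
X, y = load_diabetes(return_X_y=True)
dtrain = xgb.DMatrix(X, label=y)

params = {
    "objective": "reg:squarederror",
    "base_score": 0,  # base_score is not dumped, so train with 0
    "max_depth": 3,
    "eta": 0.3,
}
model = xgb.train(params, dtrain, num_boost_round=10)
model.dump_model("my_model.json", dump_format="json")
```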
Debugging Vespa inference score versus XGBoost predict score
When dumping XGBoost models to a JSON representation some of the model information is lost
(e.g. the base_score or the optimal number of trees if trained with early stopping).
XGBoost also has different predict functions (e.g. predict/predict_proba).
The following XGBoost System Test
demonstrates how to represent different types of XGBoost models in Vespa.
For training, features should be scraped from Vespa, using either match-features or summary-features, so
that the features used in offline training match the online Vespa-computed features.
Dumping features can also help debug any differences by zooming into specific query, document pairs
using the recall parameter.
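A sketch of how such features could be collected, assuming a rank profile that inherits the model
profile and lists the features of interest in a match-features block; the feature list is illustrative:

```
rank-profile collect-features inherits prediction {
    match-features {
        nativeRank(title)
        bm25(title)
        fieldMatch(title).completeness
        fieldMatch(title).importance
    }
}
```

The recall query parameter can then restrict the result set to specific documents when comparing
Vespa and XGBoost scores.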
It is also important to use the highest possible precision
when reading Vespa features for training, as Vespa outputs features using double precision.
If the training routine rounds features to float or other more compact floating-point representations, feature split decisions might differ between Vespa and XGBoost.
In a distributed setting where multiple nodes use the model, text matching features such as nativeRank, nativeFieldMatch, bm25 and fieldMatch
might differ, depending on which node produced the hit. The reason is that all these features use term(n).significance, which is computed locally from the indexed corpus. The term(n).significance feature
is related to Inverse Document Frequency (IDF). For global correctness, term(n).significance should be set by a searcher in the container, as each node will otherwise estimate the significance values from its local corpus.