Configuration Servers

The Cloud Config System can be set up with one or more configuration servers (config servers). A config server uses ZooKeeper as distributed storage for the configuration system. In addition, each node runs a config proxy that caches configuration data - an overview is found in services start.

Tools to access config:


To access config from a node not running the config system (e.g. when feeding via the Document API), use the environment variable VESPA_CONFIG_SOURCES.


Alternatively, for Java programs, use the system property configsources and set it programmatically or on the command line with the -D option to Java. The syntax for the value is the same as for VESPA_CONFIG_SOURCES.
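A minimal sketch of both forms, with hypothetical hostnames:

```shell
# Hypothetical config server hostnames - replace with your own.
export VESPA_CONFIG_SOURCES="cfg1.example.com,cfg2.example.com"
echo "$VESPA_CONFIG_SOURCES"

# For Java programs, the same value goes into the configsources system property
# (shown, not executed; MyFeeder is a hypothetical class):
# java -Dconfigsources="$VESPA_CONFIG_SOURCES" MyFeeder
```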

System requirements

The default heap size for the config server's JVM is 1.5 GB (which can be changed with a setting). It writes a transaction log that is regularly purged of old items, so little disk space is required. Note that running on a server with a lot of disk I/O will adversely affect performance and is not recommended.


The config server RPC port can be changed by setting VESPA_CONFIGSERVER_RPC_PORT on all nodes in the system. Changing the HTTP port requires changing the port in $VESPA_HOME/conf/configserver-app/services.xml:

  <server port="12345" id="configserver" />
When deploying, use the -p option if the port is changed from the default.
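A sketch of the two port changes, with hypothetical port numbers and application path:

```shell
# RPC port: the same value must be set on every node in the system.
export VESPA_CONFIGSERVER_RPC_PORT=19080

# HTTP port: after editing services.xml as above, deploy against the new port.
# Printed rather than executed here:
echo "vespa-deploy -p 12345 prepare /path/to/app"
```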


The config servers are defined in services.xml, hosts.xml and VESPA_CONFIGSERVERS:

# services.xml
<admin version="2.0">
    <configserver hostalias="admin0" />
    <configserver hostalias="admin1" />
    <configserver hostalias="admin2" />
</admin>

# hosts.xml
<host name="">
    <alias>admin0</alias>
</host>
<host name="">
    <alias>admin1</alias>
</host>
<host name="">
    <alias>admin2</alias>
</host>

Refer to the admin model reference. In addition, VESPA_CONFIGSERVERS must be set on all nodes in the application. This is a comma- or whitespace-separated list of the hostnames of all config servers.
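For example, with three hypothetical hostnames:

```shell
# The same value must be set on every node in the application.
export VESPA_CONFIGSERVERS="cfg1.example.com,cfg2.example.com,cfg3.example.com"

# The list is comma- or whitespace-separated; count the entries:
n=$(echo "$VESPA_CONFIGSERVERS" | tr ',' '\n' | grep -c .)
echo "$n config servers"
```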

When there are multiple config servers, a config client (usually the config proxy, unless the client is configured to use another config source) picks a config server based on a hash of its hostname and the number of config servers. This distributes load uniformly across the config servers. Config clients are fault-tolerant and switch to another config server if one becomes unavailable or returns erroneous configuration. With only one config server configured, clients will continue using it even on errors.
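As an illustration of this scheme (not Vespa's actual hash function; hostnames are hypothetical), hashing the client hostname modulo the number of servers yields an index into the server list:

```shell
# Hypothetical server list and client hostname:
servers="cfg1.example.com cfg2.example.com cfg3.example.com"
n=$(echo "$servers" | wc -w)

# Hash the client hostname (cksum used as a stand-in hash) and reduce modulo n:
hash=$(printf '%s' "node17.example.com" | cksum | cut -d' ' -f1)
idx=$(( hash % n ))

# Pick the server at that index (cut fields are 1-based):
chosen=$(echo "$servers" | cut -d' ' -f$(( idx + 1 )))
echo "chosen: $chosen"
```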

For the system to tolerate n failures, ZooKeeper by design requires (2*n)+1 nodes. Consequently, only an odd number of nodes is useful, and a minimum of 3 nodes is needed for a fault-tolerant config system.

It is important to remember that even with a single config server, the config system will not fail immediately if that server fails. This is because all nodes run a config proxy that caches every known config and serves it to the components on that node. However, restarting a node while half or more of the config servers are unavailable will cause that node to fail, since restarting a node also restarts its config proxy.

Scaling Up

Add config server nodes for increased fault tolerance. There is no need to restart Vespa on other nodes during the procedure - this ensures uninterrupted functionality of the application. Procedure:

  1. Install Vespa on the new config server nodes
  2. Add config server hosts to VESPA_CONFIGSERVERS on all the nodes
  3. Restart the config server on the old config server hosts and start it on the new ones
  4. Update services.xml and hosts.xml with the new set of config servers, then vespa-deploy prepare and vespa-deploy activate
Note: ZooKeeper will automatically redistribute the application data.
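The VESPA_CONFIGSERVERS change in step 2 can be sketched as follows, extending a one-server list to three (hypothetical hostnames; the same value must be set on every node):

```shell
# Old single-server value:
OLD_SERVERS="cfg1.example.com"

# Append the new config server hosts:
export VESPA_CONFIGSERVERS="$OLD_SERVERS,cfg2.example.com,cfg3.example.com"
echo "$VESPA_CONFIGSERVERS"
```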

Scaling up by Majority

When increasing from 1 to 3 nodes, or from 3 to 7, the blank nodes constitute a majority in the cluster. After restarting the config servers, they will not always retain the old application data, because the blank nodes might win the leader election, depending on restart timing. Restore a correct data set by repeating vespa-deploy prepare and vespa-deploy activate. To avoid scratching the old application data like this, for instance to keep the history, scale up by a minority of nodes at a time - example:

  1. Scale from 1 to 2
  2. Scale from 2 to 3

Scaling down

Remove config servers from a cluster:

  1. Remove config server hosts from VESPA_CONFIGSERVERS on all Vespa nodes
  2. Restart config servers on the new subset
  3. Verify that these nodes have data, by using vespa-get-config or ls (see below). If they are blank, redo vespa-deploy prepare and vespa-deploy activate. Also see health checks.
  4. Pull removed hosts from production
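Step 1 above can be sketched as trimming the comma-separated list (hypothetical hostnames; the same value must be set on every Vespa node):

```shell
# Scale down from three servers to two by dropping the last list entry:
export VESPA_CONFIGSERVERS="cfg1.example.com,cfg2.example.com,cfg3.example.com"
export VESPA_CONFIGSERVERS="${VESPA_CONFIGSERVERS%,*}"
echo "$VESPA_CONFIGSERVERS"
```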


ZooKeeper handles data consistency across multiple config servers. The config server Java application runs a ZooKeeper server, embedded with an RPC frontend that the other nodes use. ZooKeeper stores data internally in nodes that can have sub-nodes, similar to a file system.

When starting or restarting the config server, the ZooKeeper configuration file, $VESPA_HOME/conf/zookeeper/zookeeper.cfg, is generated from the contents of VESPA_CONFIGSERVERS. Consequently, all config servers must be restarted if VESPA_CONFIGSERVERS changes on a config server node.

At vespa-deploy prepare, the application's files, along with global configurations, are stored in ZooKeeper. The application data is stored under /config/v2/tenants/default/sessions/[sessionid]/userapp. At vespa-deploy activate, the newest application is set live by updating the pointer in /config/v2/tenants/default/FIXME to refer to the active app's timestamp. At that point, the other nodes are configured.

Use the following to inspect state, replacing [sessionid] with the actual session id:

$ ./ ls  /config/v2/tenants/default/sessions/[sessionid]/userapp
$ ./ get /config/v2/tenants/default/sessions/[sessionid]/userapp/services.xml

The ZooKeeper server logs to $VESPA_HOME/logs/vespa/zookeeper.configserver.log.

ZooKeeper Recovery

If the config server(s) experience data corruption, for instance from a hardware failure, use the following recovery procedure. One example of such a scenario is $VESPA_HOME/logs/vespa/zookeeper.configserver.log saying Negative seek offset at Method), which indicates ZooKeeper could not recover after a full disk. There is no need to restart Vespa on other nodes during the procedure.

  1. stop cloudconfig_server
  2. vespa-configserver-remove-state
  3. start cloudconfig_server
  4. vespa-deploy prepare <application path>
  5. vespa-deploy activate
This procedure completely cleans out ZooKeeper's internal data snapshots and deploys from scratch.

Note that by default, the cluster controller that maintains the state of the content cluster uses the same shared ZooKeeper instance, so the content cluster state is also reset when removing state. Manually set state will be lost (e.g. a node with user state down). It is possible to run cluster controllers in standalone ZooKeeper mode - see standalone-zookeeper.

Adjusting ZooKeeper barrier timeout

If the config servers are heavily loaded, or the applications being deployed are big, the internals of the server may time out when synchronizing with the other servers during deploy. To work around this, increase the timeout by setting VESPA_CONFIGSERVER_ZOOKEEPER_BARRIER_TIMEOUT to 600 (seconds) or higher, and restart the config servers.
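For example, raising the timeout from the default 120 to 600 seconds:

```shell
# Set on the config server nodes; restart the config servers afterwards
# for the change to take effect.
export VESPA_CONFIGSERVER_ZOOKEEPER_BARRIER_TIMEOUT=600
echo "$VESPA_CONFIGSERVER_ZOOKEEPER_BARRIER_TIMEOUT"
```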

Set ZooKeeper ports

Set the ZooKeeper ports prior to starting the config server. This is useful when running multiple instances on the same host:

Note that the last two are only used in a multi-node config server cluster.


Health checks

Verify that a config server is up and running using the Health and Metric APIs. Use vespa-model-inspect to find host and port number.
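For example (hypothetical hostname; the /state/v1/health path and port 19071 are assumptions - verify the actual host and port with vespa-model-inspect):

```shell
# Print the health check request rather than executing it:
health_url="http://cfg1.example.com:19071/state/v1/health"
echo "curl $health_url"
```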
Bad Node

If running with more than one config server and one of them goes down or has a hardware failure, the cluster will still work and serve config as usual (clients switch to one of the good nodes). It is not necessary to remove a bad node from the configuration. Deploying applications will take a long time, since vespa-deploy cannot complete the deployment on all servers while one of them is down. If this is troublesome, lower the barrier timeout (the default value is 120 seconds). Note also that if cluster controllers are not configured explicitly, they run on the config server nodes, and their operation might be affected. This is another reason not to manually remove a bad node from the config server setup.
Stuck filedistribution

The config system distributes binary files (such as jar bundle files) using file-distribution - refer to troubleshooting if it gets stuck.