# services.xml: content > engine > proton > tuning

`tuning` configures the proton search core in services.xml. Read Proton maintenance jobs for background on these settings. `tuning` is an optional element:

```
content
  engine
    proton
      tuning
        searchnode
          requestthreads
            search
            persearch
            summary
          flushstrategy
            native
              total
                maxmemorygain
                diskbloatfactor
              component
                maxmemorygain
                diskbloatfactor
                maxage
              transactionlog
                maxentries
                maxsize
              conservative
                memory-limit-factor
                disk-limit-factor
          resizing
            initialdocumentcount
          initialize
            threads
          feeding
            concurrency
          index
            io
              write
              search
          attribute
            io
              write
          summary
            io
              write
            store
              cache
                maxsize
                maxsize-percent
                compression
                  type
                  level
              logstore
                maxfilesize
                chunk
                  maxsize
                  maxentries
                  compression
                    type
                    level
```


## tuning

Contained in: proton. Required: No.

Tune settings for the search nodes in a content cluster - sub-element:

| Element    | Required | Quantity    |
|------------|----------|-------------|
| searchnode | No       | Zero or one |

## searchnode

Contained in: tuning. Required: No.

Tune settings for search nodes in a content cluster - sub-elements:

| Element        | Required | Quantity    |
|----------------|----------|-------------|
| requestthreads | No       | Zero or one |
| flushstrategy  | No       | Zero or one |
| resizing       | No       | Zero or one |
| initialize     | No       | Zero or one |
| feeding        | No       | Zero or one |
| index          | No       | Zero or one |
| attribute      | No       | Zero or one |
| summary        | No       | Zero or one |
```xml
<tuning>
  <searchnode>
    <flushstrategy></flushstrategy>
    <resizing></resizing>
    <initialize></initialize>
    <feeding></feeding>
    <index></index>
    <attribute></attribute>
    <summary></summary>
  </searchnode>
</tuning>
```


## requestthreads

Contained in: searchnode. Required: No.

Tune the number of request threads used on a search node - optional sub-elements:

• search: Number of search threads, default 64
• persearch: Number of search threads used per search, default 1
• summary: Number of summary threads, default 16

```xml
<requestthreads>
  <search>64</search>
  <persearch>1</persearch>
  <summary>16</summary>
</requestthreads>
```


## flushstrategy

Contained in: searchnode. Required: No.

Tune the native strategy for flushing components to disk - a smaller number means more frequent flushes:

• Memory gain is how much memory can be freed by flushing a component
• Disk gain is how much disk space can be freed by flushing a component (typically by compaction)

Refer to Proton maintenance jobs. Optional sub-elements:

• native
  • total
    • maxmemorygain: The total maximum memory gain (in bytes) for all components before running flush, default 4294967296 (4 GB)
    • diskbloatfactor: Trigger flush if the total disk gain (in bytes) for all components is larger than this factor times the current total disk usage, default 0.2
  • component
    • maxmemorygain: The maximum memory gain (in bytes) by a single component before running flush, default 1073741824 (1 GB)
    • diskbloatfactor: Trigger flush if the disk gain (in bytes) by a single component is larger than this factor times the current disk usage by that component, default 0.2
    • maxage: The maximum age (in seconds) of unflushed content for a single component before running flush, default 86400 (24 hours)
  • transactionlog
    • maxentries: DEPRECATED (use maxsize instead): The maximum number of entries in the transaction log for a document type before running flush, default 1000000 (1 M)
    • maxsize: The total maximum size (in bytes) of transaction logs for all document types before running flush, default 21474836480 (20 GB)
  • conservative
    • memory-limit-factor: When the resource limit for memory is reached, flush more often by downscaling total.maxmemorygain and component.maxmemorygain, default 0.5
    • disk-limit-factor: When the resource limit for disk is reached, flush more often by downscaling transactionlog.maxsize, default 0.5
```xml
<flushstrategy>
  <native>
    <total>
      <maxmemorygain>4294967296</maxmemorygain>
      <diskbloatfactor>0.2</diskbloatfactor>
    </total>
    <component>
      <maxmemorygain>1073741824</maxmemorygain>
      <diskbloatfactor>0.2</diskbloatfactor>
      <maxage>86400</maxage>
    </component>
    <transactionlog>
      <maxentries>1000000</maxentries>
      <maxsize>21474836480</maxsize>
    </transactionlog>
    <conservative>
      <memory-limit-factor>0.5</memory-limit-factor>
      <disk-limit-factor>0.5</disk-limit-factor>
    </conservative>
  </native>
</flushstrategy>
```
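To make the trigger conditions concrete, here is a small Python sketch of how the total-level thresholds above interact. This is not Vespa code; the function name and signature are hypothetical, and the sketch covers only the `total` limits, not the per-component or transaction log checks:

```python
# Hypothetical sketch of the native flush strategy's total-level triggers.
# Defaults mirror total.maxmemorygain (4 GB) and total.diskbloatfactor (0.2).

def should_flush(total_memory_gain, total_disk_gain, total_disk_usage,
                 max_memory_gain=4_294_967_296,
                 disk_bloat_factor=0.2):
    """Flush when the total memory gain exceeds the cap, or when the
    total disk gain exceeds the bloat factor times current disk usage."""
    if total_memory_gain > max_memory_gain:
        return True
    if total_disk_gain > disk_bloat_factor * total_disk_usage:
        return True
    return False
```

For example, 10 GB of disk gain against 100 GB of disk usage is below the 0.2 bloat factor and does not trigger a flush, while 30 GB does.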


## resizing

Contained in: searchnode. Required: No.

Tune settings for data structure resizing to handle more or fewer documents. Optional sub-elements:

• initialdocumentcount: The data structures used by the search node are initialized to hold this number of documents before any resizing, default 1024. Setting this value can speed up the initial feed of documents. As an attribute resize keeps both the current and the new version in memory at the same time, peak memory usage more than doubles when growing an attribute. Setting initialdocumentcount higher than the expected maximum number of documents per node prevents resizes, which is useful when memory is the limiting sizing factor.
```xml
<resizing>
  <initialdocumentcount>1024</initialdocumentcount>
</resizing>
```
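The memory-doubling effect during an attribute resize can be illustrated with a back-of-the-envelope calculation. This is a simplification with a hypothetical helper; the actual growth factor is internal to proton:

```python
def peak_memory_during_resize(current_bytes, growth_factor=2.0):
    """During an attribute resize, the current and the new (larger)
    buffer are both held in memory, so peak usage is their sum.
    growth_factor is an illustrative assumption, not proton's value."""
    new_bytes = current_bytes * growth_factor
    return current_bytes + new_bytes
```

With an assumed growth factor of 2, a 1 GB attribute briefly needs about 3 GB while growing, which is why sizing initialdocumentcount above the expected maximum document count avoids the spike.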


## initialize

Contained in: searchnode. Required: No.

Tune settings for how the search node (proton) is initialized. Optional sub-elements:

• threads: The number of initializer threads used for loading structures from disk at proton startup. The threads are shared between document databases when the value is larger than 0. Default is the number of document databases + 1.
  • When set to larger than 1, document databases are initialized in parallel
  • When set to 1, document databases are initialized in sequence
  • When set to 0, one separate thread is used per document database, and they are initialized in parallel
```xml
<initialize>
  <threads>0</threads>
</initialize>
```
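The three cases for threads can be summarized in a sketch (a hypothetical helper mirroring the rules above, not proton's implementation):

```python
def initializer_plan(threads, num_document_dbs):
    """Return (thread_count, parallel) for proton startup,
    following the documented semantics of <threads>."""
    if threads == 0:
        # One dedicated thread per document database, run in parallel
        return num_document_dbs, True
    if threads == 1:
        # A single shared thread: databases initialized in sequence
        return 1, False
    # A shared pool larger than 1: parallel initialization
    return threads, True
```

The default configuration corresponds to a shared pool of num_document_dbs + 1 threads.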


## feeding

Contained in: searchnode. Required: No.

Tune settings for how the search node (proton) handles feed operations. Optional sub-elements:

• concurrency: A number between 0.0 and 1.0 that specifies the concurrency used when handling feed operations, default 0.5. When set to 1.0, all cores on the CPU are utilized. See feeding.concurrency in proton.def for details on how this setting affects the thread pools used for feed operations.
```xml
<feeding>
  <concurrency>0.8</concurrency>
</feeding>
```
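As a rough mental model, concurrency scales feed thread usage with the number of cores. This is an illustration only; the exact mapping to proton's feed thread pools is defined in proton.def, and the helper below is hypothetical:

```python
import math

def feed_cores(cores, concurrency=0.5):
    """Illustrative only: approximate how many cores feed operations
    may occupy for a given feeding.concurrency in [0.0, 1.0]."""
    if not 0.0 <= concurrency <= 1.0:
        raise ValueError("concurrency must be in [0.0, 1.0]")
    return max(1, math.ceil(cores * concurrency))
```

Under this model, a 16-core node with the 0.8 from the example above would let feeding use up to 13 cores, leaving headroom for query serving.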


## index

Contained in: searchnode. Required: No.

Tune various aspects of the handling of disk and memory indexes. Optional sub-elements:

• io
  • write: Controls io write options used during index dump and fusion, values={normal, directio}, default directio
  • read: Controls io read options used during index dump and fusion, values={normal, directio}, default directio
  • search: Controls io options used when searching the disk index, e.g. mmap as in the example below
```xml
<index>
  <io>
    <write>directio</write>
    <search>mmap</search>
  </io>
</index>
```


## attribute

Contained in: searchnode. Required: No.

Tune various aspects of the handling of attribute vectors. Optional sub-elements:

• io
  • write: Controls io write options used during flushing of attribute vectors, values={normal, directio}, default directio
```xml
<attribute>
  <io>
    <write>directio</write>
  </io>
</attribute>
```


## summary

Contained in: searchnode. Required: No.

Tune various aspects of the handling of document summaries. Refer to proton.def for parameter values and defaults. Optional sub-elements:

• io
  • write: Controls io write options used during flushing of stored documents. See summary.write.io
  • read: Controls io read options used during reading of stored documents. See summary.read.io
• store
  • cache: Used to tune the cache used by the document store. Enabled by default, using up to 5% of available memory.
    • maxsize: The maximum size of the cache in bytes. If given, it takes precedence over maxsize-percent. See summary.cache.maxbytes
    • maxsize-percent: The maximum size of the cache in percent of available memory. Default is 5%.
    • compression
      • type: The compression type of the documents while in the cache. See summary.cache.compression.type
      • level: The compression level of the documents while in the cache. See summary.cache.compression.level
  • logstore: Used to tune the actual document store implementation (log-based).
    • maxfilesize: The maximum size (in bytes) per summary file on disk. See summary.log.maxfilesize and document-store-compaction
    • chunk
      • maxsize: Maximum size (in bytes) of a chunk. See summary.log.chunk.maxbytes
      • maxentries: DEPRECATED (use maxsize instead): Maximum number of documents in a chunk. See summary.log.chunk.maxentries
      • compression
        • type: Compression type of the documents. See summary.log.chunk.compression.type
        • level: Compression level of the documents. See summary.log.chunk.compression.level
```xml
<summary>
  <io>
    <write>directio</write>
  </io>
  <store>
    <cache>
      <maxsize>0</maxsize>
      <compression>
        <type>lz4</type>
        <level>9</level>
      </compression>
    </cache>
    <logstore>
      <maxfilesize>1000000000</maxfilesize>
      <chunk>
        <maxsize>65536</maxsize>
        <maxentries>256</maxentries>
        <compression>
          <type>lz4</type>
          <level>9</level>
        </compression>
      </chunk>
    </logstore>
  </store>
</summary>
```
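The precedence between maxsize and maxsize-percent can be expressed as a small sketch (a hypothetical helper; proton resolves this internally):

```python
def cache_budget_bytes(available_memory_bytes,
                       maxsize=None, maxsize_percent=5):
    """An explicit maxsize (bytes) takes precedence; otherwise the cache
    is capped at maxsize-percent of available memory (default 5%)."""
    if maxsize is not None:
        return maxsize
    return int(available_memory_bytes * maxsize_percent / 100)
```

Under this model, a node with 64 GB of available memory and the defaults gets roughly a 3.2 GB cache budget, while any explicit maxsize (such as the 0 in the example above) overrides the percentage.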