Index rejections on Elasticsearch

We are using Elasticsearch to store log messages that are sent from the rsyslog omelasticsearch output plugin. The omelasticsearch configuration is as follows:

template(name="es_json"
  type="list") {
    constant(value="{")
      constant(value="\"timestamp\":\"")           property(name="timereported" dateFormat="rfc3339")
      constant(value="\",\"host\":\"")             property(name="hostname")
      constant(value="\",\"program\":\"")          property(name="app-name" format="jsonr")
      constant(value="\",\"severity\":\"")         property(name="syslogseverity-text")
      constant(value="\",\"severity_code\":")      property(name="syslogseverity")
      constant(value=",\"facility\":\"")           property(name="syslogfacility-text")
      constant(value="\",\"facility_code\":")      property(name="syslogfacility")
      constant(value=",\"pri\":")                  property(name="pri")
      constant(value=",\"tag\":\"")                property(name="syslogtag" format="jsonr")
      constant(value="\",\"message\":\"")          property(name="msg" format="jsonr")
    constant(value="\"}")
}

# Log all messages to Elasticsearch
action(type="omelasticsearch"
    server="elasticsearch"
    template="es_json"
    searchIndex="logs_index"
    dynSearchIndex="on"
    action.resumeinterval="10"
    queue.type="linkedlist"           # run asynchronously
    queue.filename="rsyslog_queue"    # queue files
    queue.checkpointinterval="100"
    queue.size="40000"
    queue.maxdiskspace="500m"         # space limit on disk
    queue.discardmark="10000"
    queue.discardseverity="4"         # Discard Warning, Notice, Informational and Debug
    queue.highwatermark="20000"
    queue.lowwatermark="14000"
    action.resumeretrycount="-1"      # infinite retries if host is down
    queue.saveonshutdown="on"         # save messages to disk on shutdown
    queue.timeoutenqueue="0"          # Immediately discard after 0ms if it can't be written
    queue.dequeuebatchsize="1024"
    queue.dequeueslowdown="1000")
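
For reference, a single message rendered through the es_json template comes out roughly as the JSON document below (the field values are made up for illustration):

{
  "timestamp": "2015-12-15T13:05:55+00:00",
  "host": "host",
  "program": "sshd",
  "severity": "info",
  "severity_code": 6,
  "facility": "auth",
  "facility_code": 4,
  "pri": 38,
  "tag": "sshd[1234]:",
  "message": "Accepted publickey for root"
}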

Here are the thread pool rejections:

[17:52:54 root@host:~ ]# curl -s -G 'elasticsearch:9200/_cat/thread_pool?v'
host    ip        bulk.active bulk.queue bulk.rejected index.active index.queue index.rejected search.active search.queue search.rejected
1.1.1.1 1.1.1.1.1 0           0          0             20           26          373757         0             0            0
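
To keep an eye on just the rejection counters while logs are flowing, the same endpoint can be narrowed down to the relevant columns (column names as in the header above):

# same _cat endpoint, only the columns relevant to the index/bulk rejections
curl -s -G 'elasticsearch:9200/_cat/thread_pool?v&h=host,index.active,index.queue,index.rejected,bulk.rejected'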

The version of Elasticsearch:

{
  "name" : "elasticsearch",
  "cluster_name" : "logs",
  "version" : {
    "number" : "2.1.1",
    "build_hash" : "40e2c53a6b6c2972b3d13846e450e66f4375bd71",
    "build_timestamp" : "2015-12-15T13:05:55Z",
    "build_snapshot" : false,
    "lucene_version" : "5.3.1"
  },
  "tagline" : "You Know, for Search"
}

This is the Elasticsearch configuration:

discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["10.10.0.68"]
discovery.zen.minimum_master_nodes: 1
network.bind_host: 10.10.0.68
network.publish_host: 10.10.0.68
cluster.name: logs
node.name: elasticsearch
bootstrap.mlockall: true
index.number_of_shards: 1
index.number_of_replicas: 0
index.refresh_interval: 30s
index.translog.flush_threshold_ops: 20000
threadpool.search.type: fixed
threadpool.search.size: 20
threadpool.search.queue_size: 100
threadpool.bulk.type: fixed
threadpool.bulk.size: 60
threadpool.bulk.queue_size: 300
threadpool.index.type: fixed
threadpool.index.size: 20
threadpool.index.queue_size: 100
indices.breaker.fielddata.limit: 25%
indices.breaker.request.limit: 40%
indices.breaker.total.limit: 70%
indices.memory.index_buffer_size: 512mb
indices.memory.min_shard_index_buffer_size: 12mb
indices.memory.min_index_buffer_size: 96mb
indices.memory.max_index_buffer_size: 512mb
indices.fielddata.cache.size: 15%
indices.fielddata.cache.filter.size: 15%
path.conf: /etc/elasticsearch
path.logs: /var/log/elasticsearch
path.data: /elasticsearch/data
index.max_result_window: 10000
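
For what it's worth, my understanding is that on 2.x the thread pool settings above can also be changed at runtime through the cluster settings API, e.g. to experiment with a larger index queue without restarting the node (the value 500 is only an illustration):

curl -XPUT 'elasticsearch:9200/_cluster/settings' -d '{
  "transient": { "threadpool.index.queue_size": 500 }
}'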

As you can see, we are using the index operation instead of the bulk operation. I know that switching to bulk would be the usual suggestion, but is there anything else we should focus on?
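
For reference, my understanding is that switching to bulk would mainly mean enabling bulkmode on the same action, roughly like this (parameters other than bulkmode carried over from the configuration above, untested on our side):

action(type="omelasticsearch"
    server="elasticsearch"
    template="es_json"
    searchIndex="logs_index"
    dynSearchIndex="on"
    bulkmode="on"                     # batch messages into _bulk requests
    queue.type="linkedlist"
    queue.filename="rsyslog_queue"
    queue.dequeuebatchsize="1024"     # a dequeued batch roughly corresponds to one bulk request
    action.resumeretrycount="-1")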
