Elasticsearch not getting the new log updates

Hello Experts,

I have configured ELK on a standalone physical server with 8 CPUs and 32 GB of RAM, and it ran fine for a few months. However, we recently added a few more log sources on the server, and now the new data is not showing up in the Kibana portal. I tried adjusting the heap size and restarting the service, but no luck. The only thing I notice is that both elasticsearch and java are using more than 100% CPU on the server.

# curl -s -XGET 'localhost:9200/_cat/thread_pool?v'
node_name             name                active queue rejected
noida-elk.efox.com bulk                     8     2        0
noida-elk.efox.com fetch_shard_started      0     0        0
noida-elk.efox.com fetch_shard_store        0     0        0
noida-elk.efox.com flush                    0     0        0
noida-elk.efox.com force_merge              0     0        0
noida-elk.efox.com generic                  0     0        0
noida-elk.efox.com get                      0     0        0
noida-elk.efox.com index                    0     0        0
noida-elk.efox.com listener                 0     0        0
noida-elk.efox.com management               1     0        0
noida-elk.efox.com refresh                  1     0        0
noida-elk.efox.com search                   0     0        0
noida-elk.efox.com snapshot                 0     0        0
noida-elk.efox.com warmer                   0     0        0
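A quick way to spot saturated pools in the `_cat` output above: the bulk pool shows 8 active threads on an 8-CPU box, meaning every bulk thread is busy and indexing is CPU-bound rather than Elasticsearch being down. This awk sketch (column positions assume the header shown above: node_name name active queue rejected) prints any pool with active threads or a non-empty queue:

```shell
# Flag busy thread pools; sustained bulk queue growth or any
# rejections means indexing cannot keep up with the new log volume.
curl -s 'localhost:9200/_cat/thread_pool?v' \
  | awk 'NR > 1 && ($3 > 0 || $4 > 0) { print $2, "active=" $3, "queue=" $4 }'
```

Re-running this every few seconds shows whether the bulk queue keeps growing or drains between bursts.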

# curl -XGET 'localhost:9200/_cluster/health?pretty'
{
  "cluster_name" : "elk",
  "status" : "yellow",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 156,
  "active_shards" : 156,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 155,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 50.160771704180064
}

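On a single-node cluster a replica can never be allocated on the same node as its primary, which is why 155 replica shards sit unassigned and the cluster stays yellow. This isn't what hides new data, but clearing it removes noise. A sketch, assuming Elasticsearch listens on localhost:9200:

```shell
# Drop replicas to 0 on all existing indices; with no replicas to
# assign, the single-node cluster goes green.
curl -XPUT 'localhost:9200/_all/_settings' \
  -H 'Content-Type: application/json' \
  -d '{"index": {"number_of_replicas": 0}}'
```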
# cat /etc/elasticsearch/elasticsearch.yml
cluster.name: elk
node.name: noida-elk.efox.com
path.data: /scratch/data/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
http.port: 9200
bootstrap.system_call_filter: false
#bootstrap.mlockall: true
#index.number_of_replicas: 0
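Index-level settings such as `number_of_replicas` are no longer accepted in elasticsearch.yml (hence the commented-out line); an index template applies the default to newly created syslog-* indices instead. A sketch for 6.x (the template name is arbitrary; on 5.x use `"template": "syslog-*"` in place of `index_patterns`):

```shell
# New syslog-* indices will be created with 0 replicas,
# so the single-node cluster stays green.
curl -XPUT 'localhost:9200/_template/syslog-defaults' \
  -H 'Content-Type: application/json' \
  -d '{"index_patterns": ["syslog-*"], "settings": {"number_of_replicas": 0}}'
```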

My logstash pipeline:

# cat logstash-syslog.conf
input {
  file {
    path => [ "/scratch/rsyslog/*/messages.log" ]
    type => "syslog"
  }
}

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
output {
  #if "automount" in [message] or "ldap" in [message] {
  elasticsearch {
    hosts => "noida-elk:9200"
    index => "syslog-%{+YYYY.MM.dd}"
    document_type => "messages"
  }
  #stdout {}
  #}
}
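Since the missing data started after new log files were added, one thing worth checking in the file input: by default it tails files from the end, so lines written before Logstash first discovers a file are never read. A hedged tweak (`start_position` only affects files with no sincedb entry yet; the `sincedb_path` shown is an assumption, adjust to your layout):

```
input {
  file {
    path => [ "/scratch/rsyslog/*/messages.log" ]
    type => "syslog"
    # read newly discovered files from the beginning
    start_position => "beginning"
    # keep read offsets in a known, persistent location
    sincedb_path => "/var/lib/logstash/sincedb-syslog"
  }
}
```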

# cat jvm.options

-Xms8g
-Xmx16g
-XX:+UseParNewGC
-XX:+UseConcMarkSweepGC
-XX:CMSInitiatingOccupancyFraction=75
-XX:+UseCMSInitiatingOccupancyOnly
-XX:+DisableExplicitGC
-Djava.awt.headless=true
-Dfile.encoding=UTF-8
-XX:+HeapDumpOnOutOfMemoryError
-XX:+PrintGCDetails
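A side note on the heap settings above: Elasticsearch's own guidance is to set Xms and Xmx to the same value, so the full heap is committed at startup and never resized under load, and to keep the heap at or below half of physical RAM so the rest is left for the filesystem cache. With 32 GB of RAM, a common choice would be:

```
-Xms16g
-Xmx16g
```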

Please suggest or help me identify whether anything is wrong in the config.
