No active_primary_shards

(Eric Igbinosun) #1

Hello guys,

I have set up an Elasticsearch cluster with 1 master and 2 data nodes. The master runs Elasticsearch, Logstash, and Filebeat.

Now when I run the command
"curl localhost:9200/_cluster/health?pretty=true"
I do not see any active shards:

  "cluster_name" : "elasticsearch",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 2,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0

These are the tailed logs from the master node:

root@master-1:~# tail -n 20 /var/log/elasticsearch/elasticsearch.log
[2018-07-04T05:11:00,390][INFO ][o.e.p.PluginsService     ] [master-1] loaded module [repository-url]
[2018-07-04T05:11:00,390][INFO ][o.e.p.PluginsService     ] [master-1] loaded module [transport-netty4]
[2018-07-04T05:11:00,390][INFO ][o.e.p.PluginsService     ] [master-1] loaded module [tribe]
[2018-07-04T05:11:00,390][INFO ][o.e.p.PluginsService     ] [master-1] no plugins loaded
[2018-07-04T05:11:12,207][INFO ][o.e.d.DiscoveryModule    ] [master-1] using discovery type [zen]
[2018-07-04T05:11:15,211][INFO ][o.e.n.Node               ] [master-1] initialized
[2018-07-04T05:11:15,211][INFO ][o.e.n.Node               ] [master-1] starting ...
[2018-07-04T05:11:16,041][INFO ][o.e.t.TransportService   ] [master-1] publish_address {}, bound_addresses {}, {[::1]:9300}, {}
[2018-07-04T05:11:16,112][INFO ][o.e.b.BootstrapChecks    ] [master-1] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2018-07-04T05:11:19,441][INFO ][o.e.c.s.MasterService    ] [master-1] zen-disco-elected-as-master ([0] nodes joined), reason: new_master {master-1}{ghDIR4IjRHejp7bFBeM1Bw}{z5EQwiCqR1CwBZlhPzJs7w}{}{}
[2018-07-04T05:11:19,491][INFO ][o.e.c.s.ClusterApplierService] [master-1] new_master {master-1}{ghDIR4IjRHejp7bFBeM1Bw}{z5EQwiCqR1CwBZlhPzJs7w}{}{}, reason: apply cluster state (from master [master {master-1}{ghDIR4IjRHejp7bFBeM1Bw}{z5EQwiCqR1CwBZlhPzJs7w}{}{} committed version [1] source [zen-disco-elected-as-master ([0] nodes joined)]])
[2018-07-04T05:11:19,620][INFO ][o.e.g.GatewayService     ] [master-1] recovered [0] indices into cluster_state
[2018-07-04T05:11:19,622][INFO ][o.e.h.n.Netty4HttpServerTransport] [master-1] publish_address {}, bound_addresses {}, {[::1]:9200}, {}
[2018-07-04T05:11:19,622][INFO ][o.e.n.Node               ] [master-1] started
[2018-07-04T14:31:56,802][INFO ][o.e.c.s.MasterService    ] [master-1] zen-disco-node-join[{data-1}{jgpBRJIJTmusXKgsfsM67g}{4Dxj1nBETPGaLIUjw4VUKg}{}{}], reason: added {{data-1}{jgpBRJIJTmusXKgsfsM67g}{4Dxj1nBETPGaLIUjw4VUKg}{}{},}
[2018-07-04T14:31:57,083][INFO ][o.e.c.s.ClusterApplierService] [master-1] added {{data-1}{jgpBRJIJTmusXKgsfsM67g}{4Dxj1nBETPGaLIUjw4VUKg}{}{},}, reason: apply cluster state (from master [master {master-1}{ghDIR4IjRHejp7bFBeM1Bw}{z5EQwiCqR1CwBZlhPzJs7w}{}{} committed version [4] source [zen-disco-node-join[{data-1}{jgpBRJIJTmusXKgsfsM67g}{4Dxj1nBETPGaLIUjw4VUKg}{}{}]]])
[2018-07-04T14:51:24,556][INFO ][o.e.c.s.MasterService    ] [master-1] zen-disco-node-join[{data2}{ovih8koGRH2iJgr-iRHL8g}{6LxXEo7aQg6_635l2vv2cA}{}{}], reason: added {{data2}{ovih8koGRH2iJgr-iRHL8g}{6LxXEo7aQg6_635l2vv2cA}{}{},}
[2018-07-04T14:51:25,251][INFO ][o.e.c.s.ClusterApplierService] [master-1] added {{data2}{ovih8koGRH2iJgr-iRHL8g}{6LxXEo7aQg6_635l2vv2cA}{}{},}, reason: apply cluster state (from master [master {master-1}{ghDIR4IjRHejp7bFBeM1Bw}{z5EQwiCqR1CwBZlhPzJs7w}{}{} committed version [5] source [zen-disco-node-join[{data2}{ovih8koGRH2iJgr-iRHL8g}{6LxXEo7aQg6_635l2vv2cA}{}{}]]])
[2018-07-04T14:52:59,986][WARN ][o.e.m.j.JvmGcMonitorService] [master-1] [gc][young][7365][13] duration [5.6s], collections [1]/[6.3s], total [5.6s]/[6.6s], memory [123.1mb]->[71.8mb]/[1015.6mb], all_pools {[young] [66.3mb]->[797.5kb]/[66.5mb]}{[survivor] [7.9mb]->[4mb]/[8.3mb]}{[old] [48.8mb]->[67mb]/[940.8mb]}
[2018-07-04T14:53:00,022][WARN ][o.e.m.j.JvmGcMonitorService] [master-1] [gc][7365] overhead, spent [5.6s] collecting in the last [6.3s]

This is my filebeat.yml:

# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
  #  Array of hosts to connect to.
  # hosts: [""]

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
  # The Logstash hosts
        hosts: [""]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

#================================ Logging =====================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]

#============================== Xpack Monitoring ===============================
# filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#xpack.monitoring.enabled: false

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well. Any setting that is not set is
# automatically inherited from the Elasticsearch output configuration, so if you
# have the Elasticsearch output configured, you can simply uncomment the
# following line.

I would be grateful for any help.

(Mark Walkom) #2

That's not ideal, see for more.

What do your Logstash logs show? What does the config look like for it?
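For reference, a minimal Logstash pipeline that accepts a Beats connection and forwards events to Elasticsearch would look something like the sketch below. The port, host, and index name here are placeholders for your setup, not your actual config:

```conf
# /etc/logstash/conf.d/beats.conf -- hypothetical minimal pipeline
input {
  beats {
    port => 5044                      # Filebeat's output.logstash must point at this port
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]       # your Elasticsearch HTTP address
    index => "filebeat-%{+YYYY.MM.dd}"
  }
}
```

Until Logstash indexes its first event, no index is created, which is consistent with a green cluster showing zero shards.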

(Eric Igbinosun) #3


I used the default settings, but strangely enough I am not seeing any logs and the Logstash log file is empty. I guess there is something wrong with the configuration.

I know Filebeat is supposed to send logs to Logstash in this scenario. I don't know why I am not seeing any active shards.
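Looking at my filebeat.yml again, the Logstash output section has an empty hosts list. From the docs I gather it should be under an `output.logstash:` key with a real address, something like this (the IP here is just a placeholder for my Logstash host):

```yaml
#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts -- replace with the address where Logstash is listening
  # for Beats connections (5044 is the conventional default port).
  hosts: ["10.0.0.1:5044"]
```

I will try filling that in and check whether events start arriving.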

(system) #4

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.