Elasticsearch plugin is red - possible master node problem?

I have installed Elasticsearch 5.5.1 and Kibana 5.5.1 and I am now getting an "Elasticsearch plugin is red" error message.

(screenshot from 2017-11-29 14:37:18 showing the "Elasticsearch plugin is red" status in Kibana)

The Elasticsearch logs mention "not enough master nodes discovered". I have 4 nodes and one master:

```
    at java.lang.Thread.run(Thread.java:748) [?:1.8.0_141]
[2017-11-29T14:23:41,899][WARN ][o.e.d.z.ZenDiscovery ] [elasticsearch-logging-0] not enough master nodes discovered during pinging (found [[Candidate{node={elasticsearch-logging-0}{2vy6rZTLR7CZIi6wbRZj0Q}{tUNcsm31SVyMKVoZJc7BTA}{100.96.4.6}{100.96.4.6:9300}{ml.enabled=true}, clusterStateVersion=-1}]], but needed [2]), pinging again
[2017-11-29T14:23:43,417][WARN ][r.suppressed ] path: /.reporting-/esqueue/_search, params: {index=.reporting-, type=esqueue, version=true}
org.elasticsearch.cluster.block.ClusterBlockException: blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];
    at org.elasticsearch.cluster.block.ClusterBlocks.globalBlockedException(ClusterBlocks.java:165) ~[elasticsearch-5.5.1.jar:5.5.1]
    at org.elasticsearch.cluster.block.ClusterBlocks.globalBlockedRaiseException(ClusterBlocks.java:151) ~[elasticsearch-5.5.1.jar:5.5.1]
    at org.elasticsearch.action.search.TransportSearchAction.executeSearch(TransportSearchAction.java:255) ~[elasticsearch-5.5.1.jar:5.5.1]
    at org.elasticsearch.action.search.TransportSearchAction.doExecute(TransportSearchAction.java:186) ~[elasticsearch-5.5.1.jar:5.5.1]
    at org.elasticsearch.action.search.TransportSearchAction.doExecute(TransportSearchAction.java:65) ~[elasticsearch-5.5.1.jar:5.5.1]
    at org.elasticsearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:170) ~[elasticsearch-5.5.1.jar:5.5.1]
    at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:142) ~[elasticsearch-5.5.1.jar:5.5.1]
    at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:84) ~[elasticsearch-5.5.1.jar:5.5.1]
    at org.elasticsearch.client.node.NodeClient.executeLocally(NodeClient.java:83) ~[elasticsearch-5.5.1.jar:5.5.1]
    at org.elasticsearch.client.node.NodeClient.doExecute(NodeClient.java:72) ~[elasticsearch-5.5.1.jar:5.5.1]
    at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:408) ~[elasticsearch-5.5.1.jar:5.5.1]
    at org.elasticsearch.client.support.AbstractClient.search(AbstractClient.java:535) ~[elasticsearch-5.5.1.jar:5.5.1]
    at org.elasticsearch.rest.action.search.RestSearchAction.lambda$prepareRequest$1(RestSearchAction.java:78) ~[elasticsearch-5.5.1.jar:5.5.1]
```
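The "needed [2]" in the ping warning corresponds to the `discovery.zen.minimum_master_nodes` setting: the node refuses to elect a master until it can see that many master-eligible nodes, and until then every request fails with the `state not recovered / initialized` block seen above. A minimal sketch of the relevant elasticsearch.yml fragment (the values shown are illustrative, not the poster's actual configuration):

```yaml
# elasticsearch.yml (illustrative fragment, not the actual file from this cluster)
node.master: true
node.data: true
# The node waits until it can see this many master-eligible nodes before
# electing a master; "needed [2]" in the log means this is currently 2.
discovery.zen.minimum_master_nodes: 2
```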

Hi Zaddsters,

Apparently this problem is related to discovery during pinging; that is, the data nodes cannot find the master node.
What does the configuration file of the nodes look like?

Does ping between the nodes respond?
Does telnet on port 9300 between the nodes respond?

Thanks for getting back - I am able to log into the master and ping other pods. I have also checked that I can ping from pods. However, I cannot telnet to port 9300 or 9200 between nodes.
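The check described above can also be scripted from inside a pod. This is a small sketch using only the Python standard library; `can_connect` is a hypothetical helper, and the pod IP in the comment is taken from the log above, so substitute your own addresses:

```python
import socket

def can_connect(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: check the Elasticsearch HTTP (9200) and transport (9300) ports.
# 100.96.4.6 is the pod IP from the log above; replace with your own pods.
# for port in (9200, 9300):
#     print(port, can_connect("100.96.4.6", port))
```

If 9300 is unreachable pod-to-pod, Zen discovery cannot work no matter what the master settings are, so this is worth ruling out first.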

I have installed Elasticsearch, Fluentd and Kibana - all the resources appear to be running. There is a service for Elasticsearch and a load balancer for Kibana.

This is the error I am getting in Kibana:

Do you have a standard firewall installed which could be blocking port 9300? I know when I install a fresh node I have to run `iptables --flush` before it will join the cluster.

Thanks for getting back - I am running Kubernetes on AWS, so this would be through security groups. I have currently set these to be permissive, so they shouldn't be causing a problem.

Can you post the contents of your elasticsearch.yml file to here please? Redact anything sensitive.

This is the elasticsearch.yaml

RBAC authn and authz:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: elasticsearch-logging
  namespace: kube-system
  labels:
    k8s-app: elasticsearch-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: elasticsearch-logging
  labels:
    k8s-app: elasticsearch-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
rules:
- apiGroups:
  - ""
  resources:
  - "services"
  - "namespaces"
  - "endpoints"
  verbs:
  - "get"
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  namespace: kube-system
  name: elasticsearch-logging
  labels:
    k8s-app: elasticsearch-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
subjects:
- kind: ServiceAccount
  name: elasticsearch-logging
  namespace: kube-system
  apiGroup: ""
roleRef:
  kind: ClusterRole
  name: elasticsearch-logging
  apiGroup: ""
```
The Elasticsearch deployment itself:

```yaml
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: elasticsearch-logging
  namespace: kube-system
  labels:
    k8s-app: elasticsearch-logging
    version: v5.5.1
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  serviceName: elasticsearch-logging
  replicas: 1
  selector:
    matchLabels:
      k8s-app: elasticsearch-logging
      version: v5.5.1
  template:
    metadata:
      labels:
        k8s-app: elasticsearch-logging
        version: v5.5.1
        kubernetes.io/cluster-service: "true"
    spec:
      serviceAccountName: elasticsearch-logging
      containers:
      - image: gcr.io/google-containers/elasticsearch:v5.5.1-1
        name: elasticsearch-logging
        resources:
          # need more cpu upon initialization, therefore burstable class
          limits:
            cpu: 1000m
            memory: 2.5Gi
          requests:
            memory: 1Gi
            cpu: 100m
        ports:
        - containerPort: 9200
          name: db
          protocol: TCP
        - containerPort: 9300
          name: transport
          protocol: TCP
        volumeMounts:
        - name: es-storage
          mountPath: /data
        env:
        - name: "NAMESPACE"
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: "ES_JAVA_OPTS"
          value: "-XX:-AssumeMP"
      # Elasticsearch requires vm.max_map_count to be at least 262144.
      # If your OS already sets up this number to a higher value, feel free
      # to remove this init container.
      initContainers:
      - image: alpine:3.6
        command: ["/sbin/sysctl", "-w", "vm.max_map_count=262144"]
        name: elasticsearch-logging-init
        securityContext:
          privileged: true
  volumeClaimTemplates:
  - metadata:
      name: es-storage
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: standard
      resources:
        requests:
          storage: 8Gi
```

That looks more like the Kubernetes configuration rather than the actual configuration for the Elasticsearch node running inside it. Elasticsearch is clearly looking for more than just one master node, although you state you only have one master node built. So you either have to tell Elasticsearch to look for just one node (which is defined inside the elasticsearch.yml file), or build a couple more master nodes for this cluster. I'm sorry, but I'm not familiar with Kubernetes and how it works; I'm used to plain old ES running on bare-metal servers.

Ok - I'll try creating another master node and hopefully that should solve the problem.

You'll need at least another 2 master nodes, so three in total.
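With the StatefulSet above, that would mean something like the following change. This is a sketch only: whether the gcr.io image reads a `MINIMUM_MASTER_NODES` environment variable depends on that image's startup script, so treat the variable as an assumption and otherwise set `discovery.zen.minimum_master_nodes` in elasticsearch.yml directly:

```yaml
# Sketch of the StatefulSet changes for a three-master cluster (not verified
# against the v5.5.1-1 image's entrypoint).
spec:
  replicas: 3                          # three master-eligible nodes in total
  # ...
        env:
        - name: "MINIMUM_MASTER_NODES" # assumed hook in the image's run script
          value: "2"                   # quorum of 3 masters: (3 / 2) + 1 = 2
```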

Is there a way to change the settings of Elasticsearch so that it only looks for 1 master?

Have a read of this section; it explains how to set the minimum master count. I'm not sure whether it allows setting it to 1 when you have multiple data nodes, but you can try.

https://www.elastic.co/guide/en/elasticsearch/reference/6.0/important-settings.html#minimum_master_nodes
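Per that page, the setting in question is `discovery.zen.minimum_master_nodes` in elasticsearch.yml. For a single-master cluster it would be a one-line sketch like this; note that 1 is only safe when exactly one node is master-eligible, otherwise you risk split-brain:

```yaml
# elasticsearch.yml on each node (sketch)
discovery.zen.minimum_master_nodes: 1  # safe only with a single master-eligible node
```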

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.