Logstash and Elasticsearch

(rachd) #1

Hi there,
I am using Elasticsearch and Logstash, both version 5.4.0, and Filebeat version 5.4.1. Here is the configuration, starting with elasticsearch.yml:
cluster.name: elasticsearch
node.name: node1
node.attr.rack: r1
path.data: /path/to/data
path.logs: /path/to/logs
bootstrap.memory_lock: true
http.port: 9200
discovery.zen.ping.unicast.hosts: ["", ""]
discovery.zen.minimum_master_nodes: 3
gateway.recover_after_nodes: 3
action.destructive_requires_name: true
And the filebeat.yml prospector and output sections:
filebeat.prospectors:
- input_type: log
  paths:
    - c:\logstash-tutorial.log
output.logstash:
  hosts: ["localhost:5044"]
The Logstash pipeline configuration:
input {
  beats {
    port => "5044"
  }
}
filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  geoip {
    source => "clientip"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
  stdout { codec => rubydebug }
}
Note: Logstash and Elasticsearch are running fine on ports 6900 and 9200, but I get these errors:
For Filebeat: ERR Connecting error publishing events (retrying): dial tcp connectex: No connection could be made because the target machine actively refused it.
For Logstash: error=>"Got response code '503' contacting Elasticsearch at URL 'http://localhost:9200/'"
How can I solve this problem, please?

(Christian Dahlqvist) #2

This appears incorrect. The unicast list should hold the hostnames and transport ports of the other nodes in the cluster so that the nodes can find each other. It is common to list the host and port of all master-eligible nodes in the cluster. Port 9200 is however the HTTP port, so this should instead be the transport port 9300, which is the default.

How many nodes do you have in the cluster? minimum_master_nodes should be set to floor(N/2)+1, where N is the number of master-eligible nodes in the cluster. Setting it to 3 would therefore be appropriate only if you have 4 or 5 master-eligible nodes.
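For a concrete picture of the two points above, here is a minimal elasticsearch.yml sketch for a three-node cluster. The hostnames es-node1 through es-node3 are placeholders, not taken from the original post:

```yaml
# elasticsearch.yml — sketch for a cluster of 3 master-eligible nodes
cluster.name: elasticsearch

# Nodes find each other over the transport port 9300 (not the HTTP port 9200)
discovery.zen.ping.unicast.hosts: ["es-node1:9300", "es-node2:9300", "es-node3:9300"]

# floor(3/2) + 1 = 2 for three master-eligible nodes
discovery.zen.minimum_master_nodes: 2
```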

(rachd) #3

@Christian_Dahlqvist, thank you so much for your answer. The discovery.zen.ping.unicast.hosts: ["", ""] entries are the hosts Elasticsearch is running on. As for minimum_master_nodes, excuse me, I am a beginner with LS and ES. What should I change?

(Christian Dahlqvist) #4

Do you have multiple nodes running on the same host? If so, how many?

If the nodes are running on different hosts, they need to bind to a publicly reachable IP rather than the loopback address (which is not accessible from other hosts), and it is this public IP that needs to go into the unicast host list.
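As an illustration of that binding rule, a sketch of the relevant elasticsearch.yml lines; the addresses 192.168.1.10–12 are hypothetical examples, not values from this thread:

```yaml
# Bind this node to an address that other hosts can actually reach
network.host: 192.168.1.10

# List the reachable transport addresses of the other nodes, not loopback
discovery.zen.ping.unicast.hosts: ["192.168.1.11:9300", "192.168.1.12:9300"]
```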

(rachd) #5

I have just one ES node running.

(Christian Dahlqvist) #6

Then you do not need to populate the unicast list as the node has nothing to connect to.

You should also either remove the discovery.zen.minimum_master_nodes and gateway.recover_after_nodes settings or set them to 1.
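Putting the advice above together, a minimal single-node elasticsearch.yml might look like this (a sketch, assuming the node and cluster names from the original post):

```yaml
# elasticsearch.yml — minimal single-node setup
cluster.name: elasticsearch
node.name: node1

# No unicast list needed: there are no other nodes to discover
# With a single master-eligible node, floor(1/2) + 1 = 1
discovery.zen.minimum_master_nodes: 1
gateway.recover_after_nodes: 1
```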

(rachd) #7

Thank you so much, it is working. But I get this error when I run curl -XGET "localhost:9200/logstash-$DATE/_search?pretty&q=response=200":
"root_cause" : [
  {
    "type" : "index_not_found_exception",
    "reason" : "no such index",
    "resource.type" : "index_or_alias",
    "resource.id" : "logstash-$DATE",
    "index_uuid" : "_na_",
    "index" : "logstash-$DATE"
  }
]

(Christian Dahlqvist) #8

It is apparent from the error message that $DATE does not evaluate to a valid date. What happens if you correctly specify the full index name, or perhaps use a wildcard, e.g. logstash-*?
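A sketch of both options, assuming the default Logstash daily index naming pattern logstash-YYYY.MM.DD and a cluster reachable on localhost:9200:

```shell
# Fill $DATE with today's date in the default Logstash index-name format
DATE=$(date +%Y.%m.%d)
curl -XGET "localhost:9200/logstash-$DATE/_search?pretty&q=response=200"

# Or sidestep the date entirely with a wildcard over all logstash-* indices
curl -XGET "localhost:9200/logstash-*/_search?pretty&q=response=200"

# To see which indices actually exist:
curl -XGET "localhost:9200/_cat/indices?v"
```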

(rachd) #9

It works now, thank you very much.

(system) #10

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.