Elasticsearch output plugin can't send data

Hi

I upgraded my local ELK stack to 5.0.0. Now Logstash removes the Elasticsearch host URL, so it doesn't push any data to the server.

[2016-11-02T08:57:56,936][DEBUG][org.apache.http.impl.conn.DefaultHttpClientConnectionOperator] Connecting to /127.0.0.1:9200
[2016-11-02T08:57:56,941][DEBUG][org.apache.http.impl.conn.DefaultHttpClientConnectionOperator] Connection established 127.0.0.1:34486<->127.0.0.1:9200
[2016-11-02T08:57:56,941][DEBUG][org.apache.http.impl.conn.DefaultManagedHttpClientConnection] http-outgoing-0: set socket timeout to 60000
[2016-11-02T08:57:56,941][DEBUG][org.apache.http.impl.execchain.MainClientExec] Executing request GET /_nodes HTTP/1.1
[2016-11-02T08:57:56,941][DEBUG][org.apache.http.impl.execchain.MainClientExec] Target auth state: UNCHALLENGED
[2016-11-02T08:57:56,942][DEBUG][org.apache.http.impl.execchain.MainClientExec] Proxy auth state: UNCHALLENGED
[2016-11-02T08:57:56,946][DEBUG][org.apache.http.headers ] http-outgoing-0 >> GET /_nodes HTTP/1.1
[2016-11-02T08:57:56,946][DEBUG][org.apache.http.headers ] http-outgoing-0 >> Connection: Keep-Alive
[2016-11-02T08:57:56,946][DEBUG][org.apache.http.headers ] http-outgoing-0 >> Content-Length: 0
[2016-11-02T08:57:56,946][DEBUG][org.apache.http.headers ] http-outgoing-0 >> Host: 127.0.0.1:9200
[2016-11-02T08:57:56,946][DEBUG][org.apache.http.headers ] http-outgoing-0 >> User-Agent: Manticore 0.6.0
[2016-11-02T08:57:56,946][DEBUG][org.apache.http.headers ] http-outgoing-0 >> Accept-Encoding: gzip,deflate
[2016-11-02T08:57:56,946][DEBUG][org.apache.http.wire ] http-outgoing-0 >> "GET /_nodes HTTP/1.1[\r][\n]"
[2016-11-02T08:57:56,946][DEBUG][org.apache.http.wire ] http-outgoing-0 >> "Connection: Keep-Alive[\r][\n]"
[2016-11-02T08:57:56,946][DEBUG][org.apache.http.wire ] http-outgoing-0 >> "Content-Length: 0[\r][\n]"
[2016-11-02T08:57:56,946][DEBUG][org.apache.http.wire ] http-outgoing-0 >> "Host: 127.0.0.1:9200[\r][\n]"
[2016-11-02T08:57:56,946][DEBUG][org.apache.http.wire ] http-outgoing-0 >> "User-Agent: Manticore 0.6.0[\r][\n]"
[2016-11-02T08:57:56,949][DEBUG][org.apache.http.wire ] http-outgoing-0 >> "Accept-Encoding: gzip,deflate[\r][\n]"
[2016-11-02T08:57:56,949][DEBUG][org.apache.http.wire ] http-outgoing-0 >> "[\r][\n]"
[2016-11-02T08:57:56,954][DEBUG][org.apache.http.wire ] http-outgoing-0 << "HTTP/1.1 200 OK[\r][\n]"
[2016-11-02T08:57:56,955][DEBUG][org.apache.http.wire ] http-outgoing-0 << "content-type: application/json; charset=UTF-8[\r][\n]"
[2016-11-02T08:57:56,955][DEBUG][org.apache.http.wire ] http-outgoing-0 << "content-encoding: gzip[\r][\n]"
[2016-11-02T08:57:56,956][DEBUG][org.apache.http.wire ] http-outgoing-0 << "transfer-encoding: chunked[\r][\n]"
[2016-11-02T08:57:56,956][DEBUG][org.apache.http.wire ] http-outgoing-0 << "[\r][\n]"
[2016-11-02T08:57:56,956][DEBUG][org.apache.http.wire ] http-outgoing-0 << "6d7[\r][\n]"
[ .... ]
[2016-11-02T08:57:56,956][DEBUG][org.apache.http.wire ] http-outgoing-0 << "a[\r][\n]"
[2016-11-02T08:57:56,956][DEBUG][org.apache.http.wire ] http-outgoing-0 << "[0x3][0x0]M[0xea]sY[0xc2][0x13][0x0][0x0][\r][\n]"
[2016-11-02T08:57:56,956][DEBUG][org.apache.http.wire ] http-outgoing-0 << "0[\r][\n]"
[2016-11-02T08:57:56,956][DEBUG][org.apache.http.wire ] http-outgoing-0 << "[\r][\n]"
[2016-11-02T08:57:56,962][DEBUG][org.apache.http.headers ] http-outgoing-0 << HTTP/1.1 200 OK
[2016-11-02T08:57:56,962][DEBUG][org.apache.http.headers ] http-outgoing-0 << content-type: application/json; charset=UTF-8
[2016-11-02T08:57:56,962][DEBUG][org.apache.http.headers ] http-outgoing-0 << content-encoding: gzip
[2016-11-02T08:57:56,962][DEBUG][org.apache.http.headers ] http-outgoing-0 << transfer-encoding: chunked

An initial connection is successful; after that it logs:

[2016-11-02T08:57:56,969][DEBUG][org.apache.http.impl.execchain.MainClientExec] Connection can be kept alive indefinitely
[2016-11-02T08:57:56,985][DEBUG][org.apache.http.impl.conn.PoolingHttpClientConnectionManager] Connection [id: 0][route: {}->http://127.0.0.1:9200] can be kept alive indefinitely
[2016-11-02T08:57:56,985][DEBUG][org.apache.http.impl.conn.PoolingHttpClientConnectionManager] Connection released: [id: 0][route: {}->http://127.0.0.1:9200][total kept alive: 1; route allocated: 1 of 100; total allocated: 1 of 1000]
[2016-11-02T08:57:57,033][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>["http://127.0.0.1:9200"], :added=>[]}}

Can somebody tell me what is going wrong?

What do you mean by this?

What does your config look like?

I mean that I used the Debian repository to upgrade Logstash, Elasticsearch, and Kibana.
Logstash removes the configured URL (you can see it in the log):

[2016-11-02T08:57:57,033][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>["http://127.0.0.1:9200"], :added=>[]}}

My configuration looks like this:

input {
  file {
    path => "/data/logs/192.168.3.1/*local1.log.2016-10-26"
  }
}
filter {
  grok {
    break_on_match => true
    match => { "message" => [
      "%{WORD:node} %{WORD:device_name} XTM_5_Series (%{TIMESTAMP_ISO8601:timestamp}) firewall: msg_id=%{QS:msg_id} %{WORD:Action} %{DATA:source_network} %{DATA:dest_network} %{NUMBER:size} (?<protocol>tcp|udp) %{NUMBER} %{NUMBER:ttl} %{IP:source_addr} %{IP:dest_addr} %{NUMBER:source_port} %{NUMBER:dest_port} offset",
      "%{WORD:node} %{WORD:device_name} XTM_5_Series (%{TIMESTAMP_ISO8601:timestamp}) http-proxy[%{NUMBER}]: msg_id=%{QS:msg_id} %{WORD:Action} %{DATA:source_network} %{DATA:dest_network} (?<protocol>tcp|udp) %{IP:source_addr} %{IP:dest_addr} %{NUMBER:source_port} %{NUMBER:dest_port} msg=%{QS:http_msg} proxy_act=%{QS:proxy_act} op=%{QS:http_operation} dstname=%{QS:host} arg=%{QS:URI} sent_bytes=\"%{NUMBER:sent:int}\" rcvd_bytes=\"%{NUMBER:received:int}\" elapsed_time=\"%{NUMBER:httptime:float} sec(s)\"",
      "%{WORD:node} %{WORD:device_name} XTM_5_Series (%{TIMESTAMP_ISO8601:timestamp}) http-proxy[%{NUMBER}]: msg_id=%{QS:msg_id} %{WORD:Action} %{DATA:source_network} %{DATA:dest_network} (?<protocol>tcp|udp) %{IP:source_addr} %{IP:dest_addr} %{NUMBER:source_port} %{NUMBER:dest_port} msg=%{QS:http_msg} proxy_act=%{QS:proxy_act} rule_name=%{QS:rule_name} content_type=%{QS:contenttype}"
    ] }
  }
  geoip {
    source => "source_addr"
    target => "source_loc"
  }
  geoip {
    source => "dest_addr"
    target => "dest_loc"
  }
}
output {
  elasticsearch {
    hosts => [ "http://127.0.0.1:9200" ]
    sniffing => true
    ssl => false
    ssl_certificate_verification => false
    manage_template => false
    index => "watchguard-%{+YYY.MM.DD}"
  }
}

When the ES output starts, it adds the URL you specify to an initial list of URLs, and then it starts sniffing because your config says it should.
During sniffing it queries ES through your URL http://127.0.0.1:9200 with a GET to the _nodes endpoint, then checks the reply by iterating over each node map looking for its http_address key. Any node without a value for this key is assumed to have HTTP disabled and is skipped.
It then deletes all initial URLs that were not successfully sniffed.
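
You can see what the sniffer sees by querying that endpoint yourself:

GET http://127.0.0.1:9200/_nodes/http

(returns JSON with each node's http section). One caveat, from memory rather than from your output: Elasticsearch 5.x appears to report the address under http.publish_address rather than a top-level http_address key, so a sniffer looking for the old key would treat every node as having HTTP disabled and remove all URLs, which matches the :removed line in your log.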

@BADMAN - Questions you need to answer:

  • Do you have an Elasticsearch cluster?
  • If so, what types of nodes do you have in the cluster?
  • If not, disable sniffing and retry (see the sketch below).
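
For the last point, a minimal sketch of the change, assuming the rest of your output block stays as posted:

output {
  elasticsearch {
    hosts => [ "http://127.0.0.1:9200" ]
    sniffing => false   # keep using the listed hosts instead of sniffed ones
  }
}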

OK, the test machine isn't a cluster, but this config worked well with Logstash 2.4, so I thought it should work here too. After I changed sniffing to false, Logstash seems to run fine, but it doesn't create any entries/indexes in Elasticsearch.

Are you looking to have a 3-digit year or is that a typo?

What does the metrics API say? Logstash 5 has a small HTTP server that you can query.
GET http://localhost:9600/_node/_stats/_pipeline -> returns JSON (like ES does)

The 3-digit year is just a typo.

The full path returns a 404.
http://localhost:9600/_pipeline returns 404 too.

http://localhost:9600/_node
returns

{"host":"logserver","version":"5.0.0","http_address":"127.0.0.1:9600","pipeline":{"workers":2,"batch_size":125,"batch_delay":5,"config_reload_automatic":false,"config_reload_interval":3},"os":{"name":"Linux","arch":"amd64","version":"4.4.0-45-generic","available_processors":2},"jvm":{"pid":30470,"version":"1.8.0_91","vm_name":"OpenJDK 64-Bit Server VM","vm_version":"1.8.0_91","vm_vendor":"Oracle Corporation","start_time_in_millis":1478092562095,"mem":{"heap_init_in_bytes":1073741824,"heap_max_in_bytes":1056309248,"non_heap_init_in_bytes":2555904,"non_heap_max_in_bytes":0},"gc_collectors":["ParNew","ConcurrentMarkSweep"]}}

http://localhost:9600/_stats
returns

{"host":"logserver","version":"5.0.0","http_address":"127.0.0.1:9600","events":{"in":null,"filtered":0,"out":0,"duration_in_millis":null},"jvm":{"timestamp":1478092562095,"uptime_in_millis":2768764,"memory":{"heap_used_in_bytes":330004528,"heap_used_percent":15,"heap_committed_in_bytes":2112618496,"heap_max_in_bytes":2112618496,"non_heap_used_in_bytes":182081088,"non_heap_committed_in_bytes":193454080,"pools":{"survivor":{"peak_used_in_bytes":17432576,"used_in_bytes":19162824,"peak_max_in_bytes":17432576,"max_in_bytes":34865152,"committed_in_bytes":34865152},"old":{"peak_used_in_bytes":106121544,"used_in_bytes":165740928,"peak_max_in_bytes":899284992,"max_in_bytes":1798569984,"committed_in_bytes":1798569984},"young":{"peak_used_in_bytes":139591680,"used_in_bytes":145100776,"peak_max_in_bytes":139591680,"max_in_bytes":279183360,"committed_in_bytes":279183360}}}}}

Found the problem: Logstash seems to require an actively written log file. So is it possible to "import"/use old files?

Of course. Depending on the version of Logstash you may need to set the file input's ignore_older option to 0 (zero), or you will have to set start_position => beginning. Additionally, make sure the file's position in the sincedb doesn't point to the end of the file. Check the file input's documentation.
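
A minimal sketch of such a file input, reusing the path from the config above (ignore_older => 0 follows the note above, and sincedb_path => "/dev/null" is just one common way to make Logstash forget read positions between runs; verify both against the docs for your version):

input {
  file {
    path => "/data/logs/192.168.3.1/*local1.log.2016-10-26"
    start_position => "beginning"   # read existing files from the top
    ignore_older => 0               # don't skip the file because of its age
    sincedb_path => "/dev/null"     # don't remember read positions between runs
  }
}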