No data moving into elasticsearch

Hello all, it was suggested I post this on the filebeat page instead of the elasticsearch page.

I've installed the Elastic Stack (Elasticsearch, Kibana, Logstash, Filebeat) and everything appears to be configured correctly, but I'm running into a few issues: no indices are ever created in Elasticsearch, and Logstash appears not to be processing anything. I've had a user on this forum and a few people on /r/elasticsearch look at my configs/logs, and nobody can see anything out of the ordinary. It's really strange.

All the required info should be in this previous post.

Thanks in advance

Hello @jarnod, I went through the issue quickly. Would you mind doing two things:

  • Share your Filebeat configuration.
  • Share the output of starting Filebeat with the following flags: ./filebeat -e -v -d "*" -c yourconfig.yml

Config pasted here.

Output from that command is here.

From what I see in the Filebeat output, it has read the Logstash logs and successfully sent events to a Logstash instance.


{
  "source": "/var/log/logstash/logstash-plain-2018-04-10.log",
  "offset": 477860,
  "message": "[2018-04-10T12:22:47,051][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({\"type\"=\u003e\"security_exception\", \"reason\"=\u003e\"action [indices:admin/create] is unauthorized for user [kibana]\"})"
}
2018-04-11T16:41:23.656+0100    DEBUG   [logstash]      logstash/async.go:142   2048 events out of 2048 events sent to logstash host 127.0.0.1:5044. Continue sending
2018-04-11T16:41:23.676+0100    DEBUG   [logstash]      logstash/async.go:142   2048 events out of 2048 events sent to logstash host 127.0.0.1:5044. Continue sending

From the last events, is it possible that you have the wrong credentials in your Logstash elasticsearch output?

No, the credentials are definitely correct; they're the same ones I use to log in to the Kibana/Elasticsearch web portals.

By the Elasticsearch web portal I mean hostname:9200/_cat/indices?v; I would expect that to use the same login.

If you start Logstash with --log.level debug, do you see any errors?

A lot of 403 unauthorized errors.

It seems weird that it isn't working, yet I can log into the portals fine with the same creds. I also can't kill the Logstash process now that I've started it with that command. Will running kill -9 break anything?

You can kill it with repeated Ctrl-C presses or kill -9. The user you are using to connect to Elasticsearch doesn't have the privileges to create new indices; I suggest you create a Logstash user with the right permissions.

Error in the log:
action [indices:admin/create] is unauthorized for user [kibana]
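For illustration, a dedicated role and user for Logstash could be created through the X-Pack security API along these lines. This is a sketch based on the 6.x security APIs; the role name `logstash_writer`, the user name `logstash_internal`, the password, and the index patterns are placeholders to adapt to your setup:

```shell
# Create a role allowed to create and write the indices Logstash manages.
# Run as a superuser (e.g. the built-in "elastic" user); adjust host/names/password.
curl -u elastic -XPOST 'localhost:9200/_xpack/security/role/logstash_writer' \
  -H 'Content-Type: application/json' -d '{
  "cluster": ["manage_index_templates", "monitor"],
  "indices": [
    {
      "names": ["logstash-*", "filebeat-*"],
      "privileges": ["write", "create_index"]
    }
  ]
}'

# Create a user with that role, then point the "user"/"password" options of the
# Logstash elasticsearch output at it instead of the built-in kibana account.
curl -u elastic -XPOST 'localhost:9200/_xpack/security/user/logstash_internal' \
  -H 'Content-Type: application/json' -d '{
  "password": "changeme",
  "roles": ["logstash_writer"]
}'
```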

Strangely enough it has worked once; I've got a single Filebeat index in Elasticsearch, as indicated by the following line:
yellow open filebeat-6.2.3-2018.03.26 nVBIvoCYTVS8E-BoQzdBsw 3 1 532672 0 57.3mb 57.3mb

The creds would have been the same; I only use the "kibana" account, which was auto-generated by X-Pack. If these accounts don't have the proper permissions, it might be an idea to add that to the documentation.

I'll try creating the account now anyway and seeing if that works.

I took a quick look at https://www.elastic.co/guide/en/x-pack/current/index.html; we don't mention Logstash there, and we should.

Depending on the permissions of the user, you might only have the right to create indices that follow a certain format, like kibana-.

Concerning the Filebeat index: did you run ES and Filebeat before installing X-Pack? The getting-started experience doesn't use credentials by default.

No, I wouldn't have launched it. The documentation I was linked to by other users is this page, where installing X-Pack is one of the steps. I was under the impression X-Pack was necessary for monitoring, grok, all that good stuff?

This is weird; we do mention it in the instructions for installing X-Pack into the stack, but I will check if we can make it clearer. Did you solve your permissions issue?

action.auto_create_index: .security,.monitoring*,.watches,.triggered_watches,.watcher-history*,.ml*

If you are using Logstash or Beats then you will most likely require additional index names in your action.auto_create_index setting, and the exact value will depend on your local configuration. If you are unsure of the correct value for your environment, you may consider setting the value to * which will allow automatic creation of all indices.
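As a sketch of that advice, the setting in elasticsearch.yml could list the X-Pack internal indices plus the default Beats/Logstash patterns (the filebeat-* and logstash-* patterns here are assumptions based on the default index names; adjust to your naming):

```yaml
# elasticsearch.yml: allow auto-creation for X-Pack internals plus Beats/Logstash indices,
# without opening it up to every index name ("*").
action.auto_create_index: .security,.monitoring*,.watches,.triggered_watches,.watcher-history*,.ml*,filebeat-*,logstash-*
```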

I haven't managed to solve the permissions yet, no; that's just down to me never having used curl before and still working out the command for it. I've also only just got back into work after last night.

The value for action.auto_create_index: was set to true (I was playing around with settings as suggested by another user who wasn't very familiar with them). I've just changed that back to *, like I had it set before, and now get this when trying to launch Elasticsearch:

https://pastebin.com/f9dT9cyk

Edit: I've changed that to .*, which gives me the error below:

[2018-04-12T09:48:18,385][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>404, :action=>["index", {:_id=>nil, :_index=>"filebeat-6.2.3-2018.04.12", :_type=>"doc", :_routing=>nil}, #<LogStash::Event:0x27ba0465>], :response=>{"index"=>{"_index"=>"filebeat-6.2.3-2018.04.12", "_type"=>"doc", "_id"=>nil, "status"=>404, "error"=>{"type"=>"index_not_found_exception", "reason"=>"no such index and [action.auto_create_index] ([.*]) doesn't match", "index_uuid"=>"_na_", "index"=>"filebeat-6.2.3-2018.04.12"}}}}

Stopping the Elasticsearch service and starting it up again, regardless of any settings changes, causes Kibana to output the following error:

{"statusCode":500,"error":"Internal Server Error","message":"An internal server error occurred"}

This appears to be resolved by setting auto_create_index to .* instead of -*.

Checking the Elasticsearch status after stopping the service gives the message:
elasticsearch dead but subsys locked

I'm going to have another go at assigning those permissions and see where we get to. I'm not sure why Elasticsearch is now unhappy with being stopped and started; I'm having to delete the elasticsearch file in /var/lock/subsys every time I stop the service, or it gets unhappy.

Deleting the auto_create_index line and disabling authentication has resolved the issue. Now I'm having an issue where Filebeat is opening the path to my log file but doesn't appear to be doing anything with it. Logstash has also not appeared on the monitoring tab of the Kibana dashboard. Any ideas?

I think the behavior you are experiencing is the following:

  1. Filebeat already read the log files.
  2. It sent the data to Logstash.
  3. Filebeat recorded the read offsets on disk in its registry.
  4. Filebeat assumes it doesn't need to read the files again and waits for new content.

Since you had configuration issues with Logstash and the persistent queue in Logstash was not turned on, the events were only in memory, so they were never sent to ES and were lost when you restarted Logstash.

I think the best course of action is to clean the Filebeat registry and restart Filebeat.
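A minimal sketch of that cleanup, assuming a package (RPM/DEB) install where the registry lives under /var/lib/filebeat; adjust the paths and service commands for your layout:

```shell
# Stop Filebeat so it doesn't rewrite the registry while we work.
sudo service filebeat stop

# Back up the registry, then remove it so Filebeat re-reads the files from the start.
sudo cp /var/lib/filebeat/registry /var/lib/filebeat/registry.bak
sudo rm /var/lib/filebeat/registry

# Restart; Filebeat will now ship the log files again from offset 0.
sudo service filebeat start
```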

I cleaned the registry by making a backup and deleting the original file; hope that's correct.

Still no Logstash index in Elasticsearch. I imagine it's perhaps something in my Logstash filter? Another user said it looked okay, but I'm not sure. Mind taking a look for me?

Link

I think we should go by elimination: can you try a config with only the beats input and the elasticsearch output, without any filters?


Alright, I've set my logstash.conf to the following:

input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => "172.19.32.154:9200"
  }
}

Still no Logstash index in Kibana, nor is there a Logstash section on the monitoring page.

I've pasted my logstash.yml here.

pipelines.yml is as follows:

# This file is where you define your pipelines. You can define multiple.
# For more information on multiple pipelines, see the documentation:
#   https://www.elastic.co/guide/en/logstash/current/multiple-pipelines.html

- pipeline.id: "main2"
  path.config: "/etc/logstash/logstash.conf"

Thanks for your help so far, I really appreciate it.

Never mind, I've worked it out: the log I was watching wasn't being updated, so Filebeat was closing the harvester due to the file being inactive. Thank you again for all of your help!
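For anyone hitting the same thing: the harvester close behaviour is tunable per prospector in filebeat.yml. A sketch using the 6.x config layout; the path and values are illustrative (the default close_inactive is 5m, scan_frequency 10s):

```yaml
filebeat.prospectors:
  - type: log
    paths:
      - /var/log/logstash/logstash-plain-*.log
    # Close the harvester once the file hasn't changed for this long; Filebeat
    # reopens it and resumes from the registry offset when new lines appear.
    close_inactive: 10m
    # How often Filebeat checks the configured paths for new or updated files.
    scan_frequency: 10s
```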
