Filebeat detects file but keeps offset at 0

Hey, recently I had a problem with Elasticsearch where all shards were flagged RED. So I deleted everything from ES, deleted Filebeat's registry file, and restarted everything.

Right now the data is not being indexed in ES. I checked Filebeat's registry file and it shows all files opened, but with the offset at 0 for every file.
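For reference, the registry entries look roughly like this (the file name and FileStateOS values here are made up); an offset of 0 means the harvester never read past the start of the file:

{
  "/root/tracy/logs/app.out": {
    "source": "/root/tracy/logs/app.out",
    "offset": 0,
    "FileStateOS": {
      "inode": 123456,
      "device": 2049
    }
  }
}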

What could be causing this?

EDIT: Also, no Filebeat log file is being generated, unlike before.

I don't know why, but when I tried to run Logstash directly from the bin folder instead of running it as a service, it worked. This is so weird.

EDIT: Back to square one. I reinstalled ES, LS, and Filebeat, and it started working fine, but at some point it stopped processing logs. Again, I deleted everything and restarted, and now it's back to not processing anything :frowning:

Can someone please help me?

EDIT2:
OK, so I found out that if I stop one specific Filebeat prospector, everything works fine. This particular prospector is harvesting a 5 GB log file; any chance this is breaking ELK?

Could you share your config file and some log output from Filebeat? Which versions of Filebeat, LS, and ES are you using?

Filebeat 1.3
Logstash 1.4
Elasticsearch 2.4

Filebeat prospector config:

-
  paths:
    - "/root/tracy/logs/*.out"
  input_type: log

  fields:
    log_type: tracy

  multiline:
    pattern: '^Offset'
    negate: true
    match: after
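For context, this block sits under filebeat: prospectors: in filebeat.yml. A minimal sketch of the whole file, assuming Filebeat ships to the Logstash beats input on localhost:5044 shown further down, would be something like:

filebeat:
  prospectors:
    -
      paths:
        - "/root/tracy/logs/*.out"
      input_type: log
      fields:
        log_type: tracy
      multiline:
        pattern: '^Offset'
        negate: true
        match: after

output:
  logstash:
    hosts: ["localhost:5044"]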

Filebeat Log:
It doesn't even create a log file.
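To get Filebeat 1.x to write its own log file, something like this in filebeat.yml should do it (the path and file name here are just examples):

logging:
  to_files: true
  files:
    path: /var/log/filebeat
    name: filebeat.log
    rotateeverybytes: 10485760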

Logstash config:

input {
    beats {
        port => 5044
    }
}
filter {
    if [fields][log_type] == "tracy" {
        grok {
            match => { "message" => "(?m)%{WORD:check} = %{NUMBER:offset2:int}, %{WORD} = %{WORD:topic2}, %{WORD} = %{WORD:source2}, %{WORD} = %{WORD:type2}, %{WORD} = %{GREEDYDATA:value2}" }
        }
    }
}
output {
    if [fields][log_type] == "tracy" {
        elasticsearch {
            hosts => [ "localhost:9200" ]
            index => "tracy"
        }
    }
}

Logstash Log:

Sending logstash logs to /var/log/logstash/logstash.log.
{:timestamp=>"2016-11-17T00:11:50.044000+0000", :message=>"Pipeline main started"}

OK, I found the problem.

I was sending the output both to Elasticsearch and to a file output; writing that file was what was slowing everything down.
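For anyone hitting the same thing, the slow setup looked roughly like this, with a file output next to the elasticsearch output in the same conditional (the file path is just a placeholder); dropping the file block restored throughput:

output {
    if [fields][log_type] == "tracy" {
        elasticsearch {
            hosts => [ "localhost:9200" ]
            index => "tracy"
        }
        file {
            path => "/var/log/logstash/tracy_copy.log"
        }
    }
}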

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.