File input not read by Logstash [SOLVED]

Hi, I am new to ELK and I am having some trouble reading a file.

My conf file:
input {
  file {
    path => "/files/nginx_logs"
    start_position => "beginning"
    ignore_older => 0
  }
}
output {
  stdout { }
}

A sample of the nginx log:
80.91.33.133 - - [04/Jun/2015:07:06:16 +0000] "GET /downloads/product_1 HTTP/1.1" 304 0 "-" "Debian APT-HTTP/1.3 (0.8.16~exp12ubuntu10.16)"
144.76.151.58 - - [04/Jun/2015:07:06:05 +0000] "GET /downloads/product_2 HTTP/1.1" 304 0 "-" "Debian APT-HTTP/1.3 (0.9.7.9)"
79.136.114.202 - - [04/Jun/2015:07:06:35 +0000] "GET /downloads/product_1 HTTP/1.1" 404 334 "-" "Debian APT-HTTP/1.3 (0.8.16~exp12ubuntu10.22)"

What am I missing? When I use a stdin input and paste a line of the log file it works, but with the file input
nothing appears in the output. I also tried the rubydebug output codec, with the same result.

That says to ignore any files that are more than zero seconds old. It is unlikely that you want that.
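For reference, `ignore_older` takes a number of seconds, so a more typical setting (one day here, purely as an illustrative value) would look like:

```
file {
  path => "/files/nginx_logs"
  start_position => "beginning"
  ignore_older => 86400  # skip files not modified in the last 24 hours
}
```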

By default a file input tails the files that it reads. Do you get any output if you append a line to /files/nginx_logs?

/files/nginx_logs is a file, right? If it is a directory you would need to use "/files/nginx_logs/*".

I tried without ignore_older but nothing changed.
nginx_logs is indeed a file; I created it manually. I appended a line after launching Logstash but nothing appeared.
I also tried with sincedb_path => "/dev/null"

I see this in the Logstash log:

[WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}

Is it necessary to define a type? I am not sure.
In addition, I run Logstash in Docker with this command:
docker run -d -h logstash --name logstash --link elasticsearch:elasticsearch -v /logstash/:/config-dir docker.elastic.co/logstash/logstash:6.5.4 -f /config-dir/01-test.conf

I would suggest enabling --log.level debug and see what filewatch has to say.
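With the Docker image, extra command-line flags after the image name are passed straight through to Logstash, so (assuming the docker run command quoted above) debug logging can be enabled like this:

```shell
docker run -it --rm -h logstash --name logstash \
  --link elasticsearch:elasticsearch \
  -v /logstash/:/config-dir \
  docker.elastic.co/logstash/logstash:6.5.4 \
  -f /config-dir/01-test.conf --log.level debug
```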

Thanks. I see nothing wrong, but maybe I'm missing something.
My config file now looks like this:

input {
  file {
    path => "/root/ELK/logstash/files/nginx_logs"
    sincedb_path => "/dev/null"
    start_position => "beginning"
  }
}
output {
  stdout { codec => rubydebug }
}

and you can see the entire debug-level log here:
logstash logfile

Hmm. Yes, you get

  [2019-02-03T12:57:48,842][INFO ][filewatch.observingtail  ] START, creating Discoverer, Watch with file and sincedb collections

but do not get a subsequent get chunk message. Can you re-run with trace level instead of debug? Here is what I would expect

[2019-02-03T08:58:37,963][TRACE][filewatch.sincedbcollection] open: reading from /dev/null
[2019-02-03T08:58:37,981][TRACE][filewatch.sincedbcollection] open: count of keys read: 0
[2019-02-03T08:58:38,036][TRACE][filewatch.discoverer     ] discover_files {"count"=>1}
[2019-02-03T08:58:38,235][TRACE][filewatch.discoverer     ] discover_files handling: {"new discovery"=>true, "watched_file details"=>"<FileWatch::WatchedFile: @filename='foo.txt', @state='watched', @recent_states='[:watched]', @bytes_read='0', @bytes_unread='0', current_size='804', last_stat_size='804', file_open?='false', @initial=true, @sincedb_key='53822 0 51714'>"}
[2019-02-03T08:58:38,290][TRACE][filewatch.sincedbcollection] associate: finding {"inode"=>"53822", "path"=>"/home/ec2-user/t.test/foo.txt"}
[2019-02-03T08:58:38,294][TRACE][filewatch.sincedbcollection] associate: unmatched
[2019-02-03T08:58:38,508][TRACE][filewatch.tailmode.processor] Delayed Delete processing
[2019-02-03T08:58:38,514][TRACE][filewatch.tailmode.processor] Watched + Active restat processing
[2019-02-03T08:58:38,584][TRACE][filewatch.tailmode.processor] Rotation In Progress processing
[2019-02-03T08:58:38,614][TRACE][filewatch.tailmode.processor] Watched processing
[2019-02-03T08:58:38,635][TRACE][filewatch.tailmode.handlers.createinitial] handling: foo.txt
[2019-02-03T08:58:38,657][TRACE][filewatch.tailmode.handlers.createinitial] opening foo.txt
[2019-02-03T08:58:38,681][TRACE][filewatch.tailmode.handlers.createinitial] handle_specifically opened file handle: 78, path: foo.txt
[2019-02-03T08:58:38,746][TRACE][filewatch.tailmode.handlers.createinitial] add_new_value_sincedb_collection {"position"=>0, "watched_file details"=>"<FileWatch::WatchedFile: @filename='foo.txt', @state='active', @recent_states='[:watched, :watched]', @bytes_read='0', @bytes_unread='804', current_size='804', last_stat_size='804', file_open?='true', @initial=true, @sincedb_key='53822 0 51714'>"}
[2019-02-03T08:58:38,775][TRACE][filewatch.tailmode.processor] Active - file grew: foo.txt: new size is 804, bytes read 0
[2019-02-03T08:58:38,778][TRACE][filewatch.tailmode.handlers.grow] handling: foo.txt
[2019-02-03T08:58:38,871][TRACE][filewatch.tailmode.handlers.grow] reading... {"iterations"=>1, "amount"=>804, "filename"=>"foo.txt"}
[2019-02-03T08:58:38,873][DEBUG][filewatch.tailmode.handlers.grow] read_to_eof: get chunk

I see discover_files count 0 and I don't understand why.

[2019-02-03T14:21:04,138][TRACE][logstash.inputs.file     ] Registering file input {:path=>["/root/ELK/logstash/files/nginx_logs"]}
[2019-02-03T14:21:04,263][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x701132a3 run>"}
[2019-02-03T14:21:04,330][TRACE][logstash.agent           ] Converge results {:success=>true, :failed_actions=>[], :successful_actions=>["id: main, action_type: LogStash::PipelineAction::Create"]}
[2019-02-03T14:21:04,429][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2019-02-03T14:21:04,462][DEBUG][logstash.config.sourceloader] Adding source {:source=>"#<LogStash::Monitoring::InternalPipelineSource:0x6e2d74a4>"}
[2019-02-03T14:21:04,494][INFO ][filewatch.observingtail  ] START, creating Discoverer, Watch with file and sincedb collections
[2019-02-03T14:21:04,613][DEBUG][logstash.agent           ] Starting agent
[2019-02-03T14:21:04,633][TRACE][filewatch.sincedbcollection] open: reading from /dev/null
[2019-02-03T14:21:04,663][TRACE][filewatch.sincedbcollection] open: count of keys read: 0
[2019-02-03T14:21:04,723][DEBUG][logstash.config.source.local.configpathloader] Skipping the following files while reading config since they don't match the specified glob pattern {:files=>["/config-dir/files", "/config-dir/logstash-sample.conf", "/config-dir/logstash.conf"]}
[2019-02-03T14:21:04,730][DEBUG][logstash.config.source.local.configpathloader] Reading config file {:config_file=>"/config-dir/01-test.conf"}
[2019-02-03T14:21:04,899][TRACE][filewatch.discoverer     ] discover_files {"count"=>0}
[2019-02-03T14:21:04,931][DEBUG][logstash.agent           ] Converging pipelines state {:actions_count=>1}
[2019-02-03T14:21:04,952][DEBUG][logstash.agent           ] Executing action {:action=>LogStash::PipelineAction::Create/pipeline_id:.monitoring-logstash}

Verify that the user running Logstash has access to each component of that path. It is not finding the file; if the file exists, it pretty much has to be a permissions issue.
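One quick way to check is to list every component of the path, since the Logstash user needs execute (`x`) on each directory along the way and read (`r`) on the file itself. A minimal sketch, using a `/tmp` stand-in for the real path:

```shell
# Create a stand-in for the real log path (illustration only).
mkdir -p /tmp/perm-demo/files
echo 'sample' > /tmp/perm-demo/files/nginx_logs

# Walk up from the file, printing the permissions of each component.
dir=/tmp/perm-demo/files/nginx_logs
while [ "$dir" != "/" ]; do
  ls -ld "$dir"
  dir=$(dirname "$dir")
done
```

Run the same loop against the real path; any directory missing `x` for the Logstash user will prevent the file from being discovered.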

I had already set nginx_logs to permission 777, but I will continue to investigate; maybe Docker needs more permissions.

I changed all permissions on the directory with chmod -R 777 /root/ELK/logstash but nothing changed; I still get discover_files count 0.

> root@rancher:~/ELK/logstash# ls -al
> total 24
> drwxrwxrwx 3 root root 4096 févr.  3 15:54 .
> drwxr-xr-x 3 root root 4096 févr.  2 13:34 ..
> -rwxrwxrwx 1 root root  191 févr.  3 15:54 01-test.conf
> drwxrwxrwx 2 root root 4096 févr.  3 15:48 files
> -rwxrwxrwx 1 root root   79 janv. 27 15:20 logstash.conf
> -rwxrwxrwx 1 root root   59 févr.  3 15:00 logstash-sample.conf

Thanks, I found the issue: it was a Docker issue. In my conf file the path was /root/ELK/logstash/files/nginx_logs, but inside the container that path does not exist, because the directory is mounted as /config-dir (-v /root/ELK/logstash/:/config-dir).
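In other words, the container only sees the host directory under its mount point, so paths in the config must use the in-container prefix. A small sketch of the translation, assuming the bind mount `-v /root/ELK/logstash/:/config-dir` from the command below:

```shell
host_root="/root/ELK/logstash"
mount_point="/config-dir"
host_path="/root/ELK/logstash/files/nginx_logs"

# Strip the host prefix and graft the remainder onto the container mount point.
container_path="${mount_point}${host_path#"$host_root"}"
echo "$container_path"   # -> /config-dir/files/nginx_logs
```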

docker run -it --rm -h logstash --name logstash --link elasticsearch:elasticsearch -v /root/ELK/logstash/:/config-dir docker.elastic.co/logstash/logstash:6.5.4 -f /config-dir/01-test.conf

So my config file now looks like this and works perfectly:

input {
  file {
    path => "/config-dir/files/nginx_logs"
    sincedb_path => "/dev/null"
    start_position => "beginning"
  }
}
output {
  stdout { codec => rubydebug }
}

Thanks for your help, have a good day.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.