Logstash not importing data even though the logs look absolutely fine

(Kashi Pashoria Tonny) #1

I'm using Elasticsearch 6.6.0 and the same version of Logstash to import a CSV file. I wrote a config file for Logstash as it should be, and the logs look absolutely fine. The final log lines are as follows:

[2019-02-12T19:52:40,089][INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x94268e8 run>"}

[2019-02-12T19:52:40,172][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}

[2019-02-12T19:52:40,197][INFO ][filewatch.observingtail ] START, creating Discoverer, Watch with file and sincedb collections

[2019-02-12T19:52:40,610][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}

After these logs, it just doesn't do anything and doesn't print a single further line.
I need help quickly; I've already spent two days trying to sort this out.

#2

What does the configuration look like? If you are just using file inputs, then running with --log.level trace will show you what the filewatchers are doing.
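That invocation would look something like this (the config file path here is just a placeholder for your own):

```shell
# Run Logstash against your pipeline config with trace-level logging,
# so the filewatch internals (file discovery, sincedb reads/writes) are printed.
bin/logstash -f /path/to/your/pipeline.conf --log.level trace
```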

(Kashi Pashoria Tonny) #3

I ran Logstash with --debug and it shows the ending logs as follows:

[2019-02-12T22:44:15,674][INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x58a4f041 run>"}

[2019-02-12T22:44:15,754][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}

[2019-02-12T22:44:15,767][INFO ][filewatch.observingtail ] START, creating Discoverer, Watch with file and sincedb collections

[2019-02-12T22:44:15,791][DEBUG][logstash.agent ] Starting puma

[2019-02-12T22:44:15,808][DEBUG][logstash.agent ] Trying to start WebServer {:port=>9600}

[2019-02-12T22:44:15,874][DEBUG][logstash.api.service ] [api-service] start

[2019-02-12T22:44:16,128][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}

[2019-02-12T22:44:17,102][DEBUG][logstash.instrument.periodicpoller.cgroup] One or more required cgroup files or directories not found: /proc/self/cgroup, /sys/fs/cgroup/cpuacct, /sys/fs/cgroup/cpu

[2019-02-12T22:44:17,470][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}

[2019-02-12T22:44:17,473][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}

[2019-02-12T22:44:20,691][DEBUG][logstash.pipeline ] Pushing flush onto pipeline {:pipeline_id=>"main", :thread=>"#<Thread:0x58a4f041 sleep>"}

[2019-02-12T22:44:22,106][DEBUG][logstash.instrument.periodicpoller.cgroup] One or more required cgroup files or directories not found: /proc/self/cgroup, /sys/fs/cgroup/cpuacct, /sys/fs/cgroup/cpu

[2019-02-12T22:44:22,478][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}

[2019-02-12T22:44:22,478][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}

[2019-02-12T22:44:25,692][DEBUG][logstash.pipeline ] Pushing flush onto pipeline {:pipeline_id=>"main", :thread=>"#<Thread:0x58a4f041 sleep>"}

[2019-02-12T22:44:27,108][DEBUG][logstash.instrument.periodicpoller.cgroup] One or more required cgroup files or directories not found: /proc/self/cgroup, /sys/fs/cgroup/cpuacct, /sys/fs/cgroup/cpu

and it just goes on repeating this. I don't understand the messages about cgroup files or directories.

#4

As I said, use --log.level trace, not --log.level debug. You can ignore the cgroup messages.
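For what it's worth, one common cause of this exact symptom (pipeline and filewatch start cleanly, then nothing) is that the file input has already recorded the file's position in its sincedb, so it thinks the CSV was already read. This thread doesn't confirm that was the cause here, but a minimal sketch of a CSV pipeline that forces a re-read would look like this (paths, column names, and index name are hypothetical):

```
input {
  file {
    path => "/path/to/data.csv"
    # Read from the top of the file rather than tailing new lines only.
    start_position => "beginning"
    # Discard position tracking so the file is re-read on every run
    # (useful while testing; on Windows use "NUL" instead of /dev/null).
    sincedb_path => "/dev/null"
  }
}
filter {
  csv {
    separator => ","
    columns => ["col1", "col2", "col3"]
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "my-csv-index"
  }
}
```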

(Kashi Pashoria Tonny) #5

Yes, I figured it out by searching through the logs that were printed using --log.

Thanks for your help.

(system) closed #6

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.