One or more required cgroup files or directories not found

ENV:

CentOS release 6.8 (Final) (2.6.32-642.el6.x86_64)
Logstash 6.4.2 

Logstash pipeline:
INPUT: from Kafka 6.4.2
OUTPUT: to Elasticsearch 6.4.2

There is some output to ES when I start Logstash with /usr/share/logstash/bin/logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/test.conf, but there is no output at all after a few minutes.
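
While it looks stuck, the Logstash monitoring API (listening on port 9600 by default) can show whether events are still flowing. This is just a diagnostic suggestion, assuming the API port has not been changed:

# Poll the pipeline event counters a couple of times; if the "out" counter
# stops increasing between calls, the pipeline really has stalled rather
# than just logging quietly.
curl -s 'http://localhost:9600/_node/stats/pipelines?pretty'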

After switching the log level to DEBUG, the Logstash log keeps printing:

[2018-10-19T17:49:59,396][DEBUG][logstash.instrument.periodicpoller.cgroup] One or more required cgroup files or directories not found: /proc/self/cgroup, /sys/fs/cgroup/cpuacct, /sys/fs/cgroup/cpu
[2018-10-19T17:50:01,349][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2018-10-19T17:50:01,349][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
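
As far as I can tell, this cgroup line is only DEBUG output from the periodic metrics poller saying that the cpu/cpuacct cgroup paths it probes for CPU statistics are missing, which is normal on an old kernel like this one. A quick check of the paths named in the message can confirm that on the CentOS host (this only verifies they are absent; it does not change Logstash behaviour):

# The DEBUG message lists these paths; on CentOS 6 the cgroup hierarchy, if
# mounted at all, usually lives under /cgroup rather than /sys/fs/cgroup.
cat /proc/self/cgroup
ls -d /sys/fs/cgroup/cpu /sys/fs/cgroup/cpuacct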

I had the same problem with the same versions of ES and Logstash in a Windows 10 environment.

The first run loaded the logs into ES; after that the log is just stuck in this loop:

[2018-10-29T17:09:54,044][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2018-10-29T17:09:54,807][DEBUG][logstash.instrument.periodicpoller.cgroup] One or more required cgroup files or directories not found: /proc/self/cgroup, /sys/fs/cgroup/cpuacct, /sys/fs/cgroup/cpu
[2018-10-29T17:09:55,205][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2018-10-29T17:09:55,205][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[2018-10-29T17:09:58,388][DEBUG][logstash.pipeline ] Pushing flush onto pipeline {:pipeline_id=>"main", :thread=>"#<Thread:0x5d590346 sleep>"}
[2018-10-29T17:09:59,815][DEBUG][logstash.instrument.periodicpoller.cgroup] One or more required cgroup files or directories not found: /proc/self/cgroup, /sys/fs/cgroup/cpuacct, /sys/fs/cgroup/cpu
[2018-10-29T17:10:00,219][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2018-10-29T17:10:00,219][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[2018-10-29T17:10:03,404][DEBUG][logstash.pipeline ] Pushing flush onto pipeline {:pipeline_id=>"main", :thread=>"#<Thread:0x5d590346 sleep>"}
[2018-10-29T17:10:04,818][DEBUG][logstash.instrument.periodicpoller.cgroup] One or more required cgroup files or directories not found: /proc/self/cgroup, /sys/fs/cgroup/cpuacct, /sys/fs/cgroup/cpu
[2018-10-29T17:10:05,239][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2018-10-29T17:10:05,239][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[2018-10-29T17:10:08,410][DEBUG][logstash.pipeline ] Pushing flush onto pipeline {:pipeline_id=>"main", :thread=>"#<Thread:0x5d590346 sleep>"}

My workaround was to downgrade to Logstash 6.2.4.

I can load the logs the first time, but when I load the same file again into a different index, I get the same loop.

I think it is a bug in Logstash 6.4.2! :slightly_frowning_face:

I suggest downgrading to a lower version of Logstash, e.g. 6.2.4.

I think it is a bug; the same setup runs successfully on 6.2.4:

C:\elastic\logstash-6.2.4>.\bin\logstash -f c:\elastic\asalogs\asa4.conf
Sending Logstash's logs to C:/elastic/logstash-6.2.4/logs which is now configured via log4j2.properties
[2018-10-30T14:54:55,311][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"fb_apache", :directory=>"C:/elastic/logstash-6.2.4/modules/fb_apache/configuration"}
[2018-10-30T14:54:55,342][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"netflow", :directory=>"C:/elastic/logstash-6.2.4/modules/netflow/configuration"}
[2018-10-30T14:54:55,608][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2018-10-30T14:54:56,405][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"6.2.4"}
[2018-10-30T14:54:57,280][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2018-10-30T14:55:01,654][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2018-10-30T14:55:02,280][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2018-10-30T14:55:02,280][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
[2018-10-30T14:55:02,499][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2018-10-30T14:55:02,592][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2018-10-30T14:55:02,592][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the type event field won't be used to determine the document _type {:es_version=>6}
[2018-10-30T14:55:02,624][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2018-10-30T14:55:02,655][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2018-10-30T14:55:02,749][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://localhost:9200"]}
[2018-10-30T14:55:04,873][INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x8412446 run>"}
[2018-10-30T14:55:05,127][INFO ][logstash.agent ] Pipelines running {:count=>1, :pipelines=>["main"]}
...................................................................................
[2018-10-30T15:06:52,873][WARN ][logstash.runner ] SIGINT received. Shutting down.
[2018-10-30T15:06:53,884][INFO ][logstash.pipeline ] Pipeline has terminated {:pipeline_id=>"main", :thread=>"#<Thread:0x8412446 run>"}
Terminate batch job (Y/N)? y

The same log file and the same conf fail on 6.4.2:

C:\elastic\logstash-6.2.4>cd ..

C:\elastic>cd logstash-6.4.2

C:\elastic\logstash-6.4.2>.\bin\logstash -f c:\elastic\asalogs\asa4.conf
Sending Logstash logs to C:/elastic/logstash-6.4.2/logs which is now configured via log4j2.properties
[2018-10-30T15:08:32,524][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2018-10-30T15:08:33,658][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"6.4.2"}
[2018-10-30T15:08:38,863][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2018-10-30T15:08:39,707][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2018-10-30T15:08:47,503][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
[2018-10-30T15:08:47,825][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2018-10-30T15:08:47,896][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2018-10-30T15:08:47,906][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the type event field won't be used to determine the document _type {:es_version=>6}
[2018-10-30T15:08:48,014][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://localhost:9200"]}
[2018-10-30T15:08:48,067][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2018-10-30T15:08:48,149][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2018-10-30T15:08:49,710][INFO ][logstash.inputs.file ] No sincedb_path set, generating one based on the "path" setting {:sincedb_path=>"C:/elastic/logstash-6.4.2/data/plugins/inputs/file/.sincedb_d9fd8dd7bea33cd5bb61772c51bf15dc", :path=>["c:/elastic/asalogs/asa.log"]}
[2018-10-30T15:08:49,759][INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x653de9e7 run>"}
[2018-10-30T15:08:49,827][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2018-10-30T15:08:49,837][INFO ][filewatch.observingtail ] START, creating Discoverer, Watch with file and sincedb collections
[2018-10-30T15:08:50,608][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
.

The pipeline starts, but it never processes the log; only a single dot is printed before I stop it:

[2018-10-30T15:17:04,195][WARN ][logstash.runner ] SIGINT received. Shutting down.
[2018-10-30T15:17:04,427][INFO ][filewatch.observingtail ] QUIT - closing all files and shutting down.
[2018-10-30T15:17:06,348][INFO ][logstash.pipeline ] Pipeline has terminated {:pipeline_id=>"main", :thread=>"#<Thread:0x653de9e7 run>"}
Terminate batch job (Y/N)? y

C:\elastic\logstash-6.4.2>

Here is my simple ASA log conf file:

input {
  file {
    path => ["c:/elastic/asalogs/asa.log"]
    type => "cisco-fw"
    start_position => "beginning"
  }
}

filter {
  grok {
    match => ["message", "%{CISCOTIMESTAMP:timestamp} (%{SYSLOGHOST:sysloghost})? %%{CISCOTAG:ciscotag}: %{GREEDYDATA:cisco_message}"]
  }
}

output {
  stdout {
    codec => dots
  }
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "myasa4"
  }
}
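
One more thing I noticed in the 6.4.2 run above: "No sincedb_path set, generating one based on the "path" setting". The file input records how far it has read in that sincedb file, and start_position => "beginning" only applies to files it has never seen, so re-loading the same asa.log into a different index can be skipped because of the old sincedb entry. This is only a guess at the cause, not a confirmed fix, but a minimal sketch of the same input with position tracking disabled on Windows looks like this:

input {
  file {
    path => ["c:/elastic/asalogs/asa.log"]
    type => "cisco-fw"
    start_position => "beginning"
    # Sketch: discard read positions so the whole file is re-read on every run.
    # "NUL" is the Windows null device; on Linux use "/dev/null" instead.
    sincedb_path => "NUL"
  }
}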
