Hi,
I am trying to get started with the ELK stack but have run into an issue that is blocking me.
I am using the ELK docker stack "sebp/elk" (latest)
It starts up fine, and I then use "goinside" to get a shell in the container so I can load my data.
/opt/logstash/bin/logstash --version
logstash 6.3.2
I have successfully inserted some test data from stdin.
I then try to import some data from a CSV file.
My logstash.config file:
input {
  file {
    path => "/var/mydata/pim_products-basic-100.csv"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}

filter {
  csv {
    separator => ","
    columns => ["id","model_id","model","part_id","part","color_id","color","gender_id","gender","category_id","category","type_id","type","productline_id","productline","pillar_id","pillar","theme_id","theme","marketsegment_id","marketsegment","material_id","material","buyingsession_id","buyingsession","permanent","season_id","cites","ecommerce_height","ecommerce_length","ecommerce_depth","ecommerce_heel_height","ecommerce_strap_drop_length","ecommerce_belt_rise_length","dwh_translations","ecommerce_translations","enrichment_translations","enrichment_translations_count","ecommerce_images","ecommerce_images_count","enrichment_images","enrichment_images_count","enrichment_created_at","enrichment_created_by","enrichment_updated_at","enrichment_updated_by","dwh_created_at","dwh_updated_at","collection_id","collection","launch_date","f4_ecommerce_images","f4_ecommerce_images_count","f4_campaign_images","f4_campaign_images_count","available_in_heaven"]
  }
}

output {
  elasticsearch {
    hosts => "localhost"
    index => "products"
  }
  stdout {}
}
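As an aside, my understanding is that the csv filter parses every column as a string, so numeric fields would need a mutate filter to be indexed as numbers. A sketch, assuming `id` and `model_id` should be integers (the column choice is just an example; extend as needed):

```
filter {
  mutate {
    # Convert selected string fields to integers before they reach Elasticsearch
    convert => {
      "id"       => "integer"
      "model_id" => "integer"
    }
  }
}
```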
My docker-compose.yml:

elk:
  image: sebp/elk
  ports:
    - "5601:5601"
    - "9200:9200"
    - "5044:5044"
  volumes:
    - /Users/dave/Desktop/logstash-data:/var/mydata
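To rule out a problem with the volume mount itself, I believe the same file can be checked from both sides. A sketch, assuming the container ID is the one shown in my shell prompt (yours will differ):

```
# On the Mac host: confirm the CSV is in the shared folder
ls -l /Users/dave/Desktop/logstash-data/pim_products-basic-100.csv

# Inside the container: confirm the same file is visible under the mount point
docker exec -it d5e39cbd53a8 ls -l /var/mydata/
```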
I then run Logstash inside the container:

root@d5e39cbd53a8:/var# /opt/logstash/bin/logstash --path.data /var/mydata --debug --log.level debug -f /var/mydata/logstash.config
The process seems to start up fine, but then I get a repeating loop in the debug output:
...
...
[2018-08-27T13:50:51,049][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2018-08-27T13:50:51,050][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[2018-08-27T13:50:51,221][DEBUG][logstash.inputs.file ] _globbed_files: /var/mydata/pim_products-basic-100.csv: glob is: []
[2018-08-27T13:50:51,917][DEBUG][logstash.pipeline ] Pushing flush onto pipeline {:pipeline_id=>"main", :thread=>"#<Thread:0x6384d4dc sleep>"}
[2018-08-27T13:50:55,866][DEBUG][logstash.instrument.periodicpoller.cgroup] Error, cannot retrieve cgroups information {:exception=>"Errno::ENOENT", :message=>"No such file or directory - /sys/fs/cgroup/cpuacct/docker/d5e39cbd53a86d80091f32508361e58ff0af00894a75976a79c4076627a30230/cpuacct.usage"}
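If I read the debug output correctly, the repeating `_globbed_files: ... glob is: []` line means the file input never finds a file matching the configured path, i.e. Logstash inside the container cannot see the CSV. A couple of checks I think I could run inside the container (a sketch, not verified):

```
# Does the path the file input is watching actually resolve?
ls -l /var/mydata/pim_products-basic-100.csv

# Validate the pipeline configuration without starting the pipeline
/opt/logstash/bin/logstash -f /var/mydata/logstash.config --config.test_and_exit
```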
I have tried rebuilding the stack images, restarting the containers, and rebooting, but I have no idea what the problem might be.
(I am on a Mac.)
Any pointers would be fantastic.
Thanks