Problem with docker-compose and logstash

This error appears when I run docker-compose with Elasticsearch, Kibana and Logstash... I don't understand how to solve it.

logstash | [2020-03-06T07:57:58,574][ERROR][logstash.agent ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::ConfigurationError", :message=>"Expected one of [ \t\r\n], "#", "input", "filter", "output" at line 1, column 1 (byte 1)", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:41:in compile_imperative'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:49:in compile_graph'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:11:in block in compile_sources'", "org/jruby/RubyArray.java:2584:in map'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:10:in compile_sources'", "org/logstash/execution/AbstractPipelineExt.java:156:in initialize'", "org/logstash/execution/JavaBasePipelineExt.java:47:in initialize'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:27:in initialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:36:in execute'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:326:in block in converge_state'"]}
logstash | [2020-03-06T07:58:00,497][INFO ][org.reflections.Reflections] Reflections took 112 ms to scan 1 urls, producing 20 keys and 40 values


Hi

Apparently logstash cannot initialize the pipeline because the config file is either empty, misplaced or missing.

Please post your pipelines.yml so we can see where the file should be and go from there.
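For reference, the default pipelines.yml shipped in the official Logstash Docker image simply points the main pipeline at the pipeline directory; it looks roughly like this (a sketch, check the copy inside your own container):

```yaml
- pipeline.id: main
  path.config: "/usr/share/logstash/pipeline"
```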

Hope this helps.

This is the docker-compose file. What I want is simply to load my custom Logstash configuration file, logstash.conf; I'm not working with pipelines.yml.

version: '2.2'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.5.1
    container_name: elasticsearch
    environment:
      - cluster.name=es-docker-cluster
      - discovery.type=single-node
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data01:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
      - 9300:9300
    networks:
      - elasticstack

  kibana:
    image: docker.elastic.co/kibana/kibana:7.5.1
    container_name: kibana
    ports:
      - 5601:5601
    environment:
      ELASTICSEARCH_URL: http://elasticsearch:9200
      ELASTICSEARCH_HOSTS: http://elasticsearch:9200
    networks:
      - elasticstack

  logstash:
    image: docker.elastic.co/logstash/logstash:7.5.1
    container_name: logstash
    volumes:
      - /home/alex/Escritorio/prueba_docker:/config-dir
    command: logstash -f /config-dir
    environment:
      ELASTICSEARCH_URL: http://elasticsearch:9200
      ELASTICSEARCH_HOSTS: http://elasticsearch:9200
      # - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    networks:
      - elasticstack

volumes:
  data01:
    driver: local

networks:
  elasticstack:
    driver: bridge

That means it will concatenate every file in that directory to create the configuration. Apparently it is objecting to the first byte of the first file in the directory.
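To illustrate the concatenation behaviour with a hypothetical directory (paths and file names are made up for the demo): any stray file in the directory, such as a README or an editor backup, sorts into the combined config and can land at its very first byte, which is exactly where the parser is complaining.

```shell
#!/bin/sh
# Sketch of what "logstash -f /config-dir" does when given a directory:
# every file in it is concatenated, in lexical order, into one configuration.
demo=/tmp/config-dir-demo
mkdir -p "$demo"
printf 'input { stdin {} }\n'   > "$demo/10-input.conf"
printf 'output { stdout {} }\n' > "$demo/20-output.conf"
printf 'not a pipeline file\n'  > "$demo/00-README.txt"
# The README sorts first, so the combined config begins with invalid text,
# producing an "Expected one of ... at line 1, column 1" style error.
cat "$demo"/*
```

Running this prints the README line first, ahead of the valid pipeline snippets, which is why the parser fails at line 1, column 1.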

Hi

It seems you are forcing a different config path from the one the docker image uses by default (/usr/share/logstash/pipeline).

Please post the contents of your /config-dir directory. Something is wrong with one or more of the files in there, which is what both @Badger and I tried to tell you.

Also, comment out that command: logstash -f /config-dir line and see if logstash starts properly (without your pipeline config, of course). If it does, please post your pipelines.yml, which you'll find in /usr/share/logstash/config inside your logstash container.

Hope this helps

Thanks ... I had a problem with the container folder.
Now I have the problem that the Logstash container stops. These are the logs:

logstash | [2020-03-10T06:56:52,124][WARN ][logstash.outputs.elasticsearch] You are using a deprecated config setting "document_type" set in elasticsearch. Deprecated settings will continue to work, but are scheduled for removal from logstash in the future. Document types are being deprecated in Elasticsearch 6.0, and removed entirely in 7.0. You should avoid this feature If you have any questions about this, please visit the #logstash channel on freenode irc. {:name=>"document_type", :plugin=><LogStash::Outputs::ElasticSearch bulk_path=>"/_monitoring/bulk?system_id=logstash&system_api_version=7&interval=1s", hosts=>[http://elasticsearch:9200], sniffing=>false, manage_template=>false, id=>"7d7dfa0f023f65240aeb31ebb353da5a42dc782979a2bd7e26e28b7cbd509bb3", document_type=>"%{[@metadata][document_type]}", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_0d7208dd-b672-4bd6-83b2-854750f17c9f", enable_metric=>true, charset=>"UTF-8">, workers=>1, template_name=>"logstash", template_overwrite=>false, doc_as_upsert=>false, script_type=>"inline", script_lang=>"painless", script_var_name=>"event", scripted_upsert=>false, retry_initial_interval=>2, retry_max_interval=>64, retry_on_conflict=>1, ilm_enabled=>"auto", ilm_rollover_alias=>"logstash", ilm_pattern=>"{now/d}-000001", ilm_policy=>"logstash-policy", action=>"index", ssl_certificate_verification=>true, sniffing_delay=>5, timeout=>60, pool_max=>1000, pool_max_per_route=>100, resurrect_delay=>5, validate_after_inactivity=>10000, http_compression=>false>}
logstash | [2020-03-10T06:56:52,175][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
logstash | [2020-03-10T06:56:52,184][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://elasticsearch:9200/"}
logstash | [2020-03-10T06:56:52,207][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>7}
logstash | [2020-03-10T06:56:52,208][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the type event field won't be used to determine the document _type {:es_version=>7}
logstash | [2020-03-10T06:56:52,305][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://elasticsearch:9200"]}
logstash | [2020-03-10T06:56:52,321][INFO ][logstash.javapipeline ] Starting pipeline {:pipeline_id=>".monitoring-logstash", "pipeline.workers"=>1, "pipeline.batch.size"=>2, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>2, "pipeline.sources"=>["monitoring pipeline"], :thread=>"#<Thread:0x4147a3e4 run>"}
logstash | [2020-03-10T06:56:52,416][INFO ][logstash.javapipeline ] Pipeline started {"pipeline.id"=>".monitoring-logstash"}
logstash | [2020-03-10T06:56:52,442][INFO ][logstash.agent ] Pipelines running {:count=>2, :running_pipelines=>[:".monitoring-logstash", :main], :non_running_pipelines=>[]}
logstash | [2020-03-10T06:56:53,049][INFO ][logstash.outputs.elasticsearch] Installing ILM policy {"policy"=>{"phases"=>{"hot"=>{"actions"=>{"rollover"=>{"max_size"=>"50gb", "max_age"=>"30d"}}}}}} to _ilm/policy/logstash-policy
logstash | [2020-03-10T06:56:53,183][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
logstash | [2020-03-10T06:56:55,916][INFO ][logstash.javapipeline ] Pipeline terminated {"pipeline.id"=>".monitoring-logstash"}
logstash | [2020-03-10T06:56:56,379][INFO ][logstash.runner ] Logstash shut down.

This is the configuration file:

input {
  stdin {
  }
}

output {
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]
  }
  stdout {
    codec => rubydebug
  }
}

Hi

Your Logstash is expecting input from stdin. If it doesn't get anything, it simply stops, having nothing left to do.
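As an aside: under docker-compose, stdin is closed by default, so a stdin{} input hits end-of-file immediately and the pipeline terminates. If you really wanted to keep the stdin input, the service would need an open stdin and a TTY, something like this (an untested sketch added to the logstash service):

```yaml
  logstash:
    image: docker.elastic.co/logstash/logstash:7.5.1
    stdin_open: true   # equivalent of docker run -i
    tty: true          # equivalent of docker run -t
```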

Try using, e.g., a CSV file you have lying around with the file{} input plugin, commenting out your elasticsearch{} output and leaving just the stdout{} active to see what you get. The service should stay up, listening for changes to the file. Every time you add a new line to the file, Logstash should see it and give you some output.
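A minimal file-based test pipeline along those lines might look like this (the CSV path is an assumption; `sincedb_path => "/dev/null"` makes Logstash forget its read position between runs, which is handy for testing only):

```conf
input {
  file {
    path => "/config-dir/sample.csv"   # any file you can append lines to
    start_position => "beginning"
    sincedb_path => "/dev/null"        # do not persist read position; testing only
  }
}

output {
  # elasticsearch {} output left out while debugging
  stdout {
    codec => rubydebug
  }
}
```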

If you get the same behaviour, maybe your pipeline is not where you expect it to be. Run the container like this:

docker run -it docker.elastic.co/logstash/logstash:7.5.1 bash

This will give you a prompt inside the container and you will be able to explore the filesystem, and check that all files are where they should be and are properly configured. You should check your pipelines.yml and your pipeline files.

You can check the documentation for input plugins here: https://www.elastic.co/guide/en/logstash/current/input-plugins.html

For filter plugins: https://www.elastic.co/guide/en/logstash/current/filter-plugins.html

Output plugins: https://www.elastic.co/guide/en/logstash/current/output-plugins.html

And codec plugins: https://www.elastic.co/guide/en/logstash/current/codec-plugins.html

Hope this helps you get started.

Thanks, the information was very helpful. Now I have this problem: I had already created my dashboard, and after restarting the services it doesn't show me anything. This is the URL in the address bar:

http://localhost:5601/s/prueba/app/kibana#/dashboard/574522f0-63c4-11ea-a3e3-6b376aab7c6a?_g=(filters:!(),refreshInterval:(pause:!t,value:0),time:(from:now-15y,to:now))

Hi

This seems to be a Kibana issue. I'd suggest you open a new thread in that category.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.