Logstash on docker

Please help me out with this error. I am not able to understand where I am going wrong.

[gagarwal3@gaurav13]~/tmp% docker run -h logstash --name logstash --link elasticsearch:elasticsearch -it --rm -v "$PWD":/config-dir logstash -f /config-dir/uuuu.conf
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
06:58:12.186 [main] INFO logstash.modules.scaffold - Initializing module {:module_name=>"fb_apache", :directory=>"/usr/share/logstash/modules/fb_apache/configuration"}
06:58:12.190 [main] INFO logstash.modules.scaffold - Initializing module {:module_name=>"netflow", :directory=>"/usr/share/logstash/modules/netflow/configuration"}
06:58:12.202 [main] INFO logstash.setting.writabledirectory - Creating directory {:setting=>"path.queue", :path=>"/var/lib/logstash/queue"}
06:58:12.205 [main] INFO logstash.setting.writabledirectory - Creating directory {:setting=>"path.dead_letter_queue", :path=>"/var/lib/logstash/dead_letter_queue"}
06:58:12.234 [LogStash::Runner] INFO logstash.agent - No persistent UUID file found. Generating new UUID {:uuid=>"af10de3a-e57c-47db-934b-ffcc9df14e33", :path=>"/var/lib/logstash/uuid"}
06:58:13.391 [[main]-pipeline-manager] INFO logstash.outputs.elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
06:58:13.393 [[main]-pipeline-manager] INFO logstash.outputs.elasticsearch - Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
06:58:13.495 [[main]-pipeline-manager] WARN logstash.outputs.elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
06:58:13.497 [[main]-pipeline-manager] INFO logstash.outputs.elasticsearch - Using mapping template from {:path=>nil}
06:58:13.499 [[main]-pipeline-manager] ERROR logstash.outputs.elasticsearch - Failed to install template. {:message=>"Template file '' could not be found!", :class=>"ArgumentError", :backtrace=>["/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.4.2-java/lib/logstash/outputs/elasticsearch/template_manager.rb:37:in `read_template_file'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.4.2-java/lib/logstash/outputs/elasticsearch/template_manager.rb:23:in `get_template'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.4.2-java/lib/logstash/outputs/elasticsearch/template_manager.rb:7:in `install_template'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.4.2-java/lib/logstash/outputs/elasticsearch/common.rb:58:in `install_template'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.4.2-java/lib/logstash/outputs/elasticsearch/common.rb:25:in `register'", "/usr/share/logstash/logstash-core/lib/logstash/output_delegator_strategies/shared.rb:9:in `register'", "/usr/share/logstash/logstash-core/lib/logstash/output_delegator.rb:43:in `register'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:290:in `register_plugin'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:301:in `register_plugins'", "org/jruby/RubyArray.java:1613:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:301:in `register_plugins'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:310:in `start_workers'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:235:in `run'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:408:in `start_pipeline'"]}
06:58:13.500 [[main]-pipeline-manager] INFO logstash.outputs.elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost"]}
06:58:13.719 [[main]-pipeline-manager] INFO logstash.pipeline - Starting pipeline {"id"=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>500}
06:58:13.978 [[main]-pipeline-manager] ERROR logstash.pipeline - Error registering plugin {:plugin=>"<LogStash::Inputs::File type=>"mb", path=>["C:\\Users\\gagarwal3\\Downloads\\logstash\\mbs.log"], start_position=>"beginning", sincedb_path=>"/dev/null", id=>"618ba335223b861caf2238250a0f668c6cdc2123-1", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_a0642bb0-5b4e-45cf-b04c-2ce27d428967", enable_metric=>true, charset=>"UTF-8">, stat_interval=>1, discover_interval=>15, sincedb_write_interval=>15, delimiter=>"\n", close_older=>3600>", :error=>"File paths must be absolute, relative path specified: C:\Users\gagarwal3\Downloads\logstash\mbs.log"}
06:58:14.507 [[main]-pipeline-manager] ERROR logstash.agent - Pipeline aborted due to error {:exception=>#<ArgumentError: File paths must be absolute, relative path specified: C:\Users\gagarwal3\Downloads\logstash\mbs.log>, :backtrace=>["/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-file-4.0.3/lib/logstash/inputs/file.rb:187:in `register'", "org/jruby/RubyArray.java:1613:in `each'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-file-4.0.3/lib/logstash/inputs/file.rb:185:in `register'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:290:in `register_plugin'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:301:in `register_plugins'", "org/jruby/RubyArray.java:1613:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:301:in `register_plugins'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:456:in `start_inputs'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:348:in `start_workers'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:235:in `run'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:408:in `start_pipeline'"]}
06:58:14.556 [Api Webserver] INFO logstash.agent - Successfully started Logstash API endpoint {:port=>9600}
06:58:17.526 [LogStash::Runner] WARN logstash.agent - stopping pipeline {:id=>"main"}
[gagarwal3@gaurav13]~/tmp%

The pipeline terminates without processing anything.

Don't attempt to connect to localhost from within a Docker container. Use the host's name or IP address instead. If you're running ES in another container, you can put both containers in the same Docker network and reference them by their names, i.e. have the Logstash container connect to http://name-of-es-container:9200.
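For example, roughly like this (a sketch; the network name elk is a placeholder, and image names and flags should be adjusted to your setup):

```shell
# Create a user-defined bridge network; containers attached to it can
# resolve each other by container name.
docker network create elk

# Run Elasticsearch on that network under the name "elasticsearch".
docker run -d --net elk --name elasticsearch -p 9200:9200 elasticsearch

# Run Logstash on the same network. Inside its pipeline config you can
# then use hosts => "http://elasticsearch:9200" in the elasticsearch output.
docker run -it --rm --net elk --name logstash \
  -v "$PWD":/config-dir logstash -f /config-dir/uuuu.conf
```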

Can you explain in a bit more detail? I did not understand your answer completely.
Can you show the same with reference to some example?

What part are you not understanding? I can elaborate but please be more specific on what's unclear.

I am working on a VM, so I am unable to understand how I am connecting to localhost. While running Elasticsearch as a container I used this command:
docker run -d -p 9200:9200 -p 9300:9300 -it -h elasticsearch --name elasticsearch elasticsearch
I was of the notion that since I am working on the VM gaurav13, how am I getting connected to localhost?
And if it is getting connected, what should I do to avoid this?

I am working on a vm so I am unable to understand how come I am connecting to the local host

Um, wait. What are the contents of /config-dir/uuuu.conf?

input {
  file {
    type => "mb"
    path => "/remote/users/gagarwal3/tmp/mbs.log"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
  file {
    type => "femb"
    path => "/remote/users/gagarwal3/tmp/fembs.log"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
  file {
    type => "stat"
    path => "/remote/users/gagarwal3/tmp/statmbs.log"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}
filter {
  if [type] == "mb" {
    grok {
      match => { "message" => "%{DATESTAMP:time} | %{HOSTNAME:hostname} | %{DATA:application} | %{LOGLEVEL:loglevel} *| 1-%{DATA:thread:int} | %{DATA:class} *| %{DATA:correlationID}%{GREEDYDATA:msg}" }
      remove_field => ["message"]
    }
  }
  if [type] == "femb" {
    grok {
      match => { "message" => "%{DATESTAMP:time} | %{HOSTNAME:hostname} | %{DATA:application} | %{LOGLEVEL:loglevel} | 1-%{DATA:thread:int} | %{DATA:class} |%{DATA:corr:int} |%{GREEDYDATA:msg}" }
      remove_field => ["message"]
    }
  }
  if [type] == "stat" {
    grok {
      match => { "message" => "%{DATESTAMP:time} \S+ APT_\S+?_APT#1-0 APP STAT <StatisticLogger.java#\d TID#\d> %{DATA:THREAD} %{GREEDYDATA:msg}" }
      remove_field => ["message"]
    }
  }
  if "_grokparsefailure" not in [tags] {
    date {
      match => [ "timestamp", "YYYY/MM/dd HH:mm:ss.SSS", "YYYY/MM/dd HH:mm:ss,SSS", "dd/MMM/YYYY:HH:mm:ss +0000", "EEE MMM dd HH:mm:ss YYYY" ]
      timezone => "UTC"
    }
    if [type] == "web_access" or [type] == "web_error" {
      mutate {
        gsub => [
          "referrer", '"', "",
          "agent", '"', "",
          "JSESSIONID", '"', "",
          "APT_SESSIONID", '"', "",
          "correlationId", '"', "",
          "transactionOriginator", '"', "",
          "customerId", '"', ""
        ]
        remove_field => [ "logline", "timestamp", "BASE10NUM", "INT", "HOSTNAME", "IPV4", "day", "month", "monthday", "time", "year" ]
      }
    }
  }
}
output {
  elasticsearch {
    hosts => "localhost"
    index => "logis"
    document_type => "remote_logs"
  }
  stdout {
    codec => rubydebug
  }
}

This is the content

So your configuration looks like this:

output {
  elasticsearch {
    hosts => "localhost"
    index => "logis"
    document_type => "remote_logs"
  }

And you're asking this:

I am working on a vm so I am unable to understand how come I am connecting to the local host

Your Logstash is attempting to connect to localhost because that's how you have configured it.
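Since your docker run command uses --link elasticsearch:elasticsearch, the Elasticsearch container should be reachable from the Logstash container under the hostname elasticsearch, so the output would need to point there instead of localhost. A sketch, keeping your index and document_type:

```
output {
  elasticsearch {
    hosts => "elasticsearch"
    index => "logis"
    document_type => "remote_logs"
  }
}
```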

I am sorry, I didn't update the config file; I dragged and dropped it in directly.

Will the rubydebug codec run here, and will it display the data in the terminal or not?

The stdout output will work, yes. Unless you detach the container with the -d option, you should get the output in your terminal.

I think there is some mistake with the path I have given. The document is located in the /remote/users/gagarwal3/tmp folder. What should the path be, since I am already inside the tmp folder?

I think there is some mistake with the path I have given. The document is located in the /remote/users/gagarwal3/tmp folder.

That directory isn't available inside the container, at least not under that name. The only host mount you're making is -v "$PWD":/config-dir, i.e. whatever directory is the current one when you run docker run will be mounted as /config-dir, but that's it.
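Since $PWD (your ~/tmp, i.e. /remote/users/gagarwal3/tmp) is already mounted at /config-dir, one option is to reference the log files through that mount in each file input. A sketch for the first input (the other two would change the same way):

```
file {
  type => "mb"
  path => "/config-dir/mbs.log"    # the mounted /remote/users/gagarwal3/tmp, as seen inside the container
  start_position => "beginning"
  sincedb_path => "/dev/null"
}
```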

Thank you very much.
Sorry for the silly mistakes, which were due to my negligence.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.