Logstash conf file issues (using Docker containers)

Hi,
I am new to the ELK stack. I am running 3 separate Docker containers (i.e. Elasticsearch, Kibana, and Logstash). I need to specify my logstash.conf, but it is not working out with the following docker logstash command (I am using Windows 10 with Docker Desktop).

I am trying to use log4net with my Web API project so the logs can be processed by the ELK stack.

I ran the following commands to run the images

Elasticsearch:
docker run -d -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" -it -h elasticsearch:7.9.2 --name elasticsearch elasticsearch:7.9.2

Kibana:
docker run -d -p 5601:5601 -h kibana:7.9.2 --name kibana --link elasticsearch:elasticsearch kibana:7.9.2

Logstash:
docker run -h logstash:7.9.2 --name logstash --link elasticsearch:elasticsearch -it --rm -v /c/Users/username/config-dir logstash:7.9.2 -f /config-dir/mylogstash.conf

(I have created the following Logstash conf file at c/Users/username/config-dir/mylogstash.conf.)

This is the mylogstash.conf file:


input {
  file {
    path => "C:\Testfolder\MyLoggerTest.log"
    type => "log4net"
    codec => multiline {
      pattern => "^(DEBUG|WARN|ERROR|INFO|FATAL)"
      negate => true
      what => previous
    }
  }
}

filter {
  if [type] == "log4net" {
    grok {
      match => [ "message", "(?m)%{LOGLEVEL:level} %{DATE:sourceTimestamp} %{DATA:logger} [%{NUMBER:threadId}] [%{IPORHOST:tempHost}] %{GREEDYDATA:tempMessage}" ]
    }
    mutate {
      replace => [ "message" , "%{tempMessage}" ]
      replace => [ "host" , "%{tempHost}" ]
      remove_field => [ "tempMessage" ]
      remove_field => [ "tempHost" ]
    }
  }
}

output {
  elasticsearch {
    hosts => "host.docker.internal:9200/"
    index => "log4netindx"
  }
}

I have googled a lot but nothing seems to be working! I don't know how to create the YAML file or interact with it (no one shows the complete way to do that). Also, I am using the command prompt in admin mode.

Thanks in advance

Regards
Saj

Do not use backslash in the path option of a file input, it is treated as an escape. Use /
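For example, keeping the file name from your conf above, the path option would look something like:

input {
  file {
    path => "C:/Testfolder/MyLoggerTest.log"
  }
}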

What do you mean by "not working"?

Thanks, Badger, for the quick response.

By "not working" I mean: I am not able to get Logstash to run using mylogstash.conf (which should also create the index).

I have changed the path to path => "C:/testfolder/myloggertest.log" (all in lower case, just in case), and after running the Logstash docker command I get the following log (it is a long file, so I have added it in smaller parts).

It shows errors/warnings like:
"set xpack.monitoring.enabled: true in logstash.yml"
(I don't know how to set xpack.monitoring.enabled: true in logstash.yml - where does this file reside, and how do I access it from the command prompt, etc.?)

[logstash.config.source.local.configpathloader] No config files found in path {:path=>"/config-dir/mylogstash.conf"}

Thanks.

Log:

C:\Program Files\Docker\Docker>docker run -h logstash:7.9.2 --name logstash --link elasticsearch:elasticsearch -it --rm -v /c/Users/username/config-dir logstash:7.9.2 -f /config-dir/mylogstash.conf
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.jruby.ext.openssl.SecurityHelper (file:/tmp/jruby-1/jruby9354540782487593083jopenssl.jar) to field java.security.MessageDigest.provider
WARNING: Please consider reporting this to the maintainers of org.jruby.ext.openssl.SecurityHelper
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
[2020-10-14T20:02:42,960][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.9.2", "jruby.version"=>"jruby 9.2.13.0 (2.5.7) 2020-08-03 9a89c94bcc OpenJDK 64-Bit Server VM 11.0.8+10-LTS on 11.0.8+10-LTS +indy +jit [linux-x86_64]"}
[2020-10-14T20:02:43,021][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
[2020-10-14T20:02:43,042][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
[2020-10-14T20:02:43,590][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2020-10-14T20:02:43,629][INFO ][logstash.agent ] No persistent UUID file found. Generating new UUID {:uuid=>"997a4c0e-f59f-41c9-8038-56938a3f012f", :path=>"/usr/share/logstash/data/uuid"}
[2020-10-14T20:02:44,197][WARN ][logstash.monitoringextension.pipelineregisterhook] xpack.monitoring.enabled has not been defined, but found elasticsearch configuration. Please explicitly set xpack.monitoring.enabled: true in logstash.yml
[2020-10-14T20:02:44,201][WARN ][deprecation.logstash.monitoringextension.pipelineregisterhook] Internal collectors option for Logstash monitoring is deprecated and targeted for removal in the next major version.

Please configure Metricbeat to monitor Logstash. Documentation can be found at:


[2020-10-14T20:06:12,691][INFO ][logstash.licensechecker.licensereader] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
[2020-10-14T20:06:13,088][WARN ][logstash.licensechecker.licensereader] Restored connection to ES instance {:url=>"http://elasticsearch:9200/"}
[2020-10-14T20:06:13,258][INFO ][logstash.licensechecker.licensereader] ES Output version determined {:es_version=>7}
[2020-10-14T20:06:13,268][WARN ][logstash.licensechecker.licensereader] Detected a 6.x and above cluster: the type event field won't be used to determine the document _type {:es_version=>7}
[2020-10-14T20:06:13,573][INFO ][logstash.monitoring.internalpipelinesource] Monitoring License OK
[2020-10-14T20:06:13,579][INFO ][logstash.monitoring.internalpipelinesource] Validated license for monitoring. Enabling monitoring pipeline.
[2020-10-14T20:06:13,823][INFO ][logstash.config.source.local.configpathloader] No config files found in path {:path=>"/config-dir/mylogstash.conf"}
[2020-10-14T20:06:16,316][INFO ][org.reflections.Reflections] Reflections took 140 ms to scan 1 urls, producing 22 keys and 45 values
[2020-10-14T20:06:16,738][INFO ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
[2020-10-14T20:06:16,790][WARN ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] Restored connection to ES instance {:url=>"http://elasticsearch:9200/"}
[2020-10-14T20:06:16,826][INFO ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] ES Output version determined {:es_version=>7}
[2020-10-14T20:06:16,827][WARN ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] Detected a 6.x and above cluster: the type event field won't be used to determine the document _type {:es_version=>7}

final part of the log:

[2020-10-14T20:06:16,954][INFO ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearchMonitoring", :hosts=>["http://elasticsearch:9200"]}
[2020-10-14T20:06:16,989][WARN ][logstash.javapipeline ][.monitoring-logstash] 'pipeline.ordered' is enabled and is likely less efficient, consider disabling if preserving event order is not necessary
[2020-10-14T20:06:17,225][INFO ][logstash.javapipeline ][.monitoring-logstash] Starting pipeline {:pipeline_id=>".monitoring-logstash", "pipeline.workers"=>1, "pipeline.batch.size"=>2, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>2, "pipeline.sources"=>["monitoring pipeline"], :thread=>"#<Thread:0x3b4a2a1a run>"}
[2020-10-14T20:06:18,327][INFO ][logstash.javapipeline ][.monitoring-logstash] Pipeline Java execution initialization time {"seconds"=>1.1}
[2020-10-14T20:06:18,374][INFO ][logstash.javapipeline ][.monitoring-logstash] Pipeline started {"pipeline.id"=>".monitoring-logstash"}
[2020-10-14T20:06:18,472][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:".monitoring-logstash"], :non_running_pipelines=>[]}
[2020-10-14T20:06:18,872][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2020-10-14T20:06:20,737][INFO ][logstash.javapipeline ] Pipeline terminated {"pipeline.id"=>".monitoring-logstash"}
[2020-10-14T20:06:20,796][INFO ][logstash.runner ] Logstash shut down.

Are you sure that directory is mounted in your container?

I would expect logstash.yml to be in the /etc/logstash directory, although I have never run logstash using docker, so I do not know if that makes a difference.
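While the container is still up, something like this should show whether that directory actually exists inside it (assuming the container is named logstash, as in your command; the docker image may keep logstash.yml under /usr/share/logstash/config rather than /etc/logstash, so it is worth listing both):

docker exec -it logstash ls /config-dir
docker exec -it logstash ls /usr/share/logstash/config

If the first one says "No such file or directory", the -v option is not mounting things where you think it is. As for the xpack.monitoring warning, I believe the docker image also lets you pass logstash.yml settings as environment variables on the docker run line, e.g. -e XPACK_MONITORING_ENABLED=true, but check the Logstash Docker documentation for your version.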

I am not sure what you mean by the directory being mounted in my container.

I ran the docker command to pull the image (from Docker Hub), like:

docker pull logstash:7.9.2 [name of the image : tag]
This downloads the image (you can see it in the Docker Desktop app); then I ran the command as I mentioned above.
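(As far as I know you can also confirm the image is there from the command prompt with:

docker images logstash

but I have mostly just been checking the Docker Desktop app.)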

Where do you find the /etc/logstash directory?
I have googled to find where the image files live on Windows 10 (it was not helpful, and I have gone through all the files), but the images do show up in Docker Desktop, and I can also open
localhost:9200 and localhost:5601 (Elasticsearch and Kibana) in a browser - both are running.

I managed to run Logstash without any errors (specifically, the "no config files found" error) by using the following command:

docker run -h logstash:7.9.2 --name logstash --link elasticsearch:elasticsearch --rm -it -v //c/Users/username/config-dir/:/usr/share/logstash/config-dir/ logstash:7.9.2

This maps my directory to a directory within the container, i.e.
your directory path //c/Users/username/config-dir/ ---> /usr/share/logstash/config-dir/ (inside the container).

But now it hasn't created/mapped the index in Elasticsearch and Kibana!
My conf file is shown above, with the path changed to forward slashes, i.e. path => "C:/testfolder/myloggertest.log"
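I am wondering if I now also need to point Logstash back at the conf file inside the container, maybe something like the command below, but I am not sure whether that is the right way to do it:

docker run -h logstash:7.9.2 --name logstash --link elasticsearch:elasticsearch --rm -it -v //c/Users/username/config-dir/:/usr/share/logstash/config-dir/ logstash:7.9.2 -f /usr/share/logstash/config-dir/mylogstash.conf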

Any suggestions? Thanks!
