Hi all,
I've been playing with the ELK stack for a week or so.
After installing Elasticsearch and Kibana (version 7.17), I enabled simple security following this guide.
Later I installed Logstash (version 7.17) and configured its security settings following this guide.
I suspected that security might block Logstash from interacting with Elasticsearch, so I tried some simple pipeline configurations.
I copied the Logstash configuration (/elk/logstash) into a folder under my home directory (the data and log paths have been redirected to directories under the same path as well).
I created the following logstash_simple.conf file:
input { stdin { } }
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    user => "my_account"
    password => "my_password"
  }
  stdout { codec => rubydebug }
}
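For completeness: since no `index` option is set, the elasticsearch output runs with its defaults. As far as I understand (please correct me if this is wrong), on a 7.x cluster that means ILM is auto-enabled and events are written through a rollover alias. Spelled out explicitly, I believe the output above is equivalent to something like:

```
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    user => "my_account"
    password => "my_password"
    # Defaults made explicit (my understanding, not verified against the docs):
    ilm_enabled => "auto"
    ilm_rollover_alias => "logstash"
  }
}
```

The index logstash-2022.07.18-000001 listed further below looks consistent with that rollover pattern.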
After that I started a Logstash instance:
sudo /usr/share/logstash/bin/logstash -f /home/user_account/elk-test/logstash/conf.d.test/logstash_simple.conf --path.settings /home/user_account/elk-test/logstash/
On the command line I received the following output:
Using bundled JDK: /usr/share/logstash/jdk
Sending Logstash logs to //home/user_account/elk-test/logstash_log/log which is now configured via log4j2.properties
[2022-07-19T13:41:30,367][INFO ][logstash.runner ] Log4j configuration path used is: /home/user_account/elk-test/logstash/log4j2.properties
[2022-07-19T13:41:30,381][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.17.5", "jruby.version"=>"jruby 9.2.20.1 (2.5.8) 2021-11-30 2a2962fbd1 OpenJDK 64-Bit Server VM 11.0.15+10 on 11.0.15+10 +indy +jit [linux-x86_64]"}
[2022-07-19T13:41:30,384][INFO ][logstash.runner ] JVM bootstrap flags: [-Xms1g, -Xmx1g, -XX:+UseG1GC, -XX:MaxGCPauseMillis=300, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djdk.io.File.enableADS=true, -Djruby.compile.invokedynamic=true, -Djruby.jit.threshold=0, -Djruby.regexp.interruptible=true, -XX:+HeapDumpOnOutOfMemoryError, -Djava.security.egd=file:/dev/urandom, -Dlog4j2.isThreadContextMapInheritable=true]
[2022-07-19T13:41:30,798][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2022-07-19T13:41:33,873][INFO ][logstash.monitoring.internalpipelinesource] Monitoring License OK
[2022-07-19T13:41:33,875][INFO ][logstash.monitoring.internalpipelinesource] Validated license for monitoring. Enabling monitoring pipeline.
[2022-07-19T13:41:34,139][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9601, :ssl_enabled=>false}
[2022-07-19T13:41:35,647][INFO ][org.reflections.Reflections] Reflections took 138 ms to scan 1 urls, producing 119 keys and 419 values
[2022-07-19T13:41:36,475][INFO ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearchMonitoring", :hosts=>["http://localhost:9200"]}
[2022-07-19T13:41:36,544][INFO ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://my_account:xxxxxx@localhost:9200/]}}
[2022-07-19T13:41:36,659][WARN ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] Restored connection to ES instance {:url=>"http://my_account:xxxxxx@localhost:9200/"}
[2022-07-19T13:41:36,678][INFO ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] Elasticsearch version determined (7.17.5) {:es_version=>7}
[2022-07-19T13:41:36,679][WARN ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2022-07-19T13:41:36,737][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost:9200"]}
[2022-07-19T13:41:36,795][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://my_account:xxxxxx@localhost:9200/]}}
[2022-07-19T13:41:36,814][INFO ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] Config is not compliant with data streams. `data_stream => auto` resolved to `false`
[2022-07-19T13:41:36,815][INFO ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] Config is not compliant with data streams. `data_stream => auto` resolved to `false`
[2022-07-19T13:41:36,817][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"http://my_account:xxxxxx@localhost:9200/"}
[2022-07-19T13:41:36,834][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch version determined (7.17.5) {:es_version=>7}
[2022-07-19T13:41:36,835][WARN ][logstash.outputs.elasticsearch][main] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2022-07-19T13:41:36,838][WARN ][logstash.javapipeline ][.monitoring-logstash] 'pipeline.ordered' is enabled and is likely less efficient, consider disabling if preserving event order is not necessary
[2022-07-19T13:41:36,897][INFO ][logstash.outputs.elasticsearch][main] Config is not compliant with data streams. `data_stream => auto` resolved to `false`
[2022-07-19T13:41:36,897][INFO ][logstash.outputs.elasticsearch][main] Config is not compliant with data streams. `data_stream => auto` resolved to `false`
[2022-07-19T13:41:36,913][INFO ][logstash.outputs.elasticsearch][main] Using a default mapping template {:es_version=>7, :ecs_compatibility=>:disabled}
[2022-07-19T13:41:36,943][INFO ][logstash.javapipeline ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>1000, "pipeline.sources"=>["/home/user_account/elk-test/logstash/conf.d.test/logstash_simple.conf"], :thread=>"#<Thread:0x2e4fc988 run>"}
[2022-07-19T13:41:36,943][INFO ][logstash.javapipeline ][.monitoring-logstash] Starting pipeline {:pipeline_id=>".monitoring-logstash", "pipeline.workers"=>1, "pipeline.batch.size"=>2, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>2, "pipeline.sources"=>["monitoring pipeline"], :thread=>"#<Thread:0x222a6146@/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:54 run>"}
[2022-07-19T13:41:38,225][INFO ][logstash.javapipeline ][.monitoring-logstash] Pipeline Java execution initialization time {"seconds"=>1.28}
[2022-07-19T13:41:38,357][INFO ][logstash.javapipeline ][main] Pipeline Java execution initialization time {"seconds"=>1.41}
[2022-07-19T13:41:38,380][INFO ][logstash.javapipeline ][.monitoring-logstash] Pipeline started {"pipeline.id"=>".monitoring-logstash"}
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by com.jrubystdinchannel.StdinChannelLibrary$Reader (file:/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/jruby-stdin-channel-0.2.0-java/lib/jruby_stdin_channel/jruby_stdin_channel.jar) to field java.io.FilterInputStream.in
WARNING: Please consider reporting this to the maintainers of com.jrubystdinchannel.StdinChannelLibrary$Reader
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
[2022-07-19T13:41:38,458][INFO ][logstash.javapipeline ][main] Pipeline started {"pipeline.id"=>"main"}
[2022-07-19T13:41:38,465][ERROR][logstash.outputs.elasticsearch][main][1d8cb7afb2877b2da8ff0c6176b222185f5c7c38a8567009cea8b66b579083d1] Elasticsearch setup did not complete normally, please review previously logged errors {:message=>"Got response code '403' contacting Elasticsearch at URL 'http://localhost:9200/logstash'", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError}
The stdin plugin is now waiting for input:
[2022-07-19T13:41:38,512][INFO ][logstash.agent ] Pipelines running {:count=>2, :running_pipelines=>[:".monitoring-logstash", :main], :non_running_pipelines=>[]}
[2022-07-19T13:41:43,464][ERROR][logstash.outputs.elasticsearch][main][1d8cb7afb2877b2da8ff0c6176b222185f5c7c38a8567009cea8b66b579083d1] Elasticsearch setup did not complete normally, please review previously logged errors {:message=>"Got response code '403' contacting Elasticsearch at URL 'http://localhost:9200/logstash'", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError}
[2022-07-19T13:41:48,476][ERROR][logstash.outputs.elasticsearch][main][1d8cb7afb2877b2da8ff0c6176b222185f5c7c38a8567009cea8b66b579083d1] Elasticsearch setup did not complete normally, please review previously logged errors {:message=>"Got response code '403' contacting Elasticsearch at URL 'http://localhost:9200/logstash'", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError}
[2022-07-19T13:41:50,451][WARN ][logstash.runner ] SIGINT received. Shutting down.
[2022-07-19T13:41:50,587][ERROR][logstash.outputs.elasticsearch][main][1d8cb7afb2877b2da8ff0c6176b222185f5c7c38a8567009cea8b66b579083d1] Elasticsearch setup did not complete normally, please review previously logged errors {:message=>"Got response code '403' contacting Elasticsearch at URL 'http://localhost:9200/logstash'", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError}
....
[2022-07-19T13:41:50,608][ERROR][logstash.outputs.elasticsearch][main][1d8cb7afb2877b2da8ff0c6176b222185f5c7c38a8567009cea8b66b579083d1] Elasticsearch setup did not complete normally, please review previously logged errors {:message=>"Got response code '403' contacting Elasticsearch at URL 'http://localhost:9200/logstash'", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError}
[2022-07-19T13:41:50,945][INFO ][logstash.javapipeline ][main] Pipeline terminated {"pipeline.id"=>"main"}
[2022-07-19T13:41:51,543][INFO ][logstash.pipelinesregistry] Removed pipeline from registry successfully {:pipeline_id=>:main}
[2022-07-19T13:41:51,782][INFO ][logstash.javapipeline ][.monitoring-logstash] Pipeline terminated {"pipeline.id"=>".monitoring-logstash"}
[2022-07-19T13:41:52,557][INFO ][logstash.pipelinesregistry] Removed pipeline from registry successfully {:pipeline_id=>:".monitoring-logstash"}
[2022-07-19T13:41:52,609][INFO ][logstash.runner ] Logstash shut down.
I assigned the following roles to my_account:
GET _security/role/logstash_reader
{
"logstash_reader" : {
"cluster" : [
"manage_logstash_pipelines"
],
"indices" : [
{
"names" : [
"logstash-*"
],
"privileges" : [
"read",
"view_index_metadata"
],
"allow_restricted_indices" : false
}
],
"applications" : [ ],
"run_as" : [ ],
"metadata" : { },
"transient_metadata" : {
"enabled" : true
}
}
}
GET _security/role/logstash_writer
{
"logstash_writer" : {
"cluster" : [
"manage_index_templates",
"monitor",
"manage_ilm",
"all"
],
"indices" : [
{
"names" : [
"logstash-*"
],
"privileges" : [
"write",
"create",
"delete",
"create_index",
"manage",
"manage_ilm"
],
"allow_restricted_indices" : false
}
],
"applications" : [ ],
"run_as" : [ ],
"metadata" : { },
"transient_metadata" : {
"enabled" : true
}
}
}
I'm fairly sure this is a security permissions issue, because if I add the wildcard pattern '*' to the index names of the logstash_writer role, everything runs fine.
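Concretely, the change that makes it work is something like this (a sketch of the role update I tested, not necessarily the right long-term fix):

```
PUT _security/role/logstash_writer
{
  "cluster": ["manage_index_templates", "monitor", "manage_ilm", "all"],
  "indices": [
    {
      "names": ["logstash-*", "*"],
      "privileges": ["write", "create", "delete", "create_index", "manage", "manage_ilm"]
    }
  ]
}
```

Obviously granting privileges on every index defeats the purpose, so I'd like to understand which index or alias actually needs to be added.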
These are the indices currently stored in the environment:
GET _cat/indices
green open .monitoring-logstash-7-2022.07.19 nZ5JTu15Sraj9c9sFop2_A 1 0 242 0 310.3kb 310.3kb
green open .monitoring-logstash-7-2022.07.18 BCOHOl8IRwKtSYIeQbTWcA 1 0 40 0 149.6kb 149.6kb
green open .apm-agent-configuration 1T9NkNXiTWOWuWExEdSOKw 1 0 0 0 226b 226b
green open .kibana_task_manager_7.17.5_001 wjlVB8R3QNu3Swv4rKhzMQ 1 0 18 7729 37.8mb 37.8mb
yellow open logstash-2022.07.18-000001 Y1kgP_SsSkSwrF-icO-rgg 1 1 7 0 32.6kb 32.6kb
green open .tasks UdOajguyTJOoVoQsguy4gw 1 0 12 0 57.6kb 57.6kb
green open .monitoring-es-7-2022.07.18 dDgfnNXKToK9yjDa7HI4sA 1 0 47806 47744 31.5mb 31.5mb
green open .geoip_databases Pi4uNWYFT22zyIPd9IBLrQ 1 0 40 37 37.8mb 37.8mb
green open .kibana_7.17.5_001 dNxTTQJeT9SIJNkLneujqw 1 0 918 52 2.4mb 2.4mb
green open .security-7 4vdN6HvNSFiOfRlwCqeE8Q 1 0 64 6 248.2kb 248.2kb
green open .monitoring-es-7-2022.07.19 TCUGaaBqShS0EX2SHyOoPw 1 0 72830 11632 82.9mb 82.9mb
green open .apm-custom-link n74vZ34RS7eAushLZdpY2w 1 0 0 0 226b 226b
green open .monitoring-kibana-7-2022.07.18 AQ8QXewRR02IABzCjVs6Sw 1 0 5974 0 1.3mb 1.3mb
green open .monitoring-kibana-7-2022.07.19 qHqrdlImRZO0T5f33oopKw 1 0 8048 0 2.9mb 2.9mb
If I instead add each of those index names to the role separately, I still get the error.
My goal is to carve out the narrowest possible authorization profile for the Logstash user.
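One detail I noticed, though this is just a guess on my part: the 403 is returned for the URL http://localhost:9200/logstash, i.e. the bare name logstash (which I believe is the default ILM rollover alias), and the pattern logstash-* does not match that name. If that is the cause, a narrower role along these lines might be enough (names are my assumption, untested):

```
PUT _security/role/logstash_writer
{
  "cluster": ["manage_index_templates", "monitor", "manage_ilm"],
  "indices": [
    {
      "names": ["logstash", "logstash-*"],
      "privileges": ["write", "create", "create_index", "manage", "manage_ilm"]
    }
  ]
}
```

Can anyone confirm whether the rollover alias itself needs to be covered by the role's index names?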
Thanks in advance, I hope someone can help me.
Rocco