Kibana 7.7.0 Basic version: Management tab missing Security panel when started from Docker

Christian, I will have two index prefixes: one for request/response and the other for java exceptions. Both will be pushed to Elasticsearch through Filebeat -> Kafka -> Logstash. My final goal is to separate who can see the request/response dashboard from who can see the java exceptions dashboard. In other words, to separate Business viewers from Developers.
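To make the goal concrete, the end state I am aiming at is something like the two read-only roles sketched below (role names and privileges are my guess, and each role would additionally get the matching Kibana privileges in the Kibana UI):

```
# Sketch: one read-only role per index prefix, created with the built-in elastic user.
# Role names and privileges here are illustrative, not a tested recipe.
curl -X POST -H "Content-Type: application/json" \
  http://192.168.99.100:9200/_security/role/business_viewer \
  -u elastic:e12345 \
  -d'{"indices":[{"names":["request_logs-*"],"privileges":["read","view_index_metadata"]}]}'

curl -X POST -H "Content-Type: application/json" \
  http://192.168.99.100:9200/_security/role/developer_viewer \
  -u elastic:e12345 \
  -d'{"indices":[{"names":["app_logs-*"],"privileges":["read","view_index_metadata"]}]}'
```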

Here is my complete Logstash configuration:

```
# Note: these xpack.monitoring.* lines are logstash.yml settings;
# they must not be placed inside a pipeline .conf file.
xpack.monitoring.elasticsearch.hosts: ["http://192.168.99.100:9200"]
xpack.monitoring.elasticsearch.username: "logstash_system"
xpack.monitoring.elasticsearch.password: "l12345"

input {
  kafka {
    codec             => "json"
    bootstrap_servers => "kafka1:9092"
    topics            => ["app_logs", "request_logs"]
    tags              => ["my-app"]
  }
}

filter {
  if [fields][topic_name] == "app_logs" {
    grok {
      # The literal brackets around the application name must be escaped in grok
      match          => { "message" => "%{TIMESTAMP_ISO8601:timestamp} *%{LOGLEVEL:level} %{DATA:pid} --- *\[%{DATA:application}\] *%{DATA:class} : %{GREEDYDATA:msglog}" }
      tag_on_failure => ["not_date_line"]
    }
    date {
      match  => ["timestamp", "ISO8601"]
      target => "timestamp"
    }
    if "_grokparsefailure" in [tags] {
      mutate {
        add_field => { "level" => "UNKNOWN" }
      }
    }
  } else if [fields][topic_name] == "request_logs" {
    grok {
      match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} *%{LOGLEVEL:level} %{GREEDYDATA:msglog}" }
    }
    date {
      match  => ["timestamp", "ISO8601"]
      target => "timestamp"
    }
    json {
      source => "msglog"
      target => "parsed_json"
    }
    if [level] == "INFO" or [level] == "WARN" {
      mutate {
        add_field => { "appName" => "%{[parsed_json][appName]}" }
        add_field => { "logType" => "%{[parsed_json][logType]}" }
        # ... several more add_field entries elided ...
        add_field => { "src" => "%{[parsed_json][src]}" }
        add_field => { "transactionId" => "%{[parsed_json][header][x-transaction-id]}" }
        remove_field => [ "json", "message", "parsed_json" ]
      }
    } else {
      mutate {
        add_field    => { "msgerror" => "%{[parsed_json][message]}" }
        remove_field => [ "json", "message", "parsed_json" ]
      }
    }
    # If the sprintf reference did not resolve, the field still holds the literal template
    if [transactionId] == "%{[parsed_json][header][x-transaction-id]}" {
      mutate {
        replace => ["transactionId", "UNKNOWN"]
      }
    }
    mutate {
      convert => {
        "requestBytes" => "integer"
        "responseTime" => "integer"
      }
    }
    if "_grokparsefailure" in [tags] {
      mutate {
        add_field => { "level" => "UNKNOWN" }
      }
    }
  }
}

output {
  elasticsearch {
    hosts => ["http://192.168.99.100:9200"]
    index => "%{[fields][topic_name]}-%{+YYYY.MM.dd}"
  }
}
```

@Luca_Belluccini and @Christian_Dahlqvist, please, how do I create a role that can create and manage indices with a certain prefix?

In the Kibana UI, just add the index prefix followed by `*`.

For the permissions, you might refer to https://www.elastic.co/guide/en/logstash/current/ls-security.html

I would invite you to prefix your indices with `logstash-*` for easier maintainability.
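For reference, the writer role from point one of that page looks roughly like this (a sketch; double-check the exact privileges against the guide for your version):

```
# Role allowing Logstash to create and write to logstash-* indices.
# The cluster privileges are the part most commonly forgotten.
curl -X POST -H "Content-Type: application/json" \
  http://192.168.99.100:9200/_security/role/logstash_writer \
  -u elastic:e12345 \
  -d'{
    "cluster": ["manage_index_templates", "monitor", "manage_ilm"],
    "indices": [
      {
        "names": ["logstash-*"],
        "privileges": ["write", "create", "create_index", "manage", "manage_ilm"]
      }
    ]
  }'
```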

Luca, thanks. But I am still getting the same error.

I created the role below and created a user assigned to this role.

My logstash.conf is now:

```
# logstash.yml settings (not part of the pipeline .conf):
xpack.monitoring.elasticsearch.hosts: ["http://192.168.99.100:9200"]
xpack.monitoring.elasticsearch.username: "logstash_system"
xpack.monitoring.elasticsearch.password: "l12345"

input {
  kafka {
    codec             => "json"
    bootstrap_servers => "kafka1:9092"
    topics            => ["app_logs", "request_logs"]
    tags              => ["alcd"]
  }
}

filter {
  # *** removed ***
}

output {
  elasticsearch {
    hosts    => ["http://192.168.99.100:9200"]
    index    => "%{[fields][topic_name]}-%{+YYYY.MM.dd}"
    user     => "userlog"
    password => "userlog"
  }
}
```

Any extra idea what I am missing? Any clue how to force Logstash to connect to an IP address (I mean, instead of elasticsearch:9200, use my.ip.address.x:9200)? I added a new and specific question regarding this issue at
https://stackoverflow.com/questions/61924438/logstash-unable-to-retrieve-license-information-from-license-response-code-401

You did not follow the instructions detailed in point one of https://www.elastic.co/guide/en/logstash/current/ls-security.html#ls-http-auth-basic, as the role you've created is missing some cluster privileges which are required.

Thanks Luca, you are right. I fixed it, but I am still getting the same error. Do you know if there is some way to force Logstash to use an IP address instead of the URL 'http://elasticsearch:9200/'?

```
logstash_1 | WARNING: All illegal access operations will be denied in a future release
logstash_1 | Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
logstash_1 | [2020-05-21T12:41:12,468][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
logstash_1 | [2020-05-21T12:41:12,488][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.7.0"}
logstash_1 | [2020-05-21T12:41:13,543][WARN ][logstash.monitoringextension.pipelineregisterhook] xpack.monitoring.enabled has not been defined, but found elasticsearch configuration. Please explicitly set xpack.monitoring.enabled: true in logstash.yml
logstash_1 | [2020-05-21T12:41:13,548][WARN ][deprecation.logstash.monitoringextension.pipelineregisterhook] Internal collectors option for Logstash monitoring is deprecated and targeted for removal in the next major version.
logstash_1 | Please configure Metricbeat to monitor Logstash. Documentation can be found at:
logstash_1 | https://www.elastic.co/guide/en/logstash/current/monitoring-with-metricbeat.html
logstash_1 | [2020-05-21T12:41:15,361][INFO ][logstash.licensechecker.licensereader] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
logstash_1 | [2020-05-21T12:41:15,763][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elasticsearch:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :error=>"Got response code '401' contacting Elasticsearch at URL 'http://elasticsearch:9200/'"}
logstash_1 | [2020-05-21T12:41:15,861][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"Got response code '401' contacting Elasticsearch at URL 'http://elasticsearch:9200/_xpack'"}
logstash_1 | [2020-05-21T12:41:15,939][ERROR][logstash.monitoring.internalpipelinesource] Failed to fetch X-Pack information from Elasticsearch. This is likely due to failure to reach a live Elasticsearch cluster.
logstash_1 | [2020-05-21T12:41:16,538][ERROR][logstash.agent ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::ConfigurationError", :message=>"Expected one of [ \t\r\n], \"#\", \"input\", \"filter\", \"output\" at line 1, column 1 (byte 1)", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:58:in `compile_imperative'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:66:in `compile_graph'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:28:in `block in compile_sources'", "org/jruby/RubyArray.java:2577:in `map'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:27:in `compile_sources'", "org/logstash/execution/AbstractPipelineExt.java:181:in `initialize'", "org/logstash/execution/JavaBasePipelineExt.java:67:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:43:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:52:in `execute'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:342:in `block in converge_state'"]}
logstash_1 | [2020-05-21T12:41:17,011][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
logstash_1 | [2020-05-21T12:41:21,818][INFO ][logstash.runner ] Logstash shut down.
dockercomposelogs_logstash_1 exited with code 1
filebeat_1 | 2020-05-21T12:40:54.126Z INFO log/harvester.go:324 File is inactive: /sample-logs/request-2019-10-24.log. Closing because close_inactive of 5m0s reached.
logstash_1 | OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
logstash_1 | WARNING: An illegal reflective access operation has occurred
logstash_1 | WARNING: Illegal reflective access by com.headius.backport9.modules.Modules (file:/usr/share/logstash/logstash-core/lib/jars/jruby-complete-9.2.11.1.jar) to method sun.nio.ch.NativeThread.signal(long)
logstash_1 | WARNING: Please consider reporting this to the maintainers of com.headius.backport9.modules.Modules
logstash_1 | WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
```

Here is my updated role:

And my logstash.conf output section is:

```
output {
  elasticsearch {
    hosts    => ["http://192.168.99.100:9200"]
    #index   => "%{[fields][topic_name]}-%{+YYYY.MM.dd}"
    index    => "logstash-%{+YYYY.MM.dd}"
    user     => "userlog"
    password => "userlog"
  }
}
```

The error shows there is a syntax error in the Logstash pipeline.

The message just before it also shows a 401, meaning you have a wrong username or password.

Try to run the following commands and share the output:

```
curl http://192.168.99.100:9200/_license -u userlog:userlog -vvv

curl -X POST http://192.168.99.100:9200/logstash-test/_doc/1 -d'{"test":1}' -u userlog:userlog -vvv
```

```
C:\Users\mycomp>curl http://192.168.99.100:9200/_license -u userlog:userlog -vvv
*   Trying 192.168.99.100...
* TCP_NODELAY set
* Connected to 192.168.99.100 (192.168.99.100) port 9200 (#0)
* Server auth using Basic with user 'userlog'
> GET /_license HTTP/1.1
> Host: 192.168.99.100:9200
> Authorization: Basic dXNlcmxvZzp1c2VybG9n
> User-Agent: curl/7.55.1
> Accept: */*
>
< HTTP/1.1 200 OK
< content-type: application/json; charset=UTF-8
< content-length: 338
<
{
  "license" : {
    "status" : "active",
    "uid" : "01574148-3044-47d4-8d9e-6ac06615c7a5",
    "type" : "basic",
    "issue_date" : "2020-05-19T19:29:41.432Z",
    "issue_date_in_millis" : 1589916581432,
    "max_nodes" : 1000,
    "issued_to" : "docker-cluster",
    "issuer" : "elasticsearch",
    "start_date_in_millis" : -1
  }
}
* Connection #0 to host 192.168.99.100 left intact

C:\Users\mycomp>curl -X POST http://192.168.99.100:9200/logstash-test/_doc/1 -d'{"test":1}' -u userlog:userlog -vvv
Note: Unnecessary use of -X or --request, POST is already inferred.
*   Trying 192.168.99.100...
* TCP_NODELAY set
* Connected to 192.168.99.100 (192.168.99.100) port 9200 (#0)
* Server auth using Basic with user 'userlog'
> POST /logstash-test/_doc/1 HTTP/1.1
> Host: 192.168.99.100:9200
> Authorization: Basic dXNlcmxvZzp1c2VybG9n
> User-Agent: curl/7.55.1
> Accept: */*
> Content-Length: 10
> Content-Type: application/x-www-form-urlencoded
>
* upload completely sent off: 10 out of 10 bytes
< HTTP/1.1 406 Not Acceptable
< content-type: application/json; charset=UTF-8
< content-length: 97
<
{"error":"Content-Type header [application/x-www-form-urlencoded] is not supported","status":406}* Connection #0 to host 192.168.99.100 left intact

C:\Users\mycomp>
```

I also tried from inside the Logstash Docker container and got the same exception from the second command:

```
C:\Users\mycomp>docker exec -it dockercomposelogs_logstash_1 bash
bash-4.2$ curl http://192.168.99.100:9200/_license -u userlog:userlog -vvv
* About to connect() to 192.168.99.100 port 9200 (#0)
*   Trying 192.168.99.100...
* Connected to 192.168.99.100 (192.168.99.100) port 9200 (#0)
* Server auth using Basic with user 'userlog'
> GET /_license HTTP/1.1
> Authorization: Basic dXNlcmxvZzp1c2VybG9n
> User-Agent: curl/7.29.0
> Host: 192.168.99.100:9200
> Accept: */*
>
< HTTP/1.1 200 OK
< content-type: application/json; charset=UTF-8
< content-length: 338
<
{
  "license" : {
    "status" : "active",
    "uid" : "01574148-3044-47d4-8d9e-6ac06615c7a5",
    "type" : "basic",
    "issue_date" : "2020-05-19T19:29:41.432Z",
    "issue_date_in_millis" : 1589916581432,
    "max_nodes" : 1000,
    "issued_to" : "docker-cluster",
    "issuer" : "elasticsearch",
    "start_date_in_millis" : -1
  }
}
* Connection #0 to host 192.168.99.100 left intact
bash-4.2$ curl -X POST http://192.168.99.100:9200/logstash-test/_doc/1 -d'{"test":1}' -u userlog:userlog -vvv
* About to connect() to 192.168.99.100 port 9200 (#0)
*   Trying 192.168.99.100...
* Connected to 192.168.99.100 (192.168.99.100) port 9200 (#0)
* Server auth using Basic with user 'userlog'
> POST /logstash-test/_doc/1 HTTP/1.1
> Authorization: Basic dXNlcmxvZzp1c2VybG9n
> User-Agent: curl/7.29.0
> Host: 192.168.99.100:9200
> Accept: */*
> Content-Length: 10
> Content-Type: application/x-www-form-urlencoded
>
* upload completely sent off: 10 out of 10 bytes
< HTTP/1.1 406 Not Acceptable
< content-type: application/json; charset=UTF-8
< content-length: 97
<
* Connection #0 to host 192.168.99.100 left intact
{"error":"Content-Type header [application/x-www-form-urlencoded] is not supported","status":406}bash-4.2$
```

I changed your second command a bit, adding a Content-Type: application/json header, and I got a mapper_parsing_exception, but it seems it did connect successfully because I see:

```
* Connected to 192.168.99.100 (192.168.99.100) port 9200 (#0)
* Server auth using Basic with user 'userlog'
```

```
curl -X POST -H "Content-Type: application/json" http://192.168.99.100:9200/logstash-test/_doc/1 -d'{"test":1}' -u userlog:userlog -vvv
Note: Unnecessary use of -X or --request, POST is already inferred.
*   Trying 192.168.99.100...
* TCP_NODELAY set
* Connected to 192.168.99.100 (192.168.99.100) port 9200 (#0)
* Server auth using Basic with user 'userlog'
> POST /logstash-test/_doc/1 HTTP/1.1
> Host: 192.168.99.100:9200
> Authorization: Basic dXNlcmxvZzp1c2VybG9n
> User-Agent: curl/7.55.1
> Accept: */*
> Content-Type: application/json
> Content-Length: 10
>
* upload completely sent off: 10 out of 10 bytes
< HTTP/1.1 400 Bad Request
< content-type: application/json; charset=UTF-8
< content-length: 313
<
{"error":{"root_cause":[{"type":"mapper_parsing_exception","reason":"failed to parse"}],"type":"mapper_parsing_exception","reason":"failed to parse","caused_by":{"type":"not_x_content_exception","reason":"Compressor detection can only be called on some xcontent bytes or compressed xcontent bytes"}},"status":400}* Connection #0 to host 192.168.99.100 left intact
```

It seems the mapper exception is related to the Windows command-line quoting flavour (cmd strips the double quotes and keeps the single ones, so the body sent is no longer valid JSON). I tried from inside the Logstash Docker container, since it is Linux, and I got a different error.
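For reference, a cmd-friendly quoting of the same request escapes the inner double quotes instead of using single quotes (a sketch, untested on my side):

```
curl -X POST -H "Content-Type: application/json" http://192.168.99.100:9200/logstash-test/_doc/1 -d "{\"test\":1}" -u userlog:userlog -vvv
```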

Does "...
blocked by: [TOO_MANY_REQUESTS/12/index read-only / allow delete (api)];"},"status":429 ..." ring a bell in your mind?

```
docker exec -it dockercomposelogs_logstash_1 bash
bash-4.2$ curl -X POST -H "Content-Type: application/json" http://192.168.99.100:9200/logstash-test/_doc/1 -d'{"test":1}' -u userlog:userlog -vvv
* About to connect() to 192.168.99.100 port 9200 (#0)
*   Trying 192.168.99.100...
* Connected to 192.168.99.100 (192.168.99.100) port 9200 (#0)
* Server auth using Basic with user 'userlog'
> POST /logstash-test/_doc/1 HTTP/1.1
> Authorization: Basic dXNlcmxvZzp1c2VybG9n
> User-Agent: curl/7.29.0
> Host: 192.168.99.100:9200
> Accept: */*
> Content-Type: application/json
> Content-Length: 10
>
* upload completely sent off: 10 out of 10 bytes
< HTTP/1.1 429 Too Many Requests
< content-type: application/json; charset=UTF-8
< content-length: 319
<
* Connection #0 to host 192.168.99.100 left intact
{"error":{"root_cause":[{"type":"cluster_block_exception","reason":"index [logstash-test] blocked by: [TOO_MANY_REQUESTS/12/index read-only / allow delete (api)];"}],"type":"cluster_block_exception","reason":"index [logstash-test] blocked by: [TOO_MANY_REQUESTS/12/index read-only / allow delete (api)];"},"status":429}bash-4.2$
```

Given the last response, it seems your cluster has run out of disk space: the disk flood stage (95% disk used, by default) kicked in and set the indices to read-only.
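For instance, something along these lines can confirm the disk usage and, once space has been freed, lift the block manually (a sketch reusing this thread's elastic credentials):

```
# Check per-node disk usage
curl http://192.168.99.100:9200/_cat/allocation?v -u elastic:e12345

# Clear the read-only block on the test index after freeing disk space
curl -X PUT -H "Content-Type: application/json" \
  http://192.168.99.100:9200/logstash-test/_settings \
  -u elastic:e12345 \
  -d'{"index.blocks.read_only_allow_delete": null}'
```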

Sorry, I didn't understand your suggestion. If the indices are set up as read-only, how can Logstash write to them? Beyond that, I am pretty sure that if I remove xpack.security, Logstash will save normally. I can give it a try increasing the disk space, but it is strange to me, since there is almost no data saved in this Elasticsearch and it works without X-Pack.

PS: I can execute your second command using the elastic user.

>docker exec -it dockercomposelogs_logstash_1 bash
bash-4.2$ curl -X POST -H "Content-Type: application/json" http://192.168.99.100:9200/logstash-test/_doc/1 -d'{"test":1}' -u elastic:e12345 -vvv
* About to connect() to 192.168.99.100 port 9200 (#0)
*   Trying 192.168.99.100...
* Connected to 192.168.99.100 (192.168.99.100) port 9200 (#0)
* Server auth using Basic with user 'elastic'
> POST /logstash-test/_doc/1 HTTP/1.1
> Authorization: Basic ZWxhc3RpYzplMTIzNDU=
> User-Agent: curl/7.29.0
> Host: 192.168.99.100:9200
> Accept: */*
> Content-Type: application/json
> Content-Length: 10
>
* upload completely sent off: 10 out of 10 bytes

Please compare the last message, which works properly with the Docker Machine IP address and the elastic user, with this one, which fails when I simply replace the IP address with the container alias.

```
bash-4.2$ curl -X POST -H "Content-Type: application/json" http://elasticsearch:9200/logstash-test/_doc/1 -d'{"test":2}' -u elastic:e12345 -vvv
* About to connect() to elasticsearch port 9200 (#0)
*   Trying 172.18.0.2...
* Connected to elasticsearch (172.18.0.2) port 9200 (#0)
* Server auth using Basic with user 'elastic'
> POST /logstash-test/_doc/1 HTTP/1.1
> Authorization: Basic ZWxhc3RpYzplMTIzNDU=
> User-Agent: curl/7.29.0
> Host: elasticsearch:9200
> Accept: */*
> Content-Type: application/json
> Content-Length: 10
>
* upload completely sent off: 10 out of 10 bytes
< HTTP/1.1 429 Too Many Requests
< content-type: application/json; charset=UTF-8
< content-length: 319
<
* Connection #0 to host elasticsearch left intact
{"error":{"root_cause":[{"type":"cluster_block_exception","reason":"index [logstash-test] blocked by: [TOO_MANY_REQUESTS/12/index read-only / allow delete (api)];"}],"type":"cluster_block_exception","reason":"index [logstash-test] blocked by: [TOO_MANY_REQUESTS/12/index read-only / allow delete (api)];"},"status":429}bash-4.2$
```

Kindly, do you know any trick to force Logstash to use an IP address? I created this question

Hello @jimisdrpc,

I can help but we're encountering several problems.

If your Docker setup is using a single volume for both data and logs, the disk space can fill up quite fast. This is not necessarily related to the indices; it can be caused by the logs, for example.

This is for sure due to the fact that Elasticsearch does not have enough free disk.
You can check with the cat APIs (https://www.elastic.co/guide/en/elasticsearch/reference/current/cat-nodes.html) or any other monitoring tool.

I encourage you not to cross-post to Stack Overflow... Now, where should we follow up?

Please:

  1. Ensure you have enough disk space on the Elasticsearch containers.
  2. If you're on Elasticsearch 7.4+, the index block will be removed automatically once disk space is freed (https://www.elastic.co/guide/en/elasticsearch/reference/current/disk-allocator.html).
  3. Test the curl requests again to ensure the document is written.
  4. Check if the Logstash pipeline configuration is misconfigured, as the last logs showed a problem: `[2020-05-21T12:41:16,538][ERROR][logstash.agent ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::ConfigurationError", :message=>"Expected one of [ \t\r\n], \"#\", \"input\", \"filter\", \"output\" at line 1, column 1 (byte 1)", :backtrace=>...` - this is a usual error if you've edited the file using Notepad. Ensure the file encoding is correct.

Once we're at this point, we can easily resume investigating any error.

I missed the question about using the IP instead of the hostnames. You're using Docker, and the best approach is to use the service names instead of IPs.
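For example, with a Compose file along these lines (a sketch; the image versions and the elasticsearch/logstash service names match this thread, everything else is illustrative), Logstash can reach Elasticsearch simply as `http://elasticsearch:9200`:

```
version: "3"
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.7.0
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=true
    ports:
      - "9200:9200"
  logstash:
    image: docker.elastic.co/logstash/logstash:7.7.0
    depends_on:
      - elasticsearch
    # In the pipeline output, use: hosts => ["http://elasticsearch:9200"]
```

Both services share the Compose default network, so the service name `elasticsearch` resolves from inside the Logstash container.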

Thank you so much. Honestly, you have already answered the topic question: now I can see the Security panel. I will restart from the beginning and post a new question if necessary. As a tip for future readers, there is indeed some limitation because I am using Docker Toolbox with VirtualBox instead of Docker for Windows with Hyper-V. It is not the first time I have faced some weird connection issue. Thanks again.