Kibana 7.7.0 Basic version: Management tab missing Security panel when started from Docker

I can't find Security under Kibana 7.7 when I pull it from Docker instead of downloading and installing Kibana manually. I posted the same question on Stack Overflow (https://stackoverflow.com/questions/61900546/kibana-7-7-0-basic-version-management-tab-missing-security-panel-when-started-f) with all the details.

Can you please check the output of GET _license?
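For example (a sketch, assuming Elasticsearch is reachable on localhost:9200; add -u with your credentials if security is already enabled):

curl http://localhost:9200/_license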


Here you are:

{
  "license" : {
    "status" : "active",
    "uid" : "01574148-3044-47d4-8d9e-6ac06615c7a5",
    "type" : "basic",
    "issue_date" : "2020-05-19T19:29:41.432Z",
    "issue_date_in_millis" : 1589916581432,
    "max_nodes" : 1000,
    "issued_to" : "docker-cluster",
    "issuer" : "elasticsearch",
    "start_date_in_millis" : -1
  }
}

How do you figure out from this response whether it is the Open Source or the Basic version?
Kindly run your eyes over https://stackoverflow.com/questions/61900546/kibana-7-7-0-basic-version-management-tab-missing-security-panel-when-started-f?noredirect=1#comment109487828_61900546. I added more details there. Basically, I can now see the Security panel after adding the xpack.monitoring.elasticsearch settings, but Logstash is failing to connect to Elasticsearch with:

logstash_1 | [2020-05-20T13:39:08,008][INFO ][logstash.licensechecker.licensereader] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
logstash_1 | [2020-05-20T13:39:08,408][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elasticsearch:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :error=>"Got response code '401' contacting Elasticsearch at URL 'http://elasticsearch:9200/'"}
logstash_1 | [2020-05-20T13:39:08,506][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"Got response code '401' contacting Elasticsearch at URL 'http://elasticsearch:9200/_xpack'"}

Do you know how to set Logstash so that instead of trying http://elasticsearch:9200/_xpack it tries http://my.ip.address.number:9200/_xpack? I changed logstash.conf to use x.x.x.x, but it seems it had no effect.

Here is the logstash.conf output section:

output {
  elasticsearch {
    index => "%{[fields][topic_name]}-%{+YYYY.MM.dd}"
    xpack.monitoring.elasticsearch.hosts: ["http://192.168.99.100:9200"]
    xpack.monitoring.elasticsearch.username: "logstash_system"
    xpack.monitoring.elasticsearch.password: => "l12345"
  }
}

Thanks for the answer.

You have a Basic license installed: the "type" : "basic" field in the response tells you that.
The OSS distribution of Elasticsearch has no license endpoint at all, so on OSS this request would simply get an error instead of an answer.

Logstash is being rejected with a 401 error, meaning the credentials you've provided for X-Pack monitoring and/or for the Elasticsearch output of your pipeline are wrong.

I see you're mixing up the logstash.yml file and the pipeline file.

The logstash.yml file requires the following to send the monitoring stats (see documentation):

xpack.monitoring.elasticsearch.hosts: ["http://192.168.99.100:9200"]
xpack.monitoring.elasticsearch.username: "logstash_system"
xpack.monitoring.elasticsearch.password: "l12345"
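Note that these monitoring settings are also what the license checker uses to reach Elasticsearch, which is presumably why your log shows it contacting http://elasticsearch:9200: the Docker image's default logstash.yml points monitoring there. It is also worth setting the monitoring flag explicitly, otherwise Logstash logs a warning at startup:

xpack.monitoring.enabled: true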

Then you have the actual Logstash pipeline, which should be similar to the one in the documentation, e.g.

... your pipeline with input, filters...

output {
  elasticsearch {
    index => "%{[fields][topic_name]}-%{+YYYY.MM.dd}"
    hosts => [ "http://the-target-cluster-node-1:9200", "http://the-target-cluster-node-2:9200" ]
    user => "a user which has the rights to write to indices named as all the possible values of `topic_name`"
    password => "the password"
  }
}
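Since you run the stack with Docker, both files have to be mounted into the container. A minimal docker-compose sketch, assuming the official image's default paths (the service name and local file locations are placeholders):

logstash:
  image: docker.elastic.co/logstash/logstash:7.7.0
  volumes:
    # settings file (monitoring credentials live here)
    - ./logstash.yml:/usr/share/logstash/config/logstash.yml
    # pipeline file (input/filter/output live here)
    - ./logstash.conf:/usr/share/logstash/pipeline/logstash.conf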

Thank you so much. Please, how do I create "a user which has the rights to write to indices named as all the possible values of topic_name"? I went to ...:5601/app/kibana#/management/security/users and tried to create a user with such rights, but I didn't find how. I also went to ...:5601/app/kibana#/management/security/roles and didn't find a role that can write to "indices named as all the possible values of topic_name". I read the whole documentation it suggested and didn't find the answer (I guess it is obvious for someone with more experience). Please just give me the first steps.

I guess I have to pick the index write privilege below. But what do I type in the Index box? Well, there is no index at all yet, since Logstash didn't connect and create the index.

How many topic names do you have/expect to have? Do they follow a naming convention?

One way to handle this would be to give these indices a common prefix and create a role that can create and manage indices with that prefix.
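For example, the indices part of such a role could be created through the Security API instead of the UI. A sketch, where the role name, the prefix and the admin credentials are placeholders:

curl -X POST -H "Content-Type: application/json" -u elastic:changeme http://192.168.99.100:9200/_security/role/logstash_writer -d'{ "indices": [ { "names": [ "logstash-*" ], "privileges": [ "create_index", "write", "manage" ] } ] }'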

Christian, I will have two index prefixes. One is for request/response and the other for Java exceptions. Both will be pushed to Elasticsearch through Filebeat -> Kafka -> Logstash. My final goal is to separate who can see the request/response dashboard from who can see the Java exceptions dashboard. In other words, to separate business viewers from developers.

Here is my complete Logstash configuration:

xpack.monitoring.elasticsearch.hosts: ["http://192.168.99.100:9200"]
xpack.monitoring.elasticsearch.username: "logstash_system"
xpack.monitoring.elasticsearch.password: => "l12345"

input {
  kafka {
    codec => "json"
    bootstrap_servers => "kafka1:9092"
    topics => ["app_logs","request_logs"]
    tags => ["my-app"]
  }
}

filter {
  if [fields][topic_name] == "app_logs" {
    grok {
      match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} *%{LOGLEVEL:level} %{DATA:pid} --- *[%{DATA:application}] *%{DATA:class} : %{GREEDYDATA:msglog}" }
      tag_on_failure => ["not_date_line"]
    }
    date {
      match => ["timestamp", "ISO8601"]
      target => "timestamp"
    }
    if "_grokparsefailure" in [tags] {
      mutate {
        add_field => { "level" => "UNKNOWN" }
      }
    }
  } else if [fields][topic_name] == "request_logs" {
    grok {
      match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} *%{LOGLEVEL:level} %{GREEDYDATA:msglog}" }
    }
    date {
      match => ["timestamp", "ISO8601"]
      target => "timestamp"
    }
    json {
      source => "msglog"
      target => "parsed_json"
    }
    if [level] == "INFO" or [level] == "WARN" {
      mutate {
        add_field => {"appName" => "%{[parsed_json][appName]}"}
        add_field => {"logType" => "%{[parsed_json][logType]}"}
        ... several fields
        add_field => {"src" => "%{[parsed_json][src]}"}
        add_field => {"transactionId" => "%{[parsed_json][header][x-transaction-id]}"}
        remove_field => [ "json", "message" ]
        remove_field => [ "json", "parsed_json" ]
      }
    } else {
      mutate {
        add_field => {"msgerror" => "%{[parsed_json][message]}"}
        remove_field => [ "json", "message" ]
        remove_field => [ "json", "parsed_json" ]
      }
    }
    if [transactionId] == "%{[parsed_json][header][x-transaction-id]}" {
      mutate {
        replace => ["transactionId","UNKNOWN"]
      }
    }
    mutate {
      convert => {"requestBytes" => "integer"}
      convert => {"responseTime" => "integer"}
    }
    if "_grokparsefailure" in [tags] {
      mutate {
        add_field => { "level" => "UNKNOWN" }
      }
    }
  }
}

output {
  elasticsearch {
    hosts => ["http://192.168.99.100:9200"]
    index => "%{[fields][topic_name]}-%{+YYYY.MM.dd}"
  }
}

@Luca_Belluccini and @Christian_Dahlqvist, please, how do I create a role that can create and manage indices with a certain prefix?

Just add the index pattern in the Kibana UI: the prefix followed by *.

For the permissions, you might refer to https://www.elastic.co/guide/en/logstash/current/ls-security.html

I might invite you to prefix the indices with logstash- (so the pattern is logstash-*) for easier maintainability.
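The linked page then creates a user assigned to that role; a sketch with placeholder credentials:

curl -X POST -H "Content-Type: application/json" -u elastic:changeme http://192.168.99.100:9200/_security/user/logstash_internal -d'{ "password": "changeme", "roles": [ "logstash_writer" ] }'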

Luca, thanks. But I am still getting the same error.

I created the role below and created a user assigned to this role.

My logstash.conf now is:

xpack.monitoring.elasticsearch.hosts: ["http://192.168.99.100:9200"]
xpack.monitoring.elasticsearch.username: "logstash_system"
xpack.monitoring.elasticsearch.password: => "l12345"

input {
  kafka {
    codec => "json"
    bootstrap_servers => "kafka1:9092"
    topics => ["app_logs","request_logs"]
    tags => ["alcd"]
  }
}

filter {
  *** removed
}

output {
  elasticsearch {
    hosts => ["http://192.168.99.100:9200"]
    index => "%{[fields][topic_name]}-%{+YYYY.MM.dd}"
    user => "userlog"
    password => "userlog"
  }
}

Any extra idea what I am missing? Any clue how to force Logstash to connect to an IP address (I mean, instead of elasticsearch:9200, use my.ip.address.x:9200)? I added a new, specific question regarding this issue at
https://stackoverflow.com/questions/61924438/logstash-unable-to-retrieve-license-information-from-license-response-code-401

You did not follow the instructions detailed at point one of https://www.elastic.co/guide/en/logstash/current/ls-security.html#ls-http-auth-basic, as the role you've created is missing some cluster settings which are required.
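For reference, point one of that page defines the role with the cluster privileges included, roughly as follows (admin credentials are placeholders):

curl -X POST -H "Content-Type: application/json" -u elastic:changeme http://192.168.99.100:9200/_security/role/logstash_writer -d'{ "cluster": [ "manage_index_templates", "monitor", "manage_ilm" ], "indices": [ { "names": [ "logstash-*" ], "privileges": [ "write", "create", "create_index", "delete", "manage", "manage_ilm" ] } ] }'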

Thanks Luca. You are right. I fixed it. But I am still getting the same error. Do you know if there is some way to force Logstash to use an IP address instead of the URL http://elasticsearch:9200/?

logstash_1 | WARNING: All illegal access operations will be denied in a future release
logstash_1 | Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
logstash_1 | [2020-05-21T12:41:12,468][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
logstash_1 | [2020-05-21T12:41:12,488][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.7.0"}
logstash_1 | [2020-05-21T12:41:13,543][WARN ][logstash.monitoringextension.pipelineregisterhook] xpack.monitoring.enabled has not been defined, but found elasticsearch configuration. Please explicitly set xpack.monitoring.enabled: true in logstash.yml
logstash_1 | [2020-05-21T12:41:13,548][WARN ][deprecation.logstash.monitoringextension.pipelineregisterhook] Internal collectors option for Logstash monitoring is deprecated and targeted for removal in the next major version.
logstash_1 | Please configure Metricbeat to monitor Logstash. Documentation can be found at:
logstash_1 | https://www.elastic.co/guide/en/logstash/current/monitoring-with-metricbeat.html
logstash_1 | [2020-05-21T12:41:15,361][INFO ][logstash.licensechecker.licensereader] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
logstash_1 | [2020-05-21T12:41:15,763][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elasticsearch:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :error=>"Got response code '401' contacting Elasticsearch at URL 'http://elasticsearch:9200/'"}
logstash_1 | [2020-05-21T12:41:15,861][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"Got response code '401' contacting Elasticsearch at URL 'http://elasticsearch:9200/_xpack'"}
logstash_1 | [2020-05-21T12:41:15,939][ERROR][logstash.monitoring.internalpipelinesource] Failed to fetch X-Pack information from Elasticsearch. This is likely due to failure to reach a live Elasticsearch cluster.
logstash_1 | [2020-05-21T12:41:16,538][ERROR][logstash.agent ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::ConfigurationError", :message=>"Expected one of [ \t\r\n], \"#\", \"input\", \"filter\", \"output\" at line 1, column 1 (byte 1)", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:58:in `compile_imperative'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:66:in `compile_graph'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:28:in `block in compile_sources'", "org/jruby/RubyArray.java:2577:in `map'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:27:in `compile_sources'", "org/logstash/execution/AbstractPipelineExt.java:181:in `initialize'", "org/logstash/execution/JavaBasePipelineExt.java:67:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:43:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:52:in `execute'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:342:in `block in converge_state'"]}
logstash_1 | [2020-05-21T12:41:17,011][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
logstash_1 | [2020-05-21T12:41:21,818][INFO ][logstash.runner ] Logstash shut down.
dockercomposelogs_logstash_1 exited with code 1
filebeat_1 | 2020-05-21T12:40:54.126Z INFO log/harvester.go:324 File is inactive: /sample-logs/request-2019-10-24.log. Closing because close_inactive of 5m0s reached.
logstash_1 | OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
logstash_1 | WARNING: An illegal reflective access operation has occurred
logstash_1 | WARNING: Illegal reflective access by com.headius.backport9.modules.Modules (file:/usr/share/logstash/logstash-core/lib/jars/jruby-complete-9.2.11.1.jar) to method sun.nio.ch.NativeThread.signal(long)
logstash_1 | WARNING: Please consider reporting this to the maintainers of com.headius.backport9.modules.Modules
logstash_1 | WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations

Here is my updated role:

And my logstash.conf output section is:

output {
  elasticsearch {
    hosts => ["http://192.168.99.100:9200"]
    #index => "%{[fields][topic_name]}-%{+YYYY.MM.dd}"
    index => "logstash-{+YYYY.MM.dd}"
    user => "userlog"
    password => "userlog"
  }
}

The error shows there is a syntax error in the Logstash pipeline.

The message just before also shows a 401, meaning you have a wrong username or password.

Try to run the following commands and share the output:

curl http://192.168.99.100:9200/_license -u userlog:userlog -vvv

curl -X POST http://192.168.99.100:9200/logstash-test/_doc/1 -d'{"test":1}' -u userlog:userlog -vvv

C:\Users\mycomp>curl http://192.168.99.100:9200/_license -u userlog:userlog -vvv
* Trying 192.168.99.100...
* TCP_NODELAY set
* Connected to 192.168.99.100 (192.168.99.100) port 9200 (#0)
* Server auth using Basic with user 'userlog'
> GET /_license HTTP/1.1
> Host: 192.168.99.100:9200
> Authorization: Basic dXNlcmxvZzp1c2VybG9n
> User-Agent: curl/7.55.1
> Accept: */*
>
< HTTP/1.1 200 OK
< content-type: application/json; charset=UTF-8
< content-length: 338
<
{
  "license" : {
    "status" : "active",
    "uid" : "01574148-3044-47d4-8d9e-6ac06615c7a5",
    "type" : "basic",
    "issue_date" : "2020-05-19T19:29:41.432Z",
    "issue_date_in_millis" : 1589916581432,
    "max_nodes" : 1000,
    "issued_to" : "docker-cluster",
    "issuer" : "elasticsearch",
    "start_date_in_millis" : -1
  }
}
* Connection #0 to host 192.168.99.100 left intact

C:\Users\mycomp>curl -X POST http://192.168.99.100:9200/logstash-test/_doc/1 -d'{"test":1}' -u userlog:userlog -vvv
Note: Unnecessary use of -X or --request, POST is already inferred.
* Trying 192.168.99.100...
* TCP_NODELAY set
* Connected to 192.168.99.100 (192.168.99.100) port 9200 (#0)
* Server auth using Basic with user 'userlog'
> POST /logstash-test/_doc/1 HTTP/1.1
> Host: 192.168.99.100:9200
> Authorization: Basic dXNlcmxvZzp1c2VybG9n
> User-Agent: curl/7.55.1
> Accept: */*
> Content-Length: 10
> Content-Type: application/x-www-form-urlencoded
>
* upload completely sent off: 10 out of 10 bytes
< HTTP/1.1 406 Not Acceptable
< content-type: application/json; charset=UTF-8
< content-length: 97
<
{"error":"Content-Type header [application/x-www-form-urlencoded] is not supported","status":406}* Connection #0 to host 192.168.99.100 left intact

C:\Users\mycomp>

I tried also from inside the Docker Logstash container and I got the same exception from the second command:

C:\Users\mycomp>docker exec -it dockercomposelogs_logstash_1 bash
bash-4.2$ curl http://192.168.99.100:9200/_license -u userlog:userlog -vvv
* About to connect() to 192.168.99.100 port 9200 (#0)
* Trying 192.168.99.100...
* Connected to 192.168.99.100 (192.168.99.100) port 9200 (#0)
* Server auth using Basic with user 'userlog'
> GET /_license HTTP/1.1
> Authorization: Basic dXNlcmxvZzp1c2VybG9n
> User-Agent: curl/7.29.0
> Host: 192.168.99.100:9200
> Accept: */*
>
< HTTP/1.1 200 OK
< content-type: application/json; charset=UTF-8
< content-length: 338
<
{
  "license" : {
    "status" : "active",
    "uid" : "01574148-3044-47d4-8d9e-6ac06615c7a5",
    "type" : "basic",
    "issue_date" : "2020-05-19T19:29:41.432Z",
    "issue_date_in_millis" : 1589916581432,
    "max_nodes" : 1000,
    "issued_to" : "docker-cluster",
    "issuer" : "elasticsearch",
    "start_date_in_millis" : -1
  }
}
* Connection #0 to host 192.168.99.100 left intact
bash-4.2$ curl -X POST http://192.168.99.100:9200/logstash-test/_doc/1 -d'{"test":1}' -u userlog:userlog -vvv
* About to connect() to 192.168.99.100 port 9200 (#0)
* Trying 192.168.99.100...
* Connected to 192.168.99.100 (192.168.99.100) port 9200 (#0)
* Server auth using Basic with user 'userlog'
> POST /logstash-test/_doc/1 HTTP/1.1
> Authorization: Basic dXNlcmxvZzp1c2VybG9n
> User-Agent: curl/7.29.0
> Host: 192.168.99.100:9200
> Accept: */*
> Content-Length: 10
> Content-Type: application/x-www-form-urlencoded
>
* upload completely sent off: 10 out of 10 bytes
< HTTP/1.1 406 Not Acceptable
< content-type: application/json; charset=UTF-8
< content-length: 97
<
* Connection #0 to host 192.168.99.100 left intact
{"error":"Content-Type header [application/x-www-form-urlencoded] is not supported","status":406}
bash-4.2$

I changed your second command a bit, adding the Content-Type: application/json header, and I got a mapper_parsing_exception, but it seems it did connect successfully because I see:

* Connected to 192.168.99.100 (192.168.99.100) port 9200 (#0)
* Server auth using Basic with user 'userlog'

curl -X POST -H "Content-Type: application/json" http://192.168.99.100:9200/logstash-test/_doc/1 -d'{"test":1}' -u userlog:userlog -vvv
Note: Unnecessary use of -X or --request, POST is already inferred.
* Trying 192.168.99.100...
* TCP_NODELAY set
* Connected to 192.168.99.100 (192.168.99.100) port 9200 (#0)
* Server auth using Basic with user 'userlog'
> POST /logstash-test/_doc/1 HTTP/1.1
> Host: 192.168.99.100:9200
> Authorization: Basic dXNlcmxvZzp1c2VybG9n
> User-Agent: curl/7.55.1
> Accept: */*
> Content-Type: application/json
> Content-Length: 10
>
* upload completely sent off: 10 out of 10 bytes
< HTTP/1.1 400 Bad Request
< content-type: application/json; charset=UTF-8
< content-length: 313
<
{"error":{"root_cause":[{"type":"mapper_parsing_exception","reason":"failed to parse"}],"type":"mapper_parsing_exception","reason":"failed to parse","caused_by":{"type":"not_x_content_exception","reason":"Compressor detection can only be called on some xcontent bytes or compressed xcontent bytes"}},"status":400}* Connection #0 to host 192.168.99.100 left intact

It seems the mapper exception is related to how the Windows command line quotes the JSON body. I tried from inside the Docker Logstash container, since it is Linux, and I got a different error.
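Presumably cmd.exe passes the single quotes through literally, so on Windows the body would need escaped double quotes instead; a sketch of the same request:

curl -X POST -H "Content-Type: application/json" http://192.168.99.100:9200/logstash-test/_doc/1 -u userlog:userlog -d "{\"test\":1}"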

Does "...
blocked by: [TOO_MANY_REQUESTS/12/index read-only / allow delete (api)];"},"status":429 ..." ring a bell in your mind?

docker exec -it dockercomposelogs_logstash_1 bash
bash-4.2$ curl -X POST -H "Content-Type: application/json" http://192.168.99.100:9200/logstash-test/_doc/1 -d'{"test":1}' -u userlog:userlog -vvv
* About to connect() to 192.168.99.100 port 9200 (#0)
* Trying 192.168.99.100...
* Connected to 192.168.99.100 (192.168.99.100) port 9200 (#0)
* Server auth using Basic with user 'userlog'
> POST /logstash-test/_doc/1 HTTP/1.1
> Authorization: Basic dXNlcmxvZzp1c2VybG9n
> User-Agent: curl/7.29.0
> Host: 192.168.99.100:9200
> Accept: */*
> Content-Type: application/json
> Content-Length: 10
>
* upload completely sent off: 10 out of 10 bytes
< HTTP/1.1 429 Too Many Requests
< content-type: application/json; charset=UTF-8
< content-length: 319
<
* Connection #0 to host 192.168.99.100 left intact
{"error":{"root_cause":[{"type":"cluster_block_exception","reason":"index [logstash-test] blocked by: [TOO_MANY_REQUESTS/12/index read-only / allow delete (api)];"}],"type":"cluster_block_exception","reason":"index [logstash-test] blocked by: [TOO_MANY_REQUESTS/12/index read-only / allow delete (api)];"},"status":429}
bash-4.2$

Given the last response, it seems your cluster has run out of disk space: the flood-stage disk watermark (95% disk full) kicked in and set the indices to read-only.
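Once you free disk space (or raise the watermarks), the block can be removed; a sketch, assuming the same host and a user with the manage privilege on these indices:

curl -X PUT -H "Content-Type: application/json" -u elastic:changeme http://192.168.99.100:9200/_all/_settings -d'{ "index.blocks.read_only_allow_delete": null }'

On recent 7.x releases the block should also be released automatically once disk usage drops back below the high watermark.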