Logstash Docker container cannot log into Elasticsearch Docker container

I have two Docker containers set up as follows:

  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.6.0
    volumes:
      - ./config/elasticsearch/esdata:/usr/share/elasticsearch/data
      - ./config/elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro
    stdin_open: true # docker run -i
    tty: true        # docker run -t
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xmx1024m -Xms1024m"
    healthcheck:
       test: curl -s http://elasticsearch:9200 >/dev/null || exit 1
       interval: 30s
       timeout: 10s
       retries: 50
    networks:
      - elk

  logstash:
    image: docker.elastic.co/logstash/logstash:8.6.0
    volumes:
      - ./config/logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml:ro
      - ./config/logstash/pipeline:/usr/share/logstash/pipeline:ro
      - /var/log/GDPR/myapplication:/var/log/GDPR/myapplication:ro
    ports:
      - "5000:5000"
      - "4320:4320"
      - "4321:4321"
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk
    depends_on:
      elasticsearch:
        condition: service_healthy

    links:
      - elasticsearch
networks:
  elk:
    driver: bridge
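Once both services are up, reachability and credentials can be checked from inside the Logstash container. A sketch, assuming the compose service names above (the `elasticsearch` hostname resolves on the `elk` network; the password is a placeholder):

```shell
# Probe Elasticsearch from the logstash container; a 401 means it is
# reachable but wants credentials (curl -s still exits 0 on a 401)
docker compose exec logstash curl -s -o /dev/null -w '%{http_code}\n' http://elasticsearch:9200

# Then repeat with basic auth; a 200 confirms the credentials work
docker compose exec logstash curl -s -u logstash_writer:xxxxx http://elasticsearch:9200
```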

and I have a pipeline set up with the following output:

output {
    elasticsearch {
      hosts => ["elasticsearch:9200"]
      index => "syslog-%{[hostname]}"
      user => "logstash_writer"
      password => "xxxxx"
    }
    rabbitmq {
      host => "rabbitmq"
      exchange => "CloudMapper"
      exchange_type => "fanout"
    }
    stdout { codec => rubydebug }
}

However, I get this error in logstash:

logstash_1       | [2023-03-23T09:51:55,427][WARN ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] Elasticsearch Output configured with `ecs_compatibility => v8`, which resolved to an UNRELEASED preview of version 8.0.0 of the Elastic Common Schema. Once ECS v8 and an updated release of this plugin are publicly available, you will need to update this plugin to resolve this warning.
logstash_1       | [2023-03-23T09:51:55,428][WARN ][logstash.javapipeline    ][.monitoring-logstash] 'pipeline.ordered' is enabled and is likely less efficient, consider disabling if preserving event order is not necessary
logstash_1       | [2023-03-23T09:51:55,434][INFO ][logstash.javapipeline    ][.monitoring-logstash] Starting pipeline {:pipeline_id=>".monitoring-logstash", "pipeline.workers"=>1, "pipeline.batch.size"=>2, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>2, "pipeline.sources"=>["monitoring pipeline"], :thread=>"#<Thread:0x75ae70aa@/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:131 run>"}
logstash_1       | [2023-03-23T09:51:55,655][INFO ][logstash.codecs.jsonlines] ECS compatibility is enabled but `target` option was not specified. This may cause fields to be set at the top-level of the event where they are likely to clash with the Elastic Common Schema. It is recommended to set the `target` option to avoid potential schema conflicts (if your data is ECS compliant or non-conflicting, feel free to ignore this message)
logstash_1       | [2023-03-23T09:51:56,372][INFO ][logstash.codecs.json     ] ECS compatibility is enabled but `target` option was not specified. This may cause fields to be set at the top-level of the event where they are likely to clash with the Elastic Common Schema. It is recommended to set the `target` option to avoid potential schema conflicts (if your data is ECS compliant or non-conflicting, feel free to ignore this message)
logstash_1       | [2023-03-23T09:51:56,422][INFO ][logstash.javapipeline    ] Pipeline `main` is configured with `pipeline.ecs_compatibility: v8` setting. All plugins in this pipeline will default to `ecs_compatibility => v8` unless explicitly configured otherwise.
logstash_1       | [2023-03-23T09:51:56,448][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//elasticsearch:9200"]}
logstash_1       | [2023-03-23T09:51:56,453][INFO ][logstash.javapipeline    ][.monitoring-logstash] Pipeline Java execution initialization time {"seconds"=>1.02}
logstash_1       | [2023-03-23T09:51:56,470][INFO ][logstash.javapipeline    ][.monitoring-logstash] Pipeline started {"pipeline.id"=>".monitoring-logstash"}
logstash_1       | [2023-03-23T09:51:56,464][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://logstash_writer:xxxxxx@elasticsearch:9200/]}}
logstash_1       | [2023-03-23T09:51:57,740][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"http://logstash_writer:xxxxxx@elasticsearch:9200/"}
logstash_1       | [2023-03-23T09:51:57,753][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch version determined (8.6.0) {:es_version=>8}
logstash_1       | [2023-03-23T09:51:57,753][WARN ][logstash.outputs.elasticsearch][main] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>8}
logstash_1       | [2023-03-23T09:51:57,782][INFO ][logstash.outputs.elasticsearch][main] Config is not compliant with data streams. `data_stream => auto` resolved to `false`
logstash_1       | [2023-03-23T09:51:57,782][INFO ][logstash.outputs.elasticsearch][main] Config is not compliant with data streams. `data_stream => auto` resolved to `false`
logstash_1       | [2023-03-23T09:51:57,782][WARN ][logstash.outputs.elasticsearch][main] Elasticsearch Output configured with `ecs_compatibility => v8`, which resolved to an UNRELEASED preview of version 8.0.0 of the Elastic Common Schema. Once ECS v8 and an updated release of this plugin are publicly available, you will need to update this plugin to resolve this warning.
logstash_1       | [2023-03-23T09:51:57,782][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//elasticsearch:9200"]}
logstash_1       | [2023-03-23T09:51:57,791][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
logstash_1       | [2023-03-23T09:51:57,795][INFO ][logstash.outputs.elasticsearch][main] Using a default mapping template {:es_version=>8, :ecs_compatibility=>:v8}
logstash_1       | [2023-03-23T09:51:57,819][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'http://elasticsearch:9200/'"}
logstash_1       | [2023-03-23T09:51:57,832][INFO ][logstash.outputs.elasticsearch][main] Config is not compliant with data streams. `data_stream => auto` resolved to `false`
logstash_1       | [2023-03-23T09:51:57,832][WARN ][logstash.outputs.elasticsearch][main] Elasticsearch Output configured with `ecs_compatibility => v8`, which resolved to an UNRELEASED preview of version 8.0.0 of the Elastic Common Schema. Once ECS v8 and an updated release of this plugin are publicly available, you will need to update this plugin to resolve this warning.
logstash_1       | [2023-03-23T09:51:57,941][INFO ][logstash.outputs.rabbitmq][main] Connected to RabbitMQ {:url=>"amqp://guest:XXXXXX@localhost:5672/"}
logstash_1       | [2023-03-23T09:51:57,979][WARN ][logstash.filters.grok    ][main] ECS v8 support is a preview of the unreleased ECS v8, and uses the v1 patterns. When Version 8 of the Elastic Common Schema becomes available, this plugin will need to be updated
logstash_1       | [2023-03-23T09:51:58,135][INFO ][logstash.javapipeline    ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, "pipeline.sources"=>["/usr/share/logstash/pipeline/dlm_json.conf", "/usr/share/logstash/pipeline/tcp_line.conf"], :thread=>"#<Thread:0x40c1b968@/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:131 run>"}
logstash_1       | [2023-03-23T09:51:58,853][INFO ][logstash.javapipeline    ][main] Pipeline Java execution initialization time {"seconds"=>0.72}
logstash_1       | [2023-03-23T09:51:58,955][INFO ][logstash.inputs.tcp      ][main][400345d7c05181a57b0233ba5138aadb7696643ede494e3146418e21e00c4124] Starting tcp input listener {:address=>"0.0.0.0:4321", :ssl_enable=>false}
logstash_1       | [2023-03-23T09:51:58,961][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}
logstash_1       | [2023-03-23T09:51:58,964][INFO ][logstash.inputs.tcp      ][main][67ca90918584d8281f37765ed46ff82e5a3db0a18361dd79579bf9607a3678b8] Starting tcp input listener {:address=>"0.0.0.0:4320", :ssl_enable=>false}
logstash_1       | [2023-03-23T09:51:58,975][INFO ][logstash.agent           ] Pipelines running {:count=>2, :running_pipelines=>[:".monitoring-logstash", :main], :non_running_pipelines=>[]}
logstash_1       | [2023-03-23T09:52:02,827][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'http://elasticsearch:9200/'"}
logstash_1       | [2023-03-23T09:52:07,834][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :message=>"Got response code '401' contacting Elasticsearch at URL 'http://elasticsearch:9200/'"}

This is the user setup and its role:

However, when I get onto the Logstash container and run the following, I can log in:

$ curl elasticsearch:9200
{"error":{"root_cause":[{"type":"security_exception","reason":"missing authentication credentials for REST request [/]","header":{"WWW-Authenticate":["Basic realm=\"security\" charset=\"UTF-8\"","ApiKey"]}}],"type":"security_exception","reason":"missing authentication credentials for REST request [/]","header":{"WWW-Authenticate":["Basic realm=\"security\" charset=\"UTF-8\"","ApiKey"]}},"status":401}
$ curl logstash_writer:xxxxx@elasticsearch:9200
{
  "name" : "e82b58f667db",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "_-n6mq__T7issVWVXUGL9g",
  "version" : {
    "number" : "8.6.0",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "f67ef2df40237445caa70e2fef79471cc608d70d",
    "build_date" : "2023-01-04T09:35:21.782467981Z",
    "build_snapshot" : false,
    "lucene_version" : "9.4.2",
    "minimum_wire_compatibility_version" : "7.17.0",
    "minimum_index_compatibility_version" : "7.0.0"
  },
  "tagline" : "You Know, for Search"
}

Please help!

To add to it, this is my elasticsearch config:

#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#@path.data@
#
# Path to log files:
#
#@path.logs@

#  data: /var/data/elasticsearch
#  logs: /var/log/elasticsearch

path:
  data: /usr/share/elasticsearch/data
  logs: /usr/share/elasticsearch/logs

#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
#
#network.host: 192.168.0.1
#
# By default Elasticsearch listens for HTTP traffic on the first free port it
# finds starting at 9200. Set a specific HTTP port here:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: ["host1", "host2"]
discovery.type: single-node
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
#cluster.initial_master_nodes: ["node-1", "node-2"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# --------------------------------- Readiness ----------------------------------
#
# Enable an unauthenticated TCP readiness endpoint on localhost
#
#readiness.port: 9399
#
# ---------------------------------- Various -----------------------------------
#
# Allow wildcard deletion of indices:
#
#action.destructive_requires_name: false

#----------------------- BEGIN SECURITY AUTO CONFIGURATION -----------------------
#
# The following settings, TLS certificates, and keys have been automatically
# generated to configure Elasticsearch security features on 23-01-2023 13:01:54
#
# --------------------------------------------------------------------------------

# Enable security features
xpack.security.enabled: true

xpack.security.enrollment.enabled: false

# Enable encryption for HTTP API client connections, such as Kibana, Logstash, and Agents
xpack.security.http.ssl:
  enabled: false
    #keystore.path: certs/http.p12

# Enable encryption and mutual authentication between cluster nodes
xpack.security.transport.ssl:
  enabled: false
    #verification_mode: certificate
    #keystore.path: certs/transport.p12
    #truststore.path: certs/transport.p12
# Create a new cluster with the current node only
# Additional nodes can still join the cluster later
#cluster.initial_master_nodes: ["aae5f3cd687a"]

# Allow HTTP API connections from anywhere
# Connections are encrypted and require user authentication
http.host: 0.0.0.0

# Allow other nodes to join the cluster from anywhere
# Connections are encrypted and mutually authenticated
#transport.host: 0.0.0.0

#----------------------- END SECURITY AUTO CONFIGURATION -------------------------
#
# logger.org.elasticsearch.discovery: DEBUG

Bump. There have been no replies; is anyone available to help?

You have to use: curl -u logstash_writer:pass http://elasticsearch:9200

Check your:

  • password, via curl; if it is not working, reset it
  • roles and privileges; they should be similar to those of the "logstash_system" user
  • the documentation, for reference
  • if it is still not working, use the "elastic" user temporarily
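To rule out a bad password, it can be reset with the `elastic` superuser and re-tested. A sketch using the Change Password API, assuming plain HTTP as in this setup and placeholder passwords:

```shell
# Reset the logstash_writer password (Security API: change password)
curl -u elastic:CHANGEME -X POST "http://elasticsearch:9200/_security/user/logstash_writer/_password" \
  -H 'Content-Type: application/json' \
  -d '{"password": "new-password-here"}'

# Verify the new credentials; -u user:pass is equivalent to user:pass@ in the URL
curl -u logstash_writer:new-password-here http://elasticsearch:9200
```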

As I mentioned above, this is working, which makes it even more confusing.

Also, thanks for replying. I guess using the 'user:pass@host' method is the same as the -u option.

Sorry, my mistake, you are right; it's the same for curl.

The additional privileges which you have set are fine: monitor, view_index_metadata, create_doc.

I have tested in my environment:
Role logstash_writer:

  • "cluster": ["manage_index_templates", "monitor", "manage_ilm"]
  • "privileges": ["write", "create", "create_index", "manage", "manage_ilm"]
  • "indices": ["syslog*"]
  • "Run As privileges" - leave empty

User logstash_writer:

  • "roles": ["logstash_writer"] - I think you haven't assigned the role "logstash_writer" to the user logstash_writer
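For comparison, the role and user above can also be created from the command line via the Security API. A sketch (the `elastic` password and the `logstash_writer` password are placeholders; the index pattern matches this thread's output config):

```shell
# Create (or overwrite) the logstash_writer role
curl -u elastic:CHANGEME -X POST "http://elasticsearch:9200/_security/role/logstash_writer" \
  -H 'Content-Type: application/json' -d '{
  "cluster": ["manage_index_templates", "monitor", "manage_ilm"],
  "indices": [{
    "names": ["syslog*"],
    "privileges": ["write", "create", "create_index", "manage", "manage_ilm"]
  }]
}'

# Create the user AND assign it the role -- the step that is easy to miss
curl -u elastic:CHANGEME -X POST "http://elasticsearch:9200/_security/user/logstash_writer" \
  -H 'Content-Type: application/json' -d '{
  "password": "xxxxx",
  "roles": ["logstash_writer"]
}'
```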

I wish that was the case :frowning: . Have a look at the image in the first post and you'll see this is not the case.

I have been leaving it for the last few weeks due to other work, but I will revisit it over the next two days.

Hopefully coming back to it after some time away will let me spot something.

Thanks in the meantime Rios

Try changing the role to superuser, just temporarily.

Is there anything in roles.yml?

Have you checked the Elasticsearch logs? There should be some traces there.

I have found a solution which is a little bit strange. You may try it, but I'm sceptical.

I did try the superuser role, I think.

There was nothing in the Elasticsearch logs at all, which was strange.

That solution is definitely strange, but so is life at the moment :smiley:

Wait till I report back, and wish me luck. It should be later this week. Our dev server is out of action today, so I hope to get a chance tomorrow. In the meantime, we are looking at getting everything behind OpenVPN, which will reduce the need for password protection.

Hi,

here is a page explaining how to run Logstash in Docker: https://www.elastic.co/guide/en/logstash/current/docker-config.html.

However, the monitoring parameter is named differently there; it should be XPACK_MONITORING_ENABLED. You can disable monitoring with the environment variable XPACK_MONITORING_ENABLED=false.
Alternatively, open the default logstash.yml and look for the 'xpack.monitoring' section, where you can see all the configuration parameters.

Hope it helps.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.