Error: failed to publish events: write tcp XX.XX.XX.XX:50882->XX.XX.XX.XX:5044: write: broken pipe

I have this docker-compose.yml:

version: "2.4"                                                            
                                                                          
services:                                                                 
  elasticsearch:                                                          
    image: elasticsearch:7.17.14                                          
    container_name: elasticsearch                                         
    hostname: elasticsearch                                               
    restart: unless-stopped                                               
    ports:                                                                
      - "9200:9200"                                                       
      - "9300:9300"                                                       
    volumes:                                                              
      - type: volume                                                      
        source: elasticsearch_data                                        
        target: /usr/share/elasticsearch/data                             
    environment:                                                          
      - "node.name=elasticsearch"                                         
      - "bootstrap.memory_lock=true"                                      
      - "ES_JAVA_OPTS=-Xms1g -Xmx1g"                                      
      - "discovery.type=single-node"                   
    networks:                                                             
      - elk_stack                                                         
                                                                          
                                                                          
  logstash:                                                               
    image: logstash:7.17.14                                               
    ports:                                                                
      - "5044:5044"                                                       
    container_name: logstash                                              
    restart: unless-stopped                                               
    command: logstash -f /etc/logstash/conf.d/logstash.conf               
    volumes:                                                              
      - ./logstash.conf:/usr/share/logstash/pipeline/logstash.conf        
      - ./logstash.conf:/etc/logstash/conf.d/logstash.conf:ro             
    depends_on:                                                           
      - elasticsearch                                                     
    networks:                                                             
      - elk_stack                                                         
                                                                          
                                                                          


  filebeat:                                                                                    
    build: filebeat                                                                            
    container_name: filebeat                                                                                                                                             
    user: root                                                                                 
    volumes:                                                                                   
      - ./filebeat.yml:/usr/share/filebeat/filebeat.yml                                        
      - /var/run/docker.sock:/var/run/docker.sock                                              
      - /var/lib/docker/containers:/var/lib/docker/containers:ro                               
    command: ["filebeat", "-e", "-strict.perms=false"]                                         
    restart: unless-stopped                                                                    
    hostname: mydockerhost                                                                     
    environment:                                                                               
      - ELASTICSEARCH_HOSTS=elasticsearch:9200                                                 
    labels:                                                                                    
      co.elastic.logs/enabled: "true"                                                          
    networks:                                                                                  
      - elk_stack                                                                              
                                                                                               
                                                                                               
  kibana:                                                                                      
    image: kibana:7.17.14                                                                      
    container_name: kibana                                                                     
    restart: unless-stopped                                                                    
    environment:                                                                               
      - 'ELASTICSEARCH_HOSTS=["http://elasticsearch:9200"]'                                    
      - "SERVER_NAME=localhost"                                                                
      - "XPACK_MONITORING_ENABLED=false"                                                       
                                                                                                                                
    ports:                                                                                     
      - "5601:5601"                                                                            
    networks:                                                                          
      - elk_stack                                                                          
                                                                                               
                                                                                               
networks:                                                                                      
  elk_stack:                                                                                   
    driver: bridge                                                                             
                                                                                               
volumes:                                                                                       
  elasticsearch_data:                                                                          

My filebeat.yml:


filebeat.inputs:
- type: log
  paths:
    - /var/lib/docker/containers/*/*.log
  json.keys_under_root: true
  json.add_error_key: true
  json.overwrite_keys: true
  multiline.pattern: '^[[:space:]]'
  multiline.negate: false
  multiline.match: after
  json.message_key: log.message
output.logstash:
  hosts: ["XX.XX.XX.XX:5044"]  # my host's IP
  pipeline.batch.size: 500

My logstash.conf:

input {
  file {
    path => "/var/lib/docker/containers/*/*.log"
    sincedb_path => "/dev/null"
    exclude => "*.gz"
    start_position => "beginning"
    codec => json
    type => "docker"
  }
}

filter {
  if [type] == "docker" {
  }
}

output {
  if [type] == "docker" {
    elasticsearch {
      hosts => "XX.XX.XX.XX:9200"
      index => "docker-%{+YYYY.MM.dd}"
      document_id => "%{[@metadata][docker][container][id]}"
    }
  }
}

My Filebeat can't reach Logstash, which causes this error in its logs:

INFO    [publisher_pipeline_output]     pipeline/output.go:151  Connection to backoff(async(tcp://XX.XX.XX.XX:5044)) established
ERROR   [logstash]      logstash/async.go:280   Failed to publish events caused by: write tcp XX.XX.XX.XX:50890->XX.XX.XX.XX:5044: write: broken pipe
INFO    [publisher]     pipeline/retry.go:219   retryer: send unwait signal to consumer
INFO    [publisher]     pipeline/retry.go:223     done

I don't understand why this error occurs or how to fix it.
I exec'd into the Filebeat container to check the connection and ran curl against logstash:5044, but got "Connection refused":

[root@mydockerhost filebeat]# curl logstash:5044
curl: (7) Failed to connect to logstash:5044; Connection refused

Hi,

Assuming your configuration is right, we are missing the Logstash error logs.

You didn't provide any proof that Logstash is even running in the container.

Check whether Logstash is listening on the correct port; if it isn't, take a look at its logs.
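For example, something like this from the Docker host (assuming curl is available inside the Logstash image):

docker logs logstash 2>&1 | grep -i -e beats -e 5044
docker exec logstash curl -s http://localhost:9600/_node/pipelines?pretty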

Hi. All 4 containers are running. I apologize for not providing the Logstash logs; here they are:

docker logs -f logstash                                                                                                                                                                                      
Using bundled JDK: /usr/share/logstash/jdk
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
[2023-11-10T08:47:39,197][INFO ][logstash.runner          ] Log4j configuration path used is: /usr/share/logstash/config/log4j2.properties
[2023-11-10T08:47:39,213][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"7.17.14", "jruby.version"=>"jruby 9.2.20.1 (2.5.8) 2021-11-30 2a2962fbd1 OpenJDK 64-Bit Server VM 11.0.20+8 on 11.0.20+8 +indy +jit [linux-x86_64]"}
[2023-11-10T08:47:39,226][INFO ][logstash.runner          ] JVM bootstrap flags: [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djdk.io.File.enableADS=true, -Djruby.compile.invokedynamic=true, -Djruby.jit.threshold=0, -Djruby.regexp.interruptible=true, -XX:+HeapDumpOnOutOfMemoryError, -Djava.security.egd=file:/dev/urandom, -Dlog4j2.isThreadContextMapInheritable=true, -Dls.cgroup.cpuacct.path.override=/, -Dls.cgroup.cpu.path.override=/]
[2023-11-10T08:47:39,289][INFO ][logstash.settings        ] Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
[2023-11-10T08:47:39,314][INFO ][logstash.settings        ] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
[2023-11-10T08:47:40,530][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2023-11-10T08:47:40,615][INFO ][logstash.agent           ] No persistent UUID file found. Generating new UUID {:uuid=>"7abca504-8d03-4932-ba44-ddd434d12c7f", :path=>"/usr/share/logstash/data/uuid"}
[2023-11-10T08:47:44,206][WARN ][logstash.monitoringextension.pipelineregisterhook] xpack.monitoring.enabled has not been defined, but found elasticsearch configuration. Please explicitly set `xpack.monitoring.enabled: true` in logstash.yml
[2023-11-10T08:47:44,210][WARN ][deprecation.logstash.monitoringextension.pipelineregisterhook] Internal collectors option for Logstash monitoring is deprecated and may be removed in a future release.
Please configure Metricbeat to monitor Logstash. Documentation can be found at:
https://www.elastic.co/guide/en/logstash/current/monitoring-with-metricbeat.html
[2023-11-10T08:47:45,265][WARN ][deprecation.logstash.codecs.plain] Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[2023-11-10T08:47:45,413][WARN ][deprecation.logstash.outputs.elasticsearch] Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[2023-11-10T08:47:46,496][INFO ][logstash.licensechecker.licensereader] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
[2023-11-10T08:47:47,341][WARN ][logstash.licensechecker.licensereader] Restored connection to ES instance {:url=>"http://elasticsearch:9200/"}
[2023-11-10T08:47:47,369][INFO ][logstash.licensechecker.licensereader] Elasticsearch version determined (7.17.14) {:es_version=>7}
[2023-11-10T08:47:47,383][WARN ][logstash.licensechecker.licensereader] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2023-11-10T08:47:47,836][INFO ][logstash.monitoring.internalpipelinesource] Monitoring License OK
[2023-11-10T08:47:47,839][INFO ][logstash.monitoring.internalpipelinesource] Validated license for monitoring. Enabling monitoring pipeline.
[2023-11-10T08:47:48,562][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
[2023-11-10T08:47:53,356][INFO ][org.reflections.Reflections] Reflections took 268 ms to scan 1 urls, producing 119 keys and 419 values
[2023-11-10T08:47:55,592][WARN ][deprecation.logstash.codecs.plain] Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[2023-11-10T08:47:55,689][WARN ][deprecation.logstash.codecs.plain] Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[2023-11-10T08:47:55,681][WARN ][deprecation.logstash.codecs.json] Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[2023-11-10T08:47:55,764][WARN ][deprecation.logstash.outputs.elasticsearchmonitoring] Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[2023-11-10T08:47:55,790][WARN ][deprecation.logstash.inputs.file] Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[2023-11-10T08:47:55,893][WARN ][deprecation.logstash.codecs.plain] Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[2023-11-10T08:47:55,949][WARN ][deprecation.logstash.outputs.elasticsearch] Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[2023-11-10T08:47:56,016][INFO ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearchMonitoring", :hosts=>["http://elasticsearch:9200"]}
[2023-11-10T08:47:56,036][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//XX.XX.XX.XX:9200"]}
[2023-11-10T08:47:56,092][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://XX.XX.XX.XX:9200/]}}
[2023-11-10T08:47:56,099][INFO ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
[2023-11-10T08:47:56,165][WARN ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] Restored connection to ES instance {:url=>"http://elasticsearch:9200/"}
[2023-11-10T08:47:56,178][INFO ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] Elasticsearch version determined (7.17.14) {:es_version=>7}
[2023-11-10T08:47:56,179][WARN ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2023-11-10T08:47:56,181][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"http://XX.XX.XX.XX:9200/"}
[2023-11-10T08:47:56,205][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch version determined (7.17.14) {:es_version=>7}
[2023-11-10T08:47:56,205][WARN ][logstash.outputs.elasticsearch][main] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2023-11-10T08:47:56,397][INFO ][logstash.outputs.elasticsearch][main] Config is not compliant with data streams. `data_stream => auto` resolved to `false`
[2023-11-10T08:47:56,401][INFO ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] Config is not compliant with data streams. `data_stream => auto` resolved to `false`
[2023-11-10T08:47:56,413][WARN ][logstash.javapipeline    ][.monitoring-logstash] 'pipeline.ordered' is enabled and is likely less efficient, consider disabling if preserving event order is not necessary
[2023-11-10T08:47:56,501][INFO ][logstash.outputs.elasticsearch][main] Using a default mapping template {:es_version=>7, :ecs_compatibility=>:disabled}
[2023-11-10T08:47:56,608][INFO ][logstash.outputs.elasticsearch][main] Installing Elasticsearch template {:name=>"logstash"}
[2023-11-10T08:47:56,619][INFO ][logstash.javapipeline    ][.monitoring-logstash] Starting pipeline {:pipeline_id=>".monitoring-logstash", "pipeline.workers"=>1, "pipeline.batch.size"=>2, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>2, "pipeline.sources"=>["monitoring pipeline"], :thread=>"#<Thread:0x475910d7 run>"}
[2023-11-10T08:47:56,625][INFO ][logstash.javapipeline    ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, "pipeline.sources"=>["/etc/logstash/conf.d/logstash.conf"], :thread=>"#<Thread:0x5b72aaaa@/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:54 run>"}
[2023-11-10T08:47:58,455][INFO ][logstash.javapipeline    ][.monitoring-logstash] Pipeline Java execution initialization time {"seconds"=>1.83}
[2023-11-10T08:47:58,545][INFO ][logstash.javapipeline    ][main] Pipeline Java execution initialization time {"seconds"=>1.92}
[2023-11-10T08:47:58,560][INFO ][logstash.javapipeline    ][.monitoring-logstash] Pipeline started {"pipeline.id"=>".monitoring-logstash"}
[2023-11-10T08:47:58,659][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}
[2023-11-10T08:47:58,740][INFO ][filewatch.observingtail  ][main][0e5494bdd1e7279ae277602eb33f80c6d8fc0cc62044e644c27fc301803a50d9] START, creating Discoverer, Watch with file and sincedb collections

You do not have any beats input configured to listen on port 5044; that's the reason. The configuration you shared only has a file input.

You need a beats input listening on port 5044, as explained in the documentation.
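For example, a minimal beats input in your pipeline config looks like this:

input {
  beats {
    port => 5044
  }
}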

Thanks for the documentation. I tried to fix the configuration.
Modified logstash.conf:

input {
  file {
    path => "/var/lib/docker/containers/*/*.log"
    sincedb_path => "/dev/null"
    exclude => "*.gz"
    start_position => "beginning"
    codec => json
    type => "docker"
  }
  beats {
    port => 5044
  }
}

filter {
  if [type] == "docker" {
  }
}

output {
  if [type] == "docker" {
    elasticsearch {
      hosts => "XX.XX.XX.XX:9200"
      index => "%{[@metadata][beat]}-%{[@metadata][version]}".
      document_id => "%{[@metadata][docker][container][id]}"
    }
  }
}

I decided to check the port again with curl from inside the Filebeat container:

root@mydockerhost:/usr/share/filebeat# curl logstash:5044
curl: (56) Recv failure: Connection reset by peer

And now I saw a new error in the Logstash logs:

[2023-11-10T13:55:08,732][WARN ][io.netty.channel.DefaultChannelPipeline][main][a8be4f97d4322e44f18af66bb6b22e38e6052b99db12c56c4ff8b0da621713dc] An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
io.netty.handler.codec.DecoderException: org.logstash.beats.InvalidFrameProtocolException: Invalid version of beats protocol: 69
        at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:477) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.handler.codec.ByteToMessageDecoder.channelInputClosed(ByteToMessageDecoder.java:404) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.handler.codec.ByteToMessageDecoder.channelInputClosed(ByteToMessageDecoder.java:371) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.handler.codec.ByteToMessageDecoder.channelInactive(ByteToMessageDecoder.java:354) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:262) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.channel.AbstractChannelHandlerContext.access$300(AbstractChannelHandlerContext.java:61) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.channel.AbstractChannelHandlerContext$4.run(AbstractChannelHandlerContext.java:253) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.util.concurrent.DefaultEventExecutor.run(DefaultEventExecutor.java:66) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) [netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [netty-all-4.1.65.Final.jar:4.1.65.Final]
        at java.lang.Thread.run(Thread.java:829) [?:?]
Caused by: org.logstash.beats.InvalidFrameProtocolException: Invalid version of beats protocol: 69
        at org.logstash.beats.Protocol.version(Protocol.java:22) ~[logstash-input-beats-6.2.6.jar:?]
        at org.logstash.beats.BeatsParser.decode(BeatsParser.java:62) ~[logstash-input-beats-6.2.6.jar:?]
        at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:507) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:446) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
        ... 11 more

Separately, I added the plugin installation to the docker-compose:

  logstash:
    image: logstash:7.17.14
    ports:
      - "5044:5044"
    container_name: logstash
    restart: unless-stopped
    command: >
      sh -c '
        logstash-plugin install logstash-input-beats &&
        logstash -f /etc/logstash/conf.d/logstash.conf'
    volumes:
      - ./logstash.conf:/usr/share/logstash/pipeline/logstash.conf
      - ./logstash.conf:/etc/logstash/conf.d/logstash.conf:ro
    depends_on:
      - elasticsearch
    networks:
      - elk_stack

I can see in the logs that the plugin was installed:

Installing logstash-input-beats
Installation successful

But unfortunately it didn't help.
In the Filebeat logs:

2023-11-10T14:11:20.842Z        ERROR   [logstash]      logstash/async.go:280   Failed to publish events caused by: write tcp XX.XX.XX.XX:56788->XX.XX.XX.XX:5044: write: broken pipe
2023-11-10T14:11:21.899Z        ERROR   [publisher_pipeline_output]     pipeline/output.go:180  failed to publish events: write tcp XX.XX.XX.XX:56788->XX.XX.XX.XX:5044: write: broken pipe

This is not needed; the beats input plugin is already bundled with Logstash. Remove this from your docker-compose, as it does nothing.

Please restart your containers and share the entire Logstash startup logs, to show whether it opened port 5044 or not.
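For example:

docker-compose restart logstash
docker logs logstash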

I restarted the stack with docker-compose down -v and docker-compose up -d, and deleted the logstash-plugin install from the command.
All my logstash logs:

docker logs -f logstash
Using bundled JDK: /usr/share/logstash/jdk
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
[2023-11-10T16:02:32,291][INFO ][logstash.runner          ] Log4j configuration path used is: /usr/share/logstash/config/log4j2.properties
[2023-11-10T16:02:32,312][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"7.17.14", "jruby.version"=>"jruby 9.2.20.1 (2.5.8) 2021-11-30 2a2962fbd1 OpenJDK 64-Bit Server VM 11.0.20+8 on 11.0.20+8 +indy +jit [linux-x86_64]"}
[2023-11-10T16:02:32,319][INFO ][logstash.runner          ] JVM bootstrap flags: [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djdk.io.File.enableADS=true, -Djruby.compile.invokedynamic=true, -Djruby.jit.threshold=0, -Djruby.regexp.interruptible=true, -XX:+HeapDumpOnOutOfMemoryError, -Djava.security.egd=file:/dev/urandom, -Dlog4j2.isThreadContextMapInheritable=true, -Dls.cgroup.cpuacct.path.override=/, -Dls.cgroup.cpu.path.override=/]
[2023-11-10T16:02:32,393][INFO ][logstash.settings        ] Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
[2023-11-10T16:02:32,444][INFO ][logstash.settings        ] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
[2023-11-10T16:02:33,175][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2023-11-10T16:02:33,230][INFO ][logstash.agent           ] No persistent UUID file found. Generating new UUID {:uuid=>"0ff37d9c-0ab5-45c5-b828-31d59e5bf0a0", :path=>"/usr/share/logstash/data/uuid"}
[2023-11-10T16:02:36,382][WARN ][logstash.monitoringextension.pipelineregisterhook] xpack.monitoring.enabled has not been defined, but found elasticsearch configuration. Please explicitly set `xpack.monitoring.enabled: true` in logstash.yml
[2023-11-10T16:02:36,386][WARN ][deprecation.logstash.monitoringextension.pipelineregisterhook] Internal collectors option for Logstash monitoring is deprecated and may be removed in a future release.
Please configure Metricbeat to monitor Logstash. Documentation can be found at:
https://www.elastic.co/guide/en/logstash/current/monitoring-with-metricbeat.html
[2023-11-10T16:02:37,292][WARN ][deprecation.logstash.codecs.plain] Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[2023-11-10T16:02:37,484][WARN ][deprecation.logstash.outputs.elasticsearch] Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[2023-11-10T16:02:38,188][INFO ][logstash.licensechecker.licensereader] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
[2023-11-10T16:02:39,091][WARN ][logstash.licensechecker.licensereader] Restored connection to ES instance {:url=>"http://elasticsearch:9200/"}
[2023-11-10T16:02:39,133][INFO ][logstash.licensechecker.licensereader] Elasticsearch version determined (7.17.14) {:es_version=>7}
[2023-11-10T16:02:39,144][WARN ][logstash.licensechecker.licensereader] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2023-11-10T16:02:39,778][INFO ][logstash.monitoring.internalpipelinesource] Monitoring License OK
[2023-11-10T16:02:39,802][INFO ][logstash.monitoring.internalpipelinesource] Validated license for monitoring. Enabling monitoring pipeline.
[2023-11-10T16:02:40,632][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
[2023-11-10T16:02:45,366][INFO ][org.reflections.Reflections] Reflections took 244 ms to scan 1 urls, producing 119 keys and 419 values
[2023-11-10T16:02:47,944][WARN ][deprecation.logstash.codecs.plain] Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[2023-11-10T16:02:48,047][WARN ][deprecation.logstash.codecs.json] Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[2023-11-10T16:02:48,107][WARN ][deprecation.logstash.codecs.plain] Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[2023-11-10T16:02:48,141][WARN ][deprecation.logstash.inputs.file] Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[2023-11-10T16:02:48,193][WARN ][deprecation.logstash.outputs.elasticsearchmonitoring] Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[2023-11-10T16:02:48,332][WARN ][deprecation.logstash.codecs.plain] Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[2023-11-10T16:02:48,363][WARN ][deprecation.logstash.inputs.beats] Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[2023-11-10T16:02:48,390][ERROR][logstash.javapipeline    ][.monitoring-logstash] Pipeline error {:pipeline_id=>".monitoring-logstash", :exception=>#<RuntimeError: LogStash::Outputs::ElasticSearchMonitoring#register must be overidden>, :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/outputs/base.rb:91:in `register'", "org/logstash/config/ir/compiler/OutputStrategyExt.java:131:in `register'", "org/logstash/config/ir/compiler/AbstractOutputDelegatorExt.java:68:in `register'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:233:in `block in register_plugins'", "org/jruby/RubyArray.java:1821:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:232:in `register_plugins'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:598:in `maybe_setup_out_plugins'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:245:in `start_workers'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:190:in `run'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:142:in `block in start'"], "pipeline.sources"=>["monitoring pipeline"], :thread=>"#<Thread:0x25a8f94 run>"}
[2023-11-10T16:02:48,391][WARN ][deprecation.logstash.codecs.plain] Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[2023-11-10T16:02:48,399][INFO ][logstash.javapipeline    ][.monitoring-logstash] Pipeline terminated {"pipeline.id"=>".monitoring-logstash"}
[2023-11-10T16:02:48,432][ERROR][logstash.agent           ] Failed to execute action {:id=>:".monitoring-logstash", :action_type=>LogStash::ConvergeResult::FailedAction, :message=>"Could not execute action: PipelineAction::Create<.monitoring-logstash>, action_result: false", :backtrace=>nil}
[2023-11-10T16:02:48,443][WARN ][deprecation.logstash.outputs.elasticsearch] Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[2023-11-10T16:02:48,570][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//XX.XX.XX.XX:9200"]}
[2023-11-10T16:02:48,626][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://XX.XX.XX.XX:9200/]}}
[2023-11-10T16:02:48,679][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"http://XX.XX.XX.XX:9200/"}
[2023-11-10T16:02:48,712][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch version determined (7.17.14) {:es_version=>7}
[2023-11-10T16:02:48,714][WARN ][logstash.outputs.elasticsearch][main] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2023-11-10T16:02:48,896][INFO ][logstash.outputs.elasticsearch][main] Config is not compliant with data streams. `data_stream => auto` resolved to `false`
[2023-11-10T16:02:48,983][INFO ][logstash.outputs.elasticsearch][main] Using a default mapping template {:es_version=>7, :ecs_compatibility=>:disabled}
[2023-11-10T16:02:49,054][INFO ][logstash.outputs.elasticsearch][main] Installing Elasticsearch template {:name=>"logstash"}
[2023-11-10T16:02:49,101][INFO ][logstash.javapipeline    ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, "pipeline.sources"=>["/etc/logstash/conf.d/logstash.conf"], :thread=>"#<Thread:0x6045855 run>"}
[2023-11-10T16:02:51,148][INFO ][logstash.javapipeline    ][main] Pipeline Java execution initialization time {"seconds"=>2.04}
[2023-11-10T16:02:51,272][INFO ][logstash.inputs.beats    ][main] Starting input listener {:address=>"0.0.0.0:5044"}
[2023-11-10T16:02:51,303][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}
[2023-11-10T16:02:51,505][INFO ][filewatch.observingtail  ][main][0d62c7fa1a4034138d17976751163a7e8da4ededebb898896b66a7a7dc42a425] START, creating Discoverer, Watch with file and sincedb collections
[2023-11-10T16:02:51,661][INFO ][org.logstash.beats.Server][main][a8be4f97d4322e44f18af66bb6b22e38e6052b99db12c56c4ff8b0da621713dc] Starting server on port: 5044
[2023-11-10T16:02:52,506][WARN ][deprecation.logstash.codecs.plain][main][a8be4f97d4322e44f18af66bb6b22e38e6052b99db12c56c4ff8b0da621713dc] Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[2023-11-10T16:07:09,509][WARN ][deprecation.logstash.codecs.plain][main][a8be4f97d4322e44f18af66bb6b22e38e6052b99db12c56c4ff8b0da621713dc] Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[2023-11-10T16:07:09,565][INFO ][org.logstash.beats.BeatsHandler][main][a8be4f97d4322e44f18af66bb6b22e38e6052b99db12c56c4ff8b0da621713dc] [local: 192.168.48.5:5044, remote: 192.168.48.2:54196] Handling exception: io.netty.handler.codec.DecoderException: org.logstash.beats.InvalidFrameProtocolException: Invalid version of beats protocol: 71 (caused by: org.logstash.beats.InvalidFrameProtocolException: Invalid version of beats protocol: 71)
(these exceptions appeared after I ran docker exec -it filebeat bash and then curl logstash:5044 from inside the container)
[2023-11-10T16:07:09,566][WARN ][io.netty.channel.DefaultChannelPipeline][main][a8be4f97d4322e44f18af66bb6b22e38e6052b99db12c56c4ff8b0da621713dc] An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
io.netty.handler.codec.DecoderException: org.logstash.beats.InvalidFrameProtocolException: Invalid version of beats protocol: 71
        at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:477) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:276) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.channel.AbstractChannelHandlerContext.access$600(AbstractChannelHandlerContext.java:61) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.channel.AbstractChannelHandlerContext$7.run(AbstractChannelHandlerContext.java:370) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.util.concurrent.DefaultEventExecutor.run(DefaultEventExecutor.java:66) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) [netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [netty-all-4.1.65.Final.jar:4.1.65.Final]
        at java.lang.Thread.run(Thread.java:829) [?:?]
Caused by: org.logstash.beats.InvalidFrameProtocolException: Invalid version of beats protocol: 71
        at org.logstash.beats.Protocol.version(Protocol.java:22) ~[logstash-input-beats-6.2.6.jar:?]
        at org.logstash.beats.BeatsParser.decode(BeatsParser.java:62) ~[logstash-input-beats-6.2.6.jar:?]
        at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:507) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:446) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
        ... 9 more
[2023-11-10T16:07:09,601][INFO ][org.logstash.beats.BeatsHandler][main][a8be4f97d4322e44f18af66bb6b22e38e6052b99db12c56c4ff8b0da621713dc] [local: 192.168.48.5:5044, remote: 192.168.48.2:54196] Handling exception: io.netty.handler.codec.DecoderException: org.logstash.beats.InvalidFrameProtocolException: Invalid version of beats protocol: 69 (caused by: org.logstash.beats.InvalidFrameProtocolException: Invalid version of beats protocol: 69)
[2023-11-10T16:07:09,601][WARN ][io.netty.channel.DefaultChannelPipeline][main][a8be4f97d4322e44f18af66bb6b22e38e6052b99db12c56c4ff8b0da621713dc] An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
io.netty.handler.codec.DecoderException: org.logstash.beats.InvalidFrameProtocolException: Invalid version of beats protocol: 69
        at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:477) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.handler.codec.ByteToMessageDecoder.channelInputClosed(ByteToMessageDecoder.java:404) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.handler.codec.ByteToMessageDecoder.channelInputClosed(ByteToMessageDecoder.java:371) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.handler.codec.ByteToMessageDecoder.channelInactive(ByteToMessageDecoder.java:354) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelInactive(AbstractChannelHandlerContext.java:262) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.channel.AbstractChannelHandlerContext.access$300(AbstractChannelHandlerContext.java:61) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.channel.AbstractChannelHandlerContext$4.run(AbstractChannelHandlerContext.java:253) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.util.concurrent.DefaultEventExecutor.run(DefaultEventExecutor.java:66) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) [netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [netty-all-4.1.65.Final.jar:4.1.65.Final]
        at java.lang.Thread.run(Thread.java:829) [?:?]
Caused by: org.logstash.beats.InvalidFrameProtocolException: Invalid version of beats protocol: 69
        at org.logstash.beats.Protocol.version(Protocol.java:22) ~[logstash-input-beats-6.2.6.jar:?]
        at org.logstash.beats.BeatsParser.decode(BeatsParser.java:62) ~[logstash-input-beats-6.2.6.jar:?]
        at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:507) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
        at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:446) ~[netty-all-4.1.65.Final.jar:4.1.65.Final]
        ... 11 more

So, your Logstash is correctly listening on port 5044, as confirmed by these lines:

[2023-11-10T16:02:51,272][INFO ][logstash.inputs.beats    ][main] Starting input listener {:address=>"0.0.0.0:5044"}
[2023-11-10T16:02:51,661][INFO ][org.logstash.beats.Server][main][a8be4f97d4322e44f18af66bb6b22e38e6052b99db12c56c4ff8b0da621713dc] Starting server on port: 5044

So the beats input is correct.

This error is a different issue: it means that something is sending data to Logstash on port 5044 without speaking the beats protocol. Your curl test does exactly that: "Invalid version of beats protocol: 71 ... 69" are the ASCII codes for 'G' and 'E', the first bytes of curl's HTTP GET request.

What did you configure here in the Filebeat output? This needs to be just the IP address and the port. Did you for some reason configure it as http://ip:5044?
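It should be just host and port in filebeat.yml, with no scheme, e.g.:

output.logstash:
  hosts: ["XX.XX.XX.XX:5044"]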

Another issue you have is this:

Your output only matches events whose type equals docker; you need the same approach for the beats events.

Add a type => beats to your beats input and another conditional in your output to send those events to a different index, as sketched below.
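A minimal sketch of what I mean (the index name is just an example):

input {
  beats {
    port => 5044
    type => "beats"
  }
}

output {
  if [type] == "beats" {
    elasticsearch {
      hosts => "XX.XX.XX.XX:9200"
      index => "beats-%{+YYYY.MM.dd}"
    }
  }
}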

The reason I added pipeline.batch.size: 500 is that I don't understand how Logstash works; I ran into a bug with query sizes earlier, so I added it.

Otherwise, it's just my virtual machine's IP with port 5044.

Can I digress a bit from the code and ask whether my logic is correct? I'm new to this and don't really understand how ELK works.

First, Filebeat selects where to collect log data from - in my case paths: - /var/lib/docker/containers/*/*.log - so the logs of all docker containers are collected. Then everything Filebeat has collected is picked up by Logstash and filtered there.

In Logstash, I need to configure in "input" everything that I send from Filebeat - the beats input and the log path - and do the filtering there as well. The filtered output is then sent to Elasticsearch. Do I understand correctly? (See the flow sketch below.)
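My mental model of the flow (my own summary, using the service names from my docker-compose):

docker containers -> /var/lib/docker/containers/*/*.log
  -> filebeat (reads the JSON log files)
  -> logstash:5044 (beats input, filters)
  -> elasticsearch:9200 (indexing)
  -> kibana:5601 (visualization)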

I have set up the beats input and output in Logstash:

input {
  file {
    path => "/var/lib/docker/docker/containers/*/*.log"
    sincedb_path => "/dev/null"
    exclude => "*.gz"
    start_position => "beginning"
    codec => json
    type => "docker"
  }
  beats {
    port => 5044
    type => "beats"
  }
}


filter {
  if [type] == "docker" {
  }
  if [type] == "beats" {
  }
}


output {
  if [type] == "docker" { }
    elasticsearch {
      hosts => "XX.XX.XX.XX:9200"
      index => "%{[@metadata][beat]}-%{[@metadata][version]}"
      document_id => "%{[@metadata][docker][container][id]}"
    }
  }
  if [type] == "beats" {
    elasticsearch {
      hosts => "XX.XX.XX.XX:9200"
      index => "beats-index-%{+yyyyy.MM.dd}"
    }
  }
}

Now I have an error in the Logstash logs, "Could not index event to Elasticsearch." I don't quite understand what exactly I have to set for the event to be indexed - should there be some type conversion?

docker logs -f logstash

Using bundled JDK: /usr/share/logstash/jdk
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
[2023-11-11T04:54:42,076][INFO ][logstash.runner          ] Log4j configuration path used is: /usr/share/logstash/config/log4j2.properties
[2023-11-11T04:54:42,092][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"7.17.14", "jruby.version"=>"jruby 9.2.20.1 (2.5.8) 2021-11-30 2a2962fbd1 OpenJDK 64-Bit Server VM 11.0.20+8 on 11.0.20+8 +indy +jit [linux-x86_64]"}
[2023-11-11T04:54:42,096][INFO ][logstash.runner          ] JVM bootstrap flags: [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djdk.io.File.enableADS=true, -Djruby.compile.invokedynamic=true, -Djruby.jit.threshold=0, -Djruby.regexp.interruptible=true, -XX:+HeapDumpOnOutOfMemoryError, -Djava.security.egd=file:/dev/urandom, -Dlog4j2.isThreadContextMapInheritable=true, -Dls.cgroup.cpuacct.path.override=/, -Dls.cgroup.cpu.path.override=/]
[2023-11-11T04:54:42,140][INFO ][logstash.settings        ] Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
[2023-11-11T04:54:42,160][INFO ][logstash.settings        ] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
[2023-11-11T04:54:42,634][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2023-11-11T04:54:42,677][INFO ][logstash.agent           ] No persistent UUID file found. Generating new UUID {:uuid=>"3440109a-bdf6-4a5c-8a9d-f4ba1575fd56", :path=>"/usr/share/logstash/data/uuid"}
[2023-11-11T04:54:45,295][WARN ][logstash.monitoringextension.pipelineregisterhook] xpack.monitoring.enabled has not been defined, but found elasticsearch configuration. Please explicitly set `xpack.monitoring.enabled: true` in logstash.yml
[2023-11-11T04:54:45,305][WARN ][deprecation.logstash.monitoringextension.pipelineregisterhook] Internal collectors option for Logstash monitoring is deprecated and may be removed in a future release.
Please configure Metricbeat to monitor Logstash. Documentation can be found at:
https://www.elastic.co/guide/en/logstash/current/monitoring-with-metricbeat.html
[2023-11-11T04:54:46,186][WARN ][deprecation.logstash.codecs.plain] Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[2023-11-11T04:54:46,367][WARN ][deprecation.logstash.outputs.elasticsearch] Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[2023-11-11T04:54:47,252][INFO ][logstash.licensechecker.licensereader] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
[2023-11-11T04:54:47,681][WARN ][logstash.licensechecker.licensereader] Restored connection to ES instance {:url=>"http://elasticsearch:9200/"}
[2023-11-11T04:54:47,726][INFO ][logstash.licensechecker.licensereader] Elasticsearch version determined (7.17.14) {:es_version=>7}
[2023-11-11T04:54:47,733][WARN ][logstash.licensechecker.licensereader] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2023-11-11T04:54:47,930][INFO ][logstash.monitoring.internalpipelinesource] Monitoring License OK
[2023-11-11T04:54:47,931][INFO ][logstash.monitoring.internalpipelinesource] Validated license for monitoring. Enabling monitoring pipeline.
[2023-11-11T04:54:48,385][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
[2023-11-11T04:54:53,166][INFO ][org.reflections.Reflections] Reflections took 179 ms to scan 1 urls, producing 119 keys and 419 values
[2023-11-11T04:54:55,888][WARN ][deprecation.logstash.codecs.plain] Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[2023-11-11T04:54:55,930][WARN ][deprecation.logstash.codecs.json] Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[2023-11-11T04:54:56,076][WARN ][deprecation.logstash.codecs.plain] Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[2023-11-11T04:54:56,086][WARN ][deprecation.logstash.inputs.file] Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[2023-11-11T04:54:56,201][WARN ][deprecation.logstash.outputs.elasticsearchmonitoring] Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[2023-11-11T04:54:56,424][WARN ][deprecation.logstash.codecs.plain] Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[2023-11-11T04:54:56,452][WARN ][deprecation.logstash.inputs.beats] Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[2023-11-11T04:54:56,486][INFO ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearchMonitoring", :hosts=>["http://elasticsearch:9200"]}
[2023-11-11T04:54:56,503][WARN ][deprecation.logstash.codecs.plain] Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[2023-11-11T04:54:56,540][WARN ][deprecation.logstash.outputs.elasticsearch] Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[2023-11-11T04:54:56,565][WARN ][deprecation.logstash.codecs.plain] Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[2023-11-11T04:54:56,587][INFO ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
[2023-11-11T04:54:56,599][WARN ][deprecation.logstash.outputs.elasticsearch] Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[2023-11-11T04:54:56,630][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//XX.XX.XX.XX:9200"]}
[2023-11-11T04:54:56,675][WARN ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] Restored connection to ES instance {:url=>"http://elasticsearch:9200/"}
[2023-11-11T04:54:56,687][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://XX.XX.XX.XX:9200/]}}
[2023-11-11T04:54:56,707][INFO ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] Elasticsearch version determined (7.17.14) {:es_version=>7}
[2023-11-11T04:54:56,708][WARN ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2023-11-11T04:54:56,714][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"http://XX.XX.XX.XX:9200/"}
[2023-11-11T04:54:56,733][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch version determined (7.17.14) {:es_version=>7}
[2023-11-11T04:54:56,733][WARN ][logstash.outputs.elasticsearch][main] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2023-11-11T04:54:56,893][INFO ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] Config is not compliant with data streams. `data_stream => auto` resolved to `false`
[2023-11-11T04:54:56,899][INFO ][logstash.outputs.elasticsearch][main] Config is not compliant with data streams. `data_stream => auto` resolved to `false`
[2023-11-11T04:54:56,904][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//XX.XX.XX.XX:9200"]}
[2023-11-11T04:54:56,934][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://XX.XX.XX.XX:9200/]}}
[2023-11-11T04:54:56,972][WARN ][logstash.javapipeline    ][.monitoring-logstash] 'pipeline.ordered' is enabled and is likely less efficient, consider disabling if preserving event order is not necessary
[2023-11-11T04:54:57,011][INFO ][logstash.outputs.elasticsearch][main] Using a default mapping template {:es_version=>7, :ecs_compatibility=>:disabled}
[2023-11-11T04:54:57,032][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"http://XX.XX.XX.XX:9200/"}
[2023-11-11T04:54:57,061][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch version determined (7.17.14) {:es_version=>7}
[2023-11-11T04:54:57,062][WARN ][logstash.outputs.elasticsearch][main] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2023-11-11T04:54:57,127][INFO ][logstash.outputs.elasticsearch][main] Config is not compliant with data streams. `data_stream => auto` resolved to `false`
[2023-11-11T04:54:57,160][INFO ][logstash.outputs.elasticsearch][main] Using a default mapping template {:es_version=>7, :ecs_compatibility=>:disabled}
[2023-11-11T04:54:57,187][INFO ][logstash.javapipeline    ][.monitoring-logstash] Starting pipeline {:pipeline_id=>".monitoring-logstash", "pipeline.workers"=>1, "pipeline.batch.size"=>2, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>2, "pipeline.sources"=>["monitoring pipeline"], :thread=>"#<Thread:0x473ee52@/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:54 run>"}
[2023-11-11T04:54:57,203][INFO ][logstash.javapipeline    ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, "pipeline.sources"=>["/etc/logstash/conf.d/logstash.conf"], :thread=>"#<Thread:0x9b4ada1 run>"}
[2023-11-11T04:54:59,149][INFO ][logstash.javapipeline    ][.monitoring-logstash] Pipeline Java execution initialization time {"seconds"=>1.95}
[2023-11-11T04:54:59,312][INFO ][logstash.javapipeline    ][.monitoring-logstash] Pipeline started {"pipeline.id"=>".monitoring-logstash"}
[2023-11-11T04:54:59,496][INFO ][logstash.javapipeline    ][main] Pipeline Java execution initialization time {"seconds"=>2.29}
[2023-11-11T04:54:59,572][INFO ][logstash.inputs.beats    ][main] Starting input listener {:address=>"0.0.0.0:5044"}
[2023-11-11T04:54:59,589][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}
[2023-11-11T04:54:59,737][INFO ][logstash.agent           ] Pipelines running {:count=>2, :running_pipelines=>[:".monitoring-logstash", :main], :non_running_pipelines=>[]}
[2023-11-11T04:54:59,824][INFO ][filewatch.observingtail  ][main][6016543dc1571e18c8a0880019ea9a84b2044945a34b48ee32e67fca45b6b6ea] START, creating Discoverer, Watch with file and sincedb collections
[2023-11-11T04:54:59,853][INFO ][org.logstash.beats.Server][main][8782f6d63fbd8e3d2598e6290b6ef0d12016ed085a5515b0cff1a8835e80636a] Starting server on port: 5044
[2023-11-11T04:55:00,493][WARN ][deprecation.logstash.codecs.plain][main][8782f6d63fbd8e3d2598e6290b6ef0d12016ed085a5515b0cff1a8835e80636a] Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[2023-11-11T04:55:02,525][WARN ][logstash.outputs.elasticsearch][main][f9cb81730ee5855b0191837c55dec4a3d878a9f2f7bdadf6e40e84414b9649bf] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"beats-index-2023.11.11", :routing=>nil}, {"ecs"=>{"version"=>"1.12.0"}, "stream"=>"stdout", "type"=>"beats", "input"=>{"type"=>"log"}, "time"=>"2023-11-10T10:54:13.5554537Z", "@version"=>"1", "host"=>{"name"=>"mydockerhost"}, "@timestamp"=>2023-11-11T04:53:46.505Z, "agent"=>{"type"=>"filebeat", "name"=>"mydockerhost", "version"=>"7.17.14", "ephemeral_id"=>"f833a79a-6b4c-4e90-a2c0-54b8e3c038f3", "id"=>"6b43701e-d4f1-42ff-8804-f8539f175fba", "hostname"=>"mydockerhost"}, "fields"=>{"type"=>"docker"}, "log"=>"\e[40m\e[32minfo\e[39m\e[22m\e[49m: Worker.Services.Data.RabbitMq.ExecutionTaskConsumer[0]\n", "tags"=>["beats_input_raw_event"], "log.message"=>"", "error"=>{"type"=>"json", "message"=>"Key 'log.message' not found"}}], :response=>{"index"=>{"_index"=>"beats-index-2023.11.11", "_type"=>"_doc", "_id"=>"9gG7vIsBIJGDQM8jupG-", "status"=>400, "error"=>{"type"=>"illegal_argument_exception", "reason"=>"can't merge a non object mapping [log] with an object mapping"}}}}
[2023-11-11T04:55:02,528][WARN ][logstash.outputs.elasticsearch][main][f9cb81730ee5855b0191837c55dec4a3d878a9f2f7bdadf6e40e84414b9649bf] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"beats-index-2023.11.11", :routing=>nil}, {"log.message"=>"", "stream"=>"stdout", "type"=>"beats", "input"=>{"type"=>"log"}, "time"=>"2023-11-04T13:59:16.968618336Z", "@version"=>"1", "host"=>{"name"=>"mydockerhost"}, "@timestamp"=>2023-11-11T04:53:46.510Z, "agent"=>{"type"=>"filebeat", "name"=>"mydockerhost", "version"=>"7.17.14", "ephemeral_id"=>"f833a79a-6b4c-4e90-a2c0-54b8e3c038f3", "id"=>"6b43701e-d4f1-42ff-8804-f8539f175fba", "hostname"=>"mydockerhost"}, "fields"=>{"type"=>"docker"}, "log"=>"\e[40m\e[32minfo\e[39m\e[22m\e[49m: Worker.Services.Services.WorkerService[0]\n", "tags"=>["beats_input_raw_event"], "ecs"=>{"version"=>"1.12.0"}, "error"=>{"type"=>"json", "message"=>"Key 'log.message' not found"}}], :response=>{"index"=>{"_index"=>"beats-index-2023.11.11", "_type"=>"_doc", "_id"=>"9wG7vIsBIJGDQM8jupG-", "status"=>400, "error"=>{"type"=>"illegal_argument_exception", "reason"=>"mapper [log] cannot be changed from type [text] to [ObjectMapper]"}}}}
[2023-11-11T04:55:02,531][WARN ][logstash.outputs.elasticsearch][main][f9cb81730ee5855b0191837c55dec4a3d878a9f2f7bdadf6e40e84414b9649bf] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"beats-index-2023.11.11", :routing=>nil}, {"ecs"=>{"version"=>"1.12.0"}, "stream"=>"stdout", "type"=>"beats", "input"=>{"type"=>"log"}, "time"=>"2023-11-04T18:47:20.706457242Z", "@version"=>"1", "host"=>{"name"=>"mydockerhost"}, "@timestamp"=>2023-11-11T04:53:46.505Z, "agent"=>{"type"=>"filebeat", "name"=>"mydockerhost", "version"=>"7.17.14", "ephemeral_id"=>"f833a79a-6b4c-4e90-a2c0-54b8e3c038f3", "id"=>"6b43701e-d4f1-42ff-8804-f8539f175fba", "hostname"=>"mydockerhost"}, "fields"=>{"type"=>"docker"}, "log"=>"[18:47:20 INF] EventCode:production-onHeatEvent : Сохранение данных вызова экспорт-события\n", "tags"=>["beats_input_raw_event"], "log.message"=>"", "error"=>{"message"=>"Key 'log.message' not found", "type"=>"json"}}], :response=>{"index"=>{"_index"=>"beats-index-2023.11.11", "_type"=>"_doc", "_id"=>"-AG7vIsBIJGDQM8jupG_", "status"=>400, "error"=>{"type"=>"illegal_argument_exception", "reason"=>"can't merge a non object mapping [log] with an object mapping"}}}}
[2023-11-11T04:55:02,557][WARN ][logstash.outputs.elasticsearch][main][f9cb81730ee5855b0191837c55dec4a3d878a9f2f7bdadf6e40e84414b9649bf] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"beats-index-2023.11.11", :routing=>nil}, {"ecs"=>{"version"=>"1.12.0"}, "stream"=>"stdout", "type"=>"beats", "input"=>{"type"=>"log"}, "time"=>"2023-11-10T10:54:13.85497052Z", "@version"=>"1", "host"=>{"name"=>"mydockerhost"}, "@timestamp"=>2023-11-11T04:53:46.511Z, "agent"=>{"type"=>"filebeat", "name"=>"mydockerhost", "version"=>"7.17.14", "ephemeral_id"=>"f833a79a-6b4c-4e90-a2c0-54b8e3c038f3", "id"=>"6b43701e-d4f1-42ff-8804-f8539f175fba", "hostname"=>"mydockerhost"}, "fields"=>{"type"=>"docker"}, "log"=>"      Application started. Press Ctrl+C to shut down.\n", "tags"=>["beats_input_raw_event"], "log.message"=>"", "error"=>{"message"=>"Key 'log.message' not found", "type"=>"json"}}], :response=>{"index"=>{"_index"=>"beats-index-2023.11.11", "_type"=>"_doc", "_id"=>"-QG7vIsBIJGDQM8jupG_", "status"=>400, "error"=>{"type"=>"illegal_argument_exception", "reason"=>"can't merge a non object mapping [log] with an object mapping"}}}}
[2023-11-11T04:55:02,559][WARN ][logstash.outputs.elasticsearch][main][f9cb81730ee5855b0191837c55dec4a3d878a9f2f7bdadf6e40e84414b9649bf] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"beats-index-2023.11.11", :routing=>nil}, {"ecs"=>{"version"=>"1.12.0"}, "stream"=>"stdout", "type"=>"beats", "input"=>{"type"=>"log"}, "time"=>"2023-11-10T06:18:11.042871924Z", "@version"=>"1", "host"=>{"name"=>"mydockerhost"}, "@timestamp"=>2023-11-11T04:53:46.505Z, "agent"=>{"type"=>"filebeat", "name"=>"mydockerhost", "version"=>"7.17.14", "ephemeral_id"=>"f833a79a-6b4c-4e90-a2c0-54b8e3c038f3", "id"=>"6b43701e-d4f1-42ff-8804-f8539f175fba", "hostname"=>"mydockerhost"}, "fields"=>{"type"=>"docker"}, "log"=>"      No XML encryptor configured. Key {01ce42ad-d666-4498-88ae-eab230f0079c} may be persisted to storage in unencrypted form.\n", "tags"=>["beats_input_raw_event"], "log.message"=>"", "error"=>{"message"=>"Key 'log.message' not found", "type"=>"json"}}], :response=>{"index"=>{"_index"=>"beats-index-2023.11.11", "_type"=>"_doc", "_id"=>"-gG7vIsBIJGDQM8jupHA", "status"=>400, "error"=>{"type"=>"illegal_argument_exception", "reason"=>"can't merge a non object mapping [log] with an object mapping"}}}}
[2023-11-11T04:55:02,563][WARN ][logstash.outputs.elasticsearch][main][f9cb81730ee5855b0191837c55dec4a3d878a9f2f7bdadf6e40e84414b9649bf] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"beats-index-2023.11.11", :routing=>nil}, {"log.message"=>"", "stream"=>"stdout", "type"=>"beats", "input"=>{"type"=>"log"}, "time"=>"2023-11-10T06:18:11.895263425Z", "@version"=>"1", "host"=>{"name"=>"mydockerhost"}, "@timestamp"=>2023-11-11T04:53:46.513Z, "agent"=>{"type"=>"filebeat", "name"=>"mydockerhost", "version"=>"7.17.14", "ephemeral_id"=>"f833a79a-6b4c-4e90-a2c0-54b8e3c038f3", "id"=>"6b43701e-d4f1-42ff-8804-f8539f175fba", "hostname"=>"mydockerhost"}, "fields"=>{"type"=>"docker"}, "log"=>"      Application started. Press Ctrl+C to shut down.\n", "tags"=>["beats_input_raw_event"], "ecs"=>{"version"=>"1.12.0"}, "error"=>{"message"=>"Key 'log.message' not found", "type"=>"json"}}], :response=>{"index"=>{"_index"=>"beats-index-2023.11.11", "_type"=>"_doc", "_id"=>"-wG7vIsBIJGDQM8jupHA", "status"=>400, "error"=>{"type"=>"illegal_argument_exception", "reason"=>"mapper [log] cannot be changed from type [text] to [ObjectMapper]"}}}}
[2023-11-11T04:55:02,564][WARN ][logstash.outputs.elasticsearch][main][f9cb81730ee5855b0191837c55dec4a3d878a9f2f7bdadf6e40e84414b9649bf] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"beats-index-2023.11.11", :routing=>nil}, {"log.message"=>"", "stream"=>"stdout", "type"=>"beats", "input"=>{"type"=>"log"}, "time"=>"2023-11-10T10:54:13.855521516Z", "@version"=>"1", "host"=>{"name"=>"mydockerhost"}, "@timestamp"=>2023-11-11T04:53:46.511Z, "agent"=>{"type"=>"filebeat", "name"=>"mydockerhost", "version"=>"7.17.14", "ephemeral_id"=>"f833a79a-6b4c-4e90-a2c0-54b8e3c038f3", "id"=>"6b43701e-d4f1-42ff-8804-f8539f175fba", "hostname"=>"mydockerhost"}, "fields"=>{"type"=>"docker"}, "log"=>"\e[40m\e[32minfo\e[39m\e[22m\e[49m: Microsoft.Hosting.Lifetime[0]\n", "tags"=>["beats_input_raw_event"], "ecs"=>{"version"=>"1.12.0"}, "error"=>{"type"=>"json", "message"=>"Key 'log.message' not found"}}], :response=>{"index"=>{"_index"=>"beats-index-2023.11.11", "_type"=>"_doc", "_id"=>"_AG7vIsBIJGDQM8jupHA", "status"=>400, "error"=>{"type"=>"illegal_argument_exception", "reason"=>"mapper [log] cannot be changed from type [text] to [ObjectMapper]"}}}}


Yeah, that doesn't make any difference. I only asked what you had in the hosts configuration of Filebeat because you hadn't shared it, and some of your errors indicated a wrong configuration; but it seems something has changed since then, because you didn't share those errors again.

Those logs point to a mapping error: your event has both a field named log.message and a field named log, and that is not supported.
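
If you cannot stop that field from being produced at the source, a possible workaround in Logstash (just a sketch, assuming the colliding field really is the literal dotted key log.message) is to rename it before it reaches the output:

filter {
  mutate {
    # "[log.message]" addresses a top-level field whose name contains a dot;
    # moving it out of the way avoids the clash with the "log" text field
    rename => { "[log.message]" => "log_message" }
  }
}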

A couple of things to try to fix your pipeline. First, your Logstash is running in a container, so your file input will not work: the log files are not available inside that container. You can remove everything related to the file input and leave only the beats input.

So, your pipeline should look like this:

input {
  beats {
    port => 5044
    type => "beats"
  }
}

filter {
  if [type] == "beats" {
  }
}


output {
  if [type] == "beats" {
    elasticsearch {
      hosts => "XX.XX.XX.XX:9200"
      index => "beats-index-%{+yyyyy.MM.dd}"
    }
  }
}

Do you have anything inside this filter that you didn't share? If there is nothing inside that conditional, you can remove the entire filter block, as it does nothing; if you do have filters in there, you need to share them.

filter {
  if [type] == "beats" {
  }
}

I'm not sure where the log.message field is coming from, but I would suggest that you remove this configuration from your Filebeat and test again:

json.message_key: log.message

I did what you said and removed the json.message_key line from my Filebeat config; it now looks like this:

filebeat.inputs:
- type: log
  paths:
    - /var/lib/docker/docker/containers/*/*.log
  json.keys_under_root: true
  json.add_error_key: true
  json.overwrite_keys: true
  multiline.pattern: '^[[:space:]]'
  multiline.negate: false
  multiline.match: after

output.logstash:
  hosts: ["XX:XX:XX:XX:XX:XX:XX:5044"]

I also fixed my logstash.conf

input {
  beats {
    port => 5044
    type => "beats"
  }
}

output {
  if [type] == "beats" {
    elasticsearch {
      hosts => "XX.XX.XX.XX:9200"
      index => "beats-index-%{+yyyyy.MM.dd}"
    }
  }
}

It's just that I kept the filter section in mind because I thought I could later use it to pull exactly what I want out of the Docker container logs (the container name, ports, the log text, and the image name and its id).
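
For example, I was considering Filebeat's add_docker_metadata processor for that part (not tried yet; as far as I understand it needs the Docker socket mounted into the filebeat container):

processors:
  # enrich each event with container.name, container.id and container.image.name
  # read from the Docker daemon through the mounted socket
  - add_docker_metadata:
      host: "unix:///var/run/docker.sock"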

And I wanted to let you know that the previous error no longer appears in the Filebeat logs:

2023-11-11T15:11:18.195Z        INFO    instance/beat.go:698    Home path: [/usr/share/filebeat] Config path: [/usr/share/filebeat] Data path: [/usr/share/filebeat/data] Logs path: [/usr/share/filebeat/logs] Hostfs Path: [/]
2023-11-11T15:11:18.204Z        INFO    instance/beat.go:706    Beat ID: 0898f8ad-6a97-48e9-85c1-746b2b1d9a44
2023-11-11T15:11:18.204Z        INFO    [seccomp]       seccomp/seccomp.go:124  Syscall filter successfully installed
2023-11-11T15:11:18.205Z        INFO    [beat]  instance/beat.go:1052   Beat info       {"system_info": {"beat": {"path": {"config": "/usr/share/filebeat", "data": "/usr/share/filebeat/data", "home": "/usr/share/filebeat", "logs": "/usr/share/filebeat/logs"}, "type": "filebeat", "uuid": "0898f8ad-6a97-48e9-85c1-746b2b1d9a44"}}}
2023-11-11T15:11:18.205Z        INFO    [beat]  instance/beat.go:1061   Build info      {"system_info": {"build": {"commit": "57698bed51958971cf7298131cf3469fb98058ec", "libbeat": "7.17.14", "time": "2023-10-05T19:22:02.000Z", "version": "7.17.14"}}}
2023-11-11T15:11:18.205Z        INFO    [beat]  instance/beat.go:1064   Go runtime info {"system_info": {"go": {"os":"linux","arch":"amd64","max_procs":4,"version":"go1.19.12"}}}
2023-11-11T15:11:18.206Z        INFO    [beat]  instance/beat.go:1070   Host info       {"system_info": {"host": {"architecture":"x86_64","boot_time":"2022-09-03T20:26:57Z","containerized":true,"name":"mydockerhost","ip":["127.0.0.1","192.168.80.2"],"kernel_version":"5.4.0-125-generic","mac":["02:42:c0:a8:50:02"],"os":{"type":"linux","family":"debian","platform":"ubuntu","name":"Ubuntu","version":"20.04.6 LTS (Focal Fossa)","major":20,"minor":4,"patch":6,"codename":"focal"},"timezone":"UTC","timezone_offset_sec":0}}}
2023-11-11T15:11:18.206Z        INFO    [beat]  instance/beat.go:1099   Process info    {"system_info": {"process": {"capabilities": {"inheritable":["chown","dac_override","fowner","fsetid","kill","setgid","setuid","setpcap","net_bind_service","net_raw","sys_chroot","mknod","audit_write","setfcap"],"permitted":["chown","dac_override","fowner","fsetid","kill","setgid","setuid","setpcap","net_bind_service","net_raw","sys_chroot","mknod","audit_write","setfcap"],"effective":["chown","dac_override","fowner","fsetid","kill","setgid","setuid","setpcap","net_bind_service","net_raw","sys_chroot","mknod","audit_write","setfcap"],"bounding":["chown","dac_override","fowner","fsetid","kill","setgid","setuid","setpcap","net_bind_service","net_raw","sys_chroot","mknod","audit_write","setfcap"],"ambient":null}, "cwd": "/usr/share/filebeat", "exe": "/usr/share/filebeat/filebeat", "name": "filebeat", "pid": 8, "ppid": 1, "seccomp": {"mode":"filter","no_new_privs":true}, "start_time": "2023-11-11T15:11:16.780Z"}}}
2023-11-11T15:11:18.207Z        INFO    instance/beat.go:292    Setup Beat: filebeat; Version: 7.17.14
2023-11-11T15:11:18.207Z        INFO    [publisher]     pipeline/module.go:113  Beat name: mydockerhost
2023-11-11T15:11:18.209Z        WARN    beater/filebeat.go:202  Filebeat is unable to load the ingest pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the ingest pipelines or are using Logstash pipelines, you can ignore this warning.
2023-11-11T15:11:18.209Z        INFO    [monitoring]    log/log.go:142  Starting metrics logging every 30s
2023-11-11T15:11:18.212Z        INFO    instance/beat.go:457    filebeat start running.
2023-11-11T15:11:18.219Z        INFO    memlog/store.go:119     Loading data file of '/usr/share/filebeat/data/registry/filebeat' succeeded. Active transaction id=0
2023-11-11T15:11:18.219Z        INFO    memlog/store.go:124     Finished loading transaction log file for '/usr/share/filebeat/data/registry/filebeat'. Active transaction id=0
2023-11-11T15:11:18.219Z        WARN    beater/filebeat.go:411  Filebeat is unable to load the ingest pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the ingest pipelines or are using Logstash pipelines, you can ignore this warning.
2023-11-11T15:11:18.219Z        INFO    [registrar]     registrar/registrar.go:109      States Loaded from registrar: 0
2023-11-11T15:11:18.219Z        INFO    [crawler]       beater/crawler.go:71    Loading Inputs: 1
2023-11-11T15:11:18.219Z        INFO    [crawler]       beater/crawler.go:117   starting input, keys present on the config: [filebeat.inputs.0.json.add_error_key filebeat.inputs.0.json.keys_under_root filebeat.inputs.0.json.overwrite_keys filebeat.inputs.0.multiline.match filebeat.inputs.0.multiline.negate filebeat.inputs.0.multiline.pattern filebeat.inputs.0.paths.0 filebeat.inputs.0.type]
2023-11-11T15:11:18.219Z        WARN    [cfgwarn]       log/input.go:89 DEPRECATED: Log input. Use Filestream input instead.
2023-11-11T15:11:18.220Z        INFO    beater/crawler.go:155   Stopping Crawler
2023-11-11T15:11:18.220Z        INFO    beater/crawler.go:165   Stopping 0 inputs
2023-11-11T15:11:18.220Z        INFO    beater/crawler.go:185   Crawler stopped
2023-11-11T15:11:18.220Z        INFO    [registrar]     registrar/registrar.go:132      Stopping Registrar
2023-11-11T15:11:18.220Z        INFO    [registrar]     registrar/registrar.go:166      Ending Registrar
2023-11-11T15:11:18.220Z        INFO    [registrar]     registrar/registrar.go:137      Registrar stopped
2023-11-11T15:11:48.216Z        INFO    [monitoring]    log/log.go:184  Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cgroup":{"cpu":{"cfs":{"period":{"us":100000}},"id":"/"},"cpuacct":{"id":"/","total":{"ns":688129074}},"memory":{"id":"/","mem":{"limit":{"bytes":9223372036854771712},"usage":{"bytes":36687872}}}},"cpu":{"system":{"ticks":90,"time":{"ms":92}},"total":{"ticks":340,"time":{"ms":343},"value":340},"user":{"ticks":250,"time":{"ms":251}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":9},"info":{"ephemeral_id":"3dc3b98f-98d6-4b34-af44-f914f21cf24f","uptime":{"ms":30391},"version":"7.17.14"},"memstats":{"gc_next":20295320,"memory_alloc":10522576,"memory_sys":49919240,"memory_total":54798192,"rss":101810176},"runtime":{"goroutines":18}},"filebeat":{"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"active":0},"type":"logstash"},"pipeline":{"clients":0,"events":{"active":0},"queue":{"max_events":4096}}},"registrar":{"states":{"current":0}},"system":{"cpu":{"cores":4},"load":{"1":7.18,"15":2.83,"5":3.87,"norm":{"1":1.795,"15":0.7075,"5":0.9675}}}}}}
2023-11-11T15:12:18.217Z        INFO    [monitoring]    log/log.go:184  Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cgroup":{"cpuacct":{"total":{"ns":10939056}},"memory":{"mem":{"usage":{"bytes":188416}}}},"cpu":{"system":{"ticks":90,"time":{"ms":7}},"total":{"ticks":340,"time":{"ms":9},"value":340},"user":{"ticks":250,"time":{"ms":2}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":9},"info":{"ephemeral_id":"3dc3b98f-98d6-4b34-af44-f914f21cf24f","uptime":{"ms":60391},"version":"7.17.14"},"memstats":{"gc_next":20295320,"memory_alloc":11212824,"memory_total":55488440,"rss":101810176},"runtime":{"goroutines":18}},"filebeat":{"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"active":0}},"pipeline":{"clients":0,"events":{"active":0}}},"registrar":{"states":{"current":0}},"system":{"load":{"1":7.36,"15":2.98,"5":4.23,"norm":{"1":1.84,"15":0.745,"5":1.0575}}}}}}
2023-11-11T15:12:48.221Z        INFO    [monitoring]    log/log.go:184  Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cgroup":{"cpuacct":{"total":{"ns":8217078}},"memory":{"mem":{"usage":{"bytes":57344}}}},"cpu":{"system":{"ticks":110,"time":{"ms":13}},"total":{"ticks":360,"time":{"ms":18},"value":360},"user":{"ticks":250,"time":{"ms":5}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":9},"info":{"ephemeral_id":"3dc3b98f-98d6-4b34-af44-f914f21cf24f","uptime":{"ms":90397},"version":"7.17.14"},"memstats":{"gc_next":20295320,"memory_alloc":12192024,"memory_total":56467640,"rss":101810176},"runtime":{"goroutines":18}},"filebeat":{"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"active":0}},"pipeline":{"clients":0,"events":{"active":0}}},"registrar":{"states":{"current":0}},"system":{"load":{"1":5.48,"15":2.98,"5":4.1,"norm":{"1":1.37,"15":0.745,"5":1.025}}}}}}
2023-11-11T15:13:18.223Z        INFO    [monitoring]    log/log.go:184  Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cgroup":{"cpuacct":{"total":{"ns":34129883}},"memory":{"mem":{"usage":{"bytes":40960}}}},"cpu":{"system":{"ticks":120,"time":{"ms":9}},"total":{"ticks":380,"time":{"ms":17},"value":380},"user":{"ticks":260,"time":{"ms":8}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":9},"info":{"ephemeral_id":"3dc3b98f-98d6-4b34-af44-f914f21cf24f","uptime":{"ms":120391},"version":"7.17.14"},"memstats":{"gc_next":20295320,"memory_alloc":12546912,"memory_total":56822528,"rss":101810176},"runtime":{"goroutines":18}},"filebeat":{"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"active":0}},"pipeline":{"clients":0,"events":{"active":0}}},"registrar":{"states":{"current":0}},"system":{"load":{"1":3.69,"15":2.91,"5":3.8,"norm":{"1":0.9225,"15":0.7275,"5":0.95}}}}}}
2023-11-11T15:13:48.219Z        INFO    [monitoring]    log/log.go:184  Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cgroup":{"cpuacct":{"total":{"ns":23414937}},"memory":{"mem":{"usage":{"bytes":-4116480}}}},"cpu":{"system":{"ticks":130,"time":{"ms":14}},"total":{"ticks":400,"time":{"ms":26},"value":400},"user":{"ticks":270,"time":{"ms":12}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":9},"info":{"ephemeral_id":"3dc3b98f-98d6-4b34-af44-f914f21cf24f","uptime":{"ms":150390},"version":"7.17.14"},"memstats":{"gc_next":20515072,"memory_alloc":10087384,"memory_total":57451592,"rss":98537472},"runtime":{"goroutines":18}},"filebeat":{"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"active":0}},"pipeline":{"clients":0,"events":{"active":0}}},"registrar":{"states":{"current":0}},"system":{"load":{"1":2.81,"15":2.87,"5":3.58,"norm":{"1":0.7025,"15":0.7175,"5":0.895}}}}}}
2023-11-11T15:14:18.222Z        INFO    [monitoring]    log/log.go:184  Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cgroup":{"cpuacct":{"total":{"ns":11649481}},"memory":{"mem":{"usage":{"bytes":-102400}}}},"cpu":{"system":{"ticks":140,"time":{"ms":7}},"total":{"ticks":420,"time":{"ms":18},"value":420},"user":{"ticks":280,"time":{"ms":11}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":9},"info":{"ephemeral_id":"3dc3b98f-98d6-4b34-af44-f914f21cf24f","uptime":{"ms":180397},"version":"7.17.14"},"memstats":{"gc_next":20515072,"memory_alloc":11026920,"memory_total":58391128,"rss":98537472},"runtime":{"goroutines":18}},"filebeat":{"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"active":0}},"pipeline":{"clients":0,"events":{"active":0}}},"registrar":{"states":{"current":0}},"system":{"load":{"1":2.3,"15":2.82,"5":3.38,"norm":{"1":0.575,"15":0.705,"5":0.845}}}}}}
2023-11-11T15:14:48.221Z        INFO    [monitoring]    log/log.go:184  Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cgroup":{"cpuacct":{"total":{"ns":21664722}},"memory":{"mem":{"usage":{"bytes":172032}}}},"cpu":{"system":{"ticks":140,"time":{"ms":4}},"total":{"ticks":430,"time":{"ms":11},"value":430},"user":{"ticks":290,"time":{"ms":7}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":9},"info":{"ephemeral_id":"3dc3b98f-98d6-4b34-af44-f914f21cf24f","uptime":{"ms":210392},"version":"7.17.14"},"memstats":{"gc_next":20515072,"memory_alloc":11385368,"memory_sys":262144,"memory_total":58749576,"rss":98537472},"runtime":{"goroutines":18}},"filebeat":{"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"active":0}},"pipeline":{"clients":0,"events":{"active":0}}},"registrar":{"states":{"current":0}},"system":{"load":{"1":1.8,"15":2.76,"5":3.15,"norm":{"1":0.45,"15":0.69,"5":0.7875}}}}}}

I don't see the previous errors in the Logstash logs either:

Using bundled JDK: /usr/share/logstash/jdk
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
[2023-11-11T15:12:11,854][INFO ][logstash.runner          ] Log4j configuration path used is: /usr/share/logstash/config/log4j2.properties
[2023-11-11T15:12:11,892][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"7.17.14", "jruby.version"=>"jruby 9.2.20.1 (2.5.8) 2021-11-30 2a2962fbd1 OpenJDK 64-Bit Server VM 11.0.20+8 on 11.0.20+8 +indy +jit [linux-x86_64]"}
[2023-11-11T15:12:11,896][INFO ][logstash.runner          ] JVM bootstrap flags: [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djdk.io.File.enableADS=true, -Djruby.compile.invokedynamic=true, -Djruby.jit.threshold=0, -Djruby.regexp.interruptible=true, -XX:+HeapDumpOnOutOfMemoryError, -Djava.security.egd=file:/dev/urandom, -Dlog4j2.isThreadContextMapInheritable=true, -Dls.cgroup.cpuacct.path.override=/, -Dls.cgroup.cpu.path.override=/]
[2023-11-11T15:12:11,980][INFO ][logstash.settings        ] Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
[2023-11-11T15:12:12,025][INFO ][logstash.settings        ] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
[2023-11-11T15:12:12,807][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2023-11-11T15:12:12,882][INFO ][logstash.agent           ] No persistent UUID file found. Generating new UUID {:uuid=>"3641e8bb-1335-43d5-ab96-7635818bf635", :path=>"/usr/share/logstash/data/uuid"}
[2023-11-11T15:12:15,244][WARN ][logstash.monitoringextension.pipelineregisterhook] xpack.monitoring.enabled has not been defined, but found elasticsearch configuration. Please explicitly set `xpack.monitoring.enabled: true` in logstash.yml
[2023-11-11T15:12:15,249][WARN ][deprecation.logstash.monitoringextension.pipelineregisterhook] Internal collectors option for Logstash monitoring is deprecated and may be removed in a future release.
Please configure Metricbeat to monitor Logstash. Documentation can be found at:
https://www.elastic.co/guide/en/logstash/current/monitoring-with-metricbeat.html
[2023-11-11T15:12:16,055][WARN ][deprecation.logstash.codecs.plain] Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[2023-11-11T15:12:16,224][WARN ][deprecation.logstash.outputs.elasticsearch] Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[2023-11-11T15:12:17,002][INFO ][logstash.licensechecker.licensereader] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
[2023-11-11T15:12:17,787][WARN ][logstash.licensechecker.licensereader] Restored connection to ES instance {:url=>"http://elasticsearch:9200/"}
[2023-11-11T15:12:17,834][INFO ][logstash.licensechecker.licensereader] Elasticsearch version determined (7.17.14) {:es_version=>7}
[2023-11-11T15:12:17,836][WARN ][logstash.licensechecker.licensereader] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2023-11-11T15:12:18,242][INFO ][logstash.monitoring.internalpipelinesource] Monitoring License OK
[2023-11-11T15:12:18,247][INFO ][logstash.monitoring.internalpipelinesource] Validated license for monitoring. Enabling monitoring pipeline.
[2023-11-11T15:12:19,103][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
[2023-11-11T15:12:23,986][INFO ][org.reflections.Reflections] Reflections took 219 ms to scan 1 urls, producing 119 keys and 419 values
[2023-11-11T15:12:25,816][WARN ][deprecation.logstash.codecs.plain] Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[2023-11-11T15:12:25,821][WARN ][deprecation.logstash.codecs.plain] Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[2023-11-11T15:12:25,867][WARN ][deprecation.logstash.inputs.beats] Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[2023-11-11T15:12:25,910][WARN ][deprecation.logstash.codecs.plain] Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[2023-11-11T15:12:25,919][WARN ][deprecation.logstash.codecs.plain] Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[2023-11-11T15:12:25,997][WARN ][deprecation.logstash.outputs.elasticsearchmonitoring] Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[2023-11-11T15:12:26,005][WARN ][deprecation.logstash.outputs.elasticsearch] Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[2023-11-11T15:12:26,185][INFO ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearchMonitoring", :hosts=>["http://elasticsearch:9200"]}
[2023-11-11T15:12:26,186][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//XX.XX.XX.XX:9200"]}
[2023-11-11T15:12:26,239][INFO ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
[2023-11-11T15:12:26,240][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://XX.XX.XX.XX:9200/]}}
[2023-11-11T15:12:26,279][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"http://XX.XX.XX.XX:9200/"}
[2023-11-11T15:12:26,281][WARN ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] Restored connection to ES instance {:url=>"http://elasticsearch:9200/"}
[2023-11-11T15:12:26,293][INFO ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] Elasticsearch version determined (7.17.14) {:es_version=>7}
[2023-11-11T15:12:26,294][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch version determined (7.17.14) {:es_version=>7}
[2023-11-11T15:12:26,295][WARN ][logstash.outputs.elasticsearch][main] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2023-11-11T15:12:26,295][WARN ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2023-11-11T15:12:26,522][INFO ][logstash.outputs.elasticsearch][main] Config is not compliant with data streams. `data_stream => auto` resolved to `false`
[2023-11-11T15:12:26,528][INFO ][logstash.outputs.elasticsearchmonitoring][.monitoring-logstash] Config is not compliant with data streams. `data_stream => auto` resolved to `false`
[2023-11-11T15:12:26,559][WARN ][logstash.javapipeline    ][.monitoring-logstash] 'pipeline.ordered' is enabled and is likely less efficient, consider disabling if preserving event order is not necessary
[2023-11-11T15:12:26,625][INFO ][logstash.outputs.elasticsearch][main] Using a default mapping template {:es_version=>7, :ecs_compatibility=>:disabled}
[2023-11-11T15:12:26,729][INFO ][logstash.javapipeline    ][.monitoring-logstash] Starting pipeline {:pipeline_id=>".monitoring-logstash", "pipeline.workers"=>1, "pipeline.batch.size"=>2, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>2, "pipeline.sources"=>["monitoring pipeline"], :thread=>"#<Thread:0x31106ae3@/usr/share/logstash/logstash-core/lib/logstash/pipelines_registry.rb:159 run>"}
[2023-11-11T15:12:26,731][INFO ][logstash.outputs.elasticsearch][main] Installing Elasticsearch template {:name=>"logstash"}
[2023-11-11T15:12:26,734][INFO ][logstash.javapipeline    ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, "pipeline.sources"=>["/etc/logstash/conf.d/logstash.conf"], :thread=>"#<Thread:0x78d2ddba@/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:130 run>"}
[2023-11-11T15:12:28,114][INFO ][logstash.javapipeline    ][.monitoring-logstash] Pipeline Java execution initialization time {"seconds"=>1.38}
[2023-11-11T15:12:28,211][INFO ][logstash.javapipeline    ][main] Pipeline Java execution initialization time {"seconds"=>1.46}
[2023-11-11T15:12:28,258][INFO ][logstash.inputs.beats    ][main] Starting input listener {:address=>"0.0.0.0:5044"}
[2023-11-11T15:12:28,276][INFO ][logstash.javapipeline    ][.monitoring-logstash] Pipeline started {"pipeline.id"=>".monitoring-logstash"}
[2023-11-11T15:12:28,320][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}
[2023-11-11T15:12:28,508][INFO ][logstash.agent           ] Pipelines running {:count=>2, :running_pipelines=>[:main, :".monitoring-logstash"], :non_running_pipelines=>[]}
[2023-11-11T15:12:28,645][INFO ][org.logstash.beats.Server][main][7426d6c7834d350eb2014b52cc2f1927a7d0594fe26f98bb989f9afcae58c292] Starting server on port: 5044

However, I assume the index should have been created through Logstash itself? I don't see it; at what stage does that actually happen?

I'm still trying to find ways to verify that the indices are arriving. I read that you can send a GET _cat/indices request to list and check them.
I did send it, but unfortunately I didn't find the index I was looking for.
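
The equivalent request with curl, assuming the 9200 port published in the compose file, would be:

curl -s 'http://localhost:9200/_cat/indices'

This is what came back: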

#! Elasticsearch built-in security features are not enabled. Without authentication, your cluster could be accessible to anyone. See https://www.elastic.co/guide/en/elasticsearch/reference/7.17/security-minimal-setup.html to enable security.
green open .geoip_databases 9vmtSoTnRgSdg-9ITr0gdQ 1 0 40 0 37.6mb 37.6mb
green open .apm-custom-link 23q_XKKKWRHKbzrSaTHXFng 1 0 0 0 227b 227b
green open .apm-agent-configuration XrT0A4OgAvab4I0fTJDhvA 1 0 0 0 227b 227b
green open .kibana_task_manager_7.17.14_001 gPBK9wqAR5edhZgNQrW9yQ 1 0 17 3771 471.4kb 471.4kb
green open .kibana_7.17.14_001 TkbeHWTvSriZnJ2C6pclxg 1 0 34 21 2.7mb 2.7mb

Hey,

You need to make sure every bit of the chain is working.

Read the logs, go to the documentation, and adapt.

The deprecation warning in your Filebeat log, for example, tells you to use the filestream input instead of the log input.
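
A minimal sketch of that change, keeping the same path and JSON handling as your current log input (untested, adjust as needed):

filebeat.inputs:
- type: filestream
  id: docker-container-logs
  paths:
    # same path as your existing log input; double-check it matches your mount
    - /var/lib/docker/docker/containers/*/*.log
  parsers:
    - ndjson:
        keys_under_root: true
        add_error_key: true
        overwrite_keys: true
    - multiline:
        type: pattern
        pattern: '^[[:space:]]'
        negate: false
        match: after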

1. Filebeat

Remove the warnings, and check in the Filebeat log file that it connects to Logstash.
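
You can also test the connection directly from inside the filebeat container with the built-in test commands (the config path assumes the standard image layout):

# validate the configuration syntax
filebeat test config -c /usr/share/filebeat/filebeat.yml
# try to reach the configured Logstash output on port 5044
filebeat test output -c /usr/share/filebeat/filebeat.yml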

Enable debug logging if necessary:


logging.level: debug
logging.to_files: true
logging.files:
  path: /var/log/filebeat
  name: filebeat.log

2. Logstash

First make sure Logstash is receiving events; check with a temporary output to stdout or to a file.
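
For example, a temporary extra output like this prints every event the pipeline receives, so docker logs logstash shows exactly what arrives from Filebeat:

output {
  # temporary, for debugging only; remove once events are confirmed
  stdout { codec => rubydebug }
}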

You can also enable debug mode for more detailed logging.

3. Elasticsearch

You need to create an index pattern in Kibana covering the index you want to create, in your case beats-index-*.
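
You can create it in Kibana under Stack Management > Index Patterns, or, as a sketch, through the index patterns HTTP API (assuming Kibana is reachable on its default port 5601; the kbn-xsrf header is required):

curl -s -X POST 'http://localhost:5601/api/index_patterns/index_pattern' \
  -H 'kbn-xsrf: true' -H 'Content-Type: application/json' \
  -d '{"index_pattern": {"title": "beats-index-*"}}'

Keep in mind that the index itself only appears in Elasticsearch once the first event is successfully indexed, so the pattern will match nothing until Filebeat actually ships data.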

Also make sure to safely set up permissions and roles.

Don't forget to revert the unnecessary debug changes, and please read the documentation!

Please post the debug logs and the new configuration (with the connection attempts for each component).
