Logstash input configuration

Hello, I have a question about how the Logstash input config should look when I have a Fleet Server, multiple agents/endpoints, a Check Point firewall, etc. logging to Logstash. What I am trying to ask is: should every agent have its own input config, or is one input config enough for, let's say, 500 endpoints under one fleet policy? I understand I need a new input for every unique log source, but is that meant per technology/vendor, or even for the same type of log source on different hosts?
For example, I had the Fleet Server logging to Logstash, then I added two policies with one agent under each. One policy is for Linux servers, one is for Windows servers, but now it seems the Fleet Server has stopped logging to Logstash, and in the fleet logs I see:
IP 10.212.25.201 is fleet server
IP 10.212.25.202 is Logstash

17:17:21.665
elastic_agent.filebeat
{"log.level":"error","@timestamp":"2022-10-10T15:17:11.946Z","log.logger":"publisher_pipeline_output","log.origin":{"file.name":"pipeline/client_worker.go","file.line":176},"message":"failed to publish events: write tcp 10.212.25.201:38524->10.212.25.202:5044: write: connection reset by peer","service.name":"filebeat","ecs.version":"1.6.0"}
20:48:23.423
elastic_agent.filebeat
[elastic_agent.filebeat][error] Failed to publish events caused by: write tcp 10.212.25.201:59670->10.212.25.202:5044: write: connection reset by peer
20:48:24.638
elastic_agent.filebeat
[elastic_agent.filebeat][error] failed to publish events: write tcp 10.212.25.201:59670->10.212.25.202:5044: write: connection reset by peer
22:15:14.023
elastic_agent.filebeat
[elastic_agent.filebeat][error] Failed to publish events caused by: write tcp 10.212.25.201:38174->10.212.25.202:5044: write: connection reset by peer
22:15:15.902
elastic_agent.filebeat
[elastic_agent.filebeat][error] failed to publish events: write tcp 10.212.25.201:38174->10.212.25.202:5044: write: connection reset by peer
22:17:08.075
elastic_agent.filebeat
[elastic_agent.filebeat][error] Failed to publish events caused by: write tcp 10.212.25.201:51386->10.212.25.202:5044: write: connection reset by peer
22:17:09.770
elastic_agent.filebeat
[elastic_agent.filebeat][error] failed to publish events: write tcp 10.212.25.201:51386->10.212.25.202:5044: write: connection reset by peer
22:50:38.459
elastic_agent.filebeat
[elastic_agent.filebeat][error] Failed to publish events caused by: write tcp 10.212.25.201:37490->10.212.25.202:5044: write: connection reset by peer
22:50:40.335
elastic_agent.filebeat
[elastic_agent.filebeat][error] failed to publish events: write tcp 10.212.25.201:37490->10.212.25.202:5044: write: connection reset by peer
23:01:30.183
elastic_agent.filebeat
[elastic_agent.filebeat][error] failed to publish events: write tcp 10.212.25.201:50672->10.212.25.202:5044: write: connection reset by peer
23:17:08.778
elastic_agent.filebeat
[elastic_agent.filebeat][error] Failed to publish events caused by: write tcp 10.212.25.201:52740->10.212.25.202:5044: write: connection reset by peer
23:17:10.446
elastic_agent.filebeat
[elastic_agent.filebeat][error] failed to publish events: write tc

So I would like to understand how many log sources of the same type can log to the same input, and how many inputs I need in elastic-agent-pipeline.conf.

Thank you very much for any info!
Tomas

how many log sources of the same type can log to the same input?

It depends on the amount of data. In your case, you can connect several Elastic Agents to one Logstash server.
It seems a firewall is blocking port 5044, or the Logstash process on host .202 is not active. Check whether the Logstash process is running.
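
To expand on that: a single input can serve any number of agents of the same kind, and distinct source types are usually separated at the pipeline level rather than with one input per host. A minimal sketch of a `pipelines.yml` splitting agent traffic and firewall syslog into two pipelines (the pipeline IDs and file paths here are assumptions, not your actual setup):

      - pipeline.id: elastic-agent-pipeline
        path.config: "/etc/logstash/conf.d/elastic-agent-pipeline.conf"
      - pipeline.id: checkpoint-pipeline
        path.config: "/etc/logstash/conf.d/checkpoint-pipeline.conf"

Each pipeline then defines its own input listening on its own port, so the streams never mix.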

Can you show your Logstash conf and explain what you want to achieve?

Hi Rios, I am practicing in a virtual environment so that I know what I am doing when we start using the Elastic Stack in production in the near future. I would like to understand how the Logstash input configuration should be done properly when multiple log sources, both of the same and of different types, are logging to Logstash. In my .conf file I currently have only an "elastic_agent" input. This input is now used by the agent on the Fleet Server, the agent on a Linux server, and the agent on a Windows server, so 3 agents. It seems that since the Linux and Windows servers started using Logstash, the fleet agent has had problems. So the main goal of this post is to understand how many inputs I need in the .conf file when I have multiple log sources of the same type. For example, I understand I will need a new input for the Check Point firewall, but what if I have 50 endpoints with 2 types of OS, all sending syslog to Logstash?

      input {
        elastic_agent {
          port => 5044
          ssl => true
          ssl_certificate_authorities => ["/etc/logstash/certs/ca.crt"]
          ssl_certificate => "/etc/logstash/certs/logstash-server.crt"
          ssl_key => "logstash.pkcs8.key"
          ssl_verify_mode => "force_peer"
        }
      }

      output {
        elasticsearch {
          hosts => "https://10.212.25.197:9200"
          api_key => "EcYlooMBCn_3s4GuJxBC:9dZij6aSQ_yNjPAoosrKQQ"
          data_stream => true
          ssl => true
          # cacert => "/etc/logstash/certs/http_ca.crt"
        }
      }
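
For the scenario described (many agents plus a firewall), one common layout is a single `elastic_agent` input shared by all agents, regardless of OS or policy, plus one extra input per distinct protocol. A hedged sketch reusing the certificate paths above (the syslog port, the full key path, and the `checkpoint` tag are assumptions for illustration):

      input {
        elastic_agent {
          port => 5044
          ssl => true
          ssl_certificate_authorities => ["/etc/logstash/certs/ca.crt"]
          ssl_certificate => "/etc/logstash/certs/logstash-server.crt"
          ssl_key => "/etc/logstash/certs/logstash.pkcs8.key"
          ssl_verify_mode => "force_peer"
        }
        syslog {
          port => 9001
          tags => ["checkpoint"]
        }
      }

With this shape, 3 or 500 agents under any number of policies all share the one `elastic_agent` input; only a genuinely different transport, such as the firewall's syslog feed, needs an input of its own.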

Logstash service running:

Last login: Mon Oct 10 07:32:54 2022 from 10.212.25.203
logstash@logstash:~$ sudo systemctl status logstash.service 
[sudo] password for logstash: 
● logstash.service - logstash
     Loaded: loaded (/lib/systemd/system/logstash.service; disabled; vendor preset: enabled)
     Active: active (running) since Mon 2022-10-10 07:35:31 UTC; 1 day 2h ago
   Main PID: 1054 (java)
      Tasks: 65 (limit: 2236)
     Memory: 1.3G
        CPU: 58min 46.502s
     CGroup: /system.slice/logstash.service
             └─1054 /usr/share/logstash/jdk/bin/java -Xms1g -Xmx1g -Djava.awt.headless=true -

Log from the start of Logstash service:

[2022-10-10T07:36:10,601][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"8.4.3", "jruby.version"=>"jruby 9.3.8.0 (2.6.8) 2022-09-13 98d69c9461 OpenJDK 64-Bit Server VM 17.0.4+8 on 17.0.4+8 +indy +jit [x86_64-linux]"}
[2022-10-10T07:36:10,607][INFO ][logstash.runner          ] JVM bootstrap flags: [-Xms1g, -Xmx1g, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djruby.compile.invokedynamic=true, -Djruby.jit.threshold=0, -XX:+HeapDumpOnOutOfMemoryError, -Djava.security.egd=file:/dev/urandom, -Dlog4j2.isThreadContextMapInheritable=true, -Djruby.regexp.interruptible=true, -Djdk.io.File.enableADS=true, --add-exports=jdk.compiler/com.sun.tools.javac.api=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.file=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.parser=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.tree=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.util=ALL-UNNAMED, --add-opens=java.base/java.security=ALL-UNNAMED, --add-opens=java.base/java.io=ALL-UNNAMED, --add-opens=java.base/java.nio.channels=ALL-UNNAMED, --add-opens=java.base/sun.nio.ch=ALL-UNNAMED, --add-opens=java.management/sun.management=ALL-UNNAMED]
[2022-10-10T07:36:15,360][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
[2022-10-10T07:36:16,738][INFO ][org.reflections.Reflections] Reflections took 310 ms to scan 1 urls, producing 125 keys and 434 values
[2022-10-10T07:36:18,197][INFO ][logstash.javapipeline    ] Pipeline `elastic-agent-pipeline` is configured with `pipeline.ecs_compatibility: v8` setting. All plugins in this pipeline will default to `ecs_compatibility => v8` unless explicitly configured otherwise.
[2022-10-10T07:36:18,333][INFO ][logstash.outputs.elasticsearch][elastic-agent-pipeline] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["https://10.212.25.197:9200"]}
[2022-10-10T07:36:19,025][INFO ][logstash.outputs.elasticsearch][elastic-agent-pipeline] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[https://10.212.25.197:9200/]}}
[2022-10-10T07:36:19,879][WARN ][logstash.outputs.elasticsearch][elastic-agent-pipeline] Restored connection to ES instance {:url=>"https://10.212.25.197:9200/"}
[2022-10-10T07:36:19,915][INFO ][logstash.outputs.elasticsearch][elastic-agent-pipeline] Elasticsearch version determined (8.4.3) {:es_version=>8}
[2022-10-10T07:36:19,920][WARN ][logstash.outputs.elasticsearch][elastic-agent-pipeline] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>8}
[2022-10-10T07:36:20,037][WARN ][logstash.outputs.elasticsearch][elastic-agent-pipeline] Elasticsearch Output configured with `ecs_compatibility => v8`, which resolved to an UNRELEASED preview of version 8.0.0 of the Elastic Common Schema. Once ECS v8 and an updated release of this plugin are publicly available, you will need to update this plugin to resolve this warning.
[2022-10-10T07:36:20,187][INFO ][logstash.outputs.elasticsearch][elastic-agent-pipeline] Using a default mapping template {:es_version=>8, :ecs_compatibility=>:v8}
[2022-10-10T07:36:20,264][INFO ][logstash.javapipeline    ][elastic-agent-pipeline] Starting pipeline {:pipeline_id=>"elastic-agent-pipeline", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, "pipeline.sources"=>["/etc/logstash/conf.d/elastic-agent-pipeline.conf"], :thread=>"#<Thread:0x3bfc492a run>"}
[2022-10-10T07:36:20,383][INFO ][logstash.outputs.elasticsearch][elastic-agent-pipeline] Installing Elasticsearch template {:name=>"ecs-logstash"}
[2022-10-10T07:36:20,613][ERROR][logstash.outputs.elasticsearch][elastic-agent-pipeline] Failed to install template {:message=>"Got response code '403' contacting Elasticsearch at URL 'https://10.212.25.197:9200/_index_template/ecs-logstash'", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :backtrace=>["/usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-output-elasticsearch-11.6.0-java/lib/logstash/outputs/elasticsearch/http_client/manticore_adapter.rb:84:in `perform_request'", "/usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-output-elasticsearch-11.6.0-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:324:in `perform_request_to_url'", "/usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-output-elasticsearch-11.6.0-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:311:in `block in perform_request'", "/usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-output-elasticsearch-11.6.0-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:398:in `with_connection'", "/usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-output-elasticsearch-11.6.0-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:310:in `perform_request'", "/usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-output-elasticsearch-11.6.0-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:318:in `block in Pool'", "/usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-output-elasticsearch-11.6.0-java/lib/logstash/outputs/elasticsearch/http_client.rb:408:in `template_put'", "/usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-output-elasticsearch-11.6.0-java/lib/logstash/outputs/elasticsearch/http_client.rb:85:in `template_install'", "/usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-output-elasticsearch-11.6.0-java/lib/logstash/outputs/elasticsearch/template_manager.rb:29:in `install'", 
"/usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-output-elasticsearch-11.6.0-java/lib/logstash/outputs/elasticsearch/template_manager.rb:17:in `install_template'", "/usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-output-elasticsearch-11.6.0-java/lib/logstash/outputs/elasticsearch.rb:494:in `install_template'", "/usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-output-elasticsearch-11.6.0-java/lib/logstash/outputs/elasticsearch.rb:318:in `finish_register'", "/usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-output-elasticsearch-11.6.0-java/lib/logstash/outputs/elasticsearch.rb:283:in `block in register'", "/usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-output-elasticsearch-11.6.0-java/lib/logstash/plugin_mixins/elasticsearch/common.rb:154:in `block in after_successful_connection'"]}
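
On the separate 403 at startup: that error suggests the API key used in the elasticsearch output lacks the cluster privilege to install index templates. One hedged way to address it is to create the key with the privileges Elastic documents for a Logstash output writing to agent data streams (the key name and index patterns below are illustrative assumptions):

      POST /_security/api_key
      {
        "name": "logstash-output",
        "role_descriptors": {
          "logstash_writer": {
            "cluster": ["monitor", "manage_index_templates"],
            "indices": [
              {
                "names": ["logs-*-*", "metrics-*-*"],
                "privileges": ["auto_configure", "create_doc"]
              }
            ]
          }
        }
      }

Alternatively, since `data_stream => true` is set, the agent data streams are managed by their own templates, so the failed `ecs-logstash` template install may be harmless for this pipeline.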
[2022-10-10T07:36:21,516][INFO ][logstash.javapipeline    ][elastic-agent-pipeline] Pipeline Java execution initialization time {"seconds"=>1.24}
[2022-10-10T07:36:21,585][INFO ][logstash.inputs.beats    ][elastic-agent-pipeline] Starting input listener {:address=>"0.0.0.0:5044"}
[2022-10-10T07:36:22,270][INFO ][logstash.javapipeline    ][elastic-agent-pipeline] Pipeline started {"pipeline.id"=>"elastic-agent-pipeline"}
[2022-10-10T07:36:22,445][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:"elastic-agent-pipeline"], :non_running_pipelines=>[]}
[2022-10-10T07:36:22,456][INFO ][org.logstash.beats.Server][elastic-agent-pipeline][3a63b4a33009aa5163e89e8b23bb456296f3aa72847c9f6d4ddbade08dcd6dcb] Starting server on port: 5044

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.