How to integrate syslog input plugin

Hi,

I have installed the full ELK stack (version 7.17.13) and now I want to integrate the syslog input plugin. I need some directions on how to set it up.

I also tried making some changes in logstash.conf, but the logstash service is not coming up. I am looking for the correct setup procedure.

[screenshot of the Logstash error]

Thanks,
Ravi

Hello, and welcome,

Please do not share screenshots; they are often hard to read, and some people may not even be able to see them.

The one you shared is impossible to read because it is too small. Please share your error as plain text, not a screenshot.

Hi Leandro

Thanks for the reply. This is the latest one from the logstash logs.

[2023-12-11T13:17:52,383][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
[2023-12-11T13:17:54,179][ERROR][logstash.agent ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::ConfigurationError", :message=>"Expected one of [ \t\r\n], "#", "if", [A-Za-z0-9_-], '"', "'", "}" at line 19, column 1 (byte 406) after output {\n elasticsearch {\n hosts => "localhost:9200"\n user => elastic\n password => changeme\n manage_template => false\n index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"\n document_type => "%{[@metadata][type]}"\n }\n # for debug purpose of pipeline with command: ./logstash -f /etc/logstash/conf.d/logstash.conf\n # stdout { codec => rubydebug }\n\n", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:32:in compile_imperative'", "org/logstash/execution/AbstractPipelineExt.java:189:in initialize'", "org/logstash/execution/JavaBasePipelineExt.java:72:in initialize'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:48:in initialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:52:in execute'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:392:in block in converge_state'"]}
[2023-12-11T13:17:54,289][INFO ][logstash.runner ] Logstash shut down.
[2023-12-11T13:17:54,298][FATAL][org.logstash.Logstash ] Logstash stopped processing because of an error: (SystemExit) exit
org.jruby.exceptions.SystemExit: (SystemExit) exit
at org.jruby.RubyKernel.exit(org/jruby/RubyKernel.java:747) ~[jruby-complete-9.2.20.1.jar:?]
at org.jruby.RubyKernel.exit(org/jruby/RubyKernel.java:710) ~[jruby-complete-9.2.20.1.jar:?]
at usr.share.logstash.lib.bootstrap.environment.(/usr/share/logstash/lib/bootstrap/environment.rb:94) ~[?:?]

Thanks,
Ravi

You have a configuration error. You need to check your logstash.conf file; something in it is not correct.

Please share this file using the preformatted text option, the </> button.

Hi Leandro,

Please see the logstash.conf file below.

input {
  syslog {
    port => 514
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "syslog-%{+YYYY.MM.dd}"
  }
}

That is clearly not the logstash.conf that logstash is using. The error message says it does not like whatever comes after

output {
    elasticsearch {
        hosts => "localhost:9200"
        user => elastic
        password => changeme

How are you setting path.config? In logstash.yml? Using -f on the command line? And what value are you setting it to?
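
(For context, those two options usually look something like this; the paths are only illustrative:)

# In /etc/logstash/logstash.yml, used when Logstash runs as a service:
path.config: "/etc/logstash/conf.d/*.conf"

# Or on the command line, which makes Logstash ignore pipelines.yml,
# as the WARN line in your logs shows:
bin/logstash -f /etc/logstash/conf.d/logstash.conf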

Hello Badger,

Thanks for the reply. As per your suggestion I have set it as below in logstash.conf.

vim /etc/logstash/conf.d/logstash.conf

input {
  syslog {
    port => 514
  }
}

output {
    elasticsearch {
        hosts => "localhost:9200"
        user => elastic
        password => changeme

When I try to run Logstash with the configuration above, below are the logs from the same console.


# bin/logstash -f /etc/logstash/conf.d/logstash.conf
Using bundled JDK: /usr/share/logstash/jdk
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[INFO ] 2023-12-12 04:56:23.612 [main] runner - Starting Logstash {"logstash.version"=>"7.17.13", "jruby.version"=>"jruby 9.2.20.1 (2.5.8) 2021-11-30 2a2962fbd1 OpenJDK 64-Bit Server VM 11.0.20+8 on 11.0.20+8 +indy +jit [linux-x86_64]"}
[INFO ] 2023-12-12 04:56:23.626 [main] runner - JVM bootstrap flags: [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djdk.io.File.enableADS=true, -Djruby.compile.invokedynamic=true, -Djruby.jit.threshold=0, -Djruby.regexp.interruptible=true, -XX:+HeapDumpOnOutOfMemoryError, -Djava.security.egd=file:/dev/urandom, -Dlog4j2.isThreadContextMapInheritable=true]
[WARN ] 2023-12-12 04:56:24.227 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
[INFO ] 2023-12-12 04:56:26.629 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
[ERROR] 2023-12-12 04:56:28.024 [Converge PipelineAction::Create<main>] agent - Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::ConfigurationError", :message=>"Expected one of [ \\t\\r\\n], \"#\", \"{\", [A-Za-z0-9_-], '\"', \"'\", \"}\" at line 12, column 1 (byte 159) after output {\n    elasticsearch {\n        hosts => \"localhost:9200\"\n        user => elastic\n        password => changeme\n", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:32:in `compile_imperative'", "org/logstash/execution/AbstractPipelineExt.java:189:in `initialize'", "org/logstash/execution/JavaBasePipelineExt.java:72:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:48:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:52:in `execute'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:392:in `block in converge_state'"]}
[INFO ] 2023-12-12 04:56:28.180 [LogStash::Runner] runner - Logstash shut down.

Thanks,

You are missing 2 closing braces }


Hi Stephen,

Thanks for the reply. I have made the changes in logstash.conf as per your suggestions.

input {
  syslog {
    port => 514
  }
}

output {
    elasticsearch {
        hosts => "localhost:9200"
        user => elastic
        password => changeme
  }
}

Below are the console logs when I ran with the new conf.

# bin/logstash -f /etc/logstash/conf.d/logstash.conf
Using bundled JDK: /usr/share/logstash/jdk
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[INFO ] 2023-12-12 07:02:38.727 [main] runner - Starting Logstash {"logstash.version"=>"7.17.13", "jruby.version"=>"jruby 9.2.20.1 (2.5.8) 2021-11-30 2a2962fbd1 OpenJDK 64-Bit Server VM 11.0.20+8 on 11.0.20+8 +indy +jit [linux-x86_64]"}
[INFO ] 2023-12-12 07:02:38.734 [main] runner - JVM bootstrap flags: [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djdk.io.File.enableADS=true, -Djruby.compile.invokedynamic=true, -Djruby.jit.threshold=0, -Djruby.regexp.interruptible=true, -XX:+HeapDumpOnOutOfMemoryError, -Djava.security.egd=file:/dev/urandom, -Dlog4j2.isThreadContextMapInheritable=true]
[WARN ] 2023-12-12 07:02:39.103 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
[INFO ] 2023-12-12 07:02:40.398 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9601, :ssl_enabled=>false}
[INFO ] 2023-12-12 07:02:41.704 [Converge PipelineAction::Create<main>] Reflections - Reflections took 102 ms to scan 1 urls, producing 119 keys and 419 values 
[WARN ] 2023-12-12 07:02:42.483 [Converge PipelineAction::Create<main>] plain - Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[WARN ] 2023-12-12 07:02:42.515 [Converge PipelineAction::Create<main>] syslog - Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[WARN ] 2023-12-12 07:02:42.661 [Converge PipelineAction::Create<main>] plain - Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[WARN ] 2023-12-12 07:02:42.723 [Converge PipelineAction::Create<main>] elasticsearch - Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[INFO ] 2023-12-12 07:02:42.805 [[main]-pipeline-manager] elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost:9200"]}
[INFO ] 2023-12-12 07:02:43.141 [[main]-pipeline-manager] elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elastic:xxxxxx@localhost:9200/]}}
[WARN ] 2023-12-12 07:02:43.500 [[main]-pipeline-manager] elasticsearch - Restored connection to ES instance {:url=>"http://elastic:xxxxxx@localhost:9200/"}
[INFO ] 2023-12-12 07:02:43.517 [[main]-pipeline-manager] elasticsearch - Elasticsearch version determined (7.17.13) {:es_version=>7}
[WARN ] 2023-12-12 07:02:43.519 [[main]-pipeline-manager] elasticsearch - Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[INFO ] 2023-12-12 07:02:43.603 [[main]-pipeline-manager] elasticsearch - Config is not compliant with data streams. `data_stream => auto` resolved to `false`
[INFO ] 2023-12-12 07:02:43.631 [Ruby-0-Thread-10: :1] elasticsearch - Config is not compliant with data streams. `data_stream => auto` resolved to `false`
[INFO ] 2023-12-12 07:02:43.711 [Ruby-0-Thread-10: :1] elasticsearch - Using a default mapping template {:es_version=>7, :ecs_compatibility=>:disabled}
[INFO ] 2023-12-12 07:02:43.762 [[main]-pipeline-manager] javapipeline - Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>250, "pipeline.sources"=>["/etc/logstash/conf.d/logstash.conf"], :thread=>"#<Thread:0x4ee1bf02 run>"}
[INFO ] 2023-12-12 07:02:44.577 [[main]-pipeline-manager] javapipeline - Pipeline Java execution initialization time {"seconds"=>0.81}
[INFO ] 2023-12-12 07:02:44.839 [[main]-pipeline-manager] javapipeline - Pipeline started {"pipeline.id"=>"main"}
[INFO ] 2023-12-12 07:02:44.878 [Ruby-0-Thread-17: :1] syslog - Starting syslog udp listener {:address=>"0.0.0.0:514"}
[INFO ] 2023-12-12 07:02:44.885 [Ruby-0-Thread-19: :1] syslog - Starting syslog tcp listener {:address=>"0.0.0.0:514"}
[INFO ] 2023-12-12 07:02:44.901 [Agent thread] agent - Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}

In the above output I can see these two lines:

[INFO ] 2023-12-12 07:02:43.603 [[main]-pipeline-manager] elasticsearch - Config is not compliant with data streams. data_stream => auto resolved to false
[INFO ] 2023-12-12 07:02:43.631 [Ruby-0-Thread-10: :1] elasticsearch - Config is not compliant with data streams. data_stream => auto resolved to false

Please suggest.

Thanks,

Those are just INFO and WARN messages; LS has started and is listening on port 514.
[INFO ] 2023-12-12 07:02:44.878 [Ruby-0-Thread-17: :1] syslog - Starting syslog udp listener {:address=>"0.0.0.0:514"}
[INFO ] 2023-12-12 07:02:44.885 [Ruby-0-Thread-19: :1] syslog - Starting syslog tcp listener {:address=>"0.0.0.0:514"}
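
As for the two data stream INFO lines: they only mean the output resolved `data_stream => auto` to false and will write to a regular index. If you want to make that explicit instead of relying on the auto resolution, here is a minimal sketch, assuming you intend to keep writing to classic indices:

output {
  elasticsearch {
    hosts => "localhost:9200"
    data_stream => false   # write to regular indices, not data streams
  }
}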

Note: For your case, you have to either:
a) use a port >1023, because on Linux LS runs as the "logstash" user and cannot bind privileged ports, or
b) run as the root user (or start it from the command line as a background process, with & at the end), in which case you can use port 514.
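
For option a), a syslog input on an unprivileged port could look like this (5514 is just an example; any free port above 1023 works):

input {
  syslog {
    port => 5514   # no root needed for ports above 1023
  }
}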

Hi Rios,

Thanks. Can you please let me know how to verify the syslog data from the Kibana GUI, and what changes need to be made on the GUI end?

Thanks,

You can simply add stdout to the output:

output {
  stdout {
    codec => rubydebug{}
  }
  elasticsearch {....

Connect with telnet to the LS host on port 514 and send a test message.
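
For example, assuming the listener is on port 514 and replacing <ls-host> with the Logstash host:

# util-linux logger: -n server, -P port, -T for TCP (use -d for UDP)
logger -n <ls-host> -P 514 -T "test message from logger"
# or push a raw syslog line with netcat
echo '<13>Dec 13 12:00:00 testhost testapp: hello' | nc <ls-host> 514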

Hello Rios

I have modified the LS conf as per the above. But I did not follow what the corresponding changes in the Kibana GUI are, i.e. where I can see the data related to the changes below.
I need some guidance on the GUI-related configuration as well.

input {
  syslog {
    port => 514
  }
}

output {
  stdout {
    codec => rubydebug{}
  }
  elasticsearch {
    hosts => "localhost:9200"
    user => elastic
    password => changeme
  }
}

After loading the new LS conf:

# bin/logstash -f /etc/logstash/conf.d/logstash.conf
Using bundled JDK: /usr/share/logstash/jdk
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[INFO ] 2023-12-13 04:32:07.734 [main] runner - Starting Logstash {"logstash.version"=>"7.17.13", "jruby.version"=>"jruby 9.2.20.1 (2.5.8) 2021-11-30 2a2962fbd1 OpenJDK 64-Bit Server VM 11.0.20+8 on 11.0.20+8 +indy +jit [linux-x86_64]"}
[INFO ] 2023-12-13 04:32:07.740 [main] runner - JVM bootstrap flags: [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djdk.io.File.enableADS=true, -Djruby.compile.invokedynamic=true, -Djruby.jit.threshold=0, -Djruby.regexp.interruptible=true, -XX:+HeapDumpOnOutOfMemoryError, -Djava.security.egd=file:/dev/urandom, -Dlog4j2.isThreadContextMapInheritable=true]
[WARN ] 2023-12-13 04:32:08.056 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
[INFO ] 2023-12-13 04:32:09.309 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9601, :ssl_enabled=>false}
[INFO ] 2023-12-13 04:32:10.480 [Converge PipelineAction::Create<main>] Reflections - Reflections took 79 ms to scan 1 urls, producing 119 keys and 419 values 
[WARN ] 2023-12-13 04:32:11.289 [Converge PipelineAction::Create<main>] plain - Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[WARN ] 2023-12-13 04:32:11.324 [Converge PipelineAction::Create<main>] syslog - Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[WARN ] 2023-12-13 04:32:11.809 [Converge PipelineAction::Create<main>] plain - Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[WARN ] 2023-12-13 04:32:11.880 [Converge PipelineAction::Create<main>] elasticsearch - Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[INFO ] 2023-12-13 04:32:11.978 [[main]-pipeline-manager] elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost:9200"]}
[INFO ] 2023-12-13 04:32:12.358 [[main]-pipeline-manager] elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elastic:xxxxxx@localhost:9200/]}}
[WARN ] 2023-12-13 04:32:12.662 [[main]-pipeline-manager] elasticsearch - Restored connection to ES instance {:url=>"http://elastic:xxxxxx@localhost:9200/"}
[INFO ] 2023-12-13 04:32:12.688 [[main]-pipeline-manager] elasticsearch - Elasticsearch version determined (7.17.13) {:es_version=>7}
[WARN ] 2023-12-13 04:32:12.690 [[main]-pipeline-manager] elasticsearch - Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[INFO ] 2023-12-13 04:32:12.770 [[main]-pipeline-manager] elasticsearch - Config is not compliant with data streams. `data_stream => auto` resolved to `false`
[INFO ] 2023-12-13 04:32:12.861 [Ruby-0-Thread-10: :1] elasticsearch - Using a default mapping template {:es_version=>7, :ecs_compatibility=>:disabled}
[INFO ] 2023-12-13 04:32:12.915 [[main]-pipeline-manager] javapipeline - Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>250, "pipeline.sources"=>["/etc/logstash/conf.d/logstash.conf"], :thread=>"#<Thread:0x11ab6fba run>"}
[INFO ] 2023-12-13 04:32:13.838 [[main]-pipeline-manager] javapipeline - Pipeline Java execution initialization time {"seconds"=>0.91}
[INFO ] 2023-12-13 04:32:13.986 [[main]-pipeline-manager] javapipeline - Pipeline started {"pipeline.id"=>"main"}
[INFO ] 2023-12-13 04:32:14.038 [Ruby-0-Thread-16: :1] syslog - Starting syslog udp listener {:address=>"0.0.0.0:514"}
[INFO ] 2023-12-13 04:32:14.040 [Agent thread] agent - Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[INFO ] 2023-12-13 04:32:14.050 [Ruby-0-Thread-17: :1] syslog - Starting syslog tcp listener {:address=>"0.0.0.0:514"}

/var/log/syslog

[2023-12-13T04:25:53,917][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
[2023-12-13T04:25:58,440][INFO ][org.reflections.Reflections] Reflections took 202 ms to scan 1 urls, producing 119 keys and 419 values
[2023-12-13T04:26:01,451][WARN ][logstash.outputs.elasticsearch] You are using a deprecated config setting "document_type" set in elasticsearch. Deprecated settings will continue to work, but are scheduled for removal from logstash in the future. Document types are being deprecated in Elasticsearch 6.0, and removed entirely in 7.0. You should avoid this feature If you have any questions about this, please visit the #logstash channel on freenode irc. {:name=>"document_type", :plugin=><LogStash::Outputs::ElasticSearch index=>"syslog-%{+YYYY.MM.dd}", id=>"c06041a63704c44c0c09686a6b4a81fd3ca515d47aa640d929831cb0ac3e1a2f", hosts=>[//localhost:9200], data_stream=>"auto", document_type=>"syslog", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_f48b5b85-136e-49d1-9c91-b88d81cf1e5d", enable_metric=>true, charset=>"UTF-8">, workers=>1, ssl_certificate_verification=>true, sniffing=>false, sniffing_delay=>5, timeout=>60, pool_max=>1000, pool_max_per_route=>100, resurrect_delay=>5, validate_after_inactivity=>10000, http_compression=>false, retry_initial_interval=>2, retry_max_interval=>64, data_stream_type=>"logs", data_stream_dataset=>"generic", data_stream_namespace=>"default", data_stream_sync_fields=>true, data_stream_auto_routing=>true, manage_template=>true, template_overwrite=>false, doc_as_upsert=>false, script_type=>"inline", script_lang=>"painless", script_var_name=>"event", scripted_upsert=>false, retry_on_conflict=>1, ilm_enabled=>"auto", ilm_pattern=>"{now/d}-000001", ilm_policy=>"logstash-policy">}
[2023-12-13T04:26:02,166][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://localhost:9201"]}
[2023-12-13T04:26:02,968][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elastic:xxxxxx@localhost:9201/]}}
[2023-12-13T04:26:03,400][INFO ][logstash.outputs.elasticsearch][main] Failed to perform request {:message=>"Connect to localhost:9201 [localhost/127.0.0.1] failed: Connection refused (Connection refused)", :exception=>Manticore::SocketException, :cause=>org.apache.http.conn.HttpHostConnectException: Connect to localhost:9201 [localhost/127.0.0.1] failed: Connection refused (Connection refused)}
[2023-12-13T04:26:03,419][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://elastic:xxxxxx@localhost:9201/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [http://localhost:9201/][Manticore::SocketException] Connect to localhost:9201 [localhost/127.0.0.1] failed: Connection refused (Connection refused)"}

You are using port 9201 instead of 9200
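
One way to find where 9201 comes from, assuming the service is reading everything under /etc/logstash/conf.d/:

grep -rn "9201" /etc/logstash/conf.d/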

Hi

In the LS config file it is 9200 only.

input {
  syslog {
    port => 514
  }
}

output {
  stdout {
    codec => rubydebug{}
  }
  elasticsearch {
    hosts => "localhost:9200"
    user => elastic
    password => changeme
  }
}

OK, LS is up and running, waiting for (test) messages.

If you still haven't received any messages, the issue might be related to a (local) firewall.
Check with tcpdump whether the traffic is arriving.
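
For example, assuming the listener is still on port 514 (adjust the port if you moved it):

sudo tcpdump -ni any port 514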

Hello

I have allowed port 9200, but still no luck on verifying this integration in the Kibana GUI. Please suggest.

# sudo ufw allow 9200
Rules updated
Rules updated (v6)

However, when I run a curl command to check Elasticsearch reachability, I get the output below.

Another question: does the syslog input plugin work with ELK stack version 7.17.13?

curl -X GET "localhost:9200"
{
  "name" : "ELK-test",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "8HvYlQx1TiGPOkGwFqVVgQ",
  "version" : {
    "number" : "7.17.13",
    "build_flavor" : "default",
    "build_type" : "deb",
    "build_hash" : "2b211dbb8bfdecaf7f5b44d356bdfe54b1050c13",
    "build_date" : "2023-08-31T17:33:19.958690787Z",
    "build_snapshot" : false,
    "lucene_version" : "8.11.1",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}

Hello,

I need some inputs, as I am facing the below 2 issues after implementing the syslog input plugin with Logstash.

# bin/logstash-plugin list --verbose | grep logstash-input-syslog

OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.

logstash-input-syslog (3.7.0)

  1. Currently facing challenges with the GUI, where the syslog data should be populated but is not showing up.
  2. A new index pattern for Logstash was created from Stack Management in the Kibana GUI, but still no data/logs are being pushed (a quick check is shown below).
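
(A quick check for whether any events reached Elasticsearch at all is the _cat/indices API; add -u elastic:<password> if authentication is required:)

curl -s "localhost:9200/_cat/indices?v"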

The related configuration was done in logstash.conf (below for reference).

input {
  syslog {
    port => 5144
  }
}

output {
  stdout {
    codec => rubydebug{}
  }
  elasticsearch {
    hosts => ["http://localhost:9200"]
    user => elastic
    password => changeme
  }
}

/var/log/syslog

Dec 13 11:42:03 ELK-test logstash[400]: [2023-12-13T11:42:03,827][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}
Dec 13 11:42:03 ELK-test logstash[400]: [2023-12-13T11:42:03,931][INFO ][logstash.inputs.syslog   ][main][dacc1ecb8970fa9f3ef090c85ab3dc03c3e00fcd20c6a7a731b20a466e80bf89] Starting syslog udp listener {:address=>"0.0.0.0:5144"}
Dec 13 11:42:03 ELK-test logstash[400]: [2023-12-13T11:42:03,934][INFO ][logstash.inputs.syslog   ][main][dacc1ecb8970fa9f3ef090c85ab3dc03c3e00fcd20c6a7a731b20a466e80bf89] Starting syslog tcp listener {:address=>"0.0.0.0:5144"}
Dec 13 11:42:04 ELK-test logstash[400]: [2023-12-13T11:42:04,098][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
Dec 13 11:42:04 ELK-test logstash[400]: [2023-12-13T11:42:04,146][INFO ][org.logstash.beats.Server][main][8126dc8cc7380abf418fc63cc8318aced31909e211a9c38062d2f4c3ae69315f] Starting server on port: 5044

Thanks,

Hello,

Question: Do we need to have Logstash installed on the client servers as well when there is a Logstash service running on the main ELK server?

The main purpose is to ship the syslog from the client server to the main ELK where Logstash is running.

I have implemented syslog input plugin with Logstash.

Currently, for testing, only when I telnet to the main ELK server and push some log data do I see logs in the Kibana GUI under the Discover tab (by selecting logstash from the drop-down).

Thanks,

Question: Do we need to have Logstash installed on the client servers as well when there is a Logstash service running on the main ELK server?

No, you don't need Logstash on the client servers.
What is on the client server side? Linux, Windows or network devices?
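
(If they turn out to be Linux boxes running rsyslog, forwarding can be as simple as a drop-in file like this sketch; the file name, host, and port are assumptions, and @@ forwards over TCP while a single @ uses UDP:)

# /etc/rsyslog.d/90-forward-to-logstash.conf   (hypothetical drop-in file)
*.* @@elk-host:5144
# then: sudo systemctl restart rsyslog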

I have implemented syslog input plugin with Logstash.

Yes it's visible from the log: Starting syslog tcp listener {:address=>"0.0.0.0:5144"}

Currently, for testing, only when I telnet to the main ELK server and push some log data do I see logs in the Kibana GUI under the Discover tab (by selecting logstash from the drop-down).

Add an index name and the data will go to your index.

   elasticsearch {
     hosts => ["http://localhost:9200"]
     index => "yourname_%{+YYYYMM}" # will be in format: yourname_202312
     user => elastic
     password => changeme
   }
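
Once events arrive, you can confirm the index exists and is growing (the name pattern below follows the example above; add -u elastic:<password> if authentication is required):

curl -s "localhost:9200/_cat/indices/yourname_*?v"

Then create a matching index pattern (yourname_*) in Kibana under Stack Management > Index Patterns and the data will show up in Discover.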