How to send data via Logstash

Hello. This should be very common, but I cannot send data to Elasticsearch, or I cannot find it there. Either way, I am having trouble using the data.

What I am doing: Filebeat sends a collection of log files to Logstash. Currently it is one log file that gets passed to Logstash and is then transformed (I will paste the .conf further down).

I use two commands for this:

filebeat -e #in the first shell
/usr/share/logstash/bin/logstash #in the second shell

Filebeat connects to Logstash, and Logstash confirms that the server has started on port x. After that, nothing happens.

I used the filebeat index in the .conf, but the Dashboard does not show any data.
Here is the .conf:

input {
  beats {
    host => "IP"
    port => 5044
    ssl => true
    ssl_key => '/etc/logstash/pkcs8.key'
    ssl_certificate => '/etc/logstash/elastic4.crt'
  }
}

filter {
  if "login" not in [log_message] {
    drop {} # Drop every event whose log_message does not contain "login"
  }
}

output {
  elasticsearch {
    hosts => ["IP:9200"]
    cacert => '/etc/logstash/ca.crt'
    user => '?'
    password => '?'
    index => "filebeat-%{[@metadata][target_index]}"
  }
}

The problem is: I sent the data to Elasticsearch via Filebeat first. That worked just fine, and I can therefore see the field name "log_message" there. Logstash, however, does not work.

Please let me know if you need more Information.

Can you show us your filebeat.yml file?


Please take a look at this link; it explains how to configure it properly.


You need to run filebeat setup correctly and then use a properly configured logstash.conf if you want it all to work together.

Thank you @stephenb. I followed this tutorial, but I will delete the indices and then set everything up again.
I get an error message when I try to run filebeat setup while the Logstash output is enabled, so do I have to enable the Elasticsearch output each time, or is there another way?
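In case it helps: filebeat setup can usually be pointed at Elasticsearch temporarily without editing filebeat.yml, by overriding the outputs on the command line with -E (the host, username, and password below are placeholders, not values from this thread):

# Temporarily disable the Logstash output and point setup at Elasticsearch
filebeat setup -e \
  -E output.logstash.enabled=false \
  -E output.elasticsearch.hosts=['https://IP:9200'] \
  -E output.elasticsearch.username=elastic \
  -E output.elasticsearch.password=PASSWORD

The overrides only apply to that one command, so the Logstash output in filebeat.yml stays enabled for normal runs.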

And is there a filter that lets Logstash distinguish between incoming files? Or can I only filter the content of a file?

I found the "file" input plugin for Logstash. Could I just create a file input for each log file and thereby deactivate Filebeat?
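For reference, that is possible: the file input can be declared once per log file, and each input can tag its events so that filters (and the index name) can tell them apart later. A minimal sketch, with example paths and type names only:

input {
  file {
    path => "/var/log/elasticsearch/sysops.log"
    type => "sysops"   # becomes the "type" field on each event
  }
  file {
    path => "/var/log/kibana/kibana.log"
    type => "kibana"
  }
}

filter {
  if [type] == "sysops" {
    # filters that should apply only to sysops.log events
  }
}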

filebeat.inputs:
- type: log
  enabled: false
  paths:
    - /var/log/*.log
    - /var/log/elasticsearch/sysops.log
    - /var/log/kibana/kibana.log
- type: filestream
  enabled: false
  paths:
    - /var/log/*.log

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

setup.template.settings:
  index.number_of_shards: 1

setup.kibana:
  host: "https://IP:5601"

output.logstash:
  hosts: ["IP:5044"]
  ssl.certificate_authorities: ["/etc/filebeat/ca.crt"]
  ssl.certificate: "/etc/filebeat/elastic.crt"
  ssl.key: "/etc/filebeat/elastic.key"

processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

I tested it now and it is still not working.

input {
  file {
    path => "/var/log/storage.log"
  }
}

filter {
  mutate {
    add_field => { "token" => "aaWTINmMspBUetRoGUrxEApzQkkoMWMn" }
  }
}

output {
  elasticsearch {
    hosts => "https://IP:9200"
    cacert => '/etc/logstash/ca.crt'
    user => 'elastic'
    password => '?'
    index => "logstash-%{+yyyy.MM.dd}"
  }
}

Where the token is just for test purposes.
After running:

/usr/share/logstash/bin/logstash -f sample.conf 

I get the following output:

[2021-11-25T14:29:55,723][INFO ][logstash.javapipeline    ][.monitoring-logstash] Starting pipeline {:pipeline_id=>".monitoring-logstash", "pipeline.workers"=>1, "pipeline.batch.size"=>2, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>2, "pipeline.sources"=>["monitoring pipeline"], :thread=>"#<Thread:0x5ac8bcc4@/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:54 run>"}
[2021-11-25T14:29:55,770][INFO ][logstash.javapipeline    ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, "pipeline.sources"=>["/etc/logstash/conf.d/sample.conf"], :thread=>"#<Thread:0x6fe8d3fc run>"}
[2021-11-25T14:29:56,392][INFO ][logstash.javapipeline    ][main] Pipeline Java execution initialization time {"seconds"=>0.62}
[2021-11-25T14:29:56,392][INFO ][logstash.javapipeline    ][.monitoring-logstash] Pipeline Java execution initialization time {"seconds"=>0.66}
[2021-11-25T14:29:56,412][INFO ][logstash.javapipeline    ][.monitoring-logstash] Pipeline started {"pipeline.id"=>".monitoring-logstash"}
[2021-11-25T14:29:56,447][INFO ][logstash.inputs.file     ][main] No sincedb_path set, generating one based on the "path" setting {:sincedb_path=>"/var/lib/logstash/plugins/inputs/file/.sincedb_3f716e4906133c5c4a844e6e2fd683aa", :path=>["/var/log/storage.log"]}
[2021-11-25T14:29:56,457][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}
[2021-11-25T14:29:56,481][INFO ][logstash.agent           ] Pipelines running {:count=>2, :running_pipelines=>[:".monitoring-logstash", :main], :non_running_pipelines=>[]}
[2021-11-25T14:29:56,500][INFO ][filewatch.observingtail  ][main][575211374a3c9605522338c6a0b111b10d9f521c826d4a838154354dc24f2feb] START, creating Discoverer, Watch with file and sincedb collections

It does not move past this last line, even if I wait for 15 minutes.

I SHOULD find the data under the logstash-* Index, right?
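For reference, one quick way to check whether that index exists and holds documents (assuming the elastic user and the CA certificate from the config above) is the cat indices API:

curl --cacert /etc/logstash/ca.crt -u elastic \
  "https://IP:9200/_cat/indices/logstash-*?v"

If nothing is listed, no logstash-* index has been created yet, which points at the pipeline rather than at Kibana.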

I'm not sure you can have two outputs.

If you send the data to Logstash, you have to remove the output.elasticsearch section from filebeat.yml. And vice versa.

No, you can't. I did not mean to copy that in, sorry.

Is the log "/var/log/storage.log" still being written? The file input in logstash per default will tail the file for new events, if you want to read it from the beginning you need to set start_position => "beginning".
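A minimal sketch of that file input for a test run against a file that is no longer being written (sincedb_path => "/dev/null" is optional and only there so the file is re-read on every restart):

input {
  file {
    path => "/var/log/storage.log"
    start_position => "beginning"   # read the file from the start, not only new lines
    sincedb_path => "/dev/null"     # do not remember the read position between runs
  }
}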

Thank you @leandrojmp. No, it is not being written anymore; it is just some log file that I wanted to use for testing. I added start_position to my .conf, sadly without success.

Is it right to say that, if my log file and .conf file are okay, the only remaining source of error could be logstash.yml? Or could there be another random error?

It worked! I did not change anything, but only 3 of the 88 lines were added. The next task will be to find out why only 3 lines were added. Thanks for the help, everyone.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.