Logstash pipeline problem

So I need a Logstash setup where I have two (or possibly even more) log files, each with its own input and filters (the output will always go to Elasticsearch and then be visualized in Kibana).
I saw that this is very doable with the pipelines.yml configuration.
I've done this for two logs (two conf files in the default config folder), and the pipeline setup looks like this:

- pipeline.id: filter1
  path.config: "/etc/logstash/conf.d/log_conf.conf"
  pipeline.workers: 3
- pipeline.id: filter2
  path.config: "/etc/logstash/conf.d/log_conf2.conf"
  queue.type: persisted
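
The two conf files themselves never appear verbatim in this thread, but judging from the plugin settings dumped in the service log further down, each presumably has the usual input/filter/output shape, roughly like this (a sketch, not the actual files; the file path is taken from the error log below and the output host from the startup log):

input {
  file {
    path => "/home/robert.poenaru/elk/arc_slurm_jobs.txt"
    start_position => "beginning"
    sincedb_path => "NULL"
  }
}
filter {
  # grok/date/mutate filters for this log format, not shown in the thread
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
  }
}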

I'm running my development project on a CentOS 7 machine at work. I tested this pipeline setup by going to /usr/share/logstash/ and running ./bin/logstash to see if it works. The pipelines.yml file is in the correct folder, and so are the config files. Everything works: the data is sent from Logstash to Elasticsearch and I can see it in Kibana.
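
For a quicker syntax check of a single config file, without actually starting the pipelines, Logstash's standard --config.test_and_exit flag can be used:

./bin/logstash -f /etc/logstash/conf.d/log_conf.conf --config.test_and_exit

Note that passing -f makes Logstash ignore pipelines.yml, so this validates only the one file.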

But here comes the problem: I want my ELK setup running all the time, so I want to do systemctl start logstash.service and let the Logstash pipelines work in the background, without having to start them from the binaries. However, when the service starts, no more data reaches Kibana; it's as if the logs are never sent to Elasticsearch at all.
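
For the record, having the service start at boot as well is plain systemd usage, nothing Logstash-specific:

sudo systemctl enable logstash.service
sudo systemctl start logstash.service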
I checked the logs for the Logstash service with systemctl status logstash.service, and here is what I got:

Oct 23 18:14:52 elk.nipne.ro logstash[31677]: Pipeline_id:filter2
Oct 23 18:14:52 elk.nipne.ro logstash[31677]: Plugin: <LogStash::Inputs::File start_position=>"beginning", path=>["/home/robert.poenaru/elk/arc_slurm_jobs.txt"], id=>"0c16b9ff1fe1aaca45b6072f213460ef2e62993c22d521cccd05ebe4e4d66e1b", sincedb_path=>"NULL", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_7cca3f5f-6719-4396-b596-bf12ac253b57", enable_metric=>true, charset=>"UTF-8">, stat_interval=>1.0, discover_interval=>15, sincedb_write_interval=>15.0, delimiter=>"\n", close_older=>3600.0, mode=>"tail", file_completed_action=>"delete", sincedb_clean_after=>1209600.0, file_chunk_size=>32768, file_chunk_count=>140737488355327, file_sort_by=>"last_modified", file_sort_direction=>"asc">
Oct 23 18:14:52 elk.nipne.ro logstash[31677]: Error: Permission denied - NULL
Oct 23 18:14:52 elk.nipne.ro logstash[31677]: Exception: Errno::EACCES
Oct 23 18:14:52 elk.nipne.ro logstash[31677]: Stack: org/jruby/RubyIO.java:1237:in `sysopen'
Oct 23 18:14:52 elk.nipne.ro logstash[31677]: org/jruby/RubyFile.java:367:in `initialize'
Oct 23 18:14:52 elk.nipne.ro logstash[31677]: org/jruby/RubyIO.java:1156:in `open'
Oct 23 18:14:52 elk.nipne.ro logstash[31677]: uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/fileutils.rb:1136:in `block in touch'
Oct 23 18:14:52 elk.nipne.ro logstash[31677]: org/jruby/RubyArray.java:1800:in `each'
Oct 23 18:14:52 elk.nipne.ro logstash[31677]: uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/fileutils.rb:1130:in `touch'
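
(Side note: systemctl status only shows the tail of the journal. If more of the trace is needed, the full service log can be read with standard journalctl, and on an RPM install Logstash also keeps its own log under /var/log/logstash/:

journalctl -u logstash.service --no-pager

)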

Why is there a permission problem? I run the ELK stack with sudo rights. Also, here is the structure of the Logstash directory under /etc/:

[root@elk logstash]# pwd
/etc/logstash
[root@elk logstash]# tree -h
.
├── [  49]  conf.d (two simple configs for reading logs from two files)
│   ├── [ 503]  log_conf2.conf  
│   └── [ 502]  log_conf.conf
├── [2.0K]  jvm.options
├── [4.9K]  log4j2.properties
├── [ 342]  logstash-sample.conf
├── [8.1K]  logstash.yml
├── [ 578]  pipelines.yml  (contents given above)
└── [1.7K]  startup.options

Any ideas what the issue is? How can I make the pipelines work through the logstash.service process instead of running Logstash from the bin/ directory?

Thank you in advance 🙂

On UNIX, sincedb_path => "NULL" will create a file called NULL in whichever directory Logstash is running in, and when running as a service that directory may well not be writable. If you do not want to persist the sincedb across restarts, use NUL on Windows and /dev/null on UNIX.
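
In other words, the input block would become something like this (a minimal sketch; the path and other settings are the ones from the plugin dump above):

input {
  file {
    path => "/home/robert.poenaru/elk/arc_slurm_jobs.txt"
    start_position => "beginning"
    # writes to /dev/null are discarded, so the sincedb is never persisted
    # and the file is re-read from the beginning on every restart
    sincedb_path => "/dev/null"
  }
}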

If that's not the problem, please supply more of the stack trace, at least to the point where we can see which part of the filter is raising the error.

OK. As a first step, since I'm on UNIX, I've changed the sincedb path to /dev/null in both config files, and now the service starts with NO errors:

[2019-10-24T07:58:24,683][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"7.4.1"}
[2019-10-24T07:58:28,869][INFO ][org.reflections.Reflections] Reflections took 68 ms to scan 1 urls, producing 20 keys and 40 values 
[2019-10-24T07:58:33,896][INFO ][logstash.outputs.elasticsearch][filter2] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2019-10-24T07:58:33,894][INFO ][logstash.outputs.elasticsearch][filter1] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2019-10-24T07:58:34,283][WARN ][logstash.outputs.elasticsearch][filter2] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2019-10-24T07:58:34,287][WARN ][logstash.outputs.elasticsearch][filter1] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2019-10-24T07:58:34,375][INFO ][logstash.outputs.elasticsearch][filter1] ES Output version determined {:es_version=>7}
[2019-10-24T07:58:34,385][INFO ][logstash.outputs.elasticsearch][filter2] ES Output version determined {:es_version=>7}
[2019-10-24T07:58:34,386][WARN ][logstash.outputs.elasticsearch][filter1] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-10-24T07:58:34,386][WARN ][logstash.outputs.elasticsearch][filter2] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-10-24T07:58:34,450][INFO ][logstash.outputs.elasticsearch][filter1] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost:9200"]}
[2019-10-24T07:58:34,454][INFO ][logstash.outputs.elasticsearch][filter2] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost:9200"]}
[2019-10-24T07:58:34,855][INFO ][logstash.outputs.elasticsearch][filter2] Creating rollover alias <logstash-{now/d}-000001>
[2019-10-24T07:58:34,861][INFO ][logstash.outputs.elasticsearch][filter1] Creating rollover alias <logstash-{now/d}-000001>
[2019-10-24T07:58:35,089][WARN ][org.logstash.instrument.metrics.gauge.LazyDelegatingGauge][filter1] A gauge metric of an unknown type (org.jruby.specialized.RubyArrayOneObject) has been create for key: cluster_uuids. This may result in invalid serialization.  It is recommended to log an issue to the responsible developer/development team.
[2019-10-24T07:58:35,091][WARN ][org.logstash.instrument.metrics.gauge.LazyDelegatingGauge][filter2] A gauge metric of an unknown type (org.jruby.specialized.RubyArrayOneObject) has been create for key: cluster_uuids. This may result in invalid serialization.  It is recommended to log an issue to the responsible developer/development team.
[2019-10-24T07:58:35,094][INFO ][logstash.javapipeline    ][filter1] Starting pipeline {:pipeline_id=>"filter1", "pipeline.workers"=>3, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>375, :thread=>"#<Thread:0xf8e0620 run>"}
[2019-10-24T07:58:35,097][INFO ][logstash.javapipeline    ][filter2] Starting pipeline {:pipeline_id=>"filter2", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, :thread=>"#<Thread:0x637b075@/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:38 run>"}
[2019-10-24T07:58:35,179][INFO ][logstash.outputs.elasticsearch][filter1] Rollover Alias <logstash-{now/d}-000001> already exists. Skipping
[2019-10-24T07:58:36,193][INFO ][logstash.javapipeline    ][filter1] Pipeline started {"pipeline.id"=>"filter1"}
[2019-10-24T07:58:36,193][INFO ][logstash.javapipeline    ][filter2] Pipeline started {"pipeline.id"=>"filter2"}
[2019-10-24T07:58:36,361][INFO ][filewatch.observingtail  ][filter2] START, creating Discoverer, Watch with file and sincedb collections
[2019-10-24T07:58:36,365][INFO ][filewatch.observingtail  ][filter1] START, creating Discoverer, Watch with file and sincedb collections
[2019-10-24T07:58:36,482][INFO ][logstash.agent           ] Pipelines running {:count=>2, :running_pipelines=>[:filter2, :filter1], :non_running_pipelines=>[]}
[2019-10-24T07:58:37,059][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}

However, the data is still not reaching Kibana. There is an index pattern if I go to Elasticsearch management, but its document count is always zero, no matter how much content I add to those two log files.

Remember that the log files are picked up properly if I start Logstash from the binaries folder.
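
One difference worth ruling out (an assumption, not something confirmed in this thread): the systemd unit runs Logstash as the logstash user rather than root, so that user needs read access to the watched files and execute permission on every directory above them, including /home/robert.poenaru/. A quick check:

sudo -u logstash head -n 1 /home/robert.poenaru/elk/arc_slurm_jobs.txt

If that fails with a permission error, the service simply cannot see the files that worked when Logstash was started by hand as root.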

So what could be the issue now?

Impossible to say without seeing the pipeline configurations.
