Impossible to create a second index

Hi,

(Sorry, I am not a native English speaker. I am French, so please be patient with my English :)).

I created a configuration to test Logstash. It ships my local syslog to Kibana, and that works. But when I try to do the same thing with "/var/log/cron.log", it doesn't work. Actually, if I start it with --debug -f, I see it loop endlessly, without pinpointing anything (or printing so much that it tells me nothing about the problem).


To launch the configuration, I use: /usr/share/logstash/bin/logstash --verbose -f 00_input.conf


My "input" configuration:

root@Big-Monster:/etc/logstash/conf.d# cat 00_input.conf
input {
  file {
    id => "TEST-Syslog"
    path => [ "/var/log/syslog" ]
  }

  file {
    id => "TEST-Cron"
    path => [ "/var/log/cron.log" ]
  }
}

My "output" configuration:

root@Big-Monster:/etc/logstash/conf.d# cat 99_output.conf
output {
  elasticsearch {
    id => "TEST-output-Syslog"
    hosts => [ "127.0.0.1" ]
    index => "syslog-%{+YYYY.MM.dd}"
    }
}

My indices, so far:

root@Big-Monster:/var/log/logstash# curl -XGET 'localhost:9200/_cat/indices?v'
health status index                uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   syslog-2023.11.21    cSp0rE4eRP20gd2eKSUtWw   5   1     163983            0     49.2mb         49.2mb
green  open   .kibana_task_manager _uJwCVxeSaiyRCSOXAqAIQ   1   0          2            0     12.6kb         12.6kb
green  open   .kibana_1            ML69D0WfRaSC3v1-aZpr_A   1   0          6            1     30.2kb         30.2kb
yellow open   syslog-2023.11.20    BPSotvnQTVaFqeUGFoEz_g   5   1     206245            0     61.6mb         61.6mb

My "verbose" output, so far:

root@Big-Monster:/etc/logstash/conf.d# /usr/share/logstash/bin/logstash --verbose -f 00_input.conf
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by com.headius.backport9.modules.Modules (file:/usr/share/logstash/logstash-core/lib/jars/jruby-complete-9.2.7.0.jar) to field java.io.FileDescriptor.fd
WARNING: Please consider reporting this to the maintainers of com.headius.backport9.modules.Modules
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[WARN ] 2023-11-21 12:30:44.299 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
[INFO ] 2023-11-21 12:30:44.310 [LogStash::Runner] runner - Starting Logstash {"logstash.version"=>"6.8.23"}
[INFO ] 2023-11-21 12:30:47.024 [Converge PipelineAction::Create<main>] pipeline - Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[INFO ] 2023-11-21 12:30:47.368 [[main]-pipeline-manager] file - No sincedb_path set, generating one based on the "path" setting {:sincedb_path=>"/usr/share/logstash/data/plugins/inputs/file/.sincedb_f5fdf6ea0ea92860c6a6b2b354bfcbbc", :path=>["/var/log/syslog"]}
[INFO ] 2023-11-21 12:30:47.415 [[main]-pipeline-manager] file - No sincedb_path set, generating one based on the "path" setting {:sincedb_path=>"/usr/share/logstash/data/plugins/inputs/file/.sincedb_9b0a58bc044ee19bc5c8f85111fa6dce", :path=>["/var/log/cron.log"]}
[INFO ] 2023-11-21 12:30:47.449 [Converge PipelineAction::Create<main>] pipeline - Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x21c6662c run>"}
[INFO ] 2023-11-21 12:30:47.517 [[main]<file] observingtail - START, creating Discoverer, Watch with file and sincedb collections
[INFO ] 2023-11-21 12:30:47.529 [[main]<file] observingtail - START, creating Discoverer, Watch with file and sincedb collections
[INFO ] 2023-11-21 12:30:47.523 [Ruby-0-Thread-1: /usr/share/logstash/lib/bootstrap/environment.rb:6] agent - Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[INFO ] 2023-11-21 12:30:48.074 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9601}

(The process never seems to finish from here; I have to interrupt it with Ctrl-C.)

My service status:

root@Big-Monster:/etc/logstash/conf.d# service logstash status -l
● logstash.service - logstash
     Loaded: loaded (/etc/systemd/system/logstash.service; disabled; vendor preset: enabled)
     Active: active (running) since Mon 2023-11-20 17:15:09 CET; 19h ago
   Main PID: 322827 (java)
      Tasks: 36 (limit: 4498)
     Memory: 733.1M
        CPU: 20min 2.002s
     CGroup: /system.slice/logstash.service
             └─322827 /bin/java -Xms1g -Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -Djava.awt.he>

nov. 20 17:15:29 Big-Monster logstash[322827]: [2023-11-20T17:15:29,754][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
nov. 20 17:15:29 Big-Monster logstash[322827]: [2023-11-20T17:15:29,758][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type`>
nov. 20 17:15:29 Big-Monster logstash[322827]: [2023-11-20T17:15:29,789][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash:>
nov. 20 17:15:29 Big-Monster logstash[322827]: [2023-11-20T17:15:29,806][INFO ][logstash.outputs.elasticsearch] Using default mapping template
nov. 20 17:15:29 Big-Monster logstash[322827]: [2023-11-20T17:15:29,841][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_temp>
nov. 20 17:15:30 Big-Monster logstash[322827]: [2023-11-20T17:15:30,126][INFO ][logstash.inputs.file     ] No sincedb_path set, generating one based on the >
nov. 20 17:15:30 Big-Monster logstash[322827]: [2023-11-20T17:15:30,159][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>"mai>
nov. 20 17:15:30 Big-Monster logstash[322827]: [2023-11-20T17:15:30,200][INFO ][filewatch.observingtail  ] START, creating Discoverer, Watch with file and s>
nov. 20 17:15:30 Big-Monster logstash[322827]: [2023-11-20T17:15:30,211][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>
nov. 20 17:15:30 Big-Monster logstash[322827]: [2023-11-20T17:15:30,486][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port>

Note: I also tried to create separate pipelines instead of using a single input file. For that, I updated the pipelines.yml file, but that doesn't work either.

Anyway, for now, I have this in pipelines.yml:

root@Big-Monster:/etc/logstash/conf.d# cat ../pipelines.yml
# This file is where you define your pipelines. You can define multiple.
# For more information on multiple pipelines, see the documentation:
#   https://www.elastic.co/guide/en/logstash/current/multiple-pipelines.html

- pipeline.id: main
  path.config: "/etc/logstash/conf.d/*.conf"

Could someone help me? I am a novice with ELK. The application is powerful, but not so easy to understand :slight_smile:

Best regards,
Chris

Hi,

I am adding an extract of the debug test (with --debug -f). Note that when I start a debug run, it loops.

[DEBUG] 2023-11-21 13:29:48.337 [[main]>worker2] pipeline - output received {"event"=>{"path"=>"/var/log/syslog", "host"=>"Big-Monster", "@timestamp"=>2023-11-21T12:29:48.229Z, "message"=>"Nov 21 13:29:48 Big-Monster snapd-desktop-i[2595437]: Is a desktop session! Forcing a reload.", "@version"=>"1"}}
[DEBUG] 2023-11-21 13:29:48.337 [[main]>worker2] pipeline - output received {"event"=>{"path"=>"/var/log/cron.log", "host"=>"Big-Monster", "@timestamp"=>2023-11-21T12:29:48.229Z, "message"=>"Nov 21 13:29:48 Big-Monster snapd-desktop-i[2595437]: Checking session /org/freedesktop/login1/session/c2...", "@version"=>"1"}}
[DEBUG] 2023-11-21 13:29:48.337 [[main]>worker1] pipeline - output received {"event"=>{"path"=>"/var/log/syslog", "host"=>"Big-Monster", "@timestamp"=>2023-11-21T12:29:48.225Z, "message"=>"Nov 21 13:29:47 Big-Monster systemd[1363]: snap.snapd-desktop-integration.snapd-desktop-integration.service: Scheduled restart job, restart counter is at 138441.", "@version"=>"1"}}
[DEBUG] 2023-11-21 13:29:48.337 [[main]>worker2] pipeline - output received {"event"=>{"path"=>"/var/log/cron.log", "host"=>"Big-Monster", "@timestamp"=>2023-11-21T12:29:48.231Z, "message"=>"Nov 21 13:29:48 Big-Monster snapd-desktop-i[2595437]: Checking session /org/freedesktop/login1/session/_3182...", "@version"=>"1"}}
[DEBUG] 2023-11-21 13:29:48.337 [[main]>worker1] pipeline - output received {"event"=>{"path"=>"/var/log/cron.log", "host"=>"Big-Monster", "@timestamp"=>2023-11-21T12:29:48.225Z, "message"=>"Nov 21 13:29:47 Big-Monster systemd[1363]: snap.snapd-desktop-integration.snapd-desktop-integration.service: Scheduled restart job, restart counter is at 138441.", "@version"=>"1"}}
[DEBUG] 2023-11-21 13:29:48.338 [[main]>worker2] pipeline - output received {"event"=>{"path"=>"/var/log/syslog", "host"=>"Big-Monster", "@timestamp"=>2023-11-21T12:29:48.231Z, "message"=>"Nov 21 13:29:48 Big-Monster snapd-desktop-i[2595437]: Checking session /org/freedesktop/login1/session/c1...", "@version"=>"1"}}
[DEBUG] 2023-11-21 13:29:48.338 [[main]>worker1] pipeline - output received {"event"=>{"path"=>"/var/log/syslog", "host"=>"Big-Monster", "@timestamp"=>2023-11-21T12:29:48.226Z, "message"=>"Nov 21 13:29:47 Big-Monster systemd[1363]: Started Service for snap application snapd-desktop-integration.snapd-desktop-integration.", "@version"=>"1"}}
[DEBUG] 2023-11-21 13:29:48.338 [[main]>worker2] pipeline - output received {"event"=>{"path"=>"/var/log/cron.log", "host"=>"Big-Monster", "@timestamp"=>2023-11-21T12:29:48.232Z, "message"=>"Nov 21 13:29:48 Big-Monster snapd-desktop-i[2595437]: Checking session /org/freedesktop/login1/session/_3184...", "@version"=>"1"}}
[DEBUG] 2023-11-21 13:29:48.338 [[main]>worker2] pipeline - output received {"event"=>{"path"=>"/var/log/syslog", "host"=>"Big-Monster", "@timestamp"=>2023-11-21T12:29:48.232Z, "message"=>"Nov 21 13:29:48 Big-Monster snapd-desktop-i[2595437]: Loop exited. Forcing reload.", "@version"=>"1"}}
[DEBUG] 2023-11-21 13:29:48.338 [[main]>worker1] pipeline - output received {"event"=>{"path"=>"/var/log/syslog", "host"=>"Big-Monster", "@timestamp"=>2023-11-21T12:29:48.228Z, "message"=>"Nov 21 13:29:48 Big-Monster snapd-desktop-i[2595437]: Failed to do gtk init. Waiting for a new session with desktop capabilities.", "@version"=>"1"}}

Some other extracts:

[DEBUG] 2023-11-21 13:29:48.338 [[main]>worker2] pipeline - output received {"event"=>{"path"=>"/var/log/syslog", "host"=>"Big-Monster", "@timestamp"=>2023-11-21T12:29:48.232Z, "message"=>"Nov 21 13:29:48 Big-Monster snapd-desktop-i[2595437]: Loop exited. Forcing reload.", "@version"=>"1"}}
[DEBUG] 2023-11-21 13:29:48.338 [[main]>worker1] pipeline - output received {"event"=>{"path"=>"/var/log/syslog", "host"=>"Big-Monster", "@timestamp"=>2023-11-21T12:29:48.228Z, "message"=>"Nov 21 13:29:48 Big-Monster snapd-desktop-i[2595437]: Failed to do gtk init. Waiting for a new session with desktop capabilities.", "@version"=>"1"}}


[DEBUG] 2023-11-21 13:29:48.329 [[main]>worker0] pipeline - filter received {"event"=>{"path"=>"/var/log/cron.log", "host"=>"Big-Monster", "@timestamp"=>2023-11-21T12:29:48.227Z, "message"=>"Nov 21 13:29:47 Big-Monster systemd[1363]: Started Service for snap application snapd-desktop-integration.snapd-desktop-integration.", "@version"=>"1"}}
[DEBUG] 2023-11-21 13:29:48.329 [[main]>worker0] pipeline - filter received {"event"=>{"path"=>"/var/log/cron.log", "host"=>"Big-Monster", "@timestamp"=>2023-11-21T12:29:48.228Z, "message"=>"Nov 21 13:29:48 Big-Monster snapd-desktop-i[2595437]: Failed to do gtk init. Waiting for a new session with desktop capabilities.", "@version"=>"1"}}
[DEBUG] 2023-11-21 13:29:48.330 [[main]>worker0] pipeline - output received {"event"=>{"path"=>"/var/log/cron.log", "host"=>"Big-Monster", "@timestamp"=>2023-11-21T12:29:48.227Z, "message"=>"Nov 21 13:29:47 Big-Monster systemd[1363]: Started Service for snap application snapd-desktop-integration.snapd-desktop-integration.", "@version"=>"1"}}
[DEBUG] 2023-11-21 13:29:48.330 [[main]>worker0] pipeline - output received {"event"=>{"path"=>"/var/log/cron.log", "host"=>"Big-Monster", "@timestamp"=>2023-11-21T12:29:48.228Z, "message"=>"Nov 21 13:29:48 Big-Monster snapd-desktop-i[2595437]: Failed to do gtk init. Waiting for a new session with desktop capabilities.", "@version"=>"1"}}
[DEBUG] 2023-11-21 13:29:48.332 [[main]>worker3] pipeline - filter received {"event"=>{"path"=>"/var/log/syslog", "host"=>"Big-Monster", "@timestamp"=>2023-11-21T12:29:48.226Z, "message"=>"Nov 21 13:29:47 Big-Monster systemd[1363]: Stopped Service for snap application snapd-desktop-integration.snapd-desktop-integration.", "@version"=>"1"}}

[DEBUG] 2023-11-21 13:29:38.288 [[main]>worker2] pipeline - output received {"event"=>{"path"=>"/var/log/cron.log", "host"=>"Big-Monster", "@timestamp"=>2023-11-21T12:29:38.184Z, "message"=>"Nov 21 13:29:37 Big-Monster snapd-desktop-i[2595142]: Loop exited. Forcing reload.", "@version"=>"1"}}

[DEBUG] 2023-11-21 13:29:26.211 [[main]>worker1] pipeline - output received {"event"=>{"path"=>"/var/log/cron.log", "host"=>"Big-Monster", "@timestamp"=>2023-11-21T12:29:26.109Z, "message"=>"Nov 21 13:29:25 Big-Monster snapd-desktop-i[2594772]: Is a desktop session! Forcing a reload.", "@version"=>"1"}}

[DEBUG] 2023-11-21 13:29:28.116 [[main]<file] file - Received line {:path=>"/var/log/cron.log", :text=>"Nov 21 13:29:27 Big-Monster sshd[2594699]: Failed password for invalid user support from 103.147.34.150 port 52970 ssh2"}

(This one is weird. I double-checked, and my firewall is properly closed. There should be no external access for now... Anyway...)


[DEBUG] 2023-11-21 13:29:26.211 [[main]>worker2] pipeline - output received {"event"=>{"path"=>"/var/log/cron.log", "host"=>"Big-Monster", "@timestamp"=>2023-11-21T12:29:26.110Z, "message"=>"Nov 21 13:29:25 Big-Monster snapd-desktop-i[2594772]: Loop exited. Forcing reload.", "@version"=>"1"}}
[DEBUG] 2023-11-21 13:29:27.595 [pool-5-thread-1] cgroup - One or more required cgroup files or directories not found: /proc/self/cgroup, /sys/fs/cgroup/cpuacct, /sys/fs/cgroup/cpu

Hello,

It is not clear what your issue is, nor what this second index you mention is.

The output configuration you shared has only one output, with the index syslog-%{+YYYY.MM.dd}, so all your data, no matter the input, will be sent to this index.

Hi, thanks.

OK, let me explain it differently. I have two entries in the input file, yes, and one entry in the output. So yes, "syslog" and "cron" will be sent through the same channel, the one I configured in the output. That shouldn't be a problem: I have a customer with a similar configuration, and it apparently works well.

If it works for him, it should work for me, at first glance of course :). But the problem, in my case, is precisely that it doesn't work. It loops, as you can see in the logs I added to this thread:

Loop exited. Forcing reload

Anyway, I understand what you mean... I do not see a second index precisely because I do not have a second output. Noted.

OK... I can try to add another entry to my "output" file. Something like this:

root@Big-Monster:/etc/logstash/conf.d# cat 99_output.conf
output {
  elasticsearch {
    id => "TEST-output-Syslog"
    hosts => [ "127.0.0.1" ]
    index => "syslog-%{+YYYY.MM.dd}"
    }

  elasticsearch {
    id => "TEST-output-Cron"
    hosts => [ "127.0.0.1" ]
    index => "cron-%{+YYYY.MM.dd}"
    }
}

But how do I make sure that the logs from the "cron" input are redirected to the "cron" output and not to the "syslog" output? Should the input and output for cron share the same ID, or something like that? Maybe there is some subtlety I missed.
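Reading the documentation, I wonder whether tags on the input plus a conditional in the output would do it. A sketch of what I mean (untested; the "cron" tag name is my own invention):

```
input {
  file {
    id => "TEST-Cron"
    path => [ "/var/log/cron.log" ]
    tags => [ "cron" ]          # tag every event read from this file
  }
}

output {
  if "cron" in [tags] {         # route only tagged events to this index
    elasticsearch {
      id => "TEST-output-Cron"
      hosts => [ "127.0.0.1" ]
      index => "cron-%{+YYYY.MM.dd}"
    }
  }
}
```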

Or must I necessarily use multiple pipelines? That is another question I asked here: How to link input contain to output contain

Apologies, I am a newbie with ELK; I am actively working on learning it. There is a lot of documentation, but it is not easy for me, as a foreigner, to understand it all, even if I do my best :slight_smile:

Best regards,
Chris

This is not a Logstash log; it is a log line collected from the files that Logstash is reading.

The DEBUG level will show you everything Logstash is doing; it is pretty rare to need to enable it. What you shared is normal behavior: Logstash is just showing you every message it reads from the files you configured.

The loop you mentioned is not a Logstash issue; from the logs you shared, it is some issue with your snapd service.

[DEBUG] 2023-11-21 13:29:48.338 [[main]>worker2] pipeline - output received {"event"=>{"path"=>"/var/log/syslog", "host"=>"Big-Monster", "@timestamp"=>2023-11-21T12:29:48.232Z, "message"=>"Nov 21 13:29:48 Big-Monster snapd-desktop-i[2595437]: Loop exited. Forcing reload.", "@version"=>"1"}}

This is a log line from your /var/log/syslog file; it is not an issue in Logstash:

Nov 21 13:29:48 Big-Monster snapd-desktop-i[2595437]: Loop exited. Forcing reload.

I answered your other question: you will need to use multiple pipelines.
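For reference, a minimal pipelines.yml sketch with one pipeline per log. The file names are just examples; each .conf file would contain its own input and its own elasticsearch output:

```
# /etc/logstash/pipelines.yml
- pipeline.id: syslog
  path.config: "/etc/logstash/conf.d/syslog.conf"

- pipeline.id: cron
  path.config: "/etc/logstash/conf.d/cron.conf"
```

Because each pipeline is isolated, events from cron.conf can only reach the output defined in cron.conf, so no conditional routing is needed.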

Aah, OK. Thanks a lot!

In that case, I will retry the multiple-pipelines configuration.

Best regards,
Christian

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.