Logstash is not able to run multiple pipelines when running as a service. It runs fine if I run Logstash from the CLI using /usr/share/logstash/bin/logstash --path.settings /etc/logstash/.
logstash.service file contents:
[Unit]
Description=logstash

[Service]
Type=simple
User=logstash
Group=logstash
EnvironmentFile=-/etc/default/logstash
EnvironmentFile=-/etc/sysconfig/logstash
ExecStart=/usr/share/logstash/bin/logstash "--path.settings" "/etc/logstash/"
Restart=always
WorkingDirectory=/
Nice=19
LimitNOFILE=16384

[Install]
WantedBy=multi-user.target

Logstash.yml file contents:
path.data: /var/lib/logstash
pipeline.batch.size: 1000
pipeline.batch.delay: 5
#path.config: /etc/logstash/conf.d

http.host: 0.0.0.0

path.logs: /var/log/logstash

dead_letter_queue.enable: true
path.dead_letter_queue: "/var/log/logstash/dead_letter_queue"
dead_letter_queue.max_bytes: 15360mb
xpack.monitoring.elasticsearch.url: ["my-elasticsearch-url:9200"]

My pipelines.yml :

- pipeline.id: pipe_1
  path.config: "/etc/logstash/conf.d/pipe1.conf"
- pipeline.id: pipe_2
  path.config: "/etc/logstash/conf.d/pipe2.conf"
- pipeline.id: fluentbit
  path.config: "/etc/logstash/conf.d/fluentbit.conf"
- pipeline.id: kubernetes
  path.config: "/etc/logstash/conf.d/kubernetes.conf"
- pipeline.id: loghost
  path.config: "/etc/logstash/conf.d/loghost.conf"
- pipeline.id: dlq
  path.config: "/etc/logstash/conf.d/dlq.conf"

Any help on how to run Logstash as a service with a multi-pipeline setup would be appreciated.

Things I tried:

  1. Commented out path.config in logstash.yml.
  2. Changed the ownership of pipelines.yml from root to logstash.
  3. Changed the ExecStart line in the logstash.service file by removing the path.config setting.

Logstash still runs fine with the CLI command, but it doesn't run as a service.
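Since the CLI run works but the service does not, one way to narrow it down (a sketch, assuming the default package paths shown in the unit file above) is to replay the service's exact command as the service account; `--config.test_and_exit` only parses the configuration and exits, so it is safe to run alongside the service:

```shell
# The service runs as User=logstash, while an interactive CLI run uses your own
# user (often root). Re-running the same command as the service account can
# reproduce failures that only appear under systemd.
LS_BIN=/usr/share/logstash/bin/logstash   # path from the ExecStart line above
LS_SETTINGS=/etc/logstash                 # same --path.settings the unit passes
# Print the command to run as root on the real host:
echo "sudo -u logstash $LS_BIN --path.settings $LS_SETTINGS --config.test_and_exit"
```

If this fails while the same command run as root succeeds, the problem is likely file permissions or environment rather than the pipeline definitions themselves.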

Logstash version: 6.3.1

Your pipelines.yml file has some extra space/tab or something.

Try typing the first two pipeline.id entries by hand and test.

I had the same problem a long time ago, and it went away once I fully typed everything.
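One way to make stray whitespace visible is `cat -A`, which prints tabs as `^I` and marks line ends with `$` (a sketch using a sample file in /tmp; on a real system, point the same commands at /etc/logstash/pipelines.yml):

```shell
# Build a sample pipelines.yml. YAML forbids tab indentation, so any ^I shown
# by `cat -A` in the real file is a likely culprit.
cat > /tmp/pipelines.yml <<'EOF'
- pipeline.id: pipe_1
  path.config: "/etc/logstash/conf.d/pipe1.conf"
EOF
cat -A /tmp/pipelines.yml            # tabs appear as ^I, line ends as $
# Exit-status check: grep succeeds only if a literal tab exists somewhere
if grep -q "$(printf '\t')" /tmp/pipelines.yml; then
  echo "tab found"
else
  echo "no tabs"
fi
```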

I have tried manually typing the content in pipelines.yml and then restarted my Logstash service. Still, the pipelines are not shown in the Kibana monitoring dashboard. When I run Logstash from the CLI using
/usr/share/logstash/bin/logstash --path.settings /etc/logstash/ the pipelines are visible in the Kibana monitoring dashboard.
If it is an indentation error, shouldn't it fail when run from the command line as well?

You are contradicting your previous statement. In your first reply you said the pipelines were not running;

in your last reply you said the pipelines are not shown in Kibana monitoring.

Which one is it?

What is the output in your log file, logstash-plain.log?

Sorry for the confusion. When I said the pipelines are not running, I meant the pipelines are not seen in the Kibana monitoring dashboard.
This is what I get when I cat /var/log/logstash/logstash-plain.log:
[2020-02-24T01:27:19,620][INFO ][org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] [Consumer clientId=logstash-1, groupId=es-rw-datalake] Revoking previously assigned partitions [cmw_rnw-3]
[2020-02-24T01:27:19,625][INFO ][org.apache.kafka.clients.consumer.internals.AbstractCoordinator] [Consumer clientId=logstash-1, groupId=es-rw-datalake] (Re-)joining group
[2020-02-24T01:27:20,047][INFO ][org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] [Consumer clientId=logstash-0, groupId=es-rw-datalake] Revoking previously assigned partitions [cmw_rnw-1]
[2020-02-24T01:27:20,048][INFO ][org.apache.kafka.clients.consumer.internals.AbstractCoordinator] [Consumer clientId=logstash-0, groupId=es-rw-datalake] (Re-)joining group
[2020-02-24T01:27:20,348][INFO ][org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] [Consumer clientId=logstash-4, groupId=es-rfy-datalake] Revoking previously assigned partitions [ribbons-90, ribbons-92, ribbons-91, ribbons-94, ribbons-93, ribbons-96, ribbons-95, ribbons-98, ribbons-97, ribbons-99]
[2020-02-24T01:27:20,349][INFO ][org.apache.kafka.clients.consumer.internals.AbstractCoordinator] [Consumer clientId=logstash-4, groupId=es-rfy-datalake] (Re-)joining group
[2020-02-24T01:27:20,795][INFO ][org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] [Consumer clientId=logstash-0, groupId=es-rfy-datalake] Revoking previously assigned partitions [ribbons-10, ribbons-12, ribbons-11, ribbons-14, ribbons-13, ribbons-16, ribbons-15, ribbons-18, ribbons-17, ribbons-19]
[2020-02-24T01:27:20,795][INFO ][org.apache.kafka.clients.consumer.internals.AbstractCoordinator] [Consumer clientId=logstash-0, groupId=es-rfy-datalake] (Re-)joining group
[2020-02-24T01:27:20,914][INFO ][org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] [Consumer clientId=logstash-1, groupId=es-rfy-datalake] Revoking previously assigned partitions [ribbons-39, ribbons-30, ribbons-32, ribbons-31, ribbons-34, ribbons-33, ribbons-36, ribbons-35, ribbons-38, ribbons-37]
[2020-02-24T01:27:20,915][INFO ][org.apache.kafka.clients.consumer.internals.AbstractCoordinator] [Consumer clientId=logstash-1, groupId=es-rfy-datalake] (Re-)joining group
[2020-02-24T01:27:20,932][INFO ][org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] [Consumer clientId=logstash-3, groupId=es-rfy-datalake] Revoking previously assigned partitions [ribbons-72, ribbons-71, ribbons-74, ribbons-73, ribbons-76, ribbons-75, ribbons-78, ribbons-77, ribbons-79, ribbons-70]
[2020-02-24T01:27:20,932][INFO ][org.apache.kafka.clients.consumer.internals.AbstractCoordinator] [Consumer clientId=logstash-3, groupId=es-rfy-datalake] (Re-)joining group
[2020-02-24T01:27:21,328][INFO ][org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] [Consumer clientId=logstash-2, groupId=es-rw-datalake] Revoking previously assigned partitions
[2020-02-24T01:27:21,329][INFO ][org.apache.kafka.clients.consumer.internals.AbstractCoordinator] [Consumer clientId=logstash-2, groupId=es-rw-datalake] (Re-)joining group
[2020-02-24T01:27:21,367][INFO ][org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] [Consumer clientId=logstash-4, groupId=es-rw-datalake] Revoking previously assigned partitions
[2020-02-24T01:27:21,368][INFO ][org.apache.kafka.clients.consumer.internals.AbstractCoordinator] [Consumer clientId=logstash-4, groupId=es-rw-datalake] (Re-)joining group
[2020-02-24T01:27:21,959][INFO ][org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] [Consumer clientId=logstash-2, groupId=es-rfy-datalake] Revoking previously assigned partitions [ribbons-56, ribbons-55, ribbons-58, ribbons-57, ribbons-59, ribbons-50, ribbons-52, ribbons-51, ribbons-54, ribbons-53]
[2020-02-24T01:27:21,960][INFO ][org.apache.kafka.clients.consumer.internals.AbstractCoordinator] [Consumer clientId=logstash-2, groupId=es-rfy-datalake] (Re-)joining group
[2020-02-24T01:27:21,975][INFO ][org.apache.kafka.clients.consumer.internals.AbstractCoordinator] [Consumer clientId=logstash-1, groupId=es-rfy-datalake] Successfully joined group with generation 1271
[2020-02-24T01:27:21,975][INFO ][org.apache.kafka.clients.consumer.internals.AbstractCoordinator] [Consumer clientId=logstash-0, groupId=es-rfy-datalake] Successfully joined group with generation 1271
[2020-02-24T01:27:21,975][INFO ][org.apache.kafka.clients.consumer.internals.AbstractCoordinator] [Consumer clientId=logstash-4, groupId=es-rfy-datalake] Successfully joined group with generation 1271
[2020-02-24T01:27:21,976][INFO ][org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] [Consumer clientId=logstash-1, groupId=es-rfy-datalake] Setting newly assigned partitions [ribbons-24, ribbons-23, ribbons-26, ribbons-25, ribbons-28, ribbons-27, ribbons-30, ribbons-29, ribbons-20, ribbons-22, ribbons-21, ribbons-39, ribbons-32, ribbons-31, ribbons-34, ribbons-33, ribbons-36, ribbons-35, ribbons-38, ribbons-37]
[2020-02-24T01:27:21,976][INFO ][org.apache.kafka.clients.consumer.internals.AbstractCoordinator] [Consumer clientId=logstash-3, groupId=es-rfy-datalake] Successfully joined group with generation 1271
[2020-02-24T01:27:21,976][INFO ][org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] [Consumer clientId=logstash-0, groupId=es-rfy-datalake] Setting newly assigned partitions [ribbons-16, ribbons-15, ribbons-18, ribbons-17, ribbons-19, ribbons-8, ribbons-7, ribbons-10, ribbons-9, ribbons-12, ribbons-11, ribbons-14, ribbons-13, ribbons-0, ribbons-2, ribbons-1, ribbons-4, ribbons-3, ribbons-6, ribbons-5]
[2020-02-24T01:27:21,976][INFO ][org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] [Consumer clientId=logstash-3, groupId=es-rfy-datalake] Setting newly assigned partitions [ribbons-60, ribbons-62, ribbons-61, ribbons-79, ribbons-72, ribbons-71, ribbons-74, ribbons-73, ribbons-76, ribbons-75, ribbons-78, ribbons-77, ribbons-64, ribbons-63, ribbons-66, ribbons-65, ribbons-68, ribbons-67, ribbons-70, ribbons-69]
[2020-02-24T01:27:21,976][INFO ][org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] [Consumer clientId=logstash-4, groupId=es-rfy-datalake] Setting newly assigned partitions [ribbons-88, ribbons-87, ribbons-90, ribbons-89, ribbons-92, ribbons-91, ribbons-94, ribbons-93, ribbons-80, ribbons-82, ribbons-81, ribbons-84, ribbons-83, ribbons-86, ribbons-85, ribbons-96, ribbons-95, ribbons-98, ribbons-97, ribbons-99]
[2020-02-24T01:27:21,983][INFO ][org.apache.kafka.clients.consumer.internals.AbstractCoordinator] [Consumer clientId=logstash-2, groupId=es-rfy-datalake] Successfully joined group with generation 1271
[2020-02-24T01:27:21,984][INFO ][org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] [Consumer clientId=logstash-2, groupId=es-rfy-datalake] Setting newly assigned partitions [ribbons-56, ribbons-55, ribbons-58, ribbons-57, ribbons-59, ribbons-48, ribbons-47, ribbons-50, ribbons-49, ribbons-52, ribbons-51, ribbons-54, ribbons-53, ribbons-40, ribbons-42, ribbons-41, ribbons-44, ribbons-43, ribbons-46, ribbons-45]
[2020-02-24T01:27:22,139][INFO ][org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] [Consumer clientId=logstash-3, groupId=es-rw-datalake] Revoking previously assigned partitions
[2020-02-24T01:27:22,139][INFO ][org.apache.kafka.clients.consumer.internals.AbstractCoordinator] [Consumer clientId=logstash-3, groupId=es-rw-datalake] (Re-)joining group
[2020-02-24T01:27:22,146][INFO ][org.apache.kafka.clients.consumer.internals.AbstractCoordinator] [Consumer clientId=logstash-3, groupId=es-rw-datalake] Successfully joined group with generation 1001
[2020-02-24T01:27:22,147][INFO ][org.apache.kafka.clients.consumer.internals.AbstractCoordinator] [Consumer clientId=logstash-1, groupId=es-rw-datalake] Successfully joined group with generation 1001
[2020-02-24T01:27:22,147][INFO ][org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] [Consumer clientId=logstash-3, groupId=es-rw-datalake] Setting newly assigned partitions [cmw_rnw-3]
[2020-02-24T01:27:22,147][INFO ][org.apache.kafka.clients.consumer.internals.AbstractCoordinator] [Consumer clientId=logstash-4, groupId=es-rw-datalake] Successfully joined group with generation 1001
[2020-02-24T01:27:22,147][INFO ][org.apache.kafka.clients.consumer.internals.AbstractCoordinator] [Consumer clientId=logstash-2, groupId=es-rw-datalake] Successfully joined group with generation 1001
[2020-02-24T01:27:22,147][INFO ][org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] [Consumer clientId=logstash-4, groupId=es-rw-datalake] Setting newly assigned partitions [cmw_rnw-4]
[2020-02-24T01:27:22,147][INFO ][org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] [Consumer clientId=logstash-2, groupId=es-rw-datalake] Setting newly assigned partitions [cmw_rnw-2]
[2020-02-24T01:27:22,147][INFO ][org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] [Consumer clientId=logstash-1, groupId=es-rw-datalake] Setting newly assigned partitions [cmw_rnw-1]
[2020-02-24T01:27:22,147][INFO ][org.apache.kafka.clients.consumer.internals.AbstractCoordinator] [Consumer clientId=logstash-0, groupId=es-rw-datalake] Successfully joined group with generation 1001
[2020-02-24T01:27:22,148][INFO ][org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] [Consumer clientId=logstash-0, groupId=es-rw-datalake] Setting newly assigned partitions [cmw_rnw-0]

I don't see any error here.
For Elastic to monitor the Logstash pipelines you need the following enabled in the /etc/logstash/logstash.yml file:

xpack.monitoring.enabled: true

Also, if you have set up a user/password for your ELK stack, you need to specify:
xpack.monitoring.elasticsearch.username: logstash_system
xpack.monitoring.elasticsearch.password: xxxxx

and I think the following as well:
xpack.monitoring.elasticsearch.hosts: ["http://hostname:9200"]
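One caveat: on Logstash 6.x (this thread is on 6.3.1) the endpoint key is, to my knowledge, `xpack.monitoring.elasticsearch.url`, as already used above; `xpack.monitoring.elasticsearch.hosts` is the later 7.x name. A minimal sketch for logstash.yml, with placeholder host and credentials:

```yaml
# Sketch for /etc/logstash/logstash.yml on 6.x; hostname and password are placeholders.
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.url: ["http://my-elasticsearch-url:9200"]
# Only needed when security is enabled on the cluster:
xpack.monitoring.elasticsearch.username: logstash_system
xpack.monitoring.elasticsearch.password: xxxxx
```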

I set xpack.monitoring.enabled: true in logstash.yml and restarted Logstash. Still the same issue.
When I run through the CLI, even without explicitly setting xpack.monitoring.enabled: true, Kibana is still able to show the monitoring dashboards for Logstash.

I do see this error in the Logstash logs:

Feb 24 21:44:42 d-gp2-logstash2-1 logstash[20079]: [2020-02-24T21:44:42,063][ERROR][logstash.agent ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:.monitoring-logstash, :exception=>"NoMethodError", :message=>"undefined method `get_current_queue_size' for nil:NilClass", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:669:in `collect_dlq_stats'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:236:in `start'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:48:in `block in execute'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:44:in `execute'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:305:in `block in converge_state'"]}

Any help on how to get this solved?

Hi,
post your logstash.yml file and your pipeline file. Keep only one pipeline and test.
It seems something is still wrong with another config file listed in your pipelines.

LOGSTASH.YML:
path.data: /var/lib/logstash
pipeline.batch.size: 1000
pipeline.batch.delay: 5
#path.config: /etc/logstash/conf.d
#queue.max_bytes: 2gb

http.host: 0.0.0.0
path.logs: /var/log/logstash

dead_letter_queue.enable: true
path.dead_letter_queue: "/var/log/logstash/dead_letter_queue"
dead_letter_queue.max_bytes: 15360mb
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.url: ["my-elastic-url:9200"]

Pipelines.yml:

- pipeline.id: pipe_1
  path.config: "/etc/logstash/conf.d/pipe1.conf"
- pipeline.id: pipe_2
  path.config: "/etc/logstash/conf.d/pipe2.conf"

When I changed the service file for Logstash and set the user to root instead of logstash, Kibana is able to show the pipelines in the dashboard.
The new contents of logstash.service are as follows; I changed the User and Group values from logstash to root.

logstash.service
[Unit]
Description=logstash

[Service]
Type=simple
User=root
Group=root

EnvironmentFile=-/etc/default/logstash
EnvironmentFile=-/etc/sysconfig/logstash
ExecStart=/usr/share/logstash/bin/logstash "--path.settings" "/etc/logstash"
Restart=always
WorkingDirectory=/
Nice=19
LimitNOFILE=16384

[Install]
WantedBy=multi-user.target

This looks like a file permissions issue, but I changed the ownership of pipelines.yml and logstash.yml to logstash:logstash. Still, when I run as the logstash user the pipelines are not visible in Kibana.
Are there any other file permissions that need to be changed?
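Beyond the two YAML files, the service account also needs to own its data, log, and dead-letter-queue directories (path.data, path.logs, and path.dead_letter_queue in the logstash.yml above). A sketch to audit them; `check_ownership` is a hypothetical helper, and the paths are the defaults from this thread:

```shell
# Print every path under a directory that is NOT owned by the given user.
# Anything printed for the logstash user is a candidate for chown.
check_ownership() {
  find "$1" ! -user "$2" 2>/dev/null
}
# On a real host (run as root), audit the directories from logstash.yml:
#   check_ownership /var/lib/logstash logstash
#   check_ownership /var/log/logstash logstash
#   check_ownership /var/log/logstash/dead_letter_queue logstash
# and hand anything listed back to the service account:
#   chown -R logstash:logstash /var/lib/logstash /var/log/logstash
```

A root CLI run can silently create files in these directories that the logstash user then cannot read, which would explain the service-only failure.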

I start all my services as root:
systemctl restart/start/stop logstash

but in the logstash.service file I have User=logstash and Group=logstash.

My file permissions on /etc/logstash/logstash.yml are logstash:logstash,
and pipelines.yml is the same, logstash:logstash.

I do run the service as root. But if the user and group in the service file are logstash, then no pipelines are seen in the Kibana dashboard. If I change them to root, I am able to see the pipelines in the Kibana dashboard.
My file permissions on /etc/logstash/logstash.yml are logstash:logstash,
and pipelines.yml is logstash:logstash as well.
Not sure where I am messing this up.

Well, hopefully someone else on the board can comment on it if they have seen such an issue.

When running Logstash as a service with multiple pipelines, I saw the "Failed to execute action" error. Then I tried restarting Logstash, and after multiple attempts I didn't see the error.
Then I rolled Logstash back to a single pipeline and ran it as a service. Now I see the "Failed to execute action" error again. Not sure why I am seeing this error.

Feb 24 21:44:42 d-gp2-logstash2-1 logstash[20079]: [2020-02-24T21:44:42,063][ERROR][logstash.agent ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:.monitoring-logstash, :exception=>"NoMethodError", :message=>"undefined method `get_current_queue_size' for nil:NilClass", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:669:in `collect_dlq_stats'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:236:in `start'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:48:in `block in execute'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:44:in `execute'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:305:in `block in converge_state'"]}

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.