Logstash run as a service won't read logs; it only works when run from the command line

Hi, I’m running Logstash on SUSE Linux, where I’ve installed the RPM package for compatibility. When I start Logstash as a service with
sudo systemctl start logstash.service and check the service status, it seems to be running fine:

logstash.service - logstash

Loaded: loaded (/etc/systemd/system/logstash.service; enabled; vendor preset: disabled)
Active: active (running) since Fri 2023-12-15 00:04:35 UTC; 26s ago
Main PID: 27892 (java)
Tasks: 34
CGroup: /system.slice/logstash.service
└─ 27892 /usr/share/logstash/jdk/bin/java -Xms1g -Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly>

Dec 15 00:04:54 hana-kms-01 logstash[27892]: value.serializer = class org.apache.kafka.common.serialization.StringSerializer
Dec 15 00:04:55 hana-kms-01 logstash[27892]: [2023-12-15T00:04:55,028][INFO ][org.apache.kafka.common.utils.AppInfoParser][main] Kafka version: 2.5.1
Dec 15 00:04:55 hana-kms-01 logstash[27892]: [2023-12-15T00:04:55,032][INFO ][org.apache.kafka.common.utils.AppInfoParser][main] Kafka commitId: 0efa8fb0f4c73d92
Dec 15 00:04:55 hana-kms-01 logstash[27892]: [2023-12-15T00:04:55,032][INFO ][org.apache.kafka.common.utils.AppInfoParser][main] Kafka startTimeMs: 1702598695017
Dec 15 00:04:55 hana-kms-01 logstash[27892]: [2023-12-15T00:04:55,211][INFO ][logstash.javapipeline ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.wor>
Dec 15 00:04:55 hana-kms-01 logstash[27892]: [2023-12-15T00:04:55,640][INFO ][org.apache.kafka.clients.Metadata][main] [Producer clientId=producer-1] Cluster ID: 4tW>
Dec 15 00:04:56 hana-kms-01 logstash[27892]: [2023-12-15T00:04:56,046][INFO ][logstash.javapipeline ][main] Pipeline Java execution initialization time {"seconds">
Dec 15 00:04:56 hana-kms-01 logstash[27892]: [2023-12-15T00:04:56,102][INFO ][logstash.javapipeline ][main] Pipeline started {"pipeline.id"=>"main"}
Dec 15 00:04:56 hana-kms-01 logstash[27892]: [2023-12-15T00:04:56,158][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :>
Dec 15 00:04:56 hana-kms-01 logstash[27892]: [2023-12-15T00:04:56,210][INFO ][filewatch.observingtail ][main][ab83158615982f8d6ca1fe433994b8eeb743c067c6e0406b48078d>

The problem is that no logs are being sent to my file output or to Kafka. However, if I run Logstash from the command line with
/usr/share/logstash/bin/logstash --debug -f /etc/logstash/conf.d/test-logs.conf, the logs are read and sent to Kafka just fine.

This is the content of /etc/systemd/system/logstash.service (I removed "--path.settings" "/etc/logstash" as suggested in one of the community posts below):
[Unit]
Description=logstash

[Service]
Type=simple
User=logstash
Group=logstash
# Load env vars from /etc/default/ and /etc/sysconfig/ if they exist.
# Prefixing the path with '-' makes it try to load, but if the file doesn't
# exist, it continues onward.
EnvironmentFile=-/etc/default/logstash
EnvironmentFile=-/etc/sysconfig/logstash
ExecStart=/usr/share/logstash/bin/logstash "--path.settings" "/etc/logstash"
Restart=always
WorkingDirectory=/
Nice=19
LimitNOFILE=16384
# When stopping, how long to wait before giving up and sending SIGKILL?
# Keep in mind that SIGKILL on a process can cause data loss.
TimeoutStopSec=infinity

[Install]
WantedBy=multi-user.target

And this is the content of /etc/logstash/pipelines.yml:

# This file is where you define your pipelines. You can define multiple.
# For more information on multiple pipelines, see the documentation:
# https://www.elastic.co/guide/en/logstash/current/multiple-pipelines.html
- pipeline.id: main
  path.config: "/etc/logstash/conf.d/test-logs.conf"

I’ve checked the permissions, which look correct:

-rw-r--r-- 1 root root 460 Dec 14 22:13 /etc/logstash/conf.d/test-logs.conf

Also, I've followed the advice in these posts:

I’ve run out of things to try and nothing seems to make it work. I'm new to Logstash, so any help would be appreciated!

Hello and welcome,

What do you have in Logstash logs?

Please restart your logstash service to get fresh logs and share the logs in the file /var/log/logstash/logstash-plain.log.

What do you mean by that? You removed this from the logstash.service file? That is not correct; this setting is required.
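For context (this is my understanding of the packaged defaults, not something shown in this thread): --path.settings tells Logstash which directory holds logstash.yml, pipelines.yml, and log4j2.properties. The RPM installs those under /etc/logstash, so the service's ExecStart needs to keep pointing there:

```
ExecStart=/usr/share/logstash/bin/logstash "--path.settings" "/etc/logstash"
```

Without it, Logstash falls back to looking for its settings under /usr/share/logstash/config, which the RPM does not populate.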

Please share the content of your configuration file.

Also, use the preformatted text option, the </> button, when sharing configuration and logs; the post can be really confusing to read without proper formatting.

Which user did you use to run this? Your own user, or did you run it as the root user?

Contents of /var/log/logstash/logstash-plain.log

These are the contents:

[2023-12-15T15:14:08,875][INFO ][logstash.runner          ] Log4j configuration path used is: /etc/logstash/log4j2.properties
[2023-12-15T15:14:08,888][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"7.15.0", "jruby.version"=>"jruby 9.2.19.0 (2.5.8) 2021-06-15 55810c552b OpenJDK 64-Bit Server VM 11.0.11+9 on 11.0.11+9 +indy +jit [linux-x86_64]"}
[2023-12-15T15:14:10,382][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2023-12-15T15:14:11,016][INFO ][org.reflections.Reflections] Reflections took 75 ms to scan 1 urls, producing 120 keys and 417 values 
[2023-12-15T15:14:12,034][INFO ][org.apache.kafka.clients.producer.ProducerConfig][main] ProducerConfig values: 
	acks = 1
	batch.size = 16384
	bootstrap.servers = [<boostrapping-servers>]
	buffer.memory = 33554432
	client.dns.lookup = default
	client.id = producer-1
	compression.type = none
	connections.max.idle.ms = 540000
	delivery.timeout.ms = 120000
	enable.idempotence = false
	interceptor.classes = []
	key.serializer = class org.apache.kafka.common.serialization.StringSerializer
	linger.ms = 0
	max.block.ms = 60000
	max.in.flight.requests.per.connection = 5
	max.request.size = 1048576
	metadata.max.age.ms = 300000
	metadata.max.idle.ms = 300000
	metric.reporters = []
	metrics.num.samples = 2
	metrics.recording.level = INFO
	metrics.sample.window.ms = 30000
	partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
	receive.buffer.bytes = 32768
	reconnect.backoff.max.ms = 50
	reconnect.backoff.ms = 50
	request.timeout.ms = 40000
	retries = 2147483647
	retry.backoff.ms = 100
	sasl.client.callback.handler.class = null
	sasl.jaas.config = null
	sasl.kerberos.kinit.cmd = /usr/bin/kinit
	sasl.kerberos.min.time.before.relogin = 60000
	sasl.kerberos.service.name = null
	sasl.kerberos.ticket.renew.jitter = 0.05
	sasl.kerberos.ticket.renew.window.factor = 0.8
	sasl.login.callback.handler.class = null
	sasl.login.class = null
	sasl.login.refresh.buffer.seconds = 300
	sasl.login.refresh.min.period.seconds = 60
	sasl.login.refresh.window.factor = 0.8
	sasl.login.refresh.window.jitter = 0.05
	sasl.mechanism = GSSAPI
	security.protocol = PLAINTEXT
	security.providers = null
	send.buffer.bytes = 131072
	ssl.cipher.suites = null
	ssl.enabled.protocols = [TLSv1.2]
	ssl.endpoint.identification.algorithm = https
	ssl.key.password = null
	ssl.keymanager.algorithm = SunX509
	ssl.keystore.location = null
	ssl.keystore.password = null
	ssl.keystore.type = JKS
	ssl.protocol = TLSv1.2
	ssl.provider = null
	ssl.secure.random.implementation = null
	ssl.trustmanager.algorithm = PKIX
	ssl.truststore.location = null
	ssl.truststore.password = null
	ssl.truststore.type = JKS
	transaction.timeout.ms = 60000
	transactional.id = null
	value.serializer = class org.apache.kafka.common.serialization.StringSerializer

[2023-12-15T15:14:12,096][INFO ][org.apache.kafka.common.utils.AppInfoParser][main] Kafka version: 2.5.1
[2023-12-15T15:14:12,100][INFO ][org.apache.kafka.common.utils.AppInfoParser][main] Kafka commitId: 0efa8fb0f4c73d92
[2023-12-15T15:14:12,100][INFO ][org.apache.kafka.common.utils.AppInfoParser][main] Kafka startTimeMs: 1702653252093
[2023-12-15T15:14:12,389][INFO ][org.apache.kafka.clients.Metadata][main] [Producer clientId=producer-1] Cluster ID: 4tWJyX0jSNKuqIdBOV3Nmw
[2023-12-15T15:14:12,434][INFO ][logstash.javapipeline    ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>250, "pipeline.sources"=>["/etc/logstash/conf.d/test-logs.conf"], :thread=>"#<Thread:0x1628981a run>"}
[2023-12-15T15:14:13,183][INFO ][logstash.javapipeline    ][main] Pipeline Java execution initialization time {"seconds"=>0.74}
[2023-12-15T15:14:13,232][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}
[2023-12-15T15:14:13,290][INFO ][filewatch.observingtail  ][main][c9dac137fd7d19ee8952076e6fa3e806d1f78c7df43892a9047c470499178a6e] START, creating Discoverer, Watch with file and sincedb collections
[2023-12-15T15:14:13,310][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]} 

There are no errors in your logs; Logstash starts without any issue and connects to your Kafka.

You didn't share the other things that were asked for, so I'm not sure what the issue is here.

That was one of the suggestions in this post: Pipelines.yml ignored on logstash running as a service

It suggested changing the line below in /etc/systemd/system/logstash.service

ExecStart=/usr/share/logstash/bin/logstash "--path.settings" "/etc/logstash"

to

ExecStart=/usr/share/logstash/bin/logstash 

But I've reverted the change

input {
  file {
    path => "/root/db/kms/kms_manager_log/logs/kms_manager_rCURRENT.log"
    type => "kms_manager_rCURRENT.log"
    sincedb_path => "/dev/null"
    start_position => "beginning"
  }
}

filter {
  mutate {
    add_tag => [ "kms" ]
  }
}

output {
  kafka {
    bootstrap_servers => "<boostrap-servers>"
    topic_id => "vm-kms"
  }

  file {
   path => "/var/log/logstash/test-output.log"
  }
}
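As a side note, a temporary stdout output can show whether events are flowing at all, independently of Kafka (a sketch using Logstash's standard rubydebug codec; when started as a service, this output lands in the journal, viewable with journalctl -u logstash):

```
output {
  stdout { codec => rubydebug }   # temporary, for debugging only
}
```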

It's run as the root user.

Hi @leandrojmp, sorry, I was replying to the messages individually. I've shared it all; let me know if there is something else to share. Thank you!

From the log you shared, it is not clear whether it was captured after you ran Logstash as a service or from the command line, because there are some lines missing.

What happens when you run systemctl start logstash?

The logs I shared come from this command

systemctl start logstash.service

it was a clean start.

I just checked the logs to make sure I'm not missing any, but they are the same as what I shared. What information is missing?

It looks like they are the same logs:

[2023-12-15T15:41:01,951][INFO ][logstash.runner          ] Log4j configuration path used is: /etc/logstash/log4j2.properties
[2023-12-15T15:41:01,962][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"7.15.0", "jruby.version"=>"jruby 9.2.19.0 (2.5.8) 2021-06-15 55810c552b OpenJDK 64-Bit Server VM 11.0.11+9 on 11.0.11+9 +indy +jit [linux-x86_64]"}
[2023-12-15T15:41:03,992][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2023-12-15T15:41:04,443][INFO ][org.reflections.Reflections] Reflections took 75 ms to scan 1 urls, producing 120 keys and 417 values 
[2023-12-15T15:41:05,434][INFO ][org.apache.kafka.clients.producer.ProducerConfig][main] ProducerConfig values: 
	acks = 1
	batch.size = 16384
	bootstrap.servers = [<boostrap-servers>]
	buffer.memory = 33554432
	client.dns.lookup = default
	client.id = producer-1
	compression.type = none
	connections.max.idle.ms = 540000
	delivery.timeout.ms = 120000
	enable.idempotence = false
	interceptor.classes = []
	key.serializer = class org.apache.kafka.common.serialization.StringSerializer
	linger.ms = 0
	max.block.ms = 60000
	max.in.flight.requests.per.connection = 5
	max.request.size = 1048576
	metadata.max.age.ms = 300000
	metadata.max.idle.ms = 300000
	metric.reporters = []
	metrics.num.samples = 2
	metrics.recording.level = INFO
	metrics.sample.window.ms = 30000
	partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
	receive.buffer.bytes = 32768
	reconnect.backoff.max.ms = 50
	reconnect.backoff.ms = 50
	request.timeout.ms = 40000
	retries = 2147483647
	retry.backoff.ms = 100
	sasl.client.callback.handler.class = null
	sasl.jaas.config = null
	sasl.kerberos.kinit.cmd = /usr/bin/kinit
	sasl.kerberos.min.time.before.relogin = 60000
	sasl.kerberos.service.name = null
	sasl.kerberos.ticket.renew.jitter = 0.05
	sasl.kerberos.ticket.renew.window.factor = 0.8
	sasl.login.callback.handler.class = null
	sasl.login.class = null
	sasl.login.refresh.buffer.seconds = 300
	sasl.login.refresh.min.period.seconds = 60
	sasl.login.refresh.window.factor = 0.8
	sasl.login.refresh.window.jitter = 0.05
	sasl.mechanism = GSSAPI
	security.protocol = PLAINTEXT
	security.providers = null
	send.buffer.bytes = 131072
	ssl.cipher.suites = null
	ssl.enabled.protocols = [TLSv1.2]
	ssl.endpoint.identification.algorithm = https
	ssl.key.password = null
	ssl.keymanager.algorithm = SunX509
	ssl.keystore.location = null
	ssl.keystore.password = null
	ssl.keystore.type = JKS
	ssl.protocol = TLSv1.2
	ssl.provider = null
	ssl.secure.random.implementation = null
	ssl.trustmanager.algorithm = PKIX
	ssl.truststore.location = null
	ssl.truststore.password = null
	ssl.truststore.type = JKS
	transaction.timeout.ms = 60000
	transactional.id = null
	value.serializer = class org.apache.kafka.common.serialization.StringSerializer

[2023-12-15T15:41:05,493][INFO ][org.apache.kafka.common.utils.AppInfoParser][main] Kafka version: 2.5.1
[2023-12-15T15:41:05,496][INFO ][org.apache.kafka.common.utils.AppInfoParser][main] Kafka commitId: 0efa8fb0f4c73d92
[2023-12-15T15:41:05,500][INFO ][org.apache.kafka.common.utils.AppInfoParser][main] Kafka startTimeMs: 1702654865488
[2023-12-15T15:41:05,803][INFO ][logstash.javapipeline    ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>250, "pipeline.sources"=>["/etc/logstash/conf.d/test-logs.conf"], :thread=>"#<Thread:0x3e3d9d39 run>"}
[2023-12-15T15:41:05,943][INFO ][org.apache.kafka.clients.Metadata][main] [Producer clientId=producer-1] Cluster ID: 4tWJyX0jSNKuqIdBOV3Nmw
[2023-12-15T15:41:06,544][INFO ][logstash.javapipeline    ][main] Pipeline Java execution initialization time {"seconds"=>0.74}
[2023-12-15T15:41:06,599][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}
[2023-12-15T15:41:06,660][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2023-12-15T15:41:06,687][INFO ][filewatch.observingtail  ][main][c9dac137fd7d19ee8952076e6fa3e806d1f78c7df43892a9047c470499178a6e] START, creating Discoverer, Watch with file and sincedb collections

Yeah, it is correct, I just checked here.

Your issue is probably here:

path => "/root/db/kms/kms_manager_log/logs/kms_manager_rCURRENT.log"

Your file input is configured to read something inside the /root directory, and only the root user has access to that path; the logstash service is executed under the logstash user.

You should move this file to a different path where the logstash user has permission to read it.

You should not change the permissions of the /root path, nor configure logstash to run as root.
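To make that concrete, here is a minimal sketch (plain POSIX sh, my own helper, not part of Logstash) that reads the "other" permission bits of a directory. /root is normally mode 700, so it fails this check, which is why the logstash user cannot reach anything beneath it no matter how the file itself is chmodded:

```shell
#!/bin/sh
# Check whether a path component is traversable (searchable) by
# "other" users, such as the logstash service user, assuming it is
# not in any special group that owns the path.
other_can_traverse() {
  mode=$(stat -c %a "$1") || return 2   # octal mode, e.g. 700 or 755
  last=${mode#"${mode%?}"}              # final octal digit = "other" bits
  [ $(( last & 1 )) -eq 1 ]             # execute (search) bit set?
}

if other_can_traverse /root; then
  echo "/root is traversable by other users"
else
  echo "/root blocks other users (expected on most systems)"
fi
```

A directory needs that execute bit for a non-owner to descend into it; the read bit on the final file alone is not enough.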

I'm also running Logstash on a separate machine with the same configuration but a different path, as shown below, and the same issue is happening: only when running Logstash from the command line are the logs read. Also, I'm confused about why it would read from /root when run from the command line.

 path => "/usr/sa/QAB/lss/shared/data/trace/SYSTEMDB/lss_18-01.SYSTEMDB.001.trc"

You mentioned that you ran it on the command line as the root user; this gives the Logstash process access to everything on your system.

It is probably the same thing: the logstash service runs under the logstash user, so if you are going to use a file input, the logstash user needs to have access to both the path and the file that you want to read.

You need to put those files on a path that the logstash user can read.
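As a quick sanity check before restarting the service, a sketch (plain POSIX sh, nothing Logstash-specific): list the permissions of the file and of every parent directory, since each component needs to be searchable by the logstash user:

```shell
#!/bin/sh
# Print the permissions of a file and of every directory above it.
# A -rw-r--r-- file is still unreadable to the logstash user if any
# directory on the way to it (like /root, usually mode 700) denies
# the execute/search bit.
walk_perms() {
  p="$1"
  while [ -n "$p" ] && [ "$p" != "/" ]; do
    ls -ld "$p" 2>/dev/null || echo "cannot stat $p"
    p=$(dirname "$p")
  done
}

# Substitute the path from your file input; /etc/hosts is just a demo:
walk_perms /etc/hosts
```

If any line shows a directory without an applicable x bit for the logstash user, that component blocks the read.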

Thank you @leandrojmp, your proposed solution worked 🙂

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.