Logstash not seeing conf files in conf.d dir

I am not new to Logstash, but I am new to 5.x.

My issue is with files in the /etc/logstash/conf.d directory. When I create my configuration file in this directory and start Logstash as a daemon, it does nothing. When I start Logstash with /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logstash.conf, it works just fine.

I checked the permissions: the file is owned by root:root and set to 777 for testing. Not sure why Logstash as a daemon isn't seeing it. Should the file's ownership be changed to logstash?

How did you install Logstash? What OS are you using? What startup command are you using?

CentOS 7
Installed from yum
systemctl start logstash

You say 5.x. Which version, exactly?

Logstash 5.2

.0? .1?

5.2.1

Maybe start your command line with verbose:

/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logstash.conf --verbose

When you installed via YUM, did you by any chance see a message like this?

Could not find any executable java binary. Please install java in your PATH or set JAVA_HOME.

In context, the entire output looks like:

Dependencies Resolved

===============================================================================================================================================================================================================================================================
 Package                                                      Arch                                                       Version                                                        Repository                                                        Size
===============================================================================================================================================================================================================================================================
Installing:
 logstash                                                     noarch                                                     1:5.2.1-1                                                      logstash-5.x                                                      91 M

Transaction Summary
===============================================================================================================================================================================================================================================================
Install  1 Package

Total download size: 91 M
Installed size: 167 M
Is this ok [y/d/N]: y
Downloading packages:
logstash-5.2.1.rpm                                                                                                                                                                                                                      |  91 MB  00:00:07
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : 1:logstash-5.2.1-1.noarch                                                                                                                                                                                                                   1/1
Could not find any executable java binary. Please install java in your PATH or set JAVA_HOME.
warning: %post(logstash-1:5.2.1-1.noarch) scriptlet failed, exit status 1
Non-fatal POSTIN scriptlet failure in rpm package 1:logstash-5.2.1-1.noarch
  Verifying  : 1:logstash-5.2.1-1.noarch                                                                                                                                                                                                                   1/1

Installed:
  logstash.noarch 1:5.2.1-1

This happens if JAVA_HOME is not set when you run the yum install command.
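A quick way to check what the package scriptlet would have seen (a generic sketch, nothing Logstash-specific):

```shell
# Is a java binary on PATH, and is JAVA_HOME set? The "<unset>"
# fallback text is just for display, not a real value.
command -v java || echo "java not found on PATH"
echo "JAVA_HOME=${JAVA_HOME:-<unset>}"
```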

When it is properly set, this is the result:

Transaction test succeeded
Running transaction
  Installing : 1:logstash-5.2.1-1.noarch                                                                                                                                                                                                                   1/1
Using provided startup.options file: /etc/logstash/startup.options
Successfully created system startup script for Logstash
  Verifying  : 1:logstash-5.2.1-1.noarch

After adding 3 config files (one input, one filter, one output, for the sake of file merge testing), it works for me:

[root@centos7-pkg-test conf.d]# systemctl start logstash
[root@centos7-pkg-test conf.d]# systemctl status logstash
● logstash.service - logstash
   Loaded: loaded (/etc/systemd/system/logstash.service; disabled; vendor preset: disabled)
   Active: active (running) since Tue 2017-02-21 18:48:32 UTC; 5s ago
 Main PID: 1011 (java)
   CGroup: /system.slice/logstash.service
           └─1011 /usr/bin/java -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+DisableExplicitGC -Djava.awt.headless=true -Dfile.encoding=UTF-8 -XX:+HeapDumpOnOutOfMemoryError -Xmx...

Feb 21 18:48:32 centos7-pkg-test.untergeek.net systemd[1]: Started logstash.
Feb 21 18:48:32 centos7-pkg-test.untergeek.net systemd[1]: Starting logstash...
[root@centos7-pkg-test conf.d]# ps auwwx | grep jav
logstash  1011  0.0 53.5 4102040 280764 ?      SNsl 18:48   0:10 /usr/bin/java -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+DisableExplicitGC -Djava.awt.headless=true -Dfile.encoding=UTF-8 -XX:+HeapDumpOnOutOfMemoryError -Xmx1g -Xms256m -Xss2048k -Djffi.boot.library.path=/usr/share/logstash/vendor/jruby/lib/jni -Xbootclasspath/a:/usr/share/logstash/vendor/jruby/lib/jruby.jar -classpath : -Djruby.home=/usr/share/logstash/vendor/jruby -Djruby.lib=/usr/share/logstash/vendor/jruby/lib -Djruby.script=jruby -Djruby.shell=/bin/sh org.jruby.Main /usr/share/logstash/lib/bootstrap/environment.rb logstash/runner.rb --path.settings /etc/logstash

Note that you should not have to uninstall and reinstall with a JAVA_HOME exported. You can simply run:

export JAVA_HOME=/path/to/java_home
/usr/share/logstash/bin/system-install /etc/logstash/startup.options

This will regenerate the systemd startup files for you. Note: you don't actually need to provide startup.options; it will be selected automatically. I include it here so you know which file is being read.

I apologize for the inconvenience. It is unfortunate that RHEL-based systems do not make the JAVA_HOME setting more automatic.

Thank you for the troubleshooting. A quick note on the install page could assist in this issue.

As always, one of the best support communities for an open source tool.

Testing now. Will report back.

export JAVA_HOME=/usr/java/jdk1.8.0_73/
sudo /usr/share/logstash/bin/system-install /etc/logstash/startup.options
Using provided startup.options file: /etc/logstash/startup.options
Successfully created system startup script for Logstash

Started Logstash again, but starting it with sudo systemctl start logstash still doesn't appear to read the directory.

The logstash-plain.log shows:

[2017-02-21T14:11:30,181][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>50001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"_all"=>{"enabled"=>true, "norms"=>false}, "dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword"}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date", "include_in_all"=>false}, "@version"=>{"type"=>"keyword", "include_in_all"=>false}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2017-02-21T14:11:30,190][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>[#]}
[2017-02-21T14:11:30,194][INFO ][logstash.pipeline        ] Starting pipeline {"id"=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>250}
[2017-02-21T14:11:30,195][INFO ][logstash.pipeline        ] Pipeline main started
[2017-02-21T14:11:30,228][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}

I am starting it with /usr/share/logstash/bin/logstash --debug -f /etc/logstash/conf.d/logstash.conf to see if there are any new errors.

The contents of my conf file are:

input {
  file {
    path => "/home/tdesroch/test_url/*"
    type => "urllog"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}

filter {
  csv {
    separator => "      "
    columns => ["timestamp","src_ip","src_port","dst_ip","dst_port","method","host","uri","referrer","user_agent"]
  }
  date {
    match => [ "timestamp", "ISO8601" ]
  }
}

output {
  if [type] == "urllog" {
    elasticsearch {
      hosts => [ "host1" ]
      index => "%{type}-%{+YYYY.MM.dd}"
    }
  }
#    stdout { codec => rubydebug }
#  }
}

Still confused. Works when starting from the command line, just not as a service. :frowning:
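One thing worth trying for comparison (a sketch using the paths already shown in this thread): run Logstash from the command line the same way the service does, with --path.settings, so that logstash.yml, and its path.config, is actually read. A plain -f run typically does not read /etc/logstash/logstash.yml at all, so "works with -f, fails as a service" can simply mean the two runs are using different settings.

```shell
# Stop the service first so two instances don't collide on ports/data
sudo systemctl stop logstash

# Replicate the service invocation: same settings directory that the
# unit file passes (see the ps output above: --path.settings
# /etc/logstash), run as the logstash user so permissions problems
# surface too.
sudo -u logstash /usr/share/logstash/bin/logstash --path.settings /etc/logstash
```

If this run also fails to pick up conf.d, the problem is in the settings or permissions, not in systemd.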

What are the contents of /etc/logstash/logstash.yml?

I believe these are the defaults; I forget if I changed anything. Very different from the config file for 2.x. I was going to take a harder look at things when I started to use ingest nodes.

# Settings file in YAML
#
# Settings can be specified either in hierarchical form, e.g.:
#
#   pipeline:
#     batch:
#       size: 125
#       delay: 5
#
# Or as flat keys:
#
#   pipeline.batch.size: 125
#   pipeline.batch.delay: 5
#
# ------------  Node identity ------------
#
# Use a descriptive name for the node:
#
node.name: IRT_LS_1 
#
# If omitted the node name will default to the machine's host name
#
# ------------ Data path ------------------
#
# Which directory should be used by logstash and its plugins
# for any persistent needs. Defaults to LOGSTASH_HOME/data
#
path.data: /var/lib/logstash
#
# ------------ Pipeline Settings --------------
#
# Set the number of workers that will, in parallel, execute the filters+outputs
# stage of the pipeline.
#
# This defaults to the number of the host's CPU cores.
#
# pipeline.workers: 2
#
# How many workers should be used per output plugin instance
#
# pipeline.output.workers: 1
#
# How many events to retrieve from inputs before sending to filters+workers
#
# pipeline.batch.size: 125
#
# How long to wait before dispatching an undersized batch to filters+workers
# Value is in milliseconds.
#
# pipeline.batch.delay: 5
#
# Force Logstash to exit during shutdown even if there are still inflight
# events in memory. By default, logstash will refuse to quit until all
# received events have been pushed to the outputs.
#
# WARNING: enabling this can lead to data loss during shutdown
#
# pipeline.unsafe_shutdown: false
#
# ------------ Pipeline Configuration Settings --------------
#
# Where to fetch the pipeline configuration for the main pipeline
#
path.config: /etc/logstash/conf.d
#
# Pipeline configuration string for the main pipeline
#
# config.string:
#
# At startup, test if the configuration is valid and exit (dry run)
#
# config.test_and_exit: false
#
# Periodically check if the configuration has changed and reload the pipeline
# This can also be triggered manually through the SIGHUP signal
#
# config.reload.automatic: false
#
# How often to check if the pipeline configuration has changed (in seconds)
#
config.reload.interval: 3
#
# Show fully compiled configuration as debug log message
# NOTE: --log.level must be 'debug'
#
# config.debug: false
#
# ------------ Queuing Settings --------------
#
# Internal queuing model, "memory" for legacy in-memory based queuing and
# "persisted" for disk-based acked queueing. Defaults is memory
#
# queue.type: memory
#
# If using queue.type: persisted, the directory path where the data files will be stored.
# Default is path.data/queue
#
# path.queue:
#
# If using queue.type: persisted, the page data files size. The queue data consists of
# append-only data files separated into pages. Default is 250mb
#
# queue.page_capacity: 250mb
#
# If using queue.type: persisted, the maximum number of unread events in the queue.
# Default is 0 (unlimited)
#
# queue.max_events: 0
#
# If using queue.type: persisted, the total capacity of the queue in number of bytes.
# If you would like more unacked events to be buffered in Logstash, you can increase the 
# capacity using this setting. Please make sure your disk drive has capacity greater than 
# the size specified here. If both max_bytes and max_events are specified, Logstash will pick 
# whichever criteria is reached first
# Default is 1024mb or 1gb
#
# queue.max_bytes: 1024mb
#
# If using queue.type: persisted, the maximum number of acked events before forcing a checkpoint
# Default is 1024, 0 for unlimited
#
# queue.checkpoint.acks: 1024
#
# If using queue.type: persisted, the maximum number of written events before forcing a checkpoint
# Default is 1024, 0 for unlimited
#
# queue.checkpoint.writes: 1024
#
# If using queue.type: persisted, the interval in milliseconds when a checkpoint is forced on the head page
# Default is 1000, 0 for no periodic checkpoint.
#
# queue.checkpoint.interval: 1000
#
# ------------ Metrics Settings --------------
#
# Bind address for the metrics REST endpoint
#
# http.host: "127.0.0.1"
#
# Bind port for the metrics REST endpoint, this option also accept a range
# (9600-9700) and logstash will pick up the first available ports.
#
# http.port: 9600-9700
#
# ------------ Debugging Settings --------------
#
# Options for log.level:
#   * fatal
#   * error
#   * warn
#   * info (default)
#   * debug
#   * trace
#
# log.level: info
path.logs: /var/log/logstash
#
# ------------ Other Settings --------------
#
# Where to find custom plugins
# path.plugins: []

Hmmm. I copied, and slightly modified, your logstash.conf:

input {
  file {
    path => "/tmp/logstash/test/*"
    type => "urllog"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}

filter {
  csv {
    separator => "      "
    columns => ["timestamp","src_ip","src_port","dst_ip","dst_port","method","host","uri","referrer","user_agent"]
  }
  date {
    match => [ "timestamp", "ISO8601" ]
  }
}

output {
#  if [type] == "urllog" {
#    elasticsearch {
#      hosts => [ "host1" ]
#      index => "%{type}-%{+YYYY.MM.dd}"
#    }
#  }
    stdout { codec => rubydebug }
#  }
}

I don't have your sample data, so I added non-valid stuff to a file in that path, and even appended to it while Logstash was running.

Logstash starts for me:

[root@centos7-pkg-test logstash]# systemctl start logstash
[root@centos7-pkg-test logstash]# systemctl status logstash
● logstash.service - logstash
   Loaded: loaded (/etc/systemd/system/logstash.service; disabled; vendor preset: disabled)
   Active: active (running) since Tue 2017-02-21 19:35:01 UTC; 4s ago
 Main PID: 1259 (java)
   CGroup: /system.slice/logstash.service
           └─1259 /usr/bin/java -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+DisableExplicitGC -Djava.awt.headless=true -Dfile.encoding=UTF-8 -XX:+HeapDumpOnOutOfMemoryError -Xmx...

Feb 21 19:35:01 centos7-pkg-test systemd[1]: Started logstash.
Feb 21 19:35:01 centos7-pkg-test systemd[1]: Starting logstash...
[root@centos7-pkg-test logstash]# ps auwwx | grep jav
logstash  1259  0.0 47.8 4102040 250792 ?      SNsl 19:35   0:06 /usr/bin/java -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+DisableExplicitGC -Djava.awt.headless=true -Dfile.encoding=UTF-8 -XX:+HeapDumpOnOutOfMemoryError -Xmx1g -Xms256m -Xss2048k -Djffi.boot.library.path=/usr/share/logstash/vendor/jruby/lib/jni -Xbootclasspath/a:/usr/share/logstash/vendor/jruby/lib/jruby.jar -classpath : -Djruby.home=/usr/share/logstash/vendor/jruby -Djruby.lib=/usr/share/logstash/vendor/jruby/lib -Djruby.script=jruby -Djruby.shell=/bin/sh org.jruby.Main /usr/share/logstash/lib/bootstrap/environment.rb logstash/runner.rb --path.settings /etc/logstash

And, sure enough, those look like the defaults, except for your node name.

Here is what I get with systemctl status logstash:

● logstash.service - logstash
   Loaded: loaded (/etc/systemd/system/logstash.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2017-02-21 14:10:50 EST; 35min ago
 Main PID: 11354 (java)
   CGroup: /system.slice/logstash.service
           └─11354 /usr/bin/java -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+DisableExplicitGC -Djava.awt.headless=true -Dfile.encoding=UTF-8 -XX:+HeapDumpOnOutOfMemoryError -Xmx12g -Xms1g -Xss2048k -Djff...

Feb 21 14:10:50 gs-3285-logstash1 systemd[1]: Started logstash.
Feb 21 14:10:50 gs-3285-logstash1 systemd[1]: Starting logstash...
Feb 21 14:11:28 gs-3285-logstash1 logstash[11354]: Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties

Logstash starts just fine; it just doesn't appear to read the conf.d directory contents. That's why I initially thought it might be a permissions issue. But the entire directory under /etc/logstash is owned by root. I installed it as my user with sudo.

Permissions shouldn't matter if the full path allows for world read (and for directories, execute). My files and directories are all owned by root.
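To verify world-read along the entire path in one shot (a generic check; the path is the one from this thread):

```shell
# namei -l prints mode, owner, and group for every component of the
# path. For the logstash user to read the file, every directory needs
# r-x for "other" and the file itself needs r--.
namei -l /etc/logstash/conf.d/logstash.conf
```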

Hmmm. As a note, when Logstash starts, the log shows:

[2017-02-21T14:52:53,551][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>[#]}

I see the above line, and in it I see the IP of my Elasticsearch host. So I'm inclined to say it is seeing my conf file but not ingesting the data?

Entirely possible. Did you add the sincedb_path => "/dev/null" after a first run, or from the very start? If after a first run, you may still have a sincedb stored in the default --path.data (should be /var/lib/logstash).
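A quick way to look for leftovers (the paths here are assumptions: path.data from the logstash.yml above, plus the invoking user's home directory, where older CLI runs could leave one):

```shell
# Any leftover sincedb under the service's path.data?
sudo find /var/lib/logstash -name '.sincedb*' 2>/dev/null

# A command-line run may have written one under the invoking
# user's home directory instead.
ls -la ~/.sincedb* 2>/dev/null || true
```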

In troubleshooting things like this, it's often useful to have the file plugin point to a regular file, and then append data to that file after Logstash starts. That way the sincedb doesn't matter, as the data is "new."
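For example (the file name and content are made up; the line is tab-separated on the assumption that the separator in the csv filter above is a tab, matching its ten columns):

```shell
# With path => "/tmp/logstash/test/urls.log" in the file input, append
# a line AFTER Logstash is running. Appended data is always "new"
# regardless of any stored sincedb, so with the rubydebug stdout codec
# it should show up in the output immediately.
printf '2017-02-21T15:00:00Z\t1.2.3.4\t51234\t5.6.7.8\t80\tGET\texample.com\t/index.html\t-\tcurl/7.29.0\n' \
  >> /tmp/logstash/test/urls.log
```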