Logstash not sending logs to Elasticsearch

CentOS Linux release 7.2.1511 (Core)

Using the latest Logstash and ES versions:

Name        : elasticsearch
Arch        : noarch
Version     : 2.4.0
Release     : 1

Name        : logstash
Arch        : noarch
Epoch       : 1
Version     : 2.4.0
Release     : 1

I have a 2-node ES cluster. The nodes are bound to their NICs' IPs in the config file (10.1.1.1, and 10.1.1.2 for the other node):

network.host: 10.1.1.1
Curling 10.1.1.1:9200 successfully responds with ES cluster info:

```json
{
  "cluster_name": "mycluster",
  "status": "green",
  "timed_out": false,
  "number_of_nodes": 2,
  "number_of_data_nodes": 2,
  "active_primary_shards": 0,
  "active_shards": 0,
  "relocating_shards": 0,
  "initializing_shards": 0,
  "unassigned_shards": 0,
  "delayed_unassigned_shards": 0,
  "number_of_pending_tasks": 0,
  "number_of_in_flight_fetch": 0,
  "task_max_waiting_in_queue_millis": 0,
  "active_shards_percent_as_number": 100
}
```

I installed Logstash on one of the nodes and followed this tutorial, using the example log data it provides.

I used the same config file as the tutorial but replaced "localhost" with "10.1.1.1".
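For reference (since the exact file isn't quoted here), a minimal config along the lines of that tutorial might look like the sketch below; the log path is an assumption, not a value from the original post:

```conf
input {
  file {
    # Hypothetical path to the tutorial's sample log; adjust to your location.
    path => "/opt/logstash/logstash-tutorial.log"
  }
}

output {
  elasticsearch {
    # Point at the ES node instead of localhost.
    hosts => ["10.1.1.1:9200"]
  }
}
```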

I see this output with no errors:

When I look at the ES nodes I see no indices have been created (checking with 10.1.1.1:9200/_cat/indices?v).

There are no logs in /var/log/logstash.

The ES logs in /var/log/elasticsearch aren't touched or changed after running this command.

Running the Logstash command with --verbose doesn't show any errors, just messages about loading filters; then it just sits at "Pipeline main started" and outputs nothing.

I stopped the firewalld service, but still nothing.

What does your LS config look like, though?

Also, please don't post pictures of text; they are difficult to read, and some people may not even be able to see them :slight_smile:

Figured out the issue.

OK, so the issue was that I specified the direct path of the log file (not *.log as shown in the screenshot).

According to the tutorial (and the file plugin docs), specifying start_position => 0 and ignore_older => 0 should suck in the log file and any pre-existing data in it, but it was not doing that. Only when I started the Logstash process and pushed new data into the log file with echo >> did Logstash finally push data into Elasticsearch.
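That workaround step can be sketched like this; the path and log line are made up for illustration:

```shell
# Hypothetical path to the watched log file; substitute your own.
LOGFILE=/tmp/logstash-tutorial.log

# Append a fresh line, as the echo >> workaround did.
echo '83.149.9.216 - - [04/Jan/2015:05:13:42 +0000] "GET /presentations HTTP/1.1" 200 7697' >> "$LOGFILE"

# The tailing file input should pick up the new line within its polling interval.
tail -n 1 "$LOGFILE"
```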

So my question is: why does it not process the log's pre-existing data when I specify the direct path of a log file? Why is *.log necessary for it to grab the pre-existing data?

What does the config you used look like?

start_position => 0

That's not a valid value for that option. It should be either "beginning" or "end".
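In other words, the option takes a string, not a number. A corrected file input would look like the following (the path is illustrative):

```conf
input {
  file {
    # Or a glob like "/opt/logstash/*.log".
    path => "/opt/logstash/logstash-tutorial.log"
    # Not 0; valid values are "beginning" and "end".
    start_position => "beginning"
  }
}
```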

So my question is: why does it not process the log's pre-existing data when I specify the direct path of a log file? Why is *.log necessary for it to grab the pre-existing data?

What probably happened here was that Logstash was tailing the file, and when you changed the filename pattern to a wildcard, the sincedb path changed. With an empty sincedb file, start_position => "beginning" will actually result in the file being read from the beginning.
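If you want the file re-read from the start on every run while testing, one common approach is to point the sincedb at /dev/null so no offset is ever remembered (path below is illustrative; this is for throwaway testing only, not production):

```conf
input {
  file {
    path => "/opt/logstash/logstash-tutorial.log"
    start_position => "beginning"
    # /dev/null discards the recorded offset, so the file is
    # re-read from the beginning each time Logstash starts.
    sincedb_path => "/dev/null"
  }
}
```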