Logserver in AWS not receiving any logs

I have set up a log server using the ELK stack on an AWS Ubuntu instance, and I need to send syslogs from my production server to it. For that, Filebeat is installed on the production server.

The contents of the configuration files are given below.

elasticsearch.yml

bootstrap.mlockall: true
network.host: 0.0.0.0
http.port: 9200

kibana.yml

server.port: 5601

server.host: "172.30.0.28"

I have used an nginx proxy server; here is its 'default' server block.

server {
    listen 80;

    server_name aws_public_address;

    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/htpasswd.users;

    location / {
        proxy_pass http://172.30.0.28:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        allow all;
    }
}

Logstash config file logstash_syslog.conf

input {
  tcp {
    port => 5040
    type => 'syslog'
  }
}

filter {
  if [type] == "syslog" {
    grok {
      match => [ "TimeCreated", "Date\(%{NUMBER:timestamp}\)" ]
    }
    date {
      match => [ "timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

output {
  if [type] == "syslog" {
    elasticsearch {
      hosts => ["localhost:9200"]
    }
  }
}
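For reference, the `%{NUMBER:timestamp}` grok syntax above expands to roughly the regex in this Python sketch (the `Date(...)` sample value is hypothetical; note that plain `/var/log/syslog` lines would not normally carry a `TimeCreated` field, so this filter may never match real syslog input):

```python
import re

# %{NUMBER} in grok is roughly an optionally signed integer or decimal,
# so Date\(%{NUMBER:timestamp}\) becomes approximately:
pattern = re.compile(r"Date\((?P<timestamp>[+-]?\d+(?:\.\d+)?)\)")

# Hypothetical TimeCreated value, for illustration only.
sample = "Date(1456692441)"
match = pattern.search(sample)
print(match.group("timestamp"))  # the captured epoch-style number
```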

filebeat.yml on the production server

filebeat:
  prospectors:
    -
      paths:
        - /var/log/syslog
      input_type: log
      document_type: syslog
  registry_file: "C:/ProgramData/filebeat/registry"

output:  
  logstash:
    hosts: ["aws_public_ip:5040"]

I can't see any logs in the Kibana view on the log server. Please help me solve this problem.

Thanks in advance

Hello,

Did you allow connections to this instance from your server/network? Also, turn on logging in your Filebeat and set it to debug so you can see what happens. Here's an example logging configuration for Filebeat:
logging:
  to_files: true
  files:
    path: /var/log/mybeat/
    name: filebeat.log
    level: debug

@ngv I have allowed HTTP , SSH and ICMP (Echo Request and Echo Reply) protocols for the instance.
And in my Filebeat log I am getting:

    2016-02-23T05:31:13-05:00 DBG  Flushing spooler because of timeout. Events flushed: 0
    2016-02-23T05:31:15-05:00 DBG  End of file reached: /var/log/mysql.log; Backoff now.
    2016-02-23T05:31:20-05:00 DBG  Flushing spooler because of timeout. Events flushed: 0
    2016-02-23T05:31:20-05:00 DBG  Start next scan
    2016-02-23T05:31:20-05:00 DBG  scan path /var/log/mysql.log
    2016-02-23T05:31:20-05:00 DBG  Check file for harvesting: /var/log/mysql.log
    2016-02-23T05:31:20-05:00 DBG  Update existing file for harvesting: /var/log/mysql.log

Hey,

You need to create a rule that allows your production server/network external IP address(es) to connect to port 5040 on this instance, because none of the protocols you listed is used in this case.

Now I have allowed all TCP for the whole port range from any address. Still no logs are being sent.

Can you share a little bit more of your debug log (especially the startup part) to see what happens?

OK, here is the starting part of filebeat.log:

2016-02-23T22:23:18-05:00 DBG  Disable stderr logging
2016-02-23T22:23:18-05:00 DBG  Initializing output plugins
2016-02-23T22:23:18-05:00 INFO GeoIP disabled: No paths were set under output.geoip.paths
2016-02-23T22:23:18-05:00 INFO Max Retries set to: 3
2016-02-23T22:23:18-05:00 DBG  connect
2016-02-23T22:23:18-05:00 INFO Activated logstash as output plugin.
2016-02-23T22:23:18-05:00 DBG  Create output worker
2016-02-23T22:23:18-05:00 DBG  No output is defined to store the topology. The server fields might not be filled.
2016-02-23T22:23:18-05:00 INFO Publisher name: xdrgprod.ontash.local
2016-02-23T22:23:18-05:00 INFO Flush Interval set to: 1s
2016-02-23T22:23:18-05:00 INFO Max Bulk Size set to: 2048
2016-02-23T22:23:18-05:00 DBG  create bulk processing worker (interval=1s, bulk size=2048)
2016-02-23T22:23:18-05:00 INFO Init Beat: filebeat; Version: 1.1.1
2016-02-23T22:23:18-05:00 INFO filebeat sucessfully setup. Start running.
2016-02-23T22:23:18-05:00 INFO Registry file set to: /var/lib/filebeat/registry
2016-02-23T22:23:18-05:00 INFO Loading registrar data from /var/lib/filebeat/registry
2016-02-23T22:23:19-05:00 DBG  Set idleTimeoutDuration to 5s
2016-02-23T22:23:19-05:00 DBG  File Configs: [/var/log/syslog]
2016-02-23T22:23:19-05:00 INFO Set ignore_older duration to 24h0m0s
2016-02-23T22:23:19-05:00 INFO Set scan_frequency duration to 10s
2016-02-23T22:23:19-05:00 INFO Input type set to: log
2016-02-23T22:23:19-05:00 INFO Set backoff duration to 1s
2016-02-23T22:23:19-05:00 INFO Set max_backoff duration to 10s
2016-02-23T22:23:19-05:00 INFO force_close_file is disabled
2016-02-23T22:23:19-05:00 DBG  Waiting for 1 prospectors to initialise
2016-02-23T22:23:19-05:00 INFO Starting prospector of type: log
2016-02-23T22:23:19-05:00 DBG  exclude_files: []
2016-02-23T22:23:19-05:00 DBG  scan path /var/log/syslog
2016-02-23T22:23:19-05:00 DBG  Check file for harvesting: /var/log/syslog
2016-02-23T22:23:19-05:00 DBG  Start harvesting unknown file: /var/log/syslog
2016-02-23T22:23:19-05:00 DBG  Launching harvester on new file: /var/log/syslog
2016-02-23T22:23:19-05:00 DBG  scan path /var/log/syslog
2016-02-23T22:23:19-05:00 DBG  Check file for harvesting: /var/log/syslog
2016-02-23T22:23:19-05:00 DBG  Update existing file for harvesting: /var/log/syslog
2016-02-23T22:23:19-05:00 DBG  Not harvesting, file didn't change: /var/log/syslog
2016-02-23T22:23:19-05:00 DBG  No pending prospectors. Finishing setup
2016-02-23T22:23:19-05:00 INFO All prospectors initialised with 0 states to persist
2016-02-23T22:23:19-05:00 INFO Starting Registrar
2016-02-23T22:23:19-05:00 INFO Start sending events to output
2016-02-23T22:23:19-05:00 DBG  harvest: "/var/log/syslog" (offset snapshot:0)
2016-02-23T22:23:19-05:00 INFO Harvester started for file: /var/log/syslog
2016-02-23T22:23:19-05:00 DBG  full line read
2016-02-23T22:23:19-05:00 DBG  full line read

Can you try to telnet to the Elasticsearch server on port 5040 and see if you can connect from the server you are sending logs from? Also, I've just noticed in your config file that you are NOT using beats as input:
input {
  tcp {
    port => 5040
    type => 'syslog'
  }
}

If you are sending logs via Filebeat, it should look like this:

input {
  beats {
    port => 5040
    type => 'syslog'
  }
}
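The telnet check mentioned above can also be scripted. A minimal sketch using Python's standard `socket` module, demonstrated here against a throwaway local listener rather than the real Logstash host (in practice you would check `aws_public_ip` on port 5040):

```python
import socket

def port_open(host, port, timeout=5.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demo against a throwaway local listener standing in for Logstash.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))     # let the OS pick a free port
listener.listen(1)
port = listener.getsockname()[1]

print(port_open("127.0.0.1", port))   # True: something is listening
listener.close()
print(port_open("127.0.0.1", port))   # False: listener is gone
```

If this returns False against the Logstash host, the problem is network-level (security group, firewall) rather than anything in the Logstash pipeline.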

Sorry, I've got a new issue with Kibana.
Kibana is not responding now. When I start Kibana it shows 'started', but when I check the status, it shows Kibana is not running.
I checked the Kibana log, but there is nothing special for this case.

@babeesh Does that mean your previous issue is resolved?

In case of a Kibana issue, it is best to post it in the kibana forum: https://discuss.elastic.co/c/kibana

@ruflin I already posted the Kibana issue in the Kibana forum.
It was a disk-full error on the AWS EC2 instance. I reinstalled Elasticsearch and now Kibana is running.
But the issue in this thread still exists: I can't see any logs in the Kibana view. I made some modifications to the Logstash configuration.

input {
  beats {
    type => 'xdrgprod'
    port => 5044
    codec => json {
      charset => 'UTF-8'
    }
  }
}

filter {
  if [type] == 'xdrgprod' {
    grok {
      match => [ 'TimeCreated', "Date\(%{NUMBER:timestamp}\)" ]
    }
    date {
      match => [ "timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

output {
  if [type] == 'xdrgprod' {
    elasticsearch {
      hosts => ["172.30.0.28:9200"]
    }
    stdout { codec => rubydebug }
  }
}
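For reference, the two `date` patterns above correspond to syslog's space-padded and unpadded day-of-month forms. A small Python sketch of the equivalent `strptime` format, using hypothetical sample timestamps:

```python
from datetime import datetime

# "MMM dd HH:mm:ss" in Logstash/Joda terms maps to %b %d %H:%M:%S in
# strptime; CPython's strptime treats whitespace in the format as a run
# of whitespace, so it also accepts syslog's padded form "Mar  1".
for sample in ["Feb 28 16:47:21", "Mar  1 07:25:11"]:
    parsed = datetime.strptime(sample, "%b %d %H:%M:%S")
    print(parsed.month, parsed.day, parsed.hour)
```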

And logstash.log

{:timestamp=>"2016-02-29T09:12:39.156000+0000", :message=>"JSON parse failure. Falling back to plain-text", :error=>#<LogStash::Json::ParserError: Unrecognized token 'Feb': was expecting ('true', 'false' or 'null')
 at [Source: [B@6d2b888a; line: 1, column: 5]>, :data=>"Feb 28 16:47:21 xdrgprod /usr/bin/filebeat[19210]: reader.go:138: End of file reached: /var/log/mysql.log; Backoff now.", :level=>:error}
{:timestamp=>"2016-02-29T09:12:39.156000+0000", :message=>"JSON parse failure. Falling back to plain-text", :error=>#<LogStash::Json::ParserError: Unrecognized token 'Feb': was expecting ('true', 'false' or 'null')
 at [Source: [B@21bb5db7; line: 1, column: 5]>, :data=>"Feb 28 16:47:24 xdrgprod /usr/bin/filebeat[19210]: prospector.go:179: Start next scan", :level=>:error}
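This error is what any JSON parser does with a plain-text syslog line: the first token `Feb` is not valid JSON, so the `json` codec falls back to plain text. A minimal Python sketch of the same failure mode, using the line from the log above:

```python
import json

# A raw syslog line, as Filebeat ships it: plain text, not JSON.
line = ("Feb 28 16:47:21 xdrgprod /usr/bin/filebeat[19210]: reader.go:138: "
        "End of file reached: /var/log/mysql.log; Backoff now.")

try:
    json.loads(line)
except json.JSONDecodeError as err:
    # Fails on the first token: 'Feb' is not a JSON value.
    print("parse failure:", err.msg)
```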

Edit: Adding a little bit more logstash.log

{:timestamp=>"2016-02-29T09:49:11.242000+0000", :message=>"Beats Input: Remote connection closed", :peer=>"172.30.0.28:36463", :exception=>#<Lumberjack::Beats::Connection::ConnectionClosed: Lumberjack::Beats::Connection::ConnectionClosed wrapping: Lumberjack::Beats::Parser::UnsupportedProtocol, unsupported protocol 71>, :level=>:warn}
{:timestamp=>"2016-02-29T10:20:13.923000+0000", :message=>"SIGTERM received. Shutting down the pipeline.", :level=>:warn}

Which version of the beats-input-plugin do you use? Which version of LS? I saw in the logs above you use filebeat 1.1.1.

There seems to be some issue with the json codec. Can you remove it to test whether it works without it?

How do I check the beats-input-plugin version?
LS 2.2.x
I tried removing the json codec, but no luck.

Check the docs here: https://www.elastic.co/guide/en/beats/libbeat/1.1/logstash-installation.html#logstash-input-update But to be honest, if you are on 2.2.x, you should have the most recent version.

If you remove the codec, do you still get the unsupported protocol error for the beats input in the LS log?

Beats input version: logstash-input-beats (2.1.3)

And now logstash.log shows this error:

{:timestamp=>"2016-02-29T12:46:53.806000+0000", :message=>"SIGTERM received. Shutting down the pipeline.", :level=>:warn}
{:timestamp=>"2016-02-29T12:47:09.303000+0000", :message=>"Beats Input: Remote connection closed", :peer=>"68.236.192.171:42428", :exception=>#<Lumberjack::Beats::Connection::ConnectionClosed: Lumberjack::Beats::Connection::ConnectionClosed wrapping: EOFError, End of file reached>, :level=>:warn}
{:timestamp=>"2016-02-29T12:47:09.500000+0000", :message=>"Beats Input: Remote connection closed", :peer=>"68.236.192.171:42427", :exception=>#<Lumberjack::Beats::Connection::ConnectionClosed: Lumberjack::Beats::Connection::ConnectionClosed wrapping: Errno::EPIPE, Broken pipe - Broken pipe>, :level=>:warn}

I used this tutorial to set up ELK.

This last message is about Logstash being shut down due to SIGTERM. Is the server being shut down/rebooted? Is anyone killing or restarting Logstash?

Let's start with a very minimal config: Filebeat forwards to Logstash, and Logstash prints to the console. We reconfigure the Filebeat registry file for testing purposes. The registry_test file should be deleted after every single run of Filebeat to keep the tests reproducible.

For filebeat:

filebeat:
  prospectors:
    - paths:
        - /var/log/syslog
      input_type: log
      document_type: syslog
  registry_file: "C:/ProgramData/filebeat/registry_test"

output:  
  logstash:
    hosts: ["aws_public_ip:5044"]

and for logstash (to be run from command line as we print events to console for testing):

input {
  beats {
    port => 5044
  }
}

output {
  stdout {
    codec => rubydebug
  }
}

Is this port (input plugin) only used for input by filebeat?

Can you see events being sent to Logstash?

I tried your way with a different port number. The registry file is

registry_file: /var/lib/filebeat/registry

But still no luck.

logstash log files...

logstash.log

{:timestamp=>"2016-03-01T07:25:11.879000+0000", :message=>"Beats Input: Remote connection closed", :peer=>"68.236.192.171:35429", :exception=>#<Lumberjack::Beats::Connection::ConnectionClosed: Lumberjack::Beats::Connection::ConnectionClosed wrapping: EOFError, End of file reached>, :level=>:warn}

logstash.log.1

{:timestamp=>"2016-03-01T05:14:38.264000+0000", :message=>"Beats input: The circuit breaker has detected a slowdown or stall in the pipeline, the input is closing the current connection and rejecting new connection until the pipeline recover.", :exception=>LogStash::Inputs::BeatsSupport::CircuitBreaker::HalfOpenBreaker, :level=>:warn}
{:timestamp=>"2016-03-01T05:14:44.721000+0000", :message=>"SIGTERM received. Shutting down the pipeline.", :level=>:warn}

I need logs from both Filebeat and Logstash. These logs are just telling me the TCP connection from Filebeat to Logstash has been closed.

Have any messages been sent from Filebeat to Logstash?

It works now with a basic syslog configuration.

I disabled the firewall on the client PC, but I didn't get an immediate result. Logs were sent after approximately 30 minutes.

Thank you all