How do I make sure that Logstash received data from Filebeat?

I am trying to use Filebeat and the ELK stack for the first time. My application runs on an RHEL 5 server, Filebeat runs on a second server with RHEL 6, and the ELK stack runs on a third server, also with RHEL 6.

As suggested by many experts, I have mounted the logs folder from the RHEL 5 machine onto the Filebeat server, so the Filebeat server can now access the logs folder.

The following is the filebeat configuration file:

############################# Filebeat ######################################
filebeat:
  # List of prospectors to fetch data.
  prospectors:
    # Each - is a prospector. Below are the prospector-specific configurations.
    -
      # Paths that should be crawled and fetched. Glob-based paths.
      # To fetch all ".log" files from a specific level of subdirectories,
      # /var/log/*/*.log can be used.
      # For each file found under this path, a harvester is started.
      # Make sure no file is defined twice, as this can lead to unexpected behaviour.
      paths:
        - /mnt/cmdc_logs/*.audit

output:
  # The Logstash hosts
  logstash:
    hosts: ["10.209.26.151:5044"]

How can I verify that Filebeat is sending all the logs from the mounted folder to Logstash? Are there any logs? I don't see anything in Kibana. Once I resolve this, I can check why Kibana is not showing anything.

Here you can find the different logging options: https://www.elastic.co/guide/en/beats/filebeat/1.2/configuration-logging.html

Be aware that we do not recommend using mounted volumes: https://www.elastic.co/guide/en/beats/filebeat/1.2/filebeat-network-volumes.html
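On the Filebeat side itself, the registry file records how far each harvested file has been read; if the offsets keep growing, Filebeat is at least reading the files. A sketch, assuming the Filebeat 1.x default registry location /var/lib/filebeat/registry; the sample entry below is made up for illustration, and python3 is used only to list the JSON contents:

```shell
# The registry is plain JSON, mapping each file path to its read state.
# On a real host, point REGISTRY at /var/lib/filebeat/registry instead of
# this made-up sample.
REGISTRY=sample_registry.json
cat > "$REGISTRY" <<'EOF'
{"/mnt/cmdc_logs/server1.audit": {"source": "/mnt/cmdc_logs/server1.audit", "offset": 53312}}
EOF

# List each tracked file and its current read offset.
python3 -c '
import json, sys
for path, state in json.load(open(sys.argv[1])).items():
    print(path + " offset: " + str(state.get("offset")))
' "$REGISTRY"
```

If the offsets never move while new lines are being written to the mounted folder, the problem is on the harvesting side rather than on the connection to Logstash.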

I am getting this error in the log.
2016-05-12T10:51:43Z CRIT Unable to publish events to console: write /dev/stdout: invalid argument
2016-05-12T10:51:43Z ERR Error sending/writing event: write /dev/stdout: invalid argument
Any idea why this is happening?

/dev/stdout? Did you change your config?

This issue is seen on Filebeat. I didn't change the config at all. Can I attach my config for your reference?

Steffens
Can you please help me with this?

Please attach your full config file.

Steffens, please find the configuration file used for Logstash.

input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["10.209.26.147:9200"]
    manage_template => false
  }
}

I am afraid that I don't see any other config files used for logs. I don't see a logs folder either.
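One quick way to check whether events are reaching Logstash at all is to temporarily add a stdout output next to the elasticsearch one (a sketch based on the config above; rubydebug pretty-prints every received event on the Logstash console):

```
output {
  elasticsearch {
    hosts => ["10.209.26.147:9200"]
    manage_template => false
  }
  # temporary, for debugging: print every event Logstash receives
  stdout { codec => rubydebug }
}
```

If events show up on the console but not in Kibana, the problem is between Logstash and Elasticsearch/Kibana rather than on the Beats side.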

Hi Roshan,

Where did you add this piece of configuration?

logging:
  level: warning
  # enable file rotation with default configuration
  to_files: true
  # do not log to syslog
  to_syslog: false
  files:
    path: /var/log/mybeat
    name: mybeat.log
    keepfiles: 7

Is it in filebeat.yml?

Thanks and Regards,
Karunesh

Hello,
This is in filebeat.yml

Hi Roshan,

Are you able to see the output of filebeat?
I am also facing the same problem, and I also want to know the file transfer status and details about it.

I did some configuration, but I am not able to see any log files.

Thanks and Regards,
Karunesh Upadhyay

Can you please share your full filebeat config file?

First you need to create a mapping for the Filebeat index in Kibana; the default index pattern is filebeat-*.

filebeat:
  prospectors:
    -
      paths:
        - /var/log/secure
        - /var/log/messages
      #  - /var/log/*.log
      
      input_type: log
      
      document_type: syslog


  registry_file: /var/lib/filebeat/registry

output:
  logstash:
    hosts: ["123.12.3.54:5044"]
    bulk_max_size: 1024

    tls:
      certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]

shipper:
  name: XYZ
logging:
  files:
    rotateeverybytes: 10485760 # = 10MB
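Before creating the filebeat-* index pattern in Kibana, it is worth confirming that a filebeat index exists in Elasticsearch at all. A sketch; the host and port are the ones mentioned earlier in this thread and will differ on other setups:

```shell
# List all indices; filebeat-YYYY.MM.DD entries should appear once events
# flow end to end. --max-time keeps curl from hanging on a dead host.
curl -s --max-time 5 'http://10.209.26.147:9200/_cat/indices?v' || echo "elasticsearch not reachable"
```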

Too many unrelated configs? Sorry, I totally lost track. @Roshan_r can you please post your filebeat config file?

Do you get the same error when running filebeat on the console?

$ filebeat -e -v -c <path to filebeat config>

?

Hi Ruflin,

I am new to filebeat and logstash.
This is my first time.

################## Filebeat Configuration Example #########################

############################# Filebeat ######################################
filebeat:
  prospectors:
    -
      paths:
        - /opt/apache-tomcat-7.0.69/logs/*
      input_type: log

output:
  logstash:
    hosts: ["localhost:5044"]

logging:
  to_syslog: false
  to_files: true
  files:
    path: "/var/log"
    name: filebeat.log
    rotateeverybytes: 10485760 # = 10MB
    keepfiles: 7
  selectors: ["*"]
  level: warning

############################################################

The output is Logstash.
I start Filebeat with ./filebeat -e -v -d '*'. It is running, but I am not able to see a log folder.

Thanks and Regards,
Karunesh Upadhyay

steffens,
The full filebeat config is below.

filebeat:
  prospectors:
    -
      paths:
        - /mnt/cmdc_logs/*.audit*
      input_type: log
      document_type: my_log

output:
  logstash:
    hosts: ["10.209.26.147:5044"]
  console:
    pretty: true

shipper:

logging:
  to_files: true
  files:
    path: /var/log/mybeat
    # The name of the files where the logs are written to.
    name: mybeat
    # Configure log file size limit. If limit is reached, log file will be
    # automatically rotated.
    rotateeverybytes: 10485760 # = 10MB
    # Number of rotated log files to keep. Oldest files will be deleted first.
    keepfiles: 7
  #selectors: [ ]
  level: info

Please find the output when I run it. I don't see any errors here.

[root@astroHeka filebeat]# service filebeat start
Starting filebeat: 2016/05/18 07:49:18.047889 geolite.go:24: INFO GeoIP disabled: No paths were set under output.geoip.paths
2016/05/18 07:49:18.047934 outputs.go:126: INFO Activated console as output plugin.
2016/05/18 07:49:18.048154 logstash.go:106: INFO Max Retries set to: 3
2016/05/18 07:49:18.051179 outputs.go:126: INFO Activated logstash as output plugin.
2016/05/18 07:49:18.051793 publish.go:288: INFO Publisher name: astroHeka
2016/05/18 07:49:18.058809 async.go:78: INFO Flush Interval set to: 1s
2016/05/18 07:49:18.058832 async.go:84: INFO Max Bulk Size set to: 2048
2016/05/18 07:49:18.058931 async.go:78: INFO Flush Interval set to: 1s
2016/05/18 07:49:18.058947 async.go:84: INFO Max Bulk Size set to: 2048
2016/05/18 07:49:18.058998 beat.go:147: INFO Init Beat: filebeat; Version: 1.2.2
[ OK ]

devendra, where should I insert the default index pattern? I am getting an error in Elasticsearch about an issue with the default index.

The indentation looks weird. I cannot tell whether that is due to copy-and-paste or whether the indentation is really off. Beats use the YAML format, which is very sensitive to indentation.
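For comparison, a minimal correctly indented filebeat.yml skeleton (two spaces per level, spaces only, never tabs), using the path and host that appear in this thread:

```yaml
filebeat:
  prospectors:
    -
      paths:
        - /mnt/cmdc_logs/*.audit
      input_type: log

output:
  logstash:
    hosts: ["10.209.26.147:5044"]
```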

This could be your problem. When running filebeat as a service, stdout might be closed. Comment out the output.console section in the config file.
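Applied to the output section posted above, that would look something like this (a sketch, keeping only the logstash output active):

```yaml
output:
  logstash:
    hosts: ["10.209.26.147:5044"]
  # stdout may be closed when filebeat runs as a service,
  # so the console output is disabled:
  #console:
  #  pretty: true
```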

The indentation went wrong when I copied and pasted. What do the logs say when I start the service? It does not show any errors.
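With logging.files.path set to /var/log/mybeat as in the config posted earlier, filebeat writes its own log there once it starts; connection errors to Logstash would show up in that file. A sketch (the path and file name come from this thread's config):

```shell
# Show the most recent filebeat log entries; fall back to a hint if the
# file does not exist yet.
tail -n 50 /var/log/mybeat/mybeat 2>/dev/null || echo "no log file yet - check logging.files.path and permissions"
```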