Filebeat error not sending logs

filebeat.service - filebeat
Loaded: loaded (/lib/systemd/system/filebeat.service; enabled; vendor preset:
Active: inactive (dead) (Result: exit-code) since Tue 2017-05-30 12:28:22 UTC
Docs: https://www.elastic.co/guide/en/beats/filebeat/current/index.html
Process: 9866 ExecStart=/usr/bin/filebeat -c /etc/filebeat/filebeat.yml (code=
Main PID: 9866 (code=exited, status=1/FAILURE)

When I installed Filebeat on the client server and checked the status, I got this error. Can anyone please let me know about this error? Thank you.

Could you provide or check the filebeat log file (probably /var/log/filebeat/*)?

If you can't find it or it is empty, another useful check would be to run /usr/bin/filebeat -c /etc/filebeat/filebeat.yml and check any message it displays.
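If the config itself is the problem, filebeat can also validate it without starting up. A quick sketch (the -configtest, -e and -d flags should exist on Filebeat 1.x/5.x, but please verify with /usr/bin/filebeat --help):

    /usr/bin/filebeat -c /etc/filebeat/filebeat.yml -configtest
    /usr/bin/filebeat -c /etc/filebeat/filebeat.yml -e -d "*"

The first command only parses the config and exits; the second runs in the foreground with logs on stderr and all debug selectors enabled.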

Regards

Thanks for your reply. When I ran this command I got this error; can you please check it?

/usr/bin/filebeat -c /etc/filebeat/filebeat.yml
Loading config file error: YAML config parsing failed on /etc/filebeat/filebeat.yml: yaml: line 76: did not find expected key. Exiting.

in. Default: log

On line 76 I have the statement above.

sudo systemctl enable filebeat
Synchronizing state of filebeat.service with SysV init with /lib/systemd/systemd-sysv-install...
Executing /lib/systemd/systemd-sysv-install enable filebeat
insserv: Script nagios is broken: incomplete LSB comment.
insserv: missing `Default-Start:' entry: please add even if empty.
insserv: missing `Default-Stop:' entry: please add even if empty.
insserv: Script nagios is broken: incomplete LSB comment.
insserv: missing `Default-Start:' entry: please add even if empty.
insserv: missing `Default-Stop:' entry: please add even if empty.
insserv: Default-Start undefined, assuming empty start runlevel(s) for script `nagios'
insserv: Default-Stop undefined, assuming empty stop runlevel(s) for script `nagios'
insserv: Script nagios is broken: incomplete LSB comment.
insserv: missing `Default-Start:' entry: please add even if empty.
insserv: missing `Default-Stop:' entry: please add even if empty.
insserv: Script nagios is broken: incomplete LSB comment.
insserv: missing `Default-Start:' entry: please add even if empty.
insserv: missing `Default-Stop:' entry: please add even if empty.
insserv: Default-Start undefined, assuming empty start runlevel(s) for script `nagios'
insserv: Default-Stop undefined, assuming empty stop runlevel(s) for script `nagios'
When I ran sudo systemctl enable filebeat I got the errors above. I am not able to trace this out; could you please help me with it? Thank you.

Why is there a space in "in. Default: log"? Could you please share the full config file? It seems that something is broken in it.
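For context: a "did not find expected key" error from the YAML parser almost always points at an indentation problem near the reported line, since YAML is whitespace-sensitive. A minimal illustration (not your exact file):

    # broken: one extra leading space before document_type
      input_type: log
       document_type: syslog

    # fixed: aligned with its sibling key
      input_type: log
      document_type: syslog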

Thanks for your reply. I am sharing the whole file; please let me know. Thanks in advance.

filebeat:
  # List of prospectors to fetch data.
  prospectors:
    # Each - is a prospector. Below are the prospector specific configurations
    -
      # Paths that should be crawled and fetched. Glob based paths.
      # To fetch all ".log" files from a specific level of subdirectories
      # /var/log/*/*.log can be used.
      # For each file found under this path, a harvester is started.
      # Make sure no file is defined twice as this can lead to unexpected behaviour.
      paths:
       # - /var/log/*.log

         - /var/log/auth.log
         - /var/log/syslog
        #- c:\programdata\elasticsearch\logs\*

      # Configure the file encoding for reading files with international characters
      # following the W3C recommendation for HTML5 (http://www.w3.org/TR/encoding).
      # Some sample encodings:
      #   plain, utf-8, utf-16be-bom, utf-16be, utf-16le, big5, gb18030, gbk,
      #    hz-gb-2312, euc-kr, euc-jp, iso-2022-jp, shift-jis, ...
      #encoding: plain

     
      # Possible options are:
      # * log: Reads every line of the log file (default)
      # * stdin: Reads the standard in
      input_type: log

    

      # exclude_files: [".gz$"]

   
      #fields:
      #  level: debug
      #  review: 1

      
      #fields_under_root: false

   
      #ignore_older: 0

  

      # Type to be published in the 'type' field. For Elasticsearch output,
      # the type defines the document type these entries should be stored
      # in.Default:log
       document_type: syslog

      # Scan frequency in seconds.
      # How often these files should be checked for changes. In case it is set
      # to 0s, it is done as often as possible. Default: 10s
      #scan_frequency: 10s

      # Defines the buffer size every harvester uses when fetching the file
      #harvester_buffer_size: 16384

      # Maximum number of bytes a single log event can have
      # All bytes after max_bytes are discarded and not sent. The default is 10MB.
      # This is especially useful for multiline log messages which can get large.
      #max_bytes: 10485760
      # Multiline can be used for log messages spanning multiple lines. This is common
      # for Java Stack Traces or C-Line Continuation
      #multiline:

        # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
        #pattern: ^\[

        # Defines if the pattern set under pattern should be negated or not. Default is false.
        #negate: false

        # Match can be set to "after" or "before". It is used to define if lines should be appended to a pattern
        # that was (not) matched before or after, or as long as a pattern is not matched based on negate.
        # Note: "after" is the equivalent to "previous" and "before" is the equivalent to "next" in Logstash
        #match: after

     
        # Default is 500
        #max_lines: 500

        # After the defined timeout, a multiline event is sent even if no new pattern was found to start a new event
        # Default is 5s.
        #timeout: 5s

      # Setting tail_files to true means filebeat starts reading new files at the end
      # instead of the beginning. If this is used in combination with log rotation
      # this can mean that the first entries of a new file are skipped.
      #tail_files: false

      # Max backoff defines what the maximum backoff time is. After having backed off multiple times
      # from checking the files, the waiting time will never exceed max_backoff independent of the
      # backoff factor. Having it set to 10s means in the worst case a new line can be added to a log
      # file after having backed off multiple times, it takes a maximum of 10s to read the new line
      #max_backoff: 10s
      #backoff_factor: 2
      #force_close_files: false

    # Additional prospector
    #-
      # Configuration to use stdin input
      #input_type: stdin
  # Name of the registry file. Per default it is put in the current working
  # directory. In case the working directory is changed after when running
  # filebeat again, indexing starts from the beginning again.
  registry_file: /var/lib/filebeat/registry
  #config_dir:
# Multiple outputs may be used.
output:
  ### Elasticsearch as output
  #elasticsearch:
    # Array of hosts to connect to.
    # Scheme and port can be left out and will be set to the default (http and 9200)
    # In case you specify an additional path, the scheme is required: http://localhost:9200/path
    # IPv6 addresses should always be defined as: https://[2001:db8::1]:9200
    hosts: ["localhost:9200"]
    # Optional protocol and basic auth credentials.
    #protocol: "https"
    #username: "admin"
    #password: "s3cr3t"
    # Number of workers per Elasticsearch host.
    #worker: 1
    # Optional index name. The default is "filebeat" and generates
    # [filebeat-]YYYY.MM.DD keys.
    #index: "filebeat"
      # Template name. By default the template name is filebeat.
      #name: "filebeat"

      # Path to template file
      #path: "filebeat.template.json"

      # Overwrite existing template
      #overwrite: false
       # Optional HTTP Path
    #path: "/elasticsearch"

    # Proxy server url
    #proxy_url: http://proxy:3128
    # The maximum number of events to bulk in a single Elasticsearch bulk API index request.
    # The default is 50.
    #bulk_max_size: 50

    # Configure http request timeout before failing a request to Elasticsearch.
    #timeout: 90
    #save_topology: false

    # The time to live in seconds for the topology information that is stored in
    # Elasticsearch. The default is 15 seconds.
    #topology_expire: 15

    # tls configuration. By default is off.
    #tls:
      # List of root certificates for HTTPS server verifications
      #certificate_authorities: ["/etc/pki/root/ca.pem"]

      # Certificate for TLS client authentication
      #certificate: "/etc/pki/client/cert.pem"

      # Client Certificate Key
      #certificate_key: "/etc/pki/client/cert.key"

      
      #insecure: true

      # Configure cipher suites to be used for TLS connections
      #cipher_suites: []

      # Configure curve types for ECDHE based cipher suites
      #curve_types: []

      # Configure minimum TLS version allowed for connection to logstash
      #min_version: 1.0

      # Configure maximum TLS version allowed for connection to logstash
      #max_version: 1.2


  ### Logstash as output
   logstash:
    # The Logstash hosts
     hosts: ["localhost:5044"]
     bulk_max_size: 1024

There are two errors in your config file:

1. Line 51:

        document_type: syslog

There is a leading space here. Please remove it.
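That is, it should line up with the commented options around it (sketch based on your file):

      # in.Default:log
      document_type: syslog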

2. Line 116:

       hosts: ["localhost:9200"]

You have a hosts entry for the elasticsearch output while the elasticsearch output itself is commented out and logstash is enabled. I think you should comment out that line.
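That is, while you use the logstash output, everything in the elasticsearch section should stay commented (sketch):

    ### Elasticsearch as output
    #elasticsearch:
      # Array of hosts to connect to.
      #hosts: ["localhost:9200"]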

Removing that leading space and commenting out the hosts line in the ES output section works for me in my test env.
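After editing, restart filebeat and check its status again with the standard systemd commands:

    sudo systemctl restart filebeat
    systemctl status filebeat.service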

Regards

Thanks Xavy Javier for the reply. I changed what you recommended but it is still not working, so I am sharing my file again; please have a look. Thank you.
################### Filebeat Configuration Example #########################

############################# Filebeat ######################################
filebeat:

# List of prospectors to fetch data.

prospectors:
# Each - is a prospector. Below are the prospector specific configurations
-
# Paths that should be crawled and fetched. Glob based paths.
# To fetch all ".log" files from a specific level of subdirectories
# /var/log/*/*.log can be used.
# For each file found under this path, a harvester is started.
# Make sure no file is defined twice as this can lead to unexpected behaviour.
paths:
# - /var/log/*.log

     - /var/log/auth.log
     - /var/log/syslog
    #- c:\programdata\elasticsearch\logs\*

  # Configure the file encoding for reading files with international characters
  # following the W3C recommendation for HTML5 (http://www.w3.org/TR/encoding).
  # Some sample encodings:
  #   plain, utf-8, utf-16be-bom, utf-16be, utf-16le, big5, gb18030, gbk,
  #    hz-gb-2312, euc-kr, euc-jp, iso-2022-jp, shift-jis, ...
  #encoding: plain

  # Type of the files. Based on this the way the file is read is decided.
  # The different types cannot be mixed in one prospector
  #
  # Possible options are:
  # * log: Reads every line of the log file (default)
  # * stdin: Reads the standard in
  input_type: log

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list. The include_lines is called before
  # exclude_lines. By default, no lines are dropped.
  # exclude_lines: ["^DBG"]

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list. The include_lines is called before
  # exclude_lines. By default, all the lines are exported.
  # include_lines: ["^ERR", "^WARN"]

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  # exclude_files: [".gz$"]

  # Optional additional fields. These field can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

  # Set to true to store the additional fields as top level fields instead
  # of under the "fields" sub-dictionary. In case of name conflicts with the
  # fields added by Filebeat itself, the custom fields overwrite the default
  # fields.
  #fields_under_root: false

  # Ignore files which were modified more than the defined timespan in the past.
  # In case all files on your system must be read you can set this value very large.
  # Time strings like 2h (2 hours), 5m (5 minutes) can be used.
  #ignore_older: 0

  # Close older closes the file handler for which were not modified
  # for longer than close_older
  # Time strings like 2h (2 hours), 5m (5 minutes) can be used.
  #close_older: 1h

  # Type to be published in the 'type' field. For Elasticsearch output,
  # the type defines the document type these entries should be stored
  # in.Default:log
   document_type:syslog

  # Scan frequency in seconds.
  # How often these files should be checked for changes. In case it is set
  # to 0s, it is done as often as possible. Default: 10s
  #scan_frequency: 10s

  # Defines the buffer size every harvester uses when fetching the file
  #harvester_buffer_size: 16384

  # Maximum number of bytes a single log event can have
  # All bytes after max_bytes are discarded and not sent. The default is 10MB.
  # This is especially useful for multiline log messages which can get large.
  #max_bytes: 10485760
  # Multiline can be used for log messages spanning multiple lines. This is common
  # for Java Stack Traces or C-Line Continuation
  #multiline:

    # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
    #pattern: ^\[

    # Defines if the pattern set under pattern should be negated or not. Default is false.
    #negate: false

    # Match can be set to "after" or "before". It is used to define if lines should be appended to a pattern
    # that was (not) matched before or after, or as long as a pattern is not matched based on negate.
    # Note: "after" is the equivalent to "previous" and "before" is the equivalent to "next" in Logstash
    #match: after

    # The maximum number of lines that are combined to one event.
    # In case there are more the max_lines the additional lines are discarded.
    # Default is 500
    #max_lines: 500

    # After the defined timeout, a multiline event is sent even if no new pattern was found to start a new event
    # Default is 5s.
    #timeout: 5s

  # Setting tail_files to true means filebeat starts reading new files at the end
  # instead of the beginning. If this is used in combination with log rotation
  # this can mean that the first entries of a new file are skipped.
  #tail_files: false

  # Backoff values define how aggressively filebeat crawls new files for updates
  # The default values can be used in most cases. Backoff defines how long it is waited
  # to check a file again after EOF is reached. Default is 1s which means the file
  # is checked every second if new lines were added. This leads to a near real time crawling.
  # Every time a new line appears, backoff is reset to the initial value.
  #backoff: 1s

  # Max backoff defines what the maximum backoff time is. After having backed off multiple times
  # from checking the files, the waiting time will never exceed max_backoff independent of the
  # backoff factor. Having it set to 10s means in the worst case a new line can be added to a log
  # file after having backed off multiple times, it takes a maximum of 10s to read the new line
  #max_backoff: 10s

  # The backoff factor defines how fast the algorithm backs off. The bigger the backoff factor,
  # the faster the max_backoff value is reached. If this value is set to 1, no backoff will happen.

  # The backoff value will be multiplied each time with the backoff_factor until max_backoff is reached
  #backoff_factor: 2

  # This option closes a file, as soon as the file name changes.
  # This config option is recommended on windows only. Filebeat keeps the files it's reading open. This can cause
  # issues when the file is removed, as the file will not be fully removed until also Filebeat closes
  # the reading. Filebeat closes the file handler after ignore_older. During this time no new file with the
  # same name can be created. Turning this feature on the other hand can lead to loss of data
  # on rotate files. It can happen that after file rotation the beginning of the new
  # file is skipped, as the reading starts at the end. We recommend to leave this option on false
  # but lower the ignore_older value to release files faster.
  #force_close_files: false

# Additional prospector
#-
  # Configuration to use stdin input
  #input_type: stdin

# General filebeat configuration options

# Event count spool threshold - forces network flush if exceeded
#spool_size: 2048

# Enable async publisher pipeline in filebeat (Experimental!)
#publish_async: false

# Defines how often the spooler is flushed. After idle_timeout the spooler is
# flushed even though spool_size is not reached.
#idle_timeout: 5s

# Name of the registry file. Per default it is put in the current working
# directory. In case the working directory is changed after when running
# filebeat again, indexing starts from the beginning again.
registry_file: /var/lib/filebeat/registry

# Full Path to directory with additional prospector configuration files. Each file must end with .yml
# These config files must have the full filebeat config part inside, but only
# the prospector part is processed. All global options like spool_size are ignored.
# The config_dir MUST point to a different directory than where the main filebeat config file is in.
#config_dir:

###############################################################################
############################# Libbeat Config ##################################

# Base config file used by all other beats for using libbeat features

############################# Output ##########################################

# Configure what outputs to use when sending the data collected by the beat.
# Multiple outputs may be used.

output:

### Elasticsearch as output

elasticsearch:
# Array of hosts to connect to.
# Scheme and port can be left out and will be set to the default (http and 9200)
# In case you specify an additional path, the scheme is required: http://localhost:9200/path
# IPv6 addresses should always be defined as: https://[2001:db8::1]:9200
hosts: ["localhost:9200"]

# Optional protocol and basic auth credentials.
#protocol: "https"
#username: "admin"
#password: "s3cr3t"

# Number of workers per Elasticsearch host.
#worker: 1

# Optional index name. The default is "filebeat" and generates
# [filebeat-]YYYY.MM.DD keys.
#index: "filebeat"

# A template is used to set the mapping in Elasticsearch
# By default template loading is disabled and no template is loaded.
# These settings can be adjusted to load your own template or overwrite existing ones
#template:

  # Template name. By default the template name is filebeat.
  #name: "filebeat"

  # Path to template file
  #path: "filebeat.template.json"

  # Overwrite existing template
  #overwrite: false

# Optional HTTP Path
#path: "/elasticsearch"

# Proxy server url

#proxy_url: http://proxy:3128

# The number of times a particular Elasticsearch index operation is attempted. If
# the indexing operation doesn't succeed after this many retries, the events are
# dropped. The default is 3.
#max_retries: 3

# The maximum number of events to bulk in a single Elasticsearch bulk API index request.
# The default is 50.
#bulk_max_size: 50

# Configure http request timeout before failing a request to Elasticsearch.
#timeout: 90

# The number of seconds to wait for new events between two bulk API index requests.
# If `bulk_max_size` is reached before this interval expires, addition bulk index
# requests are made.
#flush_interval: 1

# Boolean that sets if the topology is kept in Elasticsearch. The default is
# false. This option makes sense only for Packetbeat.
#save_topology: false

# The time to live in seconds for the topology information that is stored in
# Elasticsearch. The default is 15 seconds.
#topology_expire: 15

# tls configuration. By default is off.
#tls:
  # List of root certificates for HTTPS server verifications
  #certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for TLS client authentication
  #certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #certificate_key: "/etc/pki/client/cert.key"

  # Controls whether the client verifies server certificates and host name.
  # If insecure is set to true, all server host names and certificates will be
  # accepted. In this mode TLS based connections are susceptible to
  # man-in-the-middle attacks. Use only for testing.
  #insecure: true

  # Configure cipher suites to be used for TLS connections
  #cipher_suites: []

  # Configure curve types for ECDHE based cipher suites
  #curve_types: []

  # Configure minimum TLS version allowed for connection to logstash
  #min_version: 1.0

  # Configure maximum TLS version allowed for connection to logstash
  #max_version: 1.2

### Logstash as output

logstash:
# The Logstash hosts
hosts: ["localhost:5044"]
bulk_max_size: 1024

# Number of workers per Logstash host.
#worker: 1

# The maximum number of events to bulk into a single batch window. The
# default is 2048.
#bulk_max_size: 2048

# Set gzip compression level.
#compression_level: 3

# Optional load balance the events between the Logstash hosts
#loadbalance: true

# Optional index name. The default index name depends on the each beat.
# For Packetbeat, the default is set to packetbeat, for Topbeat
# top topbeat and for Filebeat to filebeat.
#index: filebeat

# Optional TLS. By default is off.
 tls:
  # List of root certificates for HTTPS server verifications
   certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]

  # Certificate for TLS client authentication
  #certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
   #certificate_key: "/etc/pki/client/cert.key"

  # Controls whether the client verifies server certificates and host name.
  # If insecure is set to true, all server host names and certificates will be
  # accepted. In this mode TLS based connections are susceptible to
  # man-in-the-middle attacks. Use only for testing.
  #insecure: true

  # Configure cipher suites to be used for TLS connections
  #cipher_suites: []

  # Configure curve types for ECDHE based cipher suites
  #curve_types: []

### File as output

#file:
# Path to the directory where to save the generated files. The option is mandatory.
#path: "/tmp/filebeat"

# Name of the generated files. The default is `filebeat` and it generates files: `filebeat`, `filebeat.1`, `filebeat.2`, etc.
#filename: filebeat

# Maximum size in kilobytes of each file. When this size is reached, the files are
# rotated. The default value is 10 MB.
#rotate_every_kb: 10000

# Maximum number of files under path. When this number of files is reached, the
# oldest file is deleted and the rest are shifted from last to first. The default
# is 7 files.
#number_of_files: 7

### Console output

console:

# Pretty print json event
#pretty: false

############################# Shipper #########################################

shipper:

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
# If this option is not defined, the hostname is used.
#name:

# The tags of the shipper are included in their own field with each
# transaction published. Tags make it easy to group servers by different
# logical properties.
#tags: ["service-X", "web-tier"]

# Uncomment the following if you want to ignore transactions created
# by the server on which the shipper is installed. This option is useful
# to remove duplicates if shippers are installed on multiple servers.
#ignore_outgoing: true

# How often (in seconds) shippers are publishing their IPs to the topology map.
# The default is 10 seconds.
#refresh_topology_freq: 10

# Expiration time (in seconds) of the IPs published by a shipper to the topology map.
# All the IPs will be deleted afterwards. Note, that the value must be higher than
# refresh_topology_freq. The default is 15 seconds.
#topology_expire: 15

# Internal queue size for single events in processing pipeline
#queue_size: 1000

# Configure local GeoIP database support.
# If no paths are configured, geoip is disabled.
#geoip:
#paths:
# - "/usr/share/GeoIP/GeoLiteCity.dat"
# - "/usr/local/var/GeoIP/GeoLiteCity.dat"

############################# Logging #########################################

# There are three options for the log output: syslog, file, stderr.
# Under Windows systems, the log files are per default sent to the file output,
# under all other systems per default to syslog.
logging:

# Send all logging output to syslog. On Windows default is false, otherwise
# default is true.
#to_syslog: true

# Write all logging output to files. Beats automatically rotate files if rotateeverybytes
# limit is reached.
#to_files: false

# To enable logging to files, to_files option has to be set to true
files:
# The directory where the log files will be written to.
#path: /var/log/mybeat

# The name of the files where the logs are written to.
#name: mybeat

# Configure log file size limit. If limit is reached, log file will be
# automatically rotated
rotateeverybytes: 10485760 # = 10MB

# Number of rotated log files to keep. Oldest files will be deleted first.
#keepfiles: 7

# Enable debug output for selected components. To enable all selectors use ["*"]
# Other available selectors are beat, publish, service
# Multiple selectors can be chained.
#selectors: [ ]

# Sets log level. The default log level is error.
# Available log levels are: critical, error, warning, info, debug
#level: error

Hi Xavy Javier, when I fire this command I get this error; can you please let me know what it is?

sudo systemctl enable filebeat
Synchronizing state of filebeat.service with SysV init with /lib/systemd/systemd-sysv-install...
Executing /lib/systemd/systemd-sysv-install enable filebeat
insserv: Script nagios is broken: incomplete LSB comment.
insserv: missing `Default-Start:' entry: please add even if empty.
insserv: missing `Default-Stop:' entry: please add even if empty.
insserv: Script nagios is broken: incomplete LSB comment.
insserv: missing `Default-Start:' entry: please add even if empty.
insserv: missing `Default-Stop:' entry: please add even if empty.
insserv: Default-Start undefined, assuming empty start runlevel(s) for script `nagios'
insserv: Default-Stop undefined, assuming empty stop runlevel(s) for script `nagios'
insserv: Script nagios is broken: incomplete LSB comment.
insserv: missing `Default-Start:' entry: please add even if empty.
insserv: missing `Default-Stop:' entry: please add even if empty.
insserv: Script nagios is broken: incomplete LSB comment.
insserv: missing `Default-Start:' entry: please add even if empty.
insserv: missing `Default-Stop:' entry: please add even if empty.
insserv: Default-Start undefined, assuming empty start runlevel(s) for script `nagios'
insserv: Default-Stop undefined, assuming empty stop runlevel(s) for script `nagios'

Hello:

It's a little bit difficult to read your config file as you have posted it; could you please upload it somewhere and post the link to it? Additionally, could you please share the output of running:

/usr/bin/filebeat -c /etc/filebeat/filebeat.yml

Regarding the errors when you enable filebeat with systemctl: they are not related to filebeat but to some other service on your system (nagios, actually). You will need to review the init script configuration for nagios, but that's out of the scope of this forum, I think. They should not be preventing systemctl from enabling filebeat, though.
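For reference, insserv stops complaining once the nagios init script carries a complete LSB header. A minimal sketch for /etc/init.d/nagios (the runlevels shown are typical defaults, adjust to your setup):

    ### BEGIN INIT INFO
    # Provides:          nagios
    # Required-Start:    $local_fs $remote_fs $network
    # Required-Stop:     $local_fs $remote_fs $network
    # Default-Start:     2 3 4 5
    # Default-Stop:      0 1 6
    # Short-Description: Nagios monitoring daemon
    ### END INIT INFO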

Hi Javier, when I ran that command I got the error below:

Loading config file error: YAML config parsing failed on /etc/filebeat/filebeat.yml: yaml: line 74: did not find expected key. Exiting.

On line 74, just after the comment ending "in. Default: log", I have:

   document_type:syslog

Could you please upload your config file to pastebin and provide the link?

Hi Xavy, this is my link, please have a look: https://pastebin.com/QWuzuHva Thank you.

When I check the status of filebeat I get this error; please let me know what it is:

systemctl status filebeat.service
● filebeat.service - filebeat
Loaded: loaded (/lib/systemd/system/filebeat.service; enabled; vendor preset:
Active: inactive (dead) (Result: exit-code) since Wed 2017-05-31 10:31:10 UTC
Docs: https://www.elastic.co/guide/en/beats/filebeat/current/index.html
Main PID: 27382 (code=exited, status=1/FAILURE)
May 31 10:31:10 ip- systemd[1]: Stopped filebeat.
May 31 10:31:10 ip- systemd[1]: filebeat.service: Start request rep
May 31 10:31:10 ip- systemd[1]: Failed to start filebeat.
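(Side note: the status lines above are truncated; on a systemd machine the full messages can be read with journalctl, for example:

    journalctl -u filebeat.service --no-pager -n 50
)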

Hello:
I'm afraid that you still have two leading spaces on line 74:

   document_type:syslog

You should leave it like this:

document_type: syslog

Something similar happens with line 278:

 tls:

which should remain as:

tls:

Please notice in all cases the alignment of the line with the previous one.
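One more detail worth checking while editing that line: YAML block mappings also need a space after the colon, for example:

    document_type:syslog    # no space: parsed as a single scalar, not a key/value pair
    document_type: syslog   # space after the colon: a proper key/value mapping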

Thank you very much for your support, it is now showing active status. But I am not getting logs from my filebeat server to the logstash server; could you please help me with this issue?
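A couple of things worth checking for that, as a sketch (the hostname and certificate paths below are placeholders, adjust to your environment): first, that the Logstash host is reachable on the Beats port from the filebeat machine; second, that Logstash has a beats input listening there whose certificate matches the one configured under tls in filebeat.yml:

    # On the filebeat server: is Logstash reachable on the Beats port?
    telnet your-logstash-host 5044

    # On the Logstash server, e.g. /etc/logstash/conf.d/02-beats-input.conf:
    input {
      beats {
        port => 5044
        ssl => true
        ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
        ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
      }
    }

If the connection succeeds and the certificates match, the next place to look is the filebeat log on the client and the Logstash log on the server for TLS or connection errors.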