Logstash not receiving beats

Hi guys,
My Filebeat is sending to Logstash, but it is only sending the operating system logs, even though I gave it the path to my log files. These are my configs:
I am running Filebeat on Windows.

filebeat.yml

> ###################### Filebeat Configuration Example #########################
> 
> # This file is an example configuration file highlighting only the most common
> # options. The filebeat.reference.yml file from the same directory contains all the
> # supported options with more comments. You can use it as a reference.
> #
> # You can find the full configuration reference here:
> # https://www.elastic.co/guide/en/beats/filebeat/index.html
> 
> # For more available modules and options, please see the filebeat.reference.yml sample
> # configuration file.
> 
> #=========================== Filebeat inputs =============================
> 
> filebeat.inputs:
> 
> # Each - is an input. Most options can be set at the input level, so
> # you can use different inputs for various configurations.
> # Below are the input specific configurations.
> 
> - type: log
> 
>   # Change to true to enable this input configuration.
>   enabled: true
> 
>   # Paths that should be crawled and fetched. Glob based paths.
>   paths:
>     #- /var/log/*.log
>     #- c:\programdata\elasticsearch\logs\*
>     - C:\Users\ay\Desktop\gid\*.log
> 
>   # Exclude lines. A list of regular expressions to match. It drops the lines that are
>   # matching any regular expression from the list.
>   #exclude_lines: ['^DBG']
> 
>   # Include lines. A list of regular expressions to match. It exports the lines that are
>   # matching any regular expression from the list.
>   #include_lines: ['^ERR', '^WARN']
> 
>   # Exclude files. A list of regular expressions to match. Filebeat drops the files that
>   # are matching any regular expression from the list. By default, no files are dropped.
>   #exclude_files: ['.gz$']
> 
>   # Optional additional fields. These fields can be freely picked
>   # to add additional information to the crawled log files for filtering
>   #fields:
>   #  level: debug
>   #  review: 1
> 
>   ### Multiline options
> 
>   # Multiline can be used for log messages spanning multiple lines. This is common
>   # for Java Stack Traces or C-Line Continuation
> 
>   # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
>   #multiline.pattern: ^\[
> 
>   # Defines if the pattern set under pattern should be negated or not. Default is false.
>   #multiline.negate: false
> 
>   # Match can be set to "after" or "before". It is used to define if lines should be append to a pattern
>   # that was (not) matched before or after or as long as a pattern is not matched based on negate.
>   # Note: After is the equivalent to previous and before is the equivalent to to next in Logstash
>   #multiline.match: after
> 
> 
> #============================= Filebeat modules ===============================
> 
> filebeat.config.modules:
>   # Glob pattern for configuration loading
>   path: ${path.config}/modules.d/*.yml
> 
>   # Set to true to enable config reloading
>   reload.enabled: false
> 
>   # Period on which files under path should be checked for changes
>   #reload.period: 10s
> 
> #==================== Elasticsearch template setting ==========================
> 
> setup.template.settings:
>   index.number_of_shards: 1
>   #index.codec: best_compression
>   #_source.enabled: false
> 
> #================================ General =====================================
> 
> # The name of the shipper that publishes the network data. It can be used to group
> # all the transactions sent by a single shipper in the web interface.
> #name:
> 
> # The tags of the shipper are included in their own field with each
> # transaction published.
> #tags: ["service-X", "web-tier"]
> 
> # Optional fields that you can specify to add additional information to the
> # output.
> #fields:
> #  env: staging
> 
> 
> #============================== Dashboards =====================================
> # These settings control loading the sample dashboards to the Kibana index. Loading
> # the dashboards is disabled by default and can be enabled either by setting the
> # options here or by using the `setup` command.
> #setup.dashboards.enabled: false
> 
> # The URL from where to download the dashboards archive. By default this URL
> # has a value which is computed based on the Beat name and version. For released
> # versions, this URL points to the dashboard archive on the artifacts.elastic.co
> # website.
> #setup.dashboards.url:
> 
> #============================== Kibana =====================================
> 
> # Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
> # This requires a Kibana endpoint configuration.
> setup.kibana:
> 
>   # Kibana Host
>   # Scheme and port can be left out and will be set to the default (http and 5601)
>   # In case you specify and additional path, the scheme is required: http://localhost:5601/path
>   # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
>   #host: "localhost:5601"
> 
>   # Kibana Space ID
>   # ID of the Kibana Space into which the dashboards should be loaded. By default,
>   # the Default Space will be used.
>   #space.id:
> 
> #============================= Elastic Cloud ==================================
> 
> # These settings simplify using filebeat with the Elastic Cloud (https://cloud.elastic.co/).
> 
> # The cloud.id setting overwrites the `output.elasticsearch.hosts` and
> # `setup.kibana.host` options.
> # You can find the `cloud.id` in the Elastic Cloud web UI.
> #cloud.id:
> 
> # The cloud.auth setting overwrites the `output.elasticsearch.username` and
> # `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
> #cloud.auth:
> 
> #================================ Outputs =====================================
> 
> # Configure what output to use when sending the data collected by the beat.
> 
> #-------------------------- Elasticsearch output ------------------------------
> #output.elasticsearch:
>   # Array of hosts to connect to.
>   #hosts: ["localhost:9200"]
> 
>   # Optional protocol and basic auth credentials.
>   #protocol: "https"
>   #username: "elastic"
>   #password: "changeme"
> 
> #----------------------------- Logstash output --------------------------------
> output.logstash:
>   # The Logstash hosts
>   hosts: ["myIpAdress:5044"]
> 
>   # Optional SSL. By default is off.
>   # List of root certificates for HTTPS server verifications
>   #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
> 
>   # Certificate for SSL client authentication
>   #ssl.certificate: "/etc/pki/client/cert.pem"
> 
>   # Client Certificate Key
>   #ssl.key: "/etc/pki/client/cert.key"
> 
> #================================ Processors =====================================
> 
> # Configure processors to enhance or manipulate events generated by the beat.
> 
> processors:
>   - add_host_metadata: ~
>   - add_cloud_metadata: ~
> 
> #================================ Logging =====================================
> 
> # Sets log level. The default log level is info.
> # Available log levels are: error, warning, info, debug
> #logging.level: debug
> 
> # At debug level, you can selectively enable logging only for some components.
> # To enable all selectors use ["*"]. Examples of other selectors are "beat",
> # "publish", "service".
> #logging.selectors: ["*"]
> 
> #============================== Xpack Monitoring ===============================
> # filebeat can export internal metrics to a central Elasticsearch monitoring
> # cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
> # reporting is disabled by default.
> 
> # Set to true to enable the monitoring reporter.
> #xpack.monitoring.enabled: false
> 
> # Uncomment to send the metrics to Elasticsearch. Most settings from the
> # Elasticsearch output are accepted here as well. Any setting that is not set is
> # automatically inherited from the Elasticsearch output configuration, so if you
> # have the Elasticsearch output configured, you can simply uncomment the
> # following line.
> #xpack.monitoring.elasticsearch:
> 
> #================================= Migration ==================================
> 
> # This allows to enable 6.7 migration aliases
> #migration.6_to_7.enabled: true
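
A quick way to confirm, from the Windows machine, that this configuration parses and that the Logstash endpoint is reachable is Filebeat's built-in self-tests. This is only a suggested check; the install directory below is a placeholder for wherever filebeat.exe lives:

    PS> cd 'C:\Program Files\Filebeat'                     # placeholder: your Filebeat install directory
    PS> .\filebeat.exe test config -c .\filebeat.yml -e    # validate the configuration file
    PS> .\filebeat.exe test output -c .\filebeat.yml -e    # try to connect to the output.logstash hosts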

myfilter.conf

> input {
>   beats {
>     port => 5044
>    }
>  }
> 
> filter {
>      if [source] =~ "gid" {
>         grok {
>             match => {"message" => "%{TIMESTAMP_ISO8601:timestamp}\|(\[%{BASE10NUM:nbr}\])\|%{IPORHOST:ClientIP}\|%{USERNAME:User};%{DATA:Node}\|%{URIPATH:Url}\|%{NOTSPACE:data}\|%{LOGLEVEL:Loglevel}\|%{GREEDYDATA:Message}"}
>         }
>      }else if [source] =~ "server" {
>         grok {
>             match =>{ "message" => [
>                 "(?m)%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:Loglevel} \[(?<Classname>[^\]]+)\] %{GREEDYDATA:Message}",
>                 "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:Loglevel}  \[(?<Classname>[^\]]+)\] %{WORD:n} \| %{NUMBER:number} \| %{WORD:b} \| %{DATA:Url} \| %{GREEDYDATA:Message}",
>                 "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:Loglevel}  \[(?<Classname>[^\]]+)\] %{GREEDYDATA:Message}"] }
>         }
>       }
>   date {match => [ "timestamp" , "MMM dd yyyy HH:mm:ss","MMM d yyyy HH:mm:ss", "ISO8601" ]
>       target => "@timestamp"}
>   mutate {
>         remove_field => [ "[beat][name]", "[beat][version]", "[beat][hostname]", "[host][architecture]", "[host][containerized]", "[host][id]", "[host][name]", "[host][codename]", "[host][family]", "[host][os][codename]", "[host][os][family]", "[host][os][name]", "[host][os][platform]", "[host][os][version]", "[input][type]", "[log][file][path]", "message", "offset", "source", "tags", "[prospector][type]" ]
>   }
> }
> 
> output {
>   elasticsearch {
>     hosts => ["myIpAdress:9200"]
>     index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
>     user => "logstash_internal"
>     password => "alwanali"
>     document_type => "%{[@metadata][type]}"
>   }
>   stdout { codec => rubydebug }
> }
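
If it helps, the pipeline definition can be dry-run before restarting Logstash. A minimal sketch, assuming the deb/rpm layout (/usr/share/logstash/bin/ and /etc/logstash/conf.d/); adjust the paths if your install differs:

    # parse the pipeline configuration and exit without starting it
    sudo /usr/share/logstash/bin/logstash --path.settings /etc/logstash \
      -f /etc/logstash/conf.d/myfilter.conf --config.test_and_exit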

logstash.yml

> # Settings file in YAML
> #
> # Settings can be specified either in hierarchical form, e.g.:
> #
> #   pipeline:
> #     batch:
> #       size: 125
> #       delay: 5
> #
> # Or as flat keys:
> #
> #   pipeline.batch.size: 125
> #   pipeline.batch.delay: 5
> #
> # ------------  Node identity ------------
> #
> # Use a descriptive name for the node:
> #
> # node.name: test
> #
> # If omitted the node name will default to the machine's host name
> #
> # ------------ Data path ------------------
> #
> # Which directory should be used by logstash and its plugins
> # for any persistent needs. Defaults to LOGSTASH_HOME/data
> #
> path.data: /var/lib/logstash
> #
> # ------------ Pipeline Settings --------------
> #
> # The ID of the pipeline.
> #
> # pipeline.id: main
> #
> # Set the number of workers that will, in parallel, execute the filters+outputs
> # stage of the pipeline.
> #
> # This defaults to the number of the host's CPU cores.
> #
> # pipeline.workers: 2
> #
> # How many events to retrieve from inputs before sending to filters+workers
> #
> # pipeline.batch.size: 125
> #
> # How long to wait in milliseconds while polling for the next event
> # before dispatching an undersized batch to filters+outputs
> #
> # pipeline.batch.delay: 50
> #
> # Force Logstash to exit during shutdown even if there are still inflight
> # events in memory. By default, logstash will refuse to quit until all
> # received events have been pushed to the outputs.
> #
> # WARNING: enabling this can lead to data loss during shutdown
> #
> # pipeline.unsafe_shutdown: false
> #
> # ------------ Pipeline Configuration Settings --------------
> #
> # Where to fetch the pipeline configuration for the main pipeline
> #
> # path.config:
> #
> # Pipeline configuration string for the main pipeline
> #
> # config.string:
> #
> # At startup, test if the configuration is valid and exit (dry run)
> #
> # config.test_and_exit: false
> #
> # Periodically check if the configuration has changed and reload the pipeline
> # This can also be triggered manually through the SIGHUP signal
> #
> # config.reload.automatic: false
> #
> # How often to check if the pipeline configuration has changed (in seconds)
> #
> # config.reload.interval: 3s
> #
> # Show fully compiled configuration as debug log message
> # NOTE: --log.level must be 'debug'
> #
> # config.debug: false
> #
> # When enabled, process escaped characters such as \n and \" in strings in the
> # pipeline configuration files.
> #
> # config.support_escapes: false
> #
> # ------------ Module Settings ---------------
> # Define modules here.  Modules definitions must be defined as an array.
> # The simple way to see this is to prepend each `name` with a `-`, and keep
> # all associated variables under the `name` they are associated with, and
> # above the next, like this:
> #
> # modules:
> #   - name: MODULE_NAME
> #     var.PLUGINTYPE1.PLUGINNAME1.KEY1: VALUE
> #     var.PLUGINTYPE1.PLUGINNAME1.KEY2: VALUE
> #     var.PLUGINTYPE2.PLUGINNAME1.KEY1: VALUE
> #     var.PLUGINTYPE3.PLUGINNAME3.KEY1: VALUE
> #
> # Module variable names must be in the format of
> #
> # var.PLUGIN_TYPE.PLUGIN_NAME.KEY
> #
> # modules:
> #
> # ------------ Cloud Settings ---------------
> # Define Elastic Cloud settings here.
> # Format of cloud.id is a base64 value e.g. dXMtZWFzdC0xLmF3cy5mb3VuZC5pbyRub3RhcmVhbCRpZGVudGlmaWVy
> # and it may have an label prefix e.g. staging:dXMtZ...
> # This will overwrite 'var.elasticsearch.hosts' and 'var.kibana.host'
> # cloud.id: <identifier>
> #
> # Format of cloud.auth is: <user>:<pass>
> # This is optional
> # If supplied this will overwrite 'var.elasticsearch.username' and 'var.elasticsearch.password'
> # If supplied this will overwrite 'var.kibana.username' and 'var.kibana.password'
> # cloud.auth: elastic:<password>
> #
> # ------------ Queuing Settings --------------
> #
> # Internal queuing model, "memory" for legacy in-memory based queuing and
> # "persisted" for disk-based acked queueing. Defaults is memory
> #
> # queue.type: memory
> #
> # If using queue.type: persisted, the directory path where the data files will be stored.
> # Default is path.data/queue
> #
> # path.queue:
> #
> # If using queue.type: persisted, the page data files size. The queue data consists of
> # append-only data files separated into pages. Default is 64mb
> #
> # queue.page_capacity: 64mb
> #
> # If using queue.type: persisted, the maximum number of unread events in the queue.
> # Default is 0 (unlimited)
> #
> # queue.max_events: 0
> #
> # If using queue.type: persisted, the total capacity of the queue in number of bytes.
> # If you would like more unacked events to be buffered in Logstash, you can increase the
> # capacity using this setting. Please make sure your disk drive has capacity greater than
> # the size specified here. If both max_bytes and max_events are specified, Logstash will pick
> # whichever criteria is reached first
> # Default is 1024mb or 1gb
> #
> # queue.max_bytes: 1024mb
> #
> # If using queue.type: persisted, the maximum number of acked events before forcing a checkpoint
> # Default is 1024, 0 for unlimited
> #
> # queue.checkpoint.acks: 1024
> #
> # If using queue.type: persisted, the maximum number of written events before forcing a checkpoint
> # Default is 1024, 0 for unlimited
> #
> # queue.checkpoint.writes: 1024
> #
> # If using queue.type: persisted, the interval in milliseconds when a checkpoint is forced on the head page
> # Default is 1000, 0 for no periodic checkpoint.
> #
> # queue.checkpoint.interval: 1000
> #
> # ------------ Dead-Letter Queue Settings --------------
> # Flag to turn on dead-letter queue.
> #
> # dead_letter_queue.enable: false
> 
> # If using dead_letter_queue.enable: true, the maximum size of each dead letter queue. Entries
> # will be dropped if they would increase the size of the dead letter queue beyond this setting.
> # Default is 1024mb
> # dead_letter_queue.max_bytes: 1024mb
> 
> # If using dead_letter_queue.enable: true, the directory path where the data files will be stored.
> # Default is path.data/dead_letter_queue
> #
> # path.dead_letter_queue:
> #
> # ------------ Metrics Settings --------------
> #
> # Bind address for the metrics REST endpoint
> #
> # http.host: "127.0.0.1"
> #
> # Bind port for the metrics REST endpoint, this option also accept a range
> # (9600-9700) and logstash will pick up the first available ports.
> #
> # http.port: 9600-9700
> #
> # ------------ Debugging Settings --------------
> #
> # Options for log.level:
> #   * fatal
> #   * error
> #   * warn
> #   * info (default)
> #   * debug
> #   * trace
> #
> # log.level: info
> path.logs: /var/log/logstash
> #
> # ------------ Other Settings --------------
> #
> # Where to find custom plugins
> # path.plugins: []
> #
> # ------------ X-Pack Settings (not applicable for OSS build)--------------
> #
> # X-Pack Monitoring
> # https://www.elastic.co/guide/en/logstash/current/monitoring-logstash.html
> xpack.monitoring.enabled: true
> xpack.monitoring.elasticsearch.username: logstash_system
> xpack.monitoring.elasticsearch.password: alwanali
> #xpack.monitoring.elasticsearch.hosts: ["http://es:9200"]
> #xpack.monitoring.elasticsearch.ssl.certificate_authority: [ "/path/to/ca.crt" ]
> #xpack.monitoring.elasticsearch.ssl.truststore.path: path/to/file
> #xpack.monitoring.elasticsearch.ssl.truststore.password: password
> #xpack.monitoring.elasticsearch.ssl.keystore.path: /path/to/file
> #xpack.monitoring.elasticsearch.ssl.keystore.password: password
> #xpack.monitoring.elasticsearch.ssl.verification_mode: certificate
> #xpack.monitoring.elasticsearch.sniffing: false
> #xpack.monitoring.collection.interval: 10s
> #xpack.monitoring.collection.pipeline.details.enabled: true
> #
> # X-Pack Management
> # https://www.elastic.co/guide/en/logstash/current/logstash-centralized-pipeline-management.html
> #xpack.management.enabled: false
> #xpack.management.pipeline.id: ["main", "apache_logs"]
> #xpack.management.elasticsearch.username: logstash_admin_user
> #xpack.management.elasticsearch.password: password
> #xpack.management.elasticsearch.hosts: ["https://es1:9200", "https://es2:9200"]
> #xpack.management.elasticsearch.ssl.certificate_authority: [ "/path/to/ca.crt" ]
> #xpack.management.elasticsearch.ssl.truststore.path: /path/to/file
> #xpack.management.elasticsearch.ssl.truststore.password: password
> #xpack.management.elasticsearch.ssl.keystore.path: /path/to/file
> #xpack.management.elasticsearch.ssl.keystore.password: password
> #xpack.management.elasticsearch.ssl.verification_mode: certificate
> #xpack.management.elasticsearch.sniffing: false
> #xpack.management.logstash.poll_interval: 5s

And this is my logstash_writer role:

@Badger @Christian_Dahlqvist

@magnusbaeck

@ikakavas

Hi @markov ! Please refrain from pinging/addressing people directly. This is a public forum and questions are answered by whoever has the time to put in the effort to do so. We monitor these forums and will provide answers to the best of our ability and availability.

Also, please be patient in waiting for responses to your question, and refrain from pinging multiple times asking for a response or from opening multiple topics for the same question. This is a community forum; it may take time for someone to reply to your question. For more information, please refer to the Community Code of Conduct, specifically the section "Be patient".

If you are in need of a service with an SLA that covers response times for questions, then you may want to consider talking to us about a subscription.


Sorry for that. I am an internship student and I am running short on time, that's why. Sorry again, and thank you for your support.

Check if port 5044 is open.

tcp        0      0 51.xx.xx.xx:5601       0.0.0.0:*               LISTEN      2082/node51
tcp6       0      0 51.xx.xx.xx:9200       :::*                    LISTEN      2064/java
tcp6       0      0 51.xx.xx.xx:9300       :::*                    LISTEN      2064/java

Yes, it's open:

tcp        0      0 51.xx.xx.xx:5601       0.0.0.0:*               LISTEN      2082/node
tcp6       0      0 51.xx.xx.xx:9200       :::*                    LISTEN      2064/java
tcp6       0      0 :::5044                 :::*                    LISTEN      11610/java

Sorry, this is how it appears.
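
For completeness, the listening check on the Logstash host and a reachability check from the Windows machine running Filebeat could look roughly like this (the hostname is the same placeholder used in the configs above):

    # on the Logstash server
    sudo netstat -tlnp | grep 5044        # or: ss -tlnp | grep 5044

    # from the Windows machine (PowerShell)
    Test-NetConnection -ComputerName myIpAdress -Port 5044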

Did you run Filebeat once before? If yes, then Filebeat thinks the files have already been sent.
In order to resend those files, you must delete the Filebeat registry. If you did not change the default registry path, it should be at /var/lib/filebeat/registry/filebeat/

  1. Stop filebeat
  2. Delete registry: "rm -rf /var/lib/filebeat/registry/filebeat"
  3. Start logstash pipeline
  4. Start filebeat

I am also an intern, using ELK 7.0.1. It's fun and really useful. :slight_smile:

Edit:
Sorry, I saw you are using Filebeat on Windows. I am not sure where the registry path is there, but the process is similar: delete the registry data and restart Filebeat.
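
For reference, a rough PowerShell equivalent might look like the sketch below. The registry location depends on how Filebeat was installed on Windows (for the ZIP download it is usually under the extracted folder's data\ directory; for a service install it is often C:\ProgramData\filebeat), so treat the path as an assumption and adjust it:

    Stop-Service filebeat                                            # stop the shipper first
    Remove-Item -Recurse -Force 'C:\ProgramData\filebeat\registry'   # assumed path, adjust to your install
    Start-Service filebeat                                           # files are re-read from the beginning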

I deleted the registry folder, but it is still only sending the operating system logs.
ELK version: 7.1.1.
After deleting the filter part, the messages were shipped.

This is my filter:

> filter {
>      if [source] =~ "gid" {
>         grok {
>             match => {"message" => "%{TIMESTAMP_ISO8601:timestamp}\|(\[%{BASE10NUM:nbr}\])\|%{IPORHOST:ClientIP}\|%{USERNAME:User};%{DATA:Node}\|%{URIPATH:Url}\|%{NOTSPACE:data}\|%{LOGLEVEL:Loglevel}\|%{GREEDYDATA:Message}"}
>         }
>      }else if [source] =~ "server" {
>         grok {
>             match =>{ "message" => [
>                 "(?m)%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:Loglevel} \[(?<Classname>[^\]]+)\] %{GREEDYDATA:Message}",
>                 "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:Loglevel}  \[(?<Classname>[^\]]+)\] %{WORD:n} \| %{NUMBER:number} \| %{WORD:b} \| %{DATA:Url} \| %{GREEDYDATA:Message}",
>                 "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:Loglevel}  \[(?<Classname>[^\]]+)\] %{GREEDYDATA:Message}"] }
>         }
>       }
>   date {match => [ "timestamp" , "MMM dd yyyy HH:mm:ss","MMM d yyyy HH:mm:ss", "ISO8601" ]
>       target => "@timestamp"}
>   mutate {
>         remove_field => [ "[beat][name]", "[beat][version]", "[beat][hostname]", "[host][architecture]", "[host][containerized]", "[host][id]", "[host][name]", "[host][codename]", "[host][family]", "[host][os][codename]", "[host][os][family]", "[host][os][name]", "[host][os][platform]", "[host][os][version]", "[input][type]", "[log][file][path]", "message", "offset", "source", "tags", "[prospector][type]" ]
>   }
> }
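
One way to see what this filter does to a single event is a small hypothetical test pipeline: read one sample line from stdin, apply only the "gid" grok pattern from above (with stdin there is no [source] field, so the conditional is left out), and keep message and tags visible by omitting the remove_field list. A minimal sketch:

    # test-pipeline.conf, run with: bin/logstash -f test-pipeline.conf
    input { stdin { } }                        # paste one sample log line and press Enter

    filter {
      grok {
        match => { "message" => "%{TIMESTAMP_ISO8601:timestamp}\|(\[%{BASE10NUM:nbr}\])\|%{IPORHOST:ClientIP}\|%{USERNAME:User};%{DATA:Node}\|%{URIPATH:Url}\|%{NOTSPACE:data}\|%{LOGLEVEL:Loglevel}\|%{GREEDYDATA:Message}" }
      }
      # no remove_field here, so "message" and "tags" (e.g. _grokparsefailure) stay visible
    }

    output { stdout { codec => rubydebug } }   # prints every parsed field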
