Need Help Configuring Filebeat

I have never used the Elastic Stack before, so I am trying to learn as I go. I am on a Windows machine and do not have access to curl. I am using 7.1.1 for Filebeat, Elasticsearch, and Kibana. I am trying to send CSVs to Elasticsearch using Filebeat. I do not have Logstash. When I run Filebeat using:
    ./filebeat -e -c filebeat.yml -d "elasticsearch"
It doesn't throw any errors, but I also do not see any data in Kibana.
Here is my filebeat.yml:
###################### Filebeat Configuration Example #########################

    # =========================== Filebeat inputs =============================

    filebeat.inputs:

    - type: log

      # Change to true to enable this input configuration.
      enabled: true

      # Paths that should be crawled and fetched. Glob based paths.
      paths:
        - C:/Program Files (x86)/Kantech/Server_SE/Report/*.csv
        # - c:\programdata\elasticsearch\logs\*

      exclude_lines: ['^Sequence']

      # Exclude files. A list of regular expressions to match. Filebeat drops the files that
      # are matching any regular expression from the list. By default, no files are dropped.
      exclude_files: ['.rsf','.adt','.adi']

    # ============================= Filebeat modules ===============================

    filebeat.config.modules:
      # Glob pattern for configuration loading
      path: ${path.config}/modules.d/*.yml

      # Set to true to enable config reloading
      reload.enabled: true

    # ==================== Elasticsearch template setting ==========================

    setup.template:
      enabled: true
      name: "my_pipeline_id"
      pattern: "my_pipeline_id-*"
      overwrite: true

    setup.template.settings:
      index.number_of_shards: 1
      index.codec: best_compression
      # _source.enabled: false
      # name: "sequence"
      # pattern: "*"

    # ================================ General ====================================

    # ============================== Dashboards =====================================
    # These settings control loading the sample dashboards to the Kibana index. Loading
    # the dashboards is disabled by default and can be enabled either by setting the
    # options here or by using the `setup` command.
    # setup.dashboards.enabled: false

    # ============================== Kibana =====================================

    # Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
    # This requires a Kibana endpoint configuration.
    setup.kibana:

      # Kibana Host
      # Scheme and port can be left out and will be set to the default (http and 5601)
      # In case you specify an additional path, the scheme is required: http://localhost:5601/path
      # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
      # host: "xxxxxxxx:5601"
      host: "localhost:5601"

      # Kibana Space ID
      # ID of the Kibana Space into which the dashboards should be loaded. By default,
      # the Default Space will be used.
      # space.id:

    # ============================= Elastic Cloud ==================================

    # ================================ Outputs =====================================

    # Configure what output to use when sending the data collected by the beat.

    # -------------------------- Elasticsearch output ------------------------------
    output.elasticsearch:
      # Array of hosts to connect to.
      # hosts: ["xxxxxxxx:9200"]
      enabled: true
      hosts: ["localhost:9200"]
      pipeline: my_pipeline_id
      index: "my_pipeline_id"
      indices:
        - index: "my_pipeline_id"
          mappings: 
          default: "my_pipeline_id"
      # Optional protocol and basic auth credentials.
      # protocol: "https"
      # username: "xxxxxxxx"
      # password: "xxxxxxxx"

    # ================================ Processors =====================================

    # Configure processors to enhance or manipulate events generated by the beat.

    processors:
      - add_host_metadata: ~
      - add_cloud_metadata: ~

    #================================ Logging =====================================

    # Sets log level. The default log level is info.
    # Available log levels are: error, warning, info, debug
    # logging.level: debug

    # At debug level, you can selectively enable logging only for some components.
    # To enable all selectors use ["*"]. Examples of other selectors are "beat",
    # "publish", "service".
    # logging.selectors: ["*"]

    #============================== Xpack Monitoring ===============================
    # filebeat can export internal metrics to a central Elasticsearch monitoring
    # cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
    # reporting is disabled by default.

    # Set to true to enable the monitoring reporter.
    # xpack.monitoring.enabled: false

Here is my pipeline.json:

    {
        "description": "Test pipeline",
        "processors": [
            {
                "grok": {
                    "field": "message",
                    "patterns": ["%{NUMBER:Sequence},%{MONTHNUM:Month}/%{MONTHDAY:Day}/%{YEAR:Year} %{TIME:Time} %{WORD:12-Hour},%{EM:Event_Message},%{NUMBER:Event_Number},%{OBJECT:Object_1},%{DESCRIPTION:Description_1},%{OBJECT:Object_2},%{DESCRIPTION:Description_2},%{OBJECT:Object_3},%{DESCRIPTION:Description_3},%{OBJECT:Object_4},%{DESCRIPTION:Description_4},%{DATA:Card_Number}"],
                    "pattern_definitions": {
                        "EM": ".+?(?=,)",
                        "OBJECT": "%{NUMBER}|.+?(?=,)",
                        "DESCRIPTION": ".+?(?=,)"
                    }
                }
            },
            {
                "set": {
                    "field": "@timestamp",
                    "value": "//"
                }
            },
            {
                "date": {
                    "field": "@timestamp",
                    "formats": ["yyyy/MM/dd"]
                }
            }
        ],
        "on_failure": [
            {
                "set": {
                    "field": "error",
                    "value": " - Error processing message - "
                }
            }
        ]
    }

Like I said, I am very new to this and any help would be greatly appreciated.

Hi @ShadowMole :slight_smile: Welcome to Elastic's Discuss forum

Silent failures like this are usually Grok parsing issues. Try an online Grok pattern checker just to double-check your pattern. I'm assuming that you have already put the pipeline into Elasticsearch, of course.
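Since you mentioned you don't have curl, here is a rough sketch of how you could load and test the pipeline from PowerShell instead. This assumes Elasticsearch is on the default localhost:9200 with no security enabled and that pipeline.json is in your current directory; the CSV line in the simulate body is only a placeholder you'd replace with a real line from one of your report files:

    # Load the ingest pipeline definition from pipeline.json into Elasticsearch
    Invoke-RestMethod -Method Put `
      -Uri "http://localhost:9200/_ingest/pipeline/my_pipeline_id" `
      -ContentType "application/json" `
      -InFile "pipeline.json"

    # Run one raw CSV line through the pipeline with the simulate API to see
    # whether the grok pattern matches (replace the placeholder with a real line)
    $body = '{ "docs": [ { "_source": { "message": "<paste one CSV line here>" } } ] }'
    Invoke-RestMethod -Method Post `
      -Uri "http://localhost:9200/_ingest/pipeline/my_pipeline_id/_simulate" `
      -ContentType "application/json" `
      -Body $body

The simulate response will show either the extracted fields or the grok failure, which is much easier to read than silence in Kibana.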

Another way to debug your issue is to activate console output in your Filebeat config file. To do this, comment out the lines under output.elasticsearch and add this instead (see https://www.elastic.co/guide/en/beats/filebeat/current/console-output.html):

    output.console:
      pretty: true

This will print whatever Filebeat reads to the console. Launch Filebeat with full debug output, filebeat -e -d "*", to see it.
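On Windows that would look something like this (assuming you run it from the Filebeat install directory in PowerShell):

    # Run Filebeat in the foreground with all debug selectors enabled
    .\filebeat.exe -e -c filebeat.yml -d "*"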

If this works and you see the data in your console, the problem is probably somewhere between Filebeat and Elasticsearch (network, auth or something similar), but we can look at that then :wink:
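If you want to rule out the connection itself (again without curl, and assuming the default localhost:9200 with no security), a couple of quick checks from PowerShell:

    # Returns the cluster name and version info if Elasticsearch is reachable
    Invoke-RestMethod -Uri "http://localhost:9200"

    # Lists all indices, so you can see whether anything was created for your data
    Invoke-RestMethod -Uri "http://localhost:9200/_cat/indices?v"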
