Filebeat 6.3.1 system module issue

Recently I installed Filebeat 6.3.1, and I see the system logs propagating to Elasticsearch with a 7-hour delay.

I have set var.convert_timezone: true, but I still don't see those logs with the current time.
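The 7-hour shift exactly matches our timezone offset (beat.timezone is -07:00 in the logs below), which is what you would get if a plain syslog timestamp were parsed as UTC. A minimal sketch against the ingest _simulate API reproduces the shift (assuming Elasticsearch on localhost:9200; the field name ts is made up for the example):

    curl -XPOST "http://localhost:9200/_ingest/pipeline/_simulate" -H 'Content-Type: application/json' -d'
    {
      "pipeline": {
        "processors": [
          { "date": { "field": "ts", "formats": ["MMM dd HH:mm:ss"] } }
        ]
      },
      "docs": [ { "_source": { "ts": "Jul 25 09:15:36" } } ]
    }'

Without a "timezone" setting on the date processor, "Jul 25 09:15:36" is parsed as 09:15:36 UTC, which is 7 hours away from the actual -07:00 wall-clock time.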

system module file:
/etc/filebeat/modules.d/system.yml

- module: system
  # Syslog
  syslog:
    enabled: true

    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:

    # Convert the timestamp to UTC. Requires Elasticsearch >= 6.1.
    #var.convert_timezone: false

  # Authorization logs
  auth:
    enabled: true

    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths:

    # Convert the timestamp to UTC. Requires Elasticsearch >= 6.1.
    var.convert_timezone: true
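For what it's worth, the module itself is enabled; that can be double-checked on any host with Filebeat's modules subcommand, which prints the Enabled and Disabled module lists for the modules.d directory in use:

    filebeat modules list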

@andrewkroh, @ruflin, would you please take a look into it?

Here is the log:

2018-07-25T09:15:38.932-0700	WARN	elasticsearch/client.go:502	Cannot index event publisher.Event{Content:beat.Event{Timestamp:time.Time{wall:0xbece466a77105996, ext:747401007978008, loc:(*time.Location)(0x1f4a300)}, Meta:common.MapStr{"pipeline":"filebeat-6.3.1-system-auth-pipeline"}, Fields:common.MapStr{"message":"Jul 25 09:15:36 autofind-i-0250f2df10123c4a3 sshd[25035]: User child is on pid 25085", "prospector":common.MapStr{"type":"log"}, "input":common.MapStr{"type":"log"}, "fileset":common.MapStr{"module":"system", "name":"auth"}, "beat":common.MapStr{"timezone":"-07:00", "name":"autofind-i-0250f2df10123c4a3.k.dev.eng-us", "hostname":"autofind-i-0250f2df10123c4a3.k.dev.eng-us", "version":"6.3.1"}, "host":common.MapStr{"name":"autofind-i-0250f2df10123c4a3.k.dev.eng-us"}, "source":"/var/log/auth.log", "offset":95506}, Private:file.State{Id:"", Finished:false, Fileinfo:(*os.fileStat)(0xc420a5cc30), Source:"/var/log/auth.log", Offset:95591, Timestamp:time.Time{wall:0xbece25dd388af459, ext:714068032790436, loc:(*time.Location)(0x1f4a300)}, TTL:-1, Type:"log", Meta:map[string]string{}, FileStateOS:file.StateOS{Inode:0x10db, Device:0xca01}}}, Flags:0x1} (status=400): {"type":"illegal_argument_exception","reason":"pipeline with id [filebeat-6.3.1-system-auth-pipeline] does not exist"}
2018-07-25T09:19:26.942-0700	INFO	[monitoring]	log/log.go:124	Non-zero metrics in the last 30s	{"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":53440,"time":{"ms":8}},"total":{"ticks":238730,"time":{"ms":8},"value":238730},"user":{"ticks":185290}},"info":{"ephemeral_id":"363fa537-770b-4948-8dae-afdf30a094a2","uptime":{"ms":747630020}},"memstats":{"gc_next":5694272,"memory_alloc":3940440,"memory_total":21587357176}},"filebeat":{"harvester":{"open_files":4,"running":4}},"libbeat":{"config":{"module":{"running":5}},"pipeline":{"clients":15,"events":{"active":0}}},"registrar":{"states":{"current":14}},"system":{"load":{"1":0,"15":0,"5":0,"norm":{"1":0,"15":0,"5":0}}}}}}
2018-07-25T09:19:56.942-0700	INFO	[monitoring]	log/log.go:124	Non-zero metrics in the last 30s	{"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":53440,"time":{"ms":4}},"total":{"ticks":238730,"time":{"ms":4},"value":238730},"user":{"ticks":185290}},"info":{"ephemeral_id":"363fa537-770b-4948-8dae-afdf30a094a2","uptime":{"ms":747660020}},"memstats":{"gc_next":5694272,"memory_alloc":4350864,"memory_total":21587767600}},"filebeat":{"harvester":{"open_files":4,"running":4}},"libbeat":{"config":{"module":{"running":5}},"pipeline":{"clients":15,"events":{"active":0}}},"registrar":{"states":{"current":14}},"system":{"load":{"1":0,"15":0,"5":0,"norm":{"1":0,"15":0,"5":0}}}}}}
2018-07-25T09:20:08.934-0700	WARN	elasticsearch/client.go:502	Cannot index event publisher.Event{Content:beat.Event{Timestamp:time.Time{wall:0xbece46adf79e7df0, ext:747671017293480, loc:(*time.Location)(0x1f4a300)}, Meta:common.MapStr{"pipeline":"filebeat-6.3.1-system-auth-pipeline"}, Fields:common.MapStr{"message":"Jul 25 09:20:01 autofind-i-0250f2df10123c4a3 CRON[25161]: pam_unix(cron:session): session opened for user root by (uid=0)", "input":common.MapStr{"type":"log"}, "fileset":common.MapStr{"module":"system", "name":"auth"}, "prospector":common.MapStr{"type":"log"}, "beat":common.MapStr{"timezone":"-07:00", "version":"6.3.1", "name":"autofind-i-0250f2df10123c4a3.k.dev.eng-us", "hostname":"autofind-i-0250f2df10123c4a3.k.dev.eng-us"}, "host":common.MapStr{"name":"autofind-i-0250f2df10123c4a3.k.dev.eng-us"}, "source":"/var/log/auth.log", "offset":95972}, Private:file.State{Id:"", Finished:false, Fileinfo:(*os.fileStat)(0xc420a5cc30), Source:"/var/log/auth.log", Offset:96094, Timestamp:time.Time{wall:0xbece25dd388af459, ext:714068032790436, loc:(*time.Location)(0x1f4a300)}, TTL:-1, Type:"log", Meta:map[string]string{}, FileStateOS:file.StateOS{Inode:0x10db, Device:0xca01}}}, Flags:0x1} (status=400): {"type":"illegal_argument_exception","reason":"pipeline with id [filebeat-6.3.1-system-auth-pipeline] does not exist"}
2018-07-25T09:20:13.934-0700	WARN	elasticsearch/client.go:502	Cannot index event publisher.Event{Content:beat.Event{Timestamp:time.Time{wall:0xbece46adf79e8d0c, ext:747671017297335, loc:(*time.Location)(0x1f4a300)}, Meta:common.MapStr{"pipeline":"filebeat-6.3.1-system-auth-pipeline"}, Fields:common.MapStr{"input":common.MapStr{"type":"log"}, "fileset":common.MapStr{"name":"auth", "module":"system"}, "prospector":common.MapStr{"type":"log"}, "beat":common.MapStr{"timezone":"-07:00", "name":"autofind-i-0250f2df10123c4a3.k.dev.eng-us", "hostname":"autofind-i-0250f2df10123c4a3.k.dev.eng-us", "version":"6.3.1"}, "host":common.MapStr{"name":"autofind-i-0250f2df10123c4a3.k.dev.eng-us"}, "source":"/var/log/auth.log", "offset":96094, "message":"Jul 25 09:20:01 autofind-i-0250f2df10123c4a3 CRON[25161]: pam_unix(cron:session): session closed for user root"}, Private:file.State{Id:"", Finished:false, Fileinfo:(*os.fileStat)(0xc420a5cc30), Source:"/var/log/auth.log", Offset:96205, Timestamp:time.Time{wall:0xbece25dd388af459, ext:714068032790436, loc:(*time.Location)(0x1f4a300)}, TTL:-1, Type:"log", Meta:map[string]string{}, FileStateOS:file.StateOS{Inode:0x10db, Device:0xca01}}}, Flags:0x1} (status=400): {"type":"illegal_argument_exception","reason":"pipeline with id [filebeat-6.3.1-system-auth-pipeline] does not exist"}
2018-07-25T09:20:26.942-0700	INFO	[monitoring]	log/log.go:124	Non-zero metrics in the last 30s	{"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":53440,"time":{"ms":8}},"total":{"ticks":238740,"time":{"ms":16},"value":238740},"user":{"ticks":185300,"time":{"ms":8}}},"info":{"ephemeral_id":"363fa537-770b-4948-8dae-afdf30a094a2","uptime":{"ms":747690019}},"memstats":{"gc_next":5695840,"memory_alloc":3572904,"memory_total":21588652112}},"filebeat":{"events":{"added":5,"done":5},"harvester":{"open_files":4,"running":4}},"libbeat":{"config":{"module":{"running":5}},"output":{"events":{"acked":3,"batches":2,"dropped":2,"total":5},"read":{"bytes":763},"write":{"bytes":3473}},"pipeline":{"clients":15,"events":{"active":0,"published":5,"total":5},"queue":{"acked":5}}},"registrar":{"states":{"current":14,"update":5},"writes":{"success":2,"total":2}},"system":{"load":{"1":0,"15":0,"5":0,"norm":{"1":0,"15":0,"5":0}}}}}}
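The warning says Elasticsearch does not have the pipeline at all. A direct check against the ingest pipeline API (a sketch; adjust host and port as needed) would be:

    curl -XGET "http://localhost:9200/_ingest/pipeline/filebeat-6.3.1-system-auth-pipeline"

A 404 here would confirm the pipeline was never loaded.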

Hi @mouli_v,

Could you provide the output of curl -XGET "http://localhost:9200/filebeat-*/_search?_source=@timestamp,beat.timezone", replacing localhost:9200 with your Elasticsearch host and port, if necessary?

Also, are the timestamps in your syslog in UTC or in your local time zone?

Shaunak

Also, one more request:

Could you please provide the output of curl -XGET "http://localhost:9200/_ingest/pipeline/file*syslog*" as well?


curl -XGET "http://localhost:9200/filebeat-*/_search?_source=@timestamp,beat.timezone"
{"took":197,"timed_out":false,"_shards":{"total":36,"successful":36,"skipped":0,"failed":0},"hits":{"total":76008730,"max_score":1.0,"hits":[{"_index":"filebeat-6.3.1-2018.07.16","_type":"doc","id":"uSnypWQBdaushYXi0Dp","_score":1.0,"_source":{"@timestamp":"2018-07-17T01:55:06.032Z"}},{"_index":"filebeat-6.3.1-2018.07.16","_type":"doc","id":"uinypWQBdaushYXi0Dp","_score":1.0,"_source":{"@timestamp":"2018-07-17T01:55:06.032Z"}},{"_index":"filebeat-6.3.1-2018.07.16","_type":"doc","id":"vCnypWQBdaushYXi0Dp","_score":1.0,"_source":{"@timestamp":"2018-07-17T01:55:06.032Z"}},{"_index":"filebeat-6.3.1-2018.07.16","_type":"doc","id":"vSnypWQBdaushYXi0Dp","_score":1.0,"_source":{"@timestamp":"2018-07-17T01:55:06.032Z"}},{"_index":"filebeat-6.3.1-2018.07.16","_type":"doc","id":"vynypWQBdaushYXi0Dp","_score":1.0,"_source":{"@timestamp":"2018-07-17T01:55:06.032Z"}},{"_index":"filebeat-6.3.1-2018.07.16","_type":"doc","id":"wSnypWQBdaushYXi0Dp","_score":1.0,"_source":{"@timestamp":"2018-07-17T01:55:06.032Z"}},{"_index":"filebeat-6.3.1-2018.07.16","_type":"doc","id":"wynypWQBdaushYXi0Dp","_score":1.0,"_source":{"@timestamp":"2018-07-17T01:55:06.032Z"}},{"_index":"filebeat-6.3.1-2018.07.16","_type":"doc","id":"yCnypWQBdaushYXi0Dp","_score":1.0,"_source":{"@timestamp":"2018-07-17T01:55:06.032Z"}},{"_index":"filebeat-6.3.1-2018.07.16","_type":"doc","id":"zCnypWQBdaushYXi0Dp","_score":1.0,"_source":{"@timestamp":"2018-07-17T01:55:06.032Z"}},{"_index":"filebeat-6.3.1-2018.07.16","_type":"doc","id":"zynypWQBdaushYXi0Dp","_score":1.0,"_source":{"@timestamp":"2018-07-17T01:55:06.032Z"}}]}}[quote="shaunak, post:3, topic:141587, full:true"]

curl -XGET "http://localhost:9200/_ingest/pipeline/file*syslog*"

{
   "filebeat-6.3.1-system-syslog-pipeline":{
      "description":"Pipeline for parsing Syslog messages.",
      "processors":[
         {
            "grok":{
               "pattern_definitions":{
                  "GREEDYMULTILINE":"(.|\n)*"
               },
               "ignore_missing":true,
               "field":"message",
               "patterns":[
                  "%{SYSLOGTIMESTAMP:system.syslog.timestamp} %{SYSLOGHOST:system.syslog.hostname} %{DATA:system.syslog.program}(?:\\[%{POSINT:system.syslog.pid}\\])?: %{GREEDYMULTILINE:system.syslog.message}",
                  "%{SYSLOGTIMESTAMP:system.syslog.timestamp} %{GREEDYMULTILINE:system.syslog.message}"
               ]
            }
         },
         {
            "remove":{
               "field":"message"
            }
         },
         {
            "date":{
               "target_field":"@timestamp",
               "formats":[
                  "MMM  d HH:mm:ss",
                  "MMM dd HH:mm:ss"
               ],
               "ignore_failure":true,
               "field":"system.syslog.timestamp"
            }
         }
      ],
      "on_failure":[
         {
            "set":{
               "field":"error.message",
               "value":"{{ _ingest.on_failure_message }}"
            }
         }
      ]
   },
   "filebeat-6.3.2-system-syslog-pipeline":{
      "description":"Pipeline for parsing Syslog messages.",
      "processors":[
         {
            "grok":{
               "field":"message",
               "patterns":[
                  "%{SYSLOGTIMESTAMP:system.syslog.timestamp} %{SYSLOGHOST:system.syslog.hostname} %{DATA:system.syslog.program}(?:\\[%{POSINT:system.syslog.pid}\\])?: %{GREEDYMULTILINE:system.syslog.message}",
                  "%{SYSLOGTIMESTAMP:system.syslog.timestamp} %{GREEDYMULTILINE:system.syslog.message}"
               ],
               "pattern_definitions":{
                  "GREEDYMULTILINE":"(.|\n)*"
               },
               "ignore_missing":true
            }
         },
         {
            "remove":{
               "field":"message"
            }
         },
         {
            "date":{
               "field":"system.syslog.timestamp",
               "target_field":"@timestamp",
               "formats":[
                  "MMM  d HH:mm:ss",
                  "MMM dd HH:mm:ss"
               ],
               "ignore_failure":true
            }
         }
      ],
      "on_failure":[
         {
            "set":{
               "field":"error.message",
               "value":"{{ _ingest.on_failure_message }}"
            }
         }
      ]
   }
}

I ran it on the Elasticsearch node.

Sorry, the host name in the curl request should be localhost, not elasticsearch. Copy-paste error from me!

Yup, I have adjusted that; please find the result I shared above.

Okay, I see the issue now. Looks like your filebeat ingest pipelines need to be updated. The easiest way to do this is:

  1. Stop filebeat
  2. Run curl -XDELETE "http://localhost:9200/_ingest/pipeline/filebeat-*"
  3. Start filebeat

To confirm that your filebeat ingest pipelines were indeed updated correctly, please run curl -XGET "http://localhost:9200/_ingest/pipeline/file*syslog*" again, and check that the date processor has the "timezone": "{{ beat.timezone }}" setting in it.
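For reference, a correctly updated 6.3.1 syslog pipeline should contain a date processor roughly like this (a sketch based on the pipeline dump above; the timezone line is the part to look for):

    "date": {
      "field": "system.syslog.timestamp",
      "target_field": "@timestamp",
      "formats": ["MMM  d HH:mm:ss", "MMM dd HH:mm:ss"],
      "timezone": "{{ beat.timezone }}",
      "ignore_failure": true
    }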

I already tried deleting the old ingest pipelines; still no luck.

curl -XGET "http://localhost:9200/_ingest/pipeline/file*syslog*"

{
   "filebeat-6.3.1-system-syslog-pipeline":{
      "description":"Pipeline for parsing Syslog messages.",
      "processors":[
         {
            "grok":{
               "pattern_definitions":{
                  "GREEDYMULTILINE":"(.|\n)*"
               },
               "ignore_missing":true,
               "field":"message",
               "patterns":[
                  "%{SYSLOGTIMESTAMP:system.syslog.timestamp} %{SYSLOGHOST:system.syslog.hostname} %{DATA:system.syslog.program}(?:\\[%{POSINT:system.syslog.pid}\\])?: %{GREEDYMULTILINE:system.syslog.message}",
                  "%{SYSLOGTIMESTAMP:system.syslog.timestamp} %{GREEDYMULTILINE:system.syslog.message}"
               ]
            }
         },
         {
            "remove":{
               "field":"message"
            }
         },
         {
            "date":{
               "target_field":"@timestamp",
               "formats":[
                  "MMM  d HH:mm:ss",
                  "MMM dd HH:mm:ss"
               ],
               "ignore_failure":true,
               "field":"system.syslog.timestamp"
            }
         }
      ],
      "on_failure":[
         {
            "set":{
               "field":"error.message",
               "value":"{{ _ingest.on_failure_message }}"
            }
         }
      ]
   }
}

We have two environments, and I have verified on both. Result for the prod environment:

curl -XGET "http://localhost:9200/filebeat-*/_search?_source=@timestamp,beat.timezone"
{"took":6,"timed_out":false,"_shards":{"total":18,"successful":18,"skipped":0,"failed":0},"hits":{"total":773273,"max_score":1.0,"hits":[{"_index":"filebeat-6.2.1-2018.07.25","_type":"doc","_id":"AE6k0WQB2cu211gImLq4","_score":1.0,"_source":{"@timestamp":"2018-07-25T13:33:01.320Z"}},{"_index":"filebeat-6.2.1-2018.07.25","_type":"doc","_id":"A06k0WQB2cu211gImLq4","_score":1.0,"_source":{"@timestamp":"2018-07-25T13:33:01.320Z"}},{"_index":"filebeat-6.2.1-2018.07.25","_type":"doc","_id":"Bk6k0WQB2cu211gImLq4","_score":1.0,"_source":{"@timestamp":"2018-07-25T13:33:01.320Z"}},{"_index":"filebeat-6.2.1-2018.07.25","_type":"doc","_id":"CE6k0WQB2cu211gImLq4","_score":1.0,"_source":{"@timestamp":"2018-07-25T13:33:01.321Z"}},{"_index":"filebeat-6.2.1-2018.07.25","_type":"doc","_id":"Ck6k0WQB2cu211gImLq4","_score":1.0,"_source":{"@timestamp":"2018-07-25T13:33:01.321Z"}},{"_index":"filebeat-6.2.1-2018.07.25","_type":"doc","_id":"C06k0WQB2cu211gImLq4","_score":1.0,"_source":{"@timestamp":"2018-07-25T13:33:01.321Z"}},{"_index":"filebeat-6.2.1-2018.07.25","_type":"doc","_id":"DE6k0WQB2cu211gImLq4","_score":1.0,"_source":{"@timestamp":"2018-07-25T13:33:01.321Z"}},{"_index":"filebeat-6.2.1-2018.07.25","_type":"doc","_id":"D06k0WQB2cu211gImLq4","_score":1.0,"_source":{"@timestamp":"2018-07-25T13:33:01.321Z"}},{"_index":"filebeat-6.2.1-2018.07.25","_type":"doc","_id":"EE6k0WQB2cu211gImLq4","_score":1.0,"_source":{"@timestamp":"2018-07-25T13:33:01.321Z"}},{"_index":"filebeat-6.2.1-2018.07.25","_type":"doc","_id":"EU6k0WQB2cu211gImLq4","_score":1.0,"_source":{"@timestamp":"2018-07-25T13:33:01.321Z"}}]}}

The problem definitely seems to be with the filebeat-6.3.1-system-syslog-pipeline ingest pipeline not getting updated even after you delete it and restart filebeat. I have to admit, I'm a bit stumped as to why that's not working for you, as it seems to be working for me (I've downloaded and tested with filebeat 6.3.1 as well).

After you run curl -XDELETE "http://localhost:9200/_ingest/pipeline/filebeat-*" but before you restart filebeat, can you run curl -XGET "http://localhost:9200/_ingest/pipeline/file*syslog*" again? I want to check if the pipeline is actually being deleted or not.

Shaunak

curl -XGET "http://localhost:9200/_ingest/pipeline/file*syslog*"
{"filebeat-6.3.1-system-syslog-pipeline":{"description":"Pipeline for parsing Syslog messages.","processors":[{"grok":{"pattern_definitions":{"GREEDYMULTILINE":"(.|\n)"},"ignore_missing":true,"field":"message","patterns":["%{SYSLOGTIMESTAMP:system.syslog.timestamp} %{SYSLOGHOST:system.syslog.hostname} %{DATA:system.syslog.program}(?:\[%{POSINT:system.syslog.pid}\])?: %{GREEDYMULTILINE:system.syslog.message}","%{SYSLOGTIMESTAMP:system.syslog.timestamp} %{GREEDYMULTILINE:system.syslog.message}"]}},{"remove":{"field":"message"}},{"date":{"target_field":"@timestamp","formats":["MMM d HH:mm:ss","MMM dd HH:mm:ss"],"ignore_failure":true,"field":"system.syslog.timestamp"}}],"on_failure":[{"set":{"field":"error.message","value":"{{ _ingest.on_failure_message }}"}}]},"filebeat-6.3.2-system-syslog-pipeline":{"description":"Pipeline for parsing Syslog messages.","processors":[{"grok":{"field":"message","patterns":["%{SYSLOGTIMESTAMP:system.syslog.timestamp} %{SYSLOGHOST:system.syslog.hostname} %{DATA:system.syslog.program}(?:\[%{POSINT:system.syslog.pid}\])?: %{GREEDYMULTILINE:system.syslog.message}","%{SYSLOGTIMESTAMP:system.syslog.timestamp} %{GREEDYMULTILINE:system.syslog.message}"],"pattern_definitions":{"GREEDYMULTILINE":"(.|\n)"},"ignore_missing":true}},{"remove":{"field":"message"}},{"date":{"field":"system.syslog.timestamp","target_field":"@timestamp","formats":["MMM d HH:mm:ss","MMM dd HH:mm:ss"],"ignore_failure":true}}],"on_failure":[{"set":{"value":"{{ _ingest.on_failure_message }}","field":"error.message"}}]}}root@sherlock-i-0c82683e261bb829e:~#

systemctl stop filebeat.service

curl -XDELETE "http://localhost:9200/_ingest/pipeline/filebeat-*"
{"acknowledged":true}

systemctl start filebeat.service

systemctl status filebeat.service
● filebeat.service - filebeat
Loaded: loaded (/lib/systemd/system/filebeat.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2018-07-25 12:00:56 PDT; 3s ago
Docs: https://www.elastic.co/guide/en/beats/filebeat/current/index.html
Main PID: 29980 (filebeat)
Tasks: 12
Memory: 16.1M
CPU: 219ms
CGroup: /system.slice/filebeat.service
└─29980 /usr/share/filebeat/bin/filebeat -c /etc/filebeat/filebeat.yml -path.home /usr/share/filebeat -path.config /etc/filebeat -path.data /var/lib/filebeat -path.logs /var/log/filebeat

Jul 25 12:00:56 sherlock-i-0c82683e261bb829e.k.dev.eng-us systemd[1]: Started filebeat.

curl -XGET "http://localhost:9200/_ingest/pipeline/file*syslog*"
{"filebeat-6.3.1-system-syslog-pipeline":{"description":"Pipeline for parsing Syslog messages.","processors":[{"grok":{"patterns":["%{SYSLOGTIMESTAMP:system.syslog.timestamp} %{SYSLOGHOST:system.syslog.hostname} %{DATA:system.syslog.program}(?:\[%{POSINT:system.syslog.pid}\])?: %{GREEDYMULTILINE:system.syslog.message}","%{SYSLOGTIMESTAMP:system.syslog.timestamp} %{GREEDYMULTILINE:system.syslog.message}"],"pattern_definitions":{"GREEDYMULTILINE":"(.|\n)*"},"ignore_missing":true,"field":"message"}},{"remove":{"field":"message"}},{"date":{"formats":["MMM d HH:mm:ss","MMM dd HH:mm:ss"],"ignore_failure":true,"field":"system.syslog.timestamp","target_field":"@timestamp"}}],"on_failure":[{"set":{"field":"error.message","value":"{{ _ingest.on_failure_message }}"}}]}}

Sorry if I wasn't clear earlier. Let's ignore filebeat altogether for a moment. Stop it, but don't start it up at all.

Then:

  1. Run curl -XDELETE "http://localhost:9200/_ingest/pipeline/filebeat-*"
  2. Run curl -XGET "http://localhost:9200/_ingest/pipeline/file*syslog*" and post its response here.

i-0c82683e261bb829e:~# systemctl stop filebeat.service

i-0c82683e261bb829e:~# curl -XDELETE "http://localhost:9200/_ingest/pipeline/filebeat-*"
{"acknowledged":true}

i-0c82683e261bb829e:~# curl -XGET "http://localhost:9200/_ingest/pipeline/file*syslog*"
{"filebeat-6.3.1-system-syslog-pipeline":{"on_failure":[{"set":{"field":"error.message","value":"{{ _ingest.on_failure_message }}"}}],"description":"Pipeline for parsing Syslog messages.","processors":[{"grok":{"patterns":["%{SYSLOGTIMESTAMP:system.syslog.timestamp} %{SYSLOGHOST:system.syslog.hostname} %{DATA:system.syslog.program}(?:\[%{POSINT:system.syslog.pid}\])?: %{GREEDYMULTILINE:system.syslog.message}","%{SYSLOGTIMESTAMP:system.syslog.timestamp} %{GREEDYMULTILINE:system.syslog.message}"],"pattern_definitions":{"GREEDYMULTILINE":"(.|\n)*"},"ignore_missing":true,"field":"message"}},{"remove":{"field":"message"}},{"date":{"formats":["MMM d HH:mm:ss","MMM dd HH:mm:ss"],"ignore_failure":true,"field":"system.syslog.timestamp","target_field":"@timestamp"}}]}}

Wow, this is bizarre. It would appear that you are unable to delete the ingest pipeline! Or something is immediately recreating it. Is it possible that there's a filebeat process somewhere that keeps restarting?
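One way to tell those two cases apart (a sketch; adjust host and port as needed): delete the pipelines, then poll for one of them and watch whether it comes back on its own:

    curl -XDELETE "http://localhost:9200/_ingest/pipeline/filebeat-*"
    while true; do
      # 404 means the pipeline is still gone; 200 means something recreated it
      curl -s -o /dev/null -w "%{http_code}\n" \
        "http://localhost:9200/_ingest/pipeline/filebeat-6.3.1-system-syslog-pipeline"
      sleep 5
    done

If the 404s flip back to 200 while filebeat is stopped on this host, some other filebeat instance is recreating the pipeline.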

Is it possible that there's a filebeat process somewhere that keeps restarting?
No. The service is still in a stopped state on the Elasticsearch instance.

The filebeat service is stopped on the Elasticsearch instance, but I have installed filebeat on 50+ servers, and all of them point to the same Elasticsearch instance. Do you think those keep recreating the pipeline, since the filebeat service is running on them?

Alright, let's try one more thing.

  1. Run curl -XDELETE "http://localhost:9200/_ingest/pipeline/filebeat-*"

  2. Edit your filebeat.yml and add: filebeat.overwrite_pipelines: true (see the sketch after this list)

  3. Start up filebeat

  4. In your filebeat logs, check if you have two lines like these:

    2018-07-25T13:29:18.570-0700    INFO    fileset/pipelines.go:62 Elasticsearch pipeline with ID 'filebeat-6.3.1-system-auth-pipeline' loaded
    2018-07-25T13:29:18.591-0700    INFO    fileset/pipelines.go:62 Elasticsearch pipeline with ID 'filebeat-6.3.1-system-syslog-pipeline' loaded
    
  5. Run curl -XGET "http://localhost:9200/_ingest/pipeline/file*syslog*" and post its response here.
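For step 2, the setting goes at the top level of filebeat.yml, next to your existing module configuration, something like this (a sketch; the config.modules block stands in for whatever you already have there):

    filebeat.config.modules:
      path: ${path.config}/modules.d/*.yml

    # Force filebeat to overwrite its ingest pipelines in Elasticsearch on startup
    filebeat.overwrite_pipelines: true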

No luck.

I have enabled other custom modules, and I can see those in the log, but not system:

2018-07-25T13:41:45.099-0700 INFO elasticsearch/client.go:690 Connected to Elasticsearch version 6.2.4
2018-07-25T13:41:45.100-0700 INFO template/load.go:73 Template already exists and will not be overwritten.
2018-07-25T13:41:45.119-0700 INFO fileset/pipelines.go:62 Elasticsearch pipeline with ID 'filebeat-6.3.1-ansible-log-pipeline' loaded
2018-07-25T13:41:45.132-0700 INFO fileset/pipelines.go:62 Elasticsearch pipeline with ID 'filebeat-6.3.1-aptitude-log-pipeline' loaded
2018-07-25T13:41:45.178-0700 INFO fileset/pipelines.go:62 Elasticsearch pipeline with ID 'filebeat-6.3.1-dpkg-log-pipeline' loaded
2018-07-25T13:41:45.227-0700 INFO fileset/pipelines.go:62 Elasticsearch pipeline with ID 'filebeat-6.3.1-mail-log-pipeline' loaded
2018-07-25T13:42:10.929-0700 INFO [monitoring] log/log.go:124 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":60,"time":{"ms":8}},"total":{"ticks":310,"time":{"ms":12},"value":310},"user":{"ticks":250,"time":{"ms":4}}},"info":{"ephemeral_id":"84990714-8d64-461a-899c-0a4e1946f224","uptime":{"ms":210009}},"memstats":{"gc_next":15048528,"memory_alloc":9175640,"memory_total":36340152,"rss":-8192}},"filebeat":{"harvester":{"open_files":3,"running":3}},"libbeat":{"config":{"module":{"running":5}},"output":{"read":{"bytes":1415},"write":{"bytes":4508}},"pipeline":{"clients":13,"events":{"active":4119,"retry":50}}},"registrar":{"states":{"current":13}},"system":{"load":{"1":1.69,"15":0.97,"5":1.24,"norm":{"1":0.4225,"15":0.2425,"5":0.31}}}}}}
2018-07-25T13:42:40.929-0700 INFO [monitoring] log/log.go:124 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":70,"time":{"ms":4}},"total":{"ticks":320,"time":{"ms":8},"value":320},"user":{"ticks":250,"time":{"ms":4}}},"info":{"ephemeral_id":"84990714-8d64-461a-899c-0a4e1946f224","uptime":{"ms":240009}},"memstats":{"gc_next":15048528,"memory_alloc":9541408,"memory_total":36705920}},"filebeat":{"harvester":{"open_files":3,"running":3}},"libbeat":{"config":{"module":{"running":5}},"pipeline":{"clients":13,"events":{"active":4119}}},"registrar":{"states":{"current":13}},"system":{"load":{"1":1.39,"15":0.97,"5":1.22,"norm":{"1":0.3475,"15":0.2425,"5":0.305}}}}}}
2018-07-25T13:42:45.238-0700 ERROR pipeline/output.go:74 Failed to connect: Connection marked as failed because the onConnect callback failed: Error loading pipeline for fileset system/auth: This module requires the ingest-geoip plugin to be installed in Elasticsearch. You can install it using the following command in the Elasticsearch home directory:
sudo bin/elasticsearch-plugin install ingest-geoip

--

2018-07-25T13:52:10.929-0700 INFO [monitoring] log/log.go:124 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":120,"time":{"ms":4}},"total":{"ticks":560,"time":{"ms":12},"value":560},"user":{"ticks":440,"time":{"ms":8}}},"info":{"ephemeral_id":"84990714-8d64-461a-899c-0a4e1946f224","uptime":{"ms":810008}},"memstats":{"gc_next":14949840,"memory_alloc":9194328,"memory_total":47956416}},"filebeat":{"harvester":{"open_files":3,"running":3}},"libbeat":{"config":{"module":{"running":5}},"output":{"read":{"bytes":1415},"write":{"bytes":4508}},"pipeline":{"clients":13,"events":{"active":4119,"retry":50}}},"registrar":{"states":{"current":13}},"system":{"load":{"1":1.85,"15":1.2,"5":1.46,"norm":{"1":0.4625,"15":0.3,"5":0.365}}}}}}
2018-07-25T13:52:40.929-0700 INFO [monitoring] log/log.go:124 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":130},"total":{"ticks":570,"time":{"ms":4},"value":570},"user":{"ticks":440,"time":{"ms":4}}},"info":{"ephemeral_id":"84990714-8d64-461a-899c-0a4e1946f224","uptime":{"ms":840008}},"memstats":{"gc_next":14949840,"memory_alloc":9556672,"memory_total":48318760}},"filebeat":{"harvester":{"open_files":3,"running":3}},"libbeat":{"config":{"module":{"running":5}},"pipeline":{"clients":13,"events":{"active":4119}}},"registrar":{"states":{"current":13}},"system":{"load":{"1":2.32,"15":1.25,"5":1.6,"norm":{"1":0.58,"15":0.3125,"5":0.4}}}}}}
2018-07-25T13:52:46.993-0700 ERROR pipeline/output.go:74 Failed to connect: Connection marked as failed because the onConnect callback failed: Error loading pipeline for fileset system/auth: This module requires the ingest-geoip plugin to be installed in Elasticsearch. You can install it using the following command in the Elasticsearch home directory:
sudo bin/elasticsearch-plugin install ingest-geoip
2018-07-25T13:52:46.993-0700 INFO [publish] pipeline/retry.go:172 retryer: send unwait-signal to consumer
2018-07-25T13:52:46.993-0700 INFO [publish] pipeline/retry.go:174 done
2018-07-25T13:52:46.993-0700 INFO [publish] pipeline/retry.go:149 retryer: send wait signal to consumer
2018-07-25T13:52:46.994-0700 INFO [publish] pipeline/retry.go:151 done
2018-07-25T13:52:46.994-0700 INFO elasticsearch/client.go:690 Connected to Elasticsearch version 6.2.4
2018-07-25T13:52:46.997-0700 INFO template/load.go:73 Template already exists and will not be overwritten.
2018-07-25T13:52:47.013-0700 INFO fileset/pipelines.go:62 Elasticsearch pipeline with ID 'filebeat-6.3.1-ansible-log-pipeline' loaded
2018-07-25T13:52:47.048-0700 INFO fileset/pipelines.go:62 Elasticsearch pipeline with ID 'filebeat-6.3.1-aptitude-log-pipeline' loaded
2018-07-25T13:52:47.063-0700 INFO fileset/pipelines.go:62 Elasticsearch pipeline with ID 'filebeat-6.3.1-dpkg-log-pipeline' loaded
2018-07-25T13:52:47.098-0700 INFO fileset/pipelines.go:62 Elasticsearch pipeline with ID 'filebeat-6.3.1-mail-log-pipeline' loaded
2018-07-25T13:52:47.136-0700 INFO fileset/pipelines.go:62 Elasticsearch pipeline with ID 'filebeat-6.3.1-system-syslog-pipeline' loaded
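If I'm reading the ERROR lines right, this looks like the root cause: loading the pipeline for the system/auth fileset fails because the ingest-geoip plugin is missing on the Elasticsearch node. Presumably the fix is what the log itself suggests, run on the Elasticsearch node and followed by a restart of that node so the plugin is picked up:

    sudo bin/elasticsearch-plugin install ingest-geoip
    # assuming a systemd-managed node; restart however your cluster is managed
    sudo systemctl restart elasticsearch.service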

Do you think this is causing the issue:

2018-07-25T13:41:45.099-0700 INFO elasticsearch/client.go:690 Connected to Elasticsearch version 6.2.4

Elasticsearch version 6.2.4
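For what it's worth, on Elasticsearch 6.2.4 ingest-geoip is a separate plugin that has to be installed explicitly on each node. A quick way to check which plugins are installed (a sketch; adjust host and port as needed):

    curl -XGET "http://localhost:9200/_cat/plugins?v"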