Filebeat SSH dashboard failing to show events (Kibana 7.0)

Hi All,

I have recently rebuilt my Elasticsearch and Kibana infrastructure on 7.0 and reinstalled my Filebeat and Metricbeat collectors, also on 7.0.

I currently collect access and error logs for apache2 (I have the module enabled) as well as syslog and auth logs (I have the system module enabled as well).

I ran filebeat setup to load the dashboards and reloaded the config, however the [Filebeat System] SSH login attempts ECS dashboard only seems to be displaying data from one host, and only intermittently - it is certainly not reliable.

I have checked the data in the Elasticsearch instance and it appears that there is auth log data from both hosts - a lot more events than are displayed on the dashboard.

The [Filebeat Apache] Access and error logs ECS dashboard is working fine.

Can anyone provide any advice as to why the dashboard is not displaying SSH auth and failure events?

Cheers,
Brad

Ok, it appears the date and time of the events is out by 12 hours. I worked out why this is happening but I can't fix it. The servers are set to the AU timezone with NTP sync enabled. The event times in auth.log are correct, but ingestion into Elasticsearch and visualisation in Kibana appear to be wrong and are skewing the data.

Any ideas?

Timestamps are converted to UTC by default. Have you set var.convert_timezone so that timestamps are interpreted in your local time zone?

After adding this setting you must run filebeat setup --pipelines to update the Ingest Node pipelines.
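For example, assuming a standard deb/rpm install and that the system module is the one with var.convert_timezone enabled, something like:

    # reload the Ingest Node pipelines for the system module
    filebeat setup --pipelines --modules system
    service filebeat restart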

Thanks Steffens

I have tried this but it doesn't appear to be working. It just seems to be very unreliable.

I am not sure if it is the new dashboard or time problems. The older dashboard in 6.2 just worked and displayed useful information.

I seem to be getting events for one host but not the other (I only have two hosts), and the events for the host I am getting don't seem to be accurate. I get events for people attempting to access my host (which is on the internet) but not events for when I log in, and everything shows as failed (there are no successes). Strange.

/var/log/auth.log doesn't look anything like the events I get in the dashboard.

Is there a dashboard that I can create or load based on the older one?

Cheers,
Brad

@bradfordaemorton could you check if the convert_timezone option is being applied to the pipelines? For that run GET _ingest/pipeline/filebeat-7.0.0-apache*, and check that the returned pipelines have an option like "timezone" : "{{ beat.timezone }}".

If it is not, filebeat setup --pipelines may not be enough to fix this due to an existing bug.
But you can try to restart one of your Filebeats with filebeat.overwrite_pipelines: true in the configuration; that should reinstall the pipelines.
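For reference, that is a single top-level line in filebeat.yml, something like:

    # filebeat.yml - reinstall all Ingest Node pipelines on startup
    filebeat.overwrite_pipelines: true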

If the "timezone" : "{{ beat.timezone }}" option is in the pipeline, check that events from the apache metricset contain the beat.timezone field.

Thanks Jaime,

Had a chance to run filebeat setup --pipelines and restarted the process, however the time zone is still out.

I had a chance to recreate some of the dashboards, which helped me look at the data correctly - the standard SSH dashboard created when setup is run is "rubbish" and provides no events in the visualisation.

After looking at the data properly I can confirm the data is out by 10 hours. An event stamped April 29 at 6:06 AM actually occurred on April 28 at 8:06 PM (AU).

The strange thing is there is a field called event.timezone with the value +10:00. This seems to suggest it is adding 10 hours to the time. Is that the case? The current setup is an apache2 server with its timezone set to AU and an Elasticsearch/Kibana instance with its timezone also set to AU. If both servers are in AU, maybe I don't need to set var.convert_timezone: true?
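If I lay out the arithmetic, it looks like the local timestamp is being parsed as UTC and then shifted again for display (my working, assuming Kibana displays in the browser's +10:00 timezone):

    actual event time:          Apr 28 20:06 AEST  (= Apr 28 10:06 UTC)
    auth.log line:              Apr 28 20:06       (no zone information)
    parsed without timezone:    Apr 28 20:06 UTC   (10 hours too late)
    displayed in Kibana (+10):  Apr 29 06:06       (matches what I see)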

An example event on my test box is below:

    @timestamp                   Apr 29, 2019 @ 06:06:24.000
    _id                          mthpY2oBqAWZ5IeoHP7q
    _index                       filebeat-7.0.0-2019.04.23-000001
    _type                        _doc
    agent.ephemeral_id           00784976-91a1-4159-ac8c-668234c0841a
    agent.hostname               ubuntu-s-1vcpu-2gb-fra1-01
    agent.id                     0a109873-fc26-4cd4-93d9-48d9c6ad2aa1
    agent.type                   filebeat
    agent.version                7.0.0
    cloud.instance.id            141140177
    cloud.provider               digitalocean
    cloud.region                 fra1
    ecs.version                  1.0.0
    event.action                 ssh_login
    event.category               authentication
    event.dataset                system.auth
    event.module                 system
    event.outcome                success
    event.timezone               +10:00
    event.type                   authentication_success
    fileset.name                 auth
    host.architecture            x86_64
    host.containerized           false
    host.hostname                ubuntu-s-1vcpu-2gb-fra1-01
    host.id                      93d600e2aa544019b668c69d38478540
    host.name                    ubuntu-s-1vcpu-2gb-fra1-01
    host.os.codename             bionic
    host.os.family               debian
    host.os.kernel               4.15.0-48-generic
    host.os.name                 Ubuntu
    host.os.platform             ubuntu
    host.os.version              18.04.2 LTS (Bionic Beaver)
    input.type                   log
    log.file.path                /var/log/auth.log
    log.offset                   7,260
    process.name                 sshd
    process.pid                  22,437
    service.type                 system
    source.geo.city_name         Canberra
    source.geo.continent_name    Oceania
    source.geo.country_iso_code  AU
    source.geo.location          { "lon": 149.1344, "lat": -35.276 }
    source.geo.region_iso_code   AU-ACT
    source.geo.region_name       Australian Capital Territory
    source.ip                    193.119.57.253
    source.port                  52,489
    suricata.eve.timestamp       Apr 29, 2019 @ 06:06:24.000
    system.auth.ssh.event        Accepted
    system.auth.ssh.method       password
    user.name                    root

Do you mean that there is no timezone option in the pipeline (as seen with GET _ingest/pipeline/filebeat-7.0.0-apache*)?

There are some issues with these dashboards in 7.0; this should be fixed in upcoming versions. You can find more about this in Filebeat system visualisation do not use ECS · Issue #11859 · elastic/beats · GitHub

This is correct; this field is added by Filebeat when var.convert_timezone: true is set. We'd have to confirm whether the installed pipeline is using this timezone field.
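For background (this is my understanding of the mechanism, not something specific to your setup): var.convert_timezone works by enabling the add_locale processor on the module's input, which is what stamps event.timezone on each event; the date processor in the ingest pipeline is then supposed to use that field to interpret the local syslog timestamps. It is roughly equivalent to configuring:

    # processor that var.convert_timezone enables on the input
    processors:
      - add_locale: ~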

Hi @jsoriano,

Thanks for the help on this.

I am really having issues with version 7.0 and starting to lose faith. Setting up a file monitor should not be this hard.

Now I am not getting any data for authentications from one host at all. I have run filebeat setup -e, filebeat setup -e --pipelines, service filebeat force-reload and service filebeat restart, and the system data just does not seem to be appearing in Elasticsearch. Is there a way to identify why the data is not there, or any errors that might be occurring?

I have confirmed the server can communicate with the Elasticsearch server, and data is being sent for apache2 access and error logs, but there is either a bug, a configuration issue, a timestamp issue, or a combination of them all.

Are there a few common places I can check for errors with file monitoring or yml conflicts?

Cheers,
Brad

Hi Team, @steffens @jsoriano

OK, I found this in the filebeat -e output:

2019-04-30T13:53:24.262+1000    ERROR   fileset/factory.go:105  Error creating input: Can only start an input when all related states are finished: {Id:2000-64513 Finished:false Fileinfo:0xc0003d15f0 Source:/var/log/auth.log Offset:168772 Timestamp:2019-04-30 13:53:22.218559334 +1000 AEST m=+134.080205720 TTL:-1ns Type:log Meta:map[] FileStateOS:2000-64513}
2019-04-30T13:53:24.262+1000    ERROR   [reload]        cfgfile/list.go:96      Error creating runner from config: Can only start an input when all related states are finished: {Id:2000-64513 Finished:false Fileinfo:0xc0003d15f0 Source:/var/log/auth.log Offset:168772 Timestamp:2019-04-30 13:53:22.218559334 +1000 AEST m=+134.080205720 TTL:-1ns Type:log Meta:map[] FileStateOS:2000-64513}

Full extract from output attached:

2019-04-30T13:58:44.397+1000    ERROR   fileset/factory.go:105  Error creating input: Can only start an input when all related states are finished: {Id:2000-64513 Finished:false Fileinfo:0xc0003d15f0 Source:/var/log/auth.log Offset:168888 Timestamp:2019-04-30 13:57:47.26100497 +1000 AEST m=+399.122651359 TTL:-1ns Type:log Meta:map[] FileStateOS:2000-64513}
2019-04-30T13:58:44.398+1000    ERROR   [reload]        cfgfile/list.go:96      Error creating runner from config: Can only start an input when all related states are finished: {Id:2000-64513 Finished:false Fileinfo:0xc0003d15f0 Source:/var/log/auth.log Offset:168888 Timestamp:2019-04-30 13:57:47.26100497 +1000 AEST m=+399.122651359 TTL:-1ns Type:log Meta:map[] FileStateOS:2000-64513}
2019-04-30T13:58:54.400+1000    ERROR   fileset/factory.go:105  Error creating input: Can only start an input when all related states are finished: {Id:2000-64513 Finished:false Fileinfo:0xc0003d15f0 Source:/var/log/auth.log Offset:168888 Timestamp:2019-04-30 13:57:47.26100497 +1000 AEST m=+399.122651359 TTL:-1ns Type:log Meta:map[] FileStateOS:2000-64513}
2019-04-30T13:58:54.401+1000    ERROR   [reload]        cfgfile/list.go:96      Error creating runner from config: Can only start an input when all related states are finished: {Id:2000-64513 Finished:false Fileinfo:0xc0003d15f0 Source:/var/log/auth.log Offset:168888 Timestamp:2019-04-30 13:57:47.26100497 +1000 AEST m=+399.122651359 TTL:-1ns Type:log Meta:map[] FileStateOS:2000-64513}
2019-04-30T13:59:04.404+1000    ERROR   fileset/factory.go:105  Error creating input: Can only start an input when all related states are finished: {Id:2000-64513 Finished:false Fileinfo:0xc0003d15f0 Source:/var/log/auth.log Offset:168888 Timestamp:2019-04-30 13:57:47.26100497 +1000 AEST m=+399.122651359 TTL:-1ns Type:log Meta:map[] FileStateOS:2000-64513}
2019-04-30T13:59:04.404+1000    ERROR   [reload]        cfgfile/list.go:96      Error creating runner from config: Can only start an input when all related states are finished: {Id:2000-64513 Finished:false Fileinfo:0xc0003d15f0 Source:/var/log/auth.log Offset:168888 Timestamp:2019-04-30 13:57:47.26100497 +1000 AEST m=+399.122651359 TTL:-1ns Type:log Meta:map[] FileStateOS:2000-64513}
2019-04-30T13:59:08.161+1000    INFO    [monitoring]    log/log.go:144  Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":140,"time":{"ms":12}},"total":{"ticks":850,"time":{"ms":31},"value":850},"user":{"ticks":710,"time":{"ms":19}}},"handles":{"limit":{"hard":1048576,"soft":1024},"open":12},"info":{"ephemeral_id":"fa51d92a-5478-4a14-be3e-522b36f2ad68","uptime":{"ms":480019}},"memstats":{"gc_next":5671568,"memory_alloc":4680368,"memory_total":106943792}},"filebeat":{"events":{"added":6,"done":6},"harvester":{"open_files":3,"running":3}},"libbeat":{"config":{"module":{"running":0},"reloads":3},"output":{"events":{"acked":6,"batches":3,"total":6},"read":{"bytes":1090},"write":{"bytes":7194}},"pipeline":{"clients":54,"events":{"active":0,"published":6,"total":6},"queue":{"acked":6}}},"registrar":{"states":{"current":49,"update":6},"writes":{"success":3,"total":3}},"system":{"load":{"1":0,"15":0,"5":0,"norm":{"1":0,"15":0,"5":0}}}}}}
2019-04-30T13:59:14.407+1000    ERROR   fileset/factory.go:105  Error creating input: Can only start an input when all related states are finished: {Id:2000-64513 Finished:false Fileinfo:0xc0003d15f0 Source:/var/log/auth.log Offset:168888 Timestamp:2019-04-30 13:57:47.26100497 +1000 AEST m=+399.122651359 TTL:-1ns Type:log Meta:map[] FileStateOS:2000-64513}
2019-04-30T13:59:14.407+1000    ERROR   [reload]        cfgfile/list.go:96      Error creating runner from config: Can only start an input when all related states are finished: {Id:2000-64513 Finished:false Fileinfo:0xc0003d15f0 Source:/var/log/auth.log Offset:168888 Timestamp:2019-04-30 13:57:47.26100497 +1000 AEST m=+399.122651359 TTL:-1ns Type:log Meta:map[] FileStateOS:2000-64513}
2019-04-30T13:59:24.412+1000    ERROR   fileset/factory.go:105  Error creating input: Can only start an input when all related states are finished: {Id:2000-64513 Finished:false Fileinfo:0xc0003d15f0 Source:/var/log/auth.log Offset:168888 Timestamp:2019-04-30 13:57:47.26100497 +1000 AEST m=+399.122651359 TTL:-1ns Type:log Meta:map[] FileStateOS:2000-64513}
2019-04-30T13:59:24.412+1000    ERROR   [reload]        cfgfile/list.go:96      Error creating runner from config: Can only start an input when all related states are finished: {Id:2000-64513 Finished:false Fileinfo:0xc0003d15f0 Source:/var/log/auth.log Offset:168888 Timestamp:2019-04-30 13:57:47.26100497 +1000 AEST m=+399.122651359 TTL:-1ns Type:log Meta:map[] FileStateOS:2000-64513}
2019-04-30T13:59:34.415+1000    ERROR   fileset/factory.go:105  Error creating input: Can only start an input when all related states are finished: {Id:2000-64513 Finished:false Fileinfo:0xc0003d15f0 Source:/var/log/auth.log Offset:168888 Timestamp:2019-04-30 13:57:47.26100497 +1000 AEST m=+399.122651359 TTL:-1ns Type:log Meta:map[] FileStateOS:2000-64513}
2019-04-30T13:59:34.416+1000    ERROR   [reload]        cfgfile/list.go:96      Error creating runner from config: Can only start an input when all related states are finished: {Id:2000-64513 Finished:false Fileinfo:0xc0003d15f0 Source:/var/log/auth.log Offset:168888 Timestamp:2019-04-30 13:57:47.26100497 +1000 AEST m=+399.122651359 TTL:-1ns Type:log Meta:map[] FileStateOS:2000-64513}
2019-04-30T13:59:38.162+1000    INFO    [monitoring]    log/log.go:144  Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":150,"time":{"ms":9}},"total":{"ticks":890,"time":{"ms":37},"value":890},"user":{"ticks":740,"time":{"ms":28}}},"handles":{"limit":{"hard":1048576,"soft":1024},"open":12},"info":{"ephemeral_id":"fa51d92a-5478-4a14-be3e-522b36f2ad68","uptime":{"ms":510019}},"memstats":
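From reading around, this error seems to mean that two inputs are trying to harvest the same file - for example the system module's auth fileset plus a plain log input on the same path. A hypothetical conflicting config (not necessarily mine) would look like:

    # filebeat.yml - hypothetical duplicate input: the system module's
    # auth fileset already harvests this path
    filebeat.inputs:
      - type: log
        paths:
          - /var/log/auth.log

Removing the duplicate path and restarting should clear it.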

@jsoriano @steffens

OK, I am back to square one: after updating the configuration and restarting the process, data from both servers is now being collected, but the time is still out.

An example event occurred on April 30 at 4:24 PM (AU time), but the event timestamp in Kibana shows May 1, 2019 @ 02:24:20.000 (10 hours different). The event is below:

    @timestamp                   May 1, 2019 @ 02:24:20.000
    _id                          UufqbGoBiYFzG3OzhH8o
    _index                       filebeat-7.0.0-2019.04.23-000001
    _type                        _doc
    agent.ephemeral_id           1e36dd11-d2f3-4126-b51f-78055310aec9
    agent.hostname               wordpress-s-1vcpu-1gb-fra1-01
    agent.id                     31dca386-48b9-485c-ad2f-4f284614eb34
    agent.type                   filebeat
    agent.version                7.0.0
    cloud.instance.id            118378094
    cloud.provider               digitalocean
    cloud.region                 fra1
    ecs.version                  1.0.0
    event.action                 ssh_login
    event.category               authentication
    event.dataset                system.auth
    event.module                 system
    event.outcome                success
    event.timezone               +10:00
    event.type                   authentication_success
    fileset.name                 auth
    host.architecture            x86_64
    host.containerized           false
    host.hostname                wordpress-s-1vcpu-1gb-fra1-01
    host.id                      c9afd5eca53543e082c2f6eaf495c72b
    host.name                    wordpress-s-1vcpu-1gb-fra1-01
    host.os.codename             bionic
    host.os.family               debian
    host.os.kernel               4.15.0-48-generic
    host.os.name                 Ubuntu
    host.os.platform             ubuntu
    host.os.version              18.04.2 LTS (Bionic Beaver)
    input.type                   log
    log.file.path                /var/log/auth.log
    log.offset                   186,701
    process.name                 sshd
    process.pid                  4,102
    service.type                 system
    source.geo.city_name         Canberra
    source.geo.continent_name    Oceania
    source.geo.country_iso_code  AU
    source.geo.location          { "lon": 149.1344, "lat": -35.276 }
    source.geo.region_iso_code   AU-ACT
    source.geo.region_name       Australian Capital Territory
    source.ip                    193.119.57.253
    source.port                  56,228
    suricata.eve.timestamp       May 1, 2019 @ 02:24:20.000
    system.auth.ssh.event        Accepted
    system.auth.ssh.method       password
    user.name                    root

@bradfordaemorton did you check if the timezone option appears in GET _ingest/pipeline/filebeat-7.0.0-apache*? This would be very helpful for investigating this problem.

I ran GET _ingest/pipeline/filebeat-7.0.0-apache*, however it is the standard Filebeat system/auth logs that are the issue (not Apache).

I have attached the output from _ingest/pipeline/filebeat-7.0.0* and included the extract from "filebeat-7.0.0-system-auth-pipeline" in the next message.

> {"filebeat-7.0.0-apache-access-default":{"description":"Pipeline for parsing Apache HTTP Server access logs. Requires the geoip and user_agent plugins.","processors":[{"grok":{"field":"message","patterns":["%{IPORHOST:source.address} - %{DATA:user.name} \\[%{HTTPDATE:apache.access.time}\\] \"%{WORD:http.request.method} %{DATA:url.original} HTTP/%{NUMBER:http.version}\" %{NUMBER:http.response.status_code:long} (?:%{NUMBER:http.response.body.bytes:long}|-)( \"%{DATA:http.request.referrer}\")?( \"%{DATA:user_agent.original}\")?","%{IPORHOST:source.address} - %{DATA:user.name} \\[%{HTTPDATE:apache.access.time}\\] \"-\" %{NUMBER:http.response.status_code:long} -","\\[%{HTTPDATE:apache.access.time}\\] %{IPORHOST:source.address} %{DATA:apache.access.ssl.protocol} %{DATA:apache.access.ssl.cipher} \"%{WORD:http.request.method} %{DATA:url.original} HTTP/%{NUMBER:http.version}\" %{NUMBER:http.response.body.bytes:long}"],"ignore_missing":true}},{"remove":{"field":"message"}},{"grok":{"field":"source.address","ignore_missing":true,"patterns":["^(%{IP:source.ip}|%{HOSTNAME:source.domain})$"]}},{"rename":{"field":"@timestamp","target_field":"event.created"}},{"date":{"target_field":"@timestamp","formats":["dd/MMM/yyyy:H:m:s Z"],"field":"apache.access.time"}},{"remove":{"field":"apache.access.time"}},{"user_agent":{"field":"user_agent.original","ignore_failure":true}},{"geoip":{"field":"source.ip","target_field":"source.geo","ignore_missing":true}}],"on_failure":[{"set":{"field":"error.message","value":"{{ _ingest.on_failure_message }}"}}]},"filebeat-7.0.0-apache-error-pipeline":{"description":"Pipeline for parsing apache error logs","processors":[{"grok":{"field":"message","patterns":["\\[%{APACHE_TIME:apache.error.timestamp}\\] \\[%{LOGLEVEL:log.level}\\]( \\[client %{IPORHOST:source.address}\\])? %{GREEDYDATA:message}","\\[%{APACHE_TIME:apache.error.timestamp}\\] \\[%{DATA:apache.error.module}:%{LOGLEVEL:log.level}\\] \\[pid %{NUMBER:process.pid:long}(:tid %{NUMBER:process.thread.id:long})?\\]( \\[client %{IPORHOST:source.address}\\])? %{GREEDYDATA:message}"],"pattern_definitions":{"APACHE_TIME":"%{DAY} %{MONTH} %{MONTHDAY} %{TIME} %{YEAR}"},"ignore_missing":true}},{"date":{"target_field":"@timestamp","formats":["EEE MMM dd H:m:s yyyy","EEE MMM dd H:m:s.SSSSSS yyyy"],"ignore_failure":true,"field":"apache.error.timestamp"}},{"remove":{"field":"apache.error.timestamp","ignore_failure":true}},{"grok":{"field":"source.address","ignore_missing":true,"patterns":["^(%{IP:source.ip}|%{HOSTNAME:source.domain})$"]}},{"geoip":{"field":"source.ip","target_field":"source.geo","ignore_missing":true}}],"on_failure":[{"set":{"field":"error.message","value":"{{ _ingest.on_failure_message }}"}}]}}

I have attached the output from _ingest/pipeline/filebeat-7.0.0* and included the extract from "filebeat-7.0.0-system-auth-pipeline" below:

"filebeat-7.0.0-system-auth-pipeline":{"description":"Pipeline for parsing system authorisation/secure logs","processors":[{"grok":{"field":"message","ignore_missing":true,"pattern_definitions":{"GREEDYMULTILINE":"(.|\n)"},"patterns":["%{SYSLOGTIMESTAMP:system.auth.timestamp} %{SYSLOGHOST:host.hostname} %{DATA:process.name}(?:\[%{POSINT:process.pid:long}\])?: %{DATA:system.auth.ssh.event} %{DATA:system.auth.ssh.method} for (invalid user )?%{DATA:user.name} from %{IPORHOST:source.ip} port %{NUMBER:source.port:long} ssh2(: %{GREEDYDATA:system.auth.ssh.signature})?","%{SYSLOGTIMESTAMP:system.auth.timestamp} %{SYSLOGHOST:host.hostname} %{DATA:process.name}(?:\[%{POSINT:process.pid:long}\])?: %{DATA:system.auth.ssh.event} user %{DATA:user.name} from %{IPORHOST:source.ip}","%{SYSLOGTIMESTAMP:system.auth.timestamp} %{SYSLOGHOST:host.hostname} %{DATA:process.name}(?:\[%{POSINT:process.pid:long}\])?: Did not receive identification string from %{IPORHOST:system.auth.ssh.dropped_ip}","%{SYSLOGTIMESTAMP:system.auth.timestamp} %{SYSLOGHOST:host.hostname} %{DATA:process.name}(?:\[%{POSINT:process.pid:long}\])?: \s%{DATA:user.name} :frowning: %{DATA:system.auth.sudo.error} ;)? TTY=%{DATA:system.auth.sudo.tty} ; PWD=%{DATA:system.auth.sudo.pwd} ; USER=%{DATA:system.auth.sudo.user} ; COMMAND=%{GREEDYDATA:system.auth.sudo.command}","%{SYSLOGTIMESTAMP:system.auth.timestamp} %{SYSLOGHOST:host.hostname} %{DATA:process.name}(?:\[%{POSINT:process.pid:long}\])?: new group: name=%{DATA:group.name}, GID=%{NUMBER:group.id}","%{SYSLOGTIMESTAMP:system.auth.timestamp} %{SYSLOGHOST:host.hostname} %{DATA:process.name}(?:\[%{POSINT:process.pid:long}\])?: new user: name=%{DATA:user.name}, UID=%{NUMBER:user.id}, GID=%{NUMBER:group.id}, home=%{DATA:system.auth.useradd.home}, shell=%{DATA:system.auth.useradd.shell}$","%{SYSLOGTIMESTAMP:system.auth.timestamp} %{SYSLOGHOST:host.hostname}? 
%{DATA:process.name}(?:\[%{POSINT:process.pid:long}\])?: %{GREEDYMULTILINE:system.auth.message}"]}},{"remove":{"field":"message"}},{"rename":{"target_field":"message","ignore_missing":true,"field":"system.auth.message"}},{"set":{"field":"source.ip","value":"{{system.auth.ssh.dropped_ip}}","if":"ctx.containsKey('system') && ctx.system.containsKey('auth') && ctx.system.auth.containsKey('ssh') && ctx.system.auth.ssh.containsKey('dropped_ip')"}},{"date":{"field":"system.auth.timestamp","target_field":"@timestamp","formats":["MMM d HH:mm:ss","MMM dd HH:mm:ss"],"ignore_failure":true}},{"remove":{"field":"system.auth.timestamp"}},{"geoip":{"field":"source.ip","target_field":"source.geo","ignore_failure":true}},{"script":{"lang":"painless","ignore_failure":true,"source":"if (ctx.system.auth.ssh.event == "Accepted") { if (!ctx.containsKey("event")) { ctx.event = [:]; } ctx.event.type = "authentication_success"; ctx.event.category = "authentication"; ctx.event.action = "ssh_login"; ctx.event.outcome = "success"; } else if (ctx.system.auth.ssh.event == "Invalid" || ctx.system.auth.ssh.event == "Failed") { if (!ctx.containsKey("event")) { ctx.event = [:]; } ctx.event.type = "authentication_failure"; ctx.event.category = "authentication"; ctx.event.action = "ssh_login"; ctx.event.outcome = "failure"; }"}}],"on_failure":[{"set":{"field":"error.message","value":"{{ _ingest.on_failure_message }}"}}]},"filebeat-7.0.0-system-syslog-pipeline":{"description":"Pipeline for parsing Syslog messages.","processors":[{"grok":{"field":"message","patterns":["%{SYSLOGTIMESTAMP:system.syslog.timestamp} %{SYSLOGHOST:host.hostname} %{DATA:process.name}(?:\[%{POSINT:process.pid:long}\])?: %{GREEDYMULTILINE:system.syslog.message}","%{SYSLOGTIMESTAMP:system.syslog.timestamp} %{GREEDYMULTILINE:system.syslog.message}","%{TIMESTAMP_ISO8601:system.syslog.timestamp} %{SYSLOGHOST:host.hostname} %{DATA:process.name}(?:\[%{POSINT:process.pid:long}\])?: %{GREEDYMULTILINE:system.syslog.message}"],"pattern_definitions":{"GREEDYMULTILINE":"(.|\n)*"},"ignore_missing":true}},{"remove":{"field":"message"}},{"rename":{"ignore_missing":true,"field":"system.syslog.message","target_field":"message"}},{"date":{"target_field":"@timestamp","formats":["MMM d HH:mm:ss","MMM dd HH:mm:ss","yyyy-MM-dd'T'HH:mm:ss.SSSSSSZZ"],"ignore_failure":true,"field":"system.syslog.timestamp"}},{"remove":{"field":"system.syslog.timestamp"}}],"on_failure":[{"set":{"field":"error.message","value":"{{ _ingest.on_failure_message }}"}}]},"filebeat-7.0.0-elasticsearch-deprecation-pipeline":{"description":"Pipeline for parsing elasticsearch deprecation logs","processors":[{"rename":{"field":"@timestamp","target_field":"event.created"}},{"grok":{"field":"message","patterns":["^%{CHAR:first_char}"],"pattern_definitions":{"CHAR":"."}}},{"pipeline":{"if":"ctx.first_char != '{'","name":"filebeat-7.0.0-elasticsearch-deprecation-pipeline-plaintext"}},{"pipeline":{"if":"ctx.first_char == '{'","name":"filebeat-7.0.0-elasticsearch-deprecation-pipeline-json"}},{"date":{"formats":["ISO8601"],"ignore_failure":true,"field":"elasticsearch.deprecation.timestamp","target_field":"@timestamp"}},{"remove":{"field":"elasticsearch.deprecation.timestamp"}},{"remove":{"field":["first_char"]}}],"on_failure":[{"set":{"field":"error.message","value":"{{ _ingest.on_failure_message }}"}}]}}

Yes, sorry, I meant the pipeline for the auth fileset. Apache logs already include the timezone in their timestamps, so no conversion is needed.
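The difference is visible in the raw logs: Apache access timestamps carry an explicit offset, which the pipeline's date processor parses with its dd/MMM/yyyy:H:m:s Z format, whereas syslog timestamps have no zone information at all. Illustrative samples (not taken from your logs):

    [29/Apr/2019:06:06:24 +1000]   <- apache2 access log: offset included
    Apr 29 06:06:24                <- /var/log/auth.log: no offset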

I cannot see the timezone option in the auth pipeline, and it should be there. Let's focus on solving this; double-check a couple of things:

  • Is var.convert_timezone set to true in the auth fileset? It should be something like:
- module: system
  syslog:
    enabled: true
    var.convert_timezone: true
  auth:
    enabled: true
    var.convert_timezone: true
  • Did you try to restart filebeat with filebeat.overwrite_pipelines: true in the main configuration so pipelines are reinstalled? (Notice that filebeat setup --pipelines does not work with var.convert_timezone due to an existing bug)

Hi @jsoriano, I set filebeat.overwrite_pipelines: true in filebeat.reference.yml, re-ran setup and restarted filebeat. It has not fixed the issues - is this where I change the configuration, or do I add it to filebeat.yml?

You should add it to filebeat.yml. It is only needed once, so after the first run you can remove the line.

An alternative is to run filebeat once with this option in the command line, like filebeat run -e --once -E filebeat.overwrite_pipelines=true.

@jsoriano @steffens

Just uninstalled 7.0.0, installed 7.0.1, and the problem has been resolved.
