Filebeat Crowdstrike Module doesn't handle unix timestamps of 0 correctly

Hi,

CrowdStrike stores events with a ProcessEndTime of 0. For example (shortened JSON):

{
    "event": {
        "ProcessStartTime": 1617278228,
        "ProcessEndTime": 0
    }
}

Both fields are parsed by the same function in /usr/share/filebeat/module/crowdstrike/falcon/config/pipeline.js, lines 392 and 393:

convertToMSEpoch(evt, "crowdstrike.event.ProcessStartTime")
convertToMSEpoch(evt, "crowdstrike.event.ProcessEndTime")

ProcessStartTime is correctly converted, but ProcessEndTime is passed through unchanged as 0.
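A guarded conversion would treat 0 as "unset" instead of forwarding it. A minimal sketch of that idea (a hypothetical helper, not the actual pipeline.js code, which works through the Beats event API):

```javascript
// Hypothetical sketch: convert a Unix timestamp in seconds to milliseconds,
// treating 0 as "no value" instead of passing it through unchanged.
function toMSEpoch(ts) {
    if (typeof ts !== "number" || ts === 0) {
        return null; // 0 means e.g. "process has not ended"; caller should drop the field
    }
    if (ts >= 1e11) {
        return ts; // value already looks like milliseconds (12+ digits)
    }
    return ts * 1000; // seconds -> milliseconds
}
```

With a guard like this, the date mapper never sees a bare 0 and the mapper_parsing_exception below cannot occur.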

So the Elasticsearch ingest pipeline complains:

{
	:status=>400, 
	:action=>[
		"index", 
		{
			:_id=>nil, 
			:_index=>"logs", 
			:routing=>nil, 
			:pipeline=>"filebeat-7.11.2-crowdstrike-falcon-pipeline"
		}, 
		#<LogStash::Event:0xca0678b>
	], 
	:response=>{
		"index"=>{
			"_index"=>"logs-2021.03.26-000004", 
			"_type"=>"_doc", 
			"_id"=>"pedWjXgBKwCZQLf7YkgS", 
			"status"=>400, 
			"error"=>{
				"type"=>"mapper_parsing_exception", 
				"reason"=>"failed to parse field [crowdstrike.event.ProcessEndTime] of type [date] in document with id 'pedWjXgBKwCZQLf7YkgS'. Preview of field's value: '0'", 
				"caused_by"=>{
					"type"=>"illegal_argument_exception", 
					"reason"=>"failed to parse date field [0] with format [strict_date_optional_time]", 
					"caused_by"=>{
						"type"=>"date_time_parse_exception", 
						"reason"=>"Text '0' could not be parsed at index 0"
					}
				}
			}
		}
	}
}

This looks like a bug to me. Do you agree? :slight_smile:

Regards,
Marcel

So I guess it depends. What does an end time of 0 mean? That the process isn't over? If so, we can add a conditional to that section.

I think this is also a bug in CrowdStrike's event handler. It's a detection event for a blocked process, so StartTime should be equal to EndTime.

But that's not the point. The Filebeat CrowdStrike module converts Unix time to a regular timestamp. It works with timestamps greater than 0, but not with 0. I would expect a call with a Unix time of 0 to return a 1970 date; instead it returns the input, 0.
That doesn't seem correct.

It doesn't matter what the CrowdStrike timestamp means. :slight_smile:

Ya that's valid

Looks like you're using 7.11. 7.12 has a redone CrowdStrike module that no longer uses the JS processors, so this issue may be resolved.

Hi,

I tried it with 7.12.0, but there are additional bugs with the ingest pipelines.
Filebeat 7.12.0 installs the following pipelines:

  • filebeat-7.12.0-crowdstrike-falcon-detection_summary
  • filebeat-7.12.0-crowdstrike-falcon-firewall_match
  • filebeat-7.12.0-crowdstrike-falcon-incident_summary
  • filebeat-7.12.0-crowdstrike-falcon-remote_response_session_end
  • filebeat-7.12.0-crowdstrike-falcon-remote_response_session_start
  • filebeat-7.12.0-crowdstrike-falcon-user_activity_audit

But it wants filebeat-7.12.0-crowdstrike-falcon-pipeline, which no longer exists in 7.12.0. 7.11.2 has this pipeline. :wink:

I cloned filebeat-7.11.2-crowdstrike-falcon-pipeline to filebeat-7.12.0-crowdstrike-falcon-pipeline and tried again. The error still occurs:

	{
		"index"=>{
			"_index"=>"logs-2021.04.02-000005", 
			"_type"=>"_doc", 
			"_id"=>"6Kvnq3gBrKMtTiHhGKlc", 
			"status"=>400, 
			"error"=>{
				"type"=>"mapper_parsing_exception", 
				"reason"=>"failed to parse field [crowdstrike.metadata.eventCreationTime] of type [date] in document with id '6Kvnq3gBrKMtTiHhGKlc'. Preview of field's value: '1617791728811'", 
				"caused_by"=>{
					"type"=>"illegal_argument_exception", 
					"reason"=>"failed to parse date field [1617791728811] with format [strict_date_optional_time]", 
					"caused_by"=>{
						"type"=>"date_time_parse_exception", 
						"reason"=>"date_time_parse_exception: Text '1617791728811' could not be parsed at index 0"
					}
				}
			}
		}
	}

So Filebeat 7.12.0 still sends 0 Unix timestamps and references a non-existent pipeline...

Regards,
Marcel

What do you mean it's not there? It exists: beats/pipeline.yml at v7.12.0 · elastic/beats · GitHub. And what do you mean Filebeat is sending 0 Unix timestamps? 1617791728811 is a Unix timestamp in milliseconds; why the pipeline isn't parsing it properly, I don't know. Can you post the initial data you're trying to send through the pipeline, then run it through the pipeline simulator and post both the input and the output?
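For reference, the simulator can be called from Kibana Dev Tools like this (a sketch using a pipeline name and the field value from this thread; adjust both to match your cluster):

```
POST _ingest/pipeline/filebeat-7.12.0-crowdstrike-falcon-detection_summary/_simulate
{
  "docs": [
    {
      "_source": {
        "crowdstrike": {
          "metadata": {
            "eventCreationTime": 1617791728811
          }
        }
      }
    }
  ]
}
```

The response shows the document exactly as the pipeline would emit it, which makes it easy to see whether the timestamp fields are being converted.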

Also, is that the output from a document ingest via the pipeline, or from a search? It looks like the response from searching, and if so, the date won't be in epoch time; it's parsed into a date field and should be searched as such, e.g. "crowdstrike.metadata.eventCreationTime": "2020-02-27T19:12:14.000Z".

Copying the pipeline from 7.11 to 7.12 will never work; they are completely different. Looking at the module and trying to install it, I have no issue creating all the pipelines. Is your ES on 7.12 as well, @Firewire2002?

Pipelines were imported via "filebeat setup --pipelines" without any error messages.
Elasticsearch is still on 7.11.2 because there are known bugs in 7.12.0 with external connectors.

Understood. I will try a bit later this week to make sure it works. There can be cases in which a pipeline is not backwards compatible, mostly when Elasticsearch adds new functionality to existing processors, or brand-new processors, that the pipelines rely on. I double-checked that the module is properly configured, so it should import all pipelines.

When we moved all the JS from the local Beat to ingest pipelines, we added more checks for the dates and times you are having issues with, so I can confirm they will be fixed when you upgrade to 7.12.

For ProcessEndTime, we check that it's not 0:

  - date:
      field: crowdstrike.event.ProcessEndTime
      target_field: crowdstrike.event.ProcessEndTime
      timezone: UTC
      formats:
        - UNIX_MS
      ignore_failure: true
      if: |
        ctx?.crowdstrike?.event?.ProcessEndTime != null &&
        !(ctx.crowdstrike.event.ProcessEndTime instanceof String) &&
        ctx.crowdstrike.event.ProcessEndTime != 0 &&
        (int)(Math.log10(ctx.crowdstrike.event.ProcessEndTime) + 1) >= 12
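The last line of that condition counts the digits of the timestamp: a 13-digit value like 1617791728811 is taken to be milliseconds, while Math.log10(0) is negative infinity, so a 0 value can never satisfy the check. A JavaScript stand-in for the Painless expression (a sketch for illustration, not the pipeline code itself):

```javascript
// Digit count of a positive number, mirroring (int)(Math.log10(x) + 1) in Painless.
function digits(n) {
    return Math.floor(Math.log10(n) + 1);
}
// A millisecond epoch such as 1617791728811 has 13 digits and passes the >= 12
// check; a seconds epoch has 10 digits and 0 yields -Infinity, so both fail it.
```

Together with the `!= 0` clause, this is why the processor simply skips a 0 value rather than trying to parse it as a date.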