SiemConnector with Filebeat problem

Hello, I'm trying to use the CrowdStrike SIEM Connector with Filebeat.
I've been checking some documentation but can't find a way to solve my problem.
I think it is sort of working, but the message field looks like a bunch of hex.


My filebeat.yml (or the parts I think actually matter):

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

# filestream is an input for collecting log messages from files.
- type: filestream

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/crowdstrike/falconhoseclient/output
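
Since the connector is set to output_format = json below, the filestream input can also decode each line with the ndjson parser so the payload does not land as one opaque string in message. This is a sketch rather than my actual config; the id and the "crowdstrike" target field are assumptions:

```yaml
filebeat.inputs:
  - type: filestream
    id: crowdstrike-falconhose   # recent Filebeat versions require a unique id per filestream input
    enabled: true
    paths:
      - /var/log/crowdstrike/falconhoseclient/output
    parsers:
      - ndjson:
          # Decode one JSON event per line into fields under "crowdstrike".
          target: "crowdstrike"
          add_error_key: true
```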

# ================================== Outputs ===================================

# Configure what output to use when sending the data collected by the beat.

# ---------------------------- Elasticsearch Output ----------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["localhost:9200"]

  # Protocol - either `http` (default) or `https`.
  #protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  #username: "elastic"
  #password: "changeme"

# ------------------------------ Logstash Output -------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["xx.xx.xx.xx:xxxx"]

And the relevant part of my cs.falconhoseclient.cfg:

http_proxy =

# Output formats
# Supported formats are
#   1.syslog: will output syslog format with flat key=value pairs uses the mapping configuration below.
;             Use syslog format if CEF/LEEF output is required.
#   2.json: will output raw json format received from FalconHose API (default)
output_format = json

and my Logstash pipeline config that receives the Filebeat data:

input {
  beats {
    host => "0.0.0.0"
    port => 9012
    type => "beats"
  }
}
output {
  elasticsearch {
      hosts => "xx.xx.xx.xx:9200"
      manage_template => false
      index => "falcon-crowdstrike-%{+YYYY.MM}"
    }
}
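
If the events arrive in Logstash as raw JSON strings in message, a json filter in this same pipeline would break them out into fields. A sketch only; the target field and failure tag are assumptions, not from the thread:

```
filter {
  json {
    source => "message"          # the raw JSON payload from the SIEM connector
    target => "crowdstrike"      # parse into a dedicated field instead of the event root
    tag_on_failure => ["_crowdstrike_jsonparsefailure"]
  }
}
```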


How are you starting logstash? What do you have in your pipelines.yml?

Your messages have the tag _grokparsefailure_sysloginput, which means that a syslog input is processing them.

Since the file you shared only has a beats input, you could be running Logstash pointed at a folder that contains other configuration files.

Also, when using the CrowdStrike integration it is better to send the data directly to Elasticsearch, since the integration uses an ingest pipeline to parse the message. You can also configure this pipeline in Logstash, but I'm not sure whether any other changes to the messages would be needed.
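
For example, recent Filebeat versions ship a crowdstrike module that reads the SIEM connector's output file and applies the ingest pipeline for you. Roughly like this sketch; check modules.d/ in your install, as fileset names and variables can differ by version:

```yaml
# Sketch of modules.d/crowdstrike.yml after running: filebeat modules enable crowdstrike
- module: crowdstrike
  falcon:
    enabled: true
    var.paths:
      - /var/log/crowdstrike/falconhoseclient/output
```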

I have Logstash in a Docker container.
My pipelines.yml has this entry:

- pipeline.id: fcrowdstrike
  path.config: "/usr/share/logstash/config/general_logs/crowdstrike.conf"

which points to the conf file shown before.
I'll check how the direct connection to Elasticsearch works.
Is that a Filebeat or a SIEM Connector configuration?
