Problem with SIEM

Hello everyone,

I have a problem after configuring SIEM and Auditbeat on the "client" machine.

I configured a single machine with Auditbeat and the standard configuration, which I will post below.

The problem is that when I open Kibana SIEM and fix the fielddata (is there a more convenient way to do this?), it appears as if I had 4 hosts configured. Image attached.

Here is my Auditbeat configuration:

###################### Auditbeat Configuration Example #######################
auditbeat.modules:

- module: auditd
  # Load audit rules from separate files. Same format as audit.rules(7).
  audit_rule_files: [ '${path.config}/audit.rules.d/*.conf' ]
  #audit_rules: |
    ## Define audit rules here.
    ## Create file watches (-w) or syscall audits (-a or -A). Uncomment these
    ## examples or add your own rules.

    ## If you are on a 64 bit platform, everything should be running
    ## in 64 bit mode. This rule will detect any use of the 32 bit syscalls
    ## because this might be a sign of someone exploiting a hole in the 32
    ## bit API.
    #-a always,exit -F arch=b32 -S all -F key=32bit-abi

    ## Executions.
    #-a always,exit -F arch=b64 -S execve,execveat -k exec

    ## External access (warning: these can be expensive to audit).
    #-a always,exit -F arch=b64 -S accept,bind,connect -F key=external-access

    ## Identity changes.
    #-w /etc/group -p wa -k identity
    #-w /etc/passwd -p wa -k identity
    #-w /etc/gshadow -p wa -k identity

    ## Unauthorized access attempts.
    #-a always,exit -F arch=b64 -S open,creat,truncate,ftruncate,openat,open_by_handle_at -F exit=-EACCES -k access
    #-a always,exit -F arch=b64 -S open,creat,truncate,ftruncate,openat,open_by_handle_at -F exit=-EPERM -k access

- module: file_integrity
  paths:
  - /bin
  - /usr/bin
  - /sbin
  - /usr/sbin
  - /etc

- module: system
  datasets:
    - host    # General host information, e.g. uptime, IPs
    - login   # User logins, logouts, and system boots.
    - package # Installed, updated, and removed packages
    - user    # User information
  period: 1m

- module: system
  datasets:
    - process # Started and stopped processes
    - socket  # Opened and closed sockets
  period: 1s
  socket.enable_ipv6: false

  user.detect_password_changes: true

  login.wtmp_file_pattern: /var/log/wtmp*
  login.btmp_file_pattern: /var/log/btmp*

#================================ General =====================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging

#================================ Outputs =====================================

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["10.10.12.114:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

#================================ Processors =====================================

# Configure processors to enhance or manipulate events generated by the beat.

#processors:
#- add_host_metadata: ~
#- add_cloud_metadata: ~

#================================ Logging =====================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]

#============================== X-Pack Monitoring ===============================
# auditbeat can export internal metrics to a central Elasticsearch monitoring
# cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#monitoring.enabled: false

# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
# Auditbeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
#monitoring.cluster_uuid:

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well.
# Note that the settings should point to your Elasticsearch *monitoring* cluster.
# Any setting that is not set is automatically inherited from the Elasticsearch
# output configuration, so if you have the Elasticsearch output configured such
# that it is pointing to your Elasticsearch monitoring cluster, you can simply
# uncomment the following line.
#monitoring.elasticsearch:

#================================= Migration ==================================

# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: false

Can anyone help me?

Thanks!

Hi @Aleix_Abrie_Prat - if you had to fix the fielddata, you probably did not run `auditbeat setup` before running Auditbeat. Can you try that and see if the problem persists?
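Since your posted config uses the Logstash output, one way is to override the output just for the setup command with `-E` flags (a sketch - the Elasticsearch and Kibana addresses below are examples, adjust them to your environment):

```shell
# One-time setup of index templates and Kibana dashboards.
# The -E flags override auditbeat.yml only for this invocation,
# disabling the Logstash output and pointing setup at Elasticsearch/Kibana
# (hosts shown are hypothetical examples):
auditbeat setup -e \
  -E output.logstash.enabled=false \
  -E 'output.elasticsearch.hosts=["10.10.12.114:9200"]' \
  -E 'setup.kibana.host="10.10.12.114:5601"'
```

After setup completes, Auditbeat can keep shipping events through Logstash as before.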

Hi @cwurm ,

I can't do this. The output configured in Auditbeat is Logstash, not Elasticsearch :frowning:

How can I run the setup with the Logstash output configured?

Thanks!

Hi again @cwurm,

I've changed the output to Elasticsearch and Kibana, and after running `auditbeat setup` and starting the service, it is working well in Kibana SIEM.

The index setup is now finished and the dashboards are loaded in Elasticsearch and Kibana respectively. Now, if I configure Auditbeat on another machine, do I have to run `auditbeat setup` again?

Thanks in advance

@Aleix_Abrie_Prat ,

you should only need to run the setup once.

thanks
/d

Hi again,

Thanks for the information :slight_smile:

I have one more question, sorry :frowning:

Everything is working well now, but in the Network tab I don't see the connections drawn on the map. It seems the GeoIP "plugin" or "processor" is missing (I think in version 7.4 ingest-geoip is a built-in processor and no longer a plugin), but I don't know how to add this field to the index...

I am a bit of a newbie on this subject, forgive me.

Can you tell me how to set up the geoip processor in my Elasticsearch?

Thanks a lot :smiley:

Sincerely, Aleix.

@Aleix_Abrie_Prat,

Take a look at this page for details on setting up a GeoIP ingest pipeline:
https://www.elastic.co/guide/en/beats/auditbeat/current/auditbeat-geoip.html

You can use the same process with other Beats and ECS-compliant sources as well.
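As a rough sketch of what that guide describes: you create an ingest pipeline in Elasticsearch with `geoip` processors for the ECS IP fields, then tell Auditbeat to route events through it. The pipeline name `geoip-info` follows the linked guide; the Elasticsearch host below is an example, so adjust it to your cluster:

```shell
# Create an ingest pipeline that enriches ECS IP fields with geo data.
# "ignore_missing" skips events that lack the field instead of failing.
curl -X PUT "http://10.10.12.114:9200/_ingest/pipeline/geoip-info" \
  -H 'Content-Type: application/json' -d'
{
  "description": "Add geoip info",
  "processors": [
    { "geoip": { "field": "source.ip",      "target_field": "source.geo",      "ignore_missing": true } },
    { "geoip": { "field": "destination.ip", "target_field": "destination.geo", "ignore_missing": true } }
  ]
}'
```

Then point the Elasticsearch output at the pipeline in `auditbeat.yml` with `output.elasticsearch.pipeline: geoip-info` and restart Auditbeat; new events should get the geo fields the map needs.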

Thanks
/d

Thanks a lot! It works :smiley:

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.