Community_ID

Hello, I have Auditbeat v8.0.0 running on CentOS 7.6.1810. I am able to run Auditbeat, and I added

  - community_id:
      fields:
        source_ip: my_source_ip
        source_port: my_source_port
        destination_ip: my_dest_ip
        destination_port: my_dest_port
        transport: proto
        icmp_type: my_icmp_type
        icmp_code: my_icmp_code
      target: network.community_id

to the processors section of auditbeat.yml. But when I check the log, the community_id value isn't there. Any ideas?

Thanks,

Can you share your full configuration and a sample event in JSON that you believe should've been enriched?

I used the example configuration and added the processor to the processors section. I am still testing, so I start Auditbeat with:

./auditbeat -c auditbeat.yml -e -d "*"

###################### Auditbeat Configuration Example #########################

# This is an example configuration file highlighting only the most common
# options. The auditbeat.reference.yml file from the same directory contains all
# the supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/auditbeat/index.html

#==========================  Modules configuration =============================
auditbeat.modules:

- module: auditd
  # Load audit rules from separate files. Same format as audit.rules(7).
  audit_rule_files: [ '${path.config}/audit.rules.d/*.conf' ]
  audit_rules: |
    ## Define audit rules here.
    ## Create file watches (-w) or syscall audits (-a or -A). Uncomment these
    ## examples or add your own rules.

    ## If you are on a 64 bit platform, everything should be running
    ## in 64 bit mode. This rule will detect any use of the 32 bit syscalls
    ## because this might be a sign of someone exploiting a hole in the 32
    ## bit API.
    #-a always,exit -F arch=b32 -S all -F key=32bit-abi

    ## Executions.
    #-a always,exit -F arch=b64 -S execve,execveat -k exec

    ## External access (warning: these can be expensive to audit).
    #-a always,exit -F arch=b64 -S accept,bind,connect -F key=external-access

    ## Identity changes.
    #-w /etc/group -p wa -k identity
    #-w /etc/passwd -p wa -k identity
    #-w /etc/gshadow -p wa -k identity

    ## Unauthorized access attempts.
    #-a always,exit -F arch=b64 -S open,creat,truncate,ftruncate,openat,open_by_handle_at -F exit=-EACCES -k access
    #-a always,exit -F arch=b64 -S open,creat,truncate,ftruncate,openat,open_by_handle_at -F exit=-EPERM -k access

- module: file_integrity
  paths:
  - /bin
  - /usr/bin
  - /sbin
  - /usr/sbin
  - /etc

#==================== Elasticsearch template setting ==========================
setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false

#================================ General =====================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging

#============================== Dashboards =====================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

#============================= Elastic Cloud ==================================

# These settings simplify using auditbeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
#output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

#================================ Processors =====================================

# Configure processors to enhance or manipulate events generated by the beat.

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
  - community_id:
      fields:
        source_ip: my_source_ip
        source_port: my_source_port
        destination_ip: my_dest_ip
        destination_port: my_dest_port
        transport: proto
        icmp_type: my_icmp_type
        icmp_code: my_icmp_code
      target: network.community_id

#================================ Logging =====================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
logging.selectors: ["*"]

#============================== Xpack Monitoring ===============================
# auditbeat can export internal metrics to a central Elasticsearch monitoring
# cluster. This requires xpack monitoring to be enabled in Elasticsearch. The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#monitoring.enabled: false

#================================= Migration ==================================

# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true

I can see in stdout that the processor is created:
2019-05-31T08:35:04.359-0400 DEBUG [processors] processors/processor.go:93 Generated new processors: add_host_metadata=[netinfo.enabled=[false], cache.ttl=[5m0s]], add_cloud_metadata=null, community_id=[target=network.community_id, fields=[source_ip=my_source_ip, source_port=my_source_port, destination_ip=my_dest_ip, destination_port=my_dest_port, transport_protocol=proto, icmp_type=my_icmp_type, icmp_code=my_icmp_code], seed=0]

I assumed that any network connection would generate a community_id, so I tried pinging a site and establishing an SSH connection.

Thanks,

James

Format your snippets using the </> button, otherwise the YAML is difficult to read.

The provided configuration is just an example.

      fields:
        source_ip: my_source_ip
        source_port: my_source_port
        destination_ip: my_dest_ip
        destination_port: my_dest_port
        transport: proto
        icmp_type: my_icmp_type
        icmp_code: my_icmp_code

This requires your events to contain fields called my_source_ip, my_source_port, etc., which Auditbeat is not setting.

I don't think any module in Auditbeat outputs source/destination IPs and protocol, which is the minimum needed for the community ID to be generated.
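For reference, if your events did carry the ECS network fields, you wouldn't need the fields block at all. A minimal sketch, assuming the documented defaults (source.ip, source.port, destination.ip, destination.port, network.transport, icmp.type, icmp.code):

processors:
  # With no explicit field mappings, community_id reads the ECS network
  # fields listed above and writes the hash to network.community_id.
  - community_id: ~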

You should try Packetbeat, which already generates the community_id.
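If you just want to see the field populated, a minimal Packetbeat flows setup should do it. A sketch, untested, assuming 7.x defaults where flow events already include network.community_id:

# packetbeat.yml (minimal sketch)
packetbeat.interfaces.device: any   # sniff on all interfaces
packetbeat.flows:
  timeout: 30s
  period: 10s

output.elasticsearch:
  hosts: ["localhost:9200"]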

BTW, the system/socket dataset will natively add network.community_id in an upcoming release of Auditbeat: https://github.com/elastic/beats/pull/12231
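Once that lands, it should just be a matter of enabling the socket dataset of the system module, roughly like this (a sketch based on the reference config, untested):

auditbeat.modules:
- module: system
  datasets:
    - socket   # opened and closed sockets; will carry network.community_id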

Thank you for the assistance. This has been great.

I built it from master yesterday, and I can see the changes from the pull request in the src directory, but I don't see a community_id from the beat. I also don't see any system or socket events in stdout when I ping or SSH to an external device.

Is there a configuration option I'm missing?

Hi @james007 - I think it might be easier to download a snapshot of Auditbeat 7.2 or master from https://console.cloud.google.com/storage/browser/beats-ci-artifacts/snapshots/auditbeat/. Can you check if that works?

Alternatively, can you share more details of how you built Auditbeat (esp. which command you used in which directory), the configuration you're using, and the log output?

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.