Issues loading IDS logs via Elasticsearch and Filebeat

I currently have Suricata running on an Ubuntu VM on computer 1 and am attempting to ship logs to an ELK stack on a VM on computer 2. My goal is to have the Suricata logs under /var/log on the computer 1 VM shipped via Filebeat 7.5.2 to the ELK stack on the computer 2 VM. I am unable to load Elasticsearch data via my Kibana instance:

The Kibana dashboard appears at http://192.168.1.209:5601/ and Kibana is up. Listing the Elasticsearch indices shows only Kibana's own system indices so far:

 curl -X GET "192.168.1.209:9200/_cat/indices?v"
health status index                uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   .kibana_1            x6tRgFh-TTiNXvwhkO2pew   1   0          2            0      7.6kb          7.6kb
green  open   .kibana_task_manager GeQXwPj9Rh2lAd3ieaeCuQ   1   0          2            0     12.6kb         12.6kb
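
For reference, a quick way to check from the CLI whether anything has actually been shipped (a sketch against my setup above; filebeat-* is the Beats default index pattern and logstash-* is the Logstash elasticsearch output default when no index is set):

# List only Beats/Logstash-created indices; an empty result means nothing has been indexed yet
curl -X GET "192.168.1.209:9200/_cat/indices/filebeat-*,logstash-*?v"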

Elasticsearch responds with cluster info when I go to 192.168.1.209:9200:

{
  "name" : "JdAijss",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "H66h2QJIQvWzsQlEOPAjog",
  "version" : {
    "number" : "6.8.6",
    "build_flavor" : "default",
    "build_type" : "deb",
    "build_hash" : "3d9f765",
    "build_date" : "2019-12-13T17:11:52.013738Z",
    "build_snapshot" : false,
    "lucene_version" : "7.7.2",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}
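
Cluster health can be checked the same way (a sketch; a green or yellow status here only means the node is up, not that data is flowing):

curl -X GET "192.168.1.209:9200/_cluster/health?pretty"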

My elasticsearch.yml file is the following:

# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 192.168.1.209
#
# Set a custom port for HTTP:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
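
One side effect worth noting: binding network.host to a non-loopback address like this puts Elasticsearch into production mode, where bootstrap checks are enforced at startup (this becomes relevant later in this thread). To verify the bind took effect (a sketch; assumes the ss utility from iproute2):

# Show listening TCP sockets on the HTTP port
ss -tlnp | grep 9200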




My logstash.yml file:



# ------------ Metrics Settings --------------
#
# Bind address for the metrics REST endpoint
#
http.host: "192.168.1.209"
#
# Bind port for the metrics REST endpoint, this option also accepts a range
# (9600-9700) and logstash will pick up the first available ports.
#
# http.port: 9600-9700
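
The metrics endpoint configured above doubles as a quick liveness check for Logstash (a sketch; 9600 is the first port of the default range):

# Returns node info as JSON if Logstash is up
curl -X GET "192.168.1.209:9600/?pretty"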



Created suricata.conf in /etc/logstash/conf.d:

input {
  beats {
    port => 5044
    codec => "json_lines"
  }
}

filter {
  if [application] == "suricata" {
    date {
      match => [ "timestamp", "ISO8601" ]
    }
    # Logstash 5+ requires event.get/event.set; direct event['field'] access was removed
    ruby {
      code => "if event.get('event_type') == 'fileinfo'; event.set('[fileinfo][type]', event.get('[fileinfo][magic]').to_s.split(',')[0]); end"
    }
  }

  if [src_ip]  {
    geoip {
      source => "src_ip"
      target => "geoip"
      add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
      add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}"  ]
    }
    mutate {
      convert => [ "[geoip][coordinates]", "float" ]
    }
    if ![geoip][ip] {
      if [dest_ip]  {
        geoip {
          source => "dest_ip"
          target => "geoip"
          add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
          add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}"  ]
        }
        mutate {
          convert => [ "[geoip][coordinates]", "float" ]
        }
      }
    }
  }
}

output {
  elasticsearch {
    hosts => ["192.168.1.209:9200"]
  }
}
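
Before restarting the service, a pipeline file like this can be syntax-checked (a sketch, assuming the standard DEB package paths):

# Parses all pipeline config and exits without starting Logstash
sudo -u logstash /usr/share/logstash/bin/logstash --path.settings /etc/logstash --config.test_and_exit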

An output-elasticsearch.conf file is configured in conf.d like this:

output {
  elasticsearch {
    hosts => ["192.168.1.209:9200"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
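
One caveat: by default Logstash concatenates every file in /etc/logstash/conf.d into a single pipeline, so with an elasticsearch output in both suricata.conf and this file, each event would be indexed twice. A sketch of a single consolidated output that could replace both:

output {
  elasticsearch {
    hosts => ["192.168.1.209:9200"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
  }
}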

The filebeat.yml on Computer 1 is the following:

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - "/var/log/suricata/*/eve.json*"
  fields_under_root: true
  fields:
    tags: ["suricata","json"]

- type: log
  paths:
    - "/var/syslog-ng/default.log"
  fields_under_root: true
  fields:
    tags: ["pfsense"]
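
Filebeat ships subcommands that can validate this file and the output connection before the service is started (a sketch using the 7.x CLI, with the default /etc/filebeat/filebeat.yml):

# Checks filebeat.yml syntax
sudo filebeat test config
# Attempts a connection to the configured output
sudo filebeat test output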

For Kibana:

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "192.168.1.209:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

And for outputs:

#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["192.168.1.209:9200"]

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  #hosts: ["192.168.1.209:9200"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

elasticsearch.yml.disabled is still listed in /etc/filebeat/modules.d - is this the issue? I get this error when I run filebeat from the CLI:

"Exiting: error unpacking config data: more than one namespace configured accessing 'output' (source:'/etc/filebeat/filebeat.yml')"

Please format your code/logs/config using the </> button, or markdown style back ticks. It helps to make things easy to read which helps us help you 🙂


Reformatted - is it making sense now?

Or is the formatting such a mess that I should make a new post?

Can you post your entire filebeat.yml, feel free to use gist/pastebin/etc instead of posting it here.

Also is there a reason you are using 7.5 for filebeat, but 6.8 for Elasticsearch?

Here is the full filebeat.yml - https://pastebin.com/pP1E0XKT

And no, I can upgrade Elasticsearch if need be.

Assuming that's a 1:1 copy+paste, it looks like you need a space before line 162 to make the indent the right level.
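
Also, that "more than one namespace configured accessing 'output'" error means two outputs are enabled at once - Filebeat only supports one. As a sketch, keep output.elasticsearch and comment out the Logstash output entirely, key included (5044 here is the beats port from your suricata.conf, for if you later switch to Logstash):

output.elasticsearch:
  hosts: ["192.168.1.209:9200"]

# The unused output must be commented out in full, including the key itself:
#output.logstash:
  #hosts: ["192.168.1.209:5044"]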

I fixed that, thanks, and re-enabled Filebeat with systemctl - what should I try next?

Did that not work?

Actually, I haven't upgraded Elasticsearch yet - what method would you recommend for an Ubuntu VM? I see Debian and RPM here - https://www.elastic.co/guide/en/elasticsearch/reference/7.6/rolling-upgrades.html

Ubuntu is DEB, so use that via the repos.
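
A sketch of the usual DEB route, assuming the Elastic 7.x apt repository (back up first, and plan to upgrade Kibana in lockstep):

# Add the Elastic signing key and 7.x repo, then upgrade the package
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-7.x.list
sudo apt-get update && sudo apt-get install elasticsearch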

Actually, I am not sure Filebeat is running properly when I start it with systemctl. Nothing comes up when I run pgrep, and I don't see Filebeat when I run "top" in the CLI:

oawfwafwfw : ~ $ sudo systemctl enable filebeat
Synchronizing state of filebeat.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable filebeat
oawfwafwfw : ~ $ sudo systemctl start filebeat
oawfwafwfw : ~ $ pgrep Filebeat
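
(Side note: pgrep is case-sensitive and the process name is lowercase, so the command above would miss a running instance. A sketch of a more reliable check:)

pgrep -a filebeat               # -a prints the full command line
sudo systemctl status filebeat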

I installed the latest Elasticsearch version and am now getting errors:

Job for elasticsearch.service failed because the control process exited with error code. See "systemctl status elasticsearch.service" and "journalctl -xe" for details.

bill2@bill2-VirtualBox : ~ $ systemctl status elasticsearch.service
● elasticsearch.service - Elasticsearch
   Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since Sun 2020-03-15 14:03:02 EDT; 14s ago
     Docs: http://www.elastic.co
  Process: 5159 ExecStart=/usr/share/elasticsearch/bin/elasticsearch -p ${PID_DIR}/elasticsearch.pid --quiet (code=exited, status=78)
 Main PID: 5159 (code=exited, status=78)

Mar 15 14:02:05 bill2-VirtualBox systemd[1]: Starting Elasticsearch...
Mar 15 14:02:07 bill2-VirtualBox elasticsearch[5159]: OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a fut
Mar 15 14:03:02 bill2-VirtualBox elasticsearch[5159]: ERROR: [1] bootstrap checks failed
Mar 15 14:03:02 bill2-VirtualBox elasticsearch[5159]: [1]: the default discovery settings are unsuitable for production use; at least one of [discovery.seed_hosts, discovery.seed_
Mar 15 14:03:02 bill2-VirtualBox elasticsearch[5159]: ERROR: Elasticsearch did not exit normally - check the logs at /var/log/elasticsearch/elasticsearch.log
Mar 15 14:03:02 bill2-VirtualBox systemd[1]: elasticsearch.service: Main process exited, code=exited, status=78/n/a
Mar 15 14:03:02 bill2-VirtualBox systemd[1]: Failed to start Elasticsearch.
Mar 15 14:03:02 bill2-VirtualBox systemd[1]: elasticsearch.service: Unit entered failed state.
Mar 15 14:03:02 bill2-VirtualBox systemd[1]: elasticsearch.service: Failed with result 'exit-code'.

Should I delete all elasticsearch instances and try again?

Please format your code/logs/config using the </> button, or markdown style back ticks. It helps to make things easy to read which helps us help you 🙂

You need to fix this; reinstalling won't help. Check out the Bootstrap Checks page in the Elasticsearch Guide.
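
For a single-node lab VM, the simplest fix is usually one line in elasticsearch.yml (a sketch; it tells the node not to look for peers, which satisfies the discovery bootstrap check):

# Run as a standalone node instead of trying to form a cluster
discovery.type: single-node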

I changed this in elasticsearch.yml to reflect the IP of the ELK stack VM, but it still gives the same error:

discovery.seed_hosts: ["192.168.1.209", "host2"]

Now I'm getting "Kibana server is not ready yet" on the Kibana instance.
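
"Kibana server is not ready yet" usually means Kibana cannot reach Elasticsearch, or the Kibana and Elasticsearch versions no longer match after the upgrade. A sketch of quick checks (assumes the DEB packages and systemd):

# Kibana's own status endpoint
curl -s "192.168.1.209:5601/api/status"
# Recent Kibana service logs
sudo journalctl -u kibana --since "10 min ago"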

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.