I currently have Suricata running on an Ubuntu VM on computer 1 and am trying to ship its logs to an ELK stack on a VM on computer 2. The goal is to have the Suricata logs under /var/log on the computer 1 VM shipped via Filebeat 7.5.2 to the ELK stack on the computer 2 VM, but I am unable to see any Elasticsearch data via my Kibana instance.
The Kibana dashboard loads at http://192.168.1.209:5601/, and Elasticsearch answers queries:
 curl -X GET "192.168.1.209:9200/_cat/indices?v"
health status index                uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   .kibana_1            x6tRgFh-TTiNXvwhkO2pew   1   0          2            0      7.6kb          7.6kb
green  open   .kibana_task_manager GeQXwPj9Rh2lAd3ieaeCuQ   1   0          2            0     12.6kb         12.6kb
Elasticsearch itself responds when I browse to 192.168.1.209:9200:
{
  "name" : "JdAijss",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "H66h2QJIQvWzsQlEOPAjog",
  "version" : {
    "number" : "6.8.6",
    "build_flavor" : "default",
    "build_type" : "deb",
    "build_hash" : "3d9f765",
    "build_date" : "2019-12-13T17:11:52.013738Z",
    "build_snapshot" : false,
    "lucene_version" : "7.7.2",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}
My elasticsearch.yml file contains:
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 192.168.1.209
#
# Set a custom port for HTTP:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
My logstash.yml file:
# ------------ Metrics Settings --------------
#
# Bind address for the metrics REST endpoint
#
http.host: "192.168.1.209"
#
# Bind port for the metrics REST endpoint; this option also accepts a range
# (9600-9700), and Logstash will pick up the first available port.
#
# http.port: 9600-9700
I created suricata.conf in /etc/logstash/conf.d:
input {
  beats {
    port => 5044
    codec => "json_lines"
  }
}
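(As an aside, I have read that the beats input normally should not set the `json_lines` codec, since Beats already frames events itself; if that is right, the input would presumably be left at the default codec, something like this sketch:)

```conf
# Sketch, based on my reading of the beats input docs: leave the codec at its
# default and let Filebeat / the filter stage handle JSON decoding.
input {
  beats {
    port => 5044
  }
}
```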
filter {
  if [application] == "suricata" {
    date {
      match => [ "timestamp", "ISO8601" ]
    }
    ruby {
      # Logstash 5+ requires the event.get/event.set API instead of event['...']
      code => "if event.get('event_type') == 'fileinfo'; event.set('[fileinfo][type]', event.get('[fileinfo][magic]').to_s.split(',')[0]); end;"
    }
  }
  if [src_ip]  {
    geoip {
      source => "src_ip"
      target => "geoip"
      add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
      add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}"  ]
    }
    mutate {
      convert => [ "[geoip][coordinates]", "float" ]
    }
    if ![geoip][ip] {
      if [dest_ip]  {
        geoip {
          source => "dest_ip"
          target => "geoip"
          add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
          add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}"  ]
        }
        mutate {
          convert => [ "[geoip][coordinates]", "float" ]
        }
      }
    }
  }
}
output {
  elasticsearch {
    hosts => ["192.168.1.209:9200"]
  }
}
An output-elasticsearch.conf file is configured in conf.d like this:
output {
  elasticsearch {
    hosts => ["192.168.1.209:9200"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
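Since Logstash concatenates every file in conf.d into a single pipeline, I assume that having an elasticsearch output in both suricata.conf and output-elasticsearch.conf means each event gets indexed twice. If that is right, presumably only one output block should remain, roughly:

```conf
# Sketch: a single elasticsearch output, assuming the Beats-style index naming
# from output-elasticsearch.conf is the one I want to keep.
output {
  elasticsearch {
    hosts => ["192.168.1.209:9200"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
  }
}
```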
The filebeat.yml on computer 1 is the following:
filebeat.inputs:
# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.
- type: log
  # Change to true to enable this input configuration.
  enabled: true
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - "/var/log/suricata/*/eve.json*"
  fields_under_root: true
  fields:
    tags: ["suricata","json"]
- type: log
  paths:
    - "/var/syslog-ng/default.log"
  fields_under_root: true
  fields:
    tags: ["pfsense"]
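One mismatch I notice: the Logstash filter keys on [application] == "suricata", but the input above only sets tags, so that conditional would never match. If that is part of the problem, I could presumably add the field on the Filebeat side, e.g. this sketch (the `application` field name is my own choice, picked to match the filter):

```yaml
# Sketch: set an explicit application field so the Logstash conditional matches.
- type: log
  enabled: true
  paths:
    - "/var/log/suricata/*/eve.json*"
  fields_under_root: true
  fields:
    application: suricata
    tags: ["suricata","json"]
```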
For Kibana:
#============================== Kibana =====================================
# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:
  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "192.168.1.209:5601"
  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:
And for outputs:
#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["192.168.1.209:9200"]
  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"
#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  #hosts: ["192.168.1.209:9200"]
  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"
  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"
elasticsearch.yml.disabled is still listed in /etc/filebeat/modules.d; is this the issue? I get this error when I run filebeat from the CLI:
"Exiting: error unpacking config data: more than one namespace configured accessing 'output' (source:'/etc/filebeat/filebeat.yml')"
