DHCP Logs to Logstash

I plan to use Filebeat to send the DHCP log file to Logstash.
I have the following setup in Logstash to send the file to Elasticsearch:

# Beats -> Logstash -> Elasticsearch pipeline.

input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => "10.103.186.210:9200"
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
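
For reference, a minimal filebeat.yml sketch that would feed this pipeline (the DHCP log path and the Logstash host are assumptions; adjust both to your environment):

filebeat.inputs:
  - type: log
    paths:
      # assumed Windows Server DHCP audit log location
      - "C:\\Windows\\System32\\dhcp\\DhcpSrvLog-*.log"

output.logstash:
  # assumed to be the same host as Elasticsearch; port 5044 matches the beats input above
  hosts: ["10.103.186.210:5044"]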

What else do I need to do to have that show up correctly in Kibana? I have looked at the following and used the code stated, but when I do, the services all crash, so something is not working correctly. I am assuming it may be due to version changes, as the link is nearly two years old:

Please share the logs to help with debugging.

So, for elasticsearch.yml:

bootstrap.memory_lock: false
cluster.name: Elastic
http.port: 9200
node.data: true
node.ingest: true
node.master: true
node.max_local_storage_nodes: 1
node.name: SV-MSE-ELTC-001
path.data: D:\Elastic\Elastic-7.9.2\Data
path.logs: D:\Elastic\Elastic-7.9.2\Logs
transport.tcp.port: 9300
xpack.license.self_generated.type: basic
xpack.security.enabled: false
network.host: 0.0.0.0
discovery.seed_hosts: []
discovery.type: single-node

and this template JSON from the guide, which I added into the same elasticsearch.yml file:

{
  "dhcp": {
    "order": 10,
    "index_patterns": [
      "dhcp-*"
    ],
    "settings": {},
    "mappings": {
      "dhcp": {
        "dynamic_templates": [
          {
            "strings_as_keyword": {
              "mapping": {
                "ignore_above": 1024,
                "type": "keyword"
              },
              "match_mapping_type": "string"
            }
          }
        ],
        "properties": {}
      }
    },
    "aliases": {}
  }
}

I get the following error:

[2020-10-06T01:44:00,876][INFO ][o.e.x.m.MlDailyMaintenanceService] [SV-MSE-ELTC-001] triggering scheduled [ML] maintenance tasks
[2020-10-06T01:44:00,892][INFO ][o.e.x.m.a.TransportDeleteExpiredDataAction] [SV-MSE-ELTC-001] Deleting expired data
[2020-10-06T01:44:00,892][INFO ][o.e.x.m.a.TransportDeleteExpiredDataAction] [SV-MSE-ELTC-001] Completed deletion of expired ML data
[2020-10-06T01:44:00,892][INFO ][o.e.x.m.MlDailyMaintenanceService] [SV-MSE-ELTC-001] Successfully completed [ML] maintenance tasks
[2020-10-06T02:30:00,889][INFO ][o.e.x.s.SnapshotRetentionTask] [SV-MSE-ELTC-001] starting SLM retention snapshot cleanup task
[2020-10-06T02:30:00,889][INFO ][o.e.x.s.SnapshotRetentionTask] [SV-MSE-ELTC-001] there are no repositories to fetch, SLM retention snapshot cleanup task complete
[2020-10-06T09:14:16,198][INFO ][o.e.n.Node               ] [SV-MSE-ELTC-001] stopping ...
[2020-10-06T09:14:16,198][INFO ][o.e.x.m.p.l.CppLogMessageHandler] [SV-MSE-ELTC-001] [controller/5896] [Main.cc@154] ML controller exiting
[2020-10-06T09:14:16,198][INFO ][o.e.x.w.WatcherService   ] [SV-MSE-ELTC-001] stopping watch service, reason [shutdown initiated]
[2020-10-06T09:14:16,198][INFO ][o.e.x.m.p.NativeController] [SV-MSE-ELTC-001] Native controller process has stopped - no new native processes can be started
[2020-10-06T09:14:16,198][INFO ][o.e.x.w.WatcherLifeCycleService] [SV-MSE-ELTC-001] watcher has stopped and shutdown
[2020-10-06T09:14:16,712][INFO ][o.e.n.Node               ] [SV-MSE-ELTC-001] stopped
[2020-10-06T09:14:16,712][INFO ][o.e.n.Node               ] [SV-MSE-ELTC-001] closing ...
[2020-10-06T09:14:16,728][INFO ][o.e.n.Node               ] [SV-MSE-ELTC-001] closed

I can't test the Logstash settings until the Elasticsearch service stops crashing.

Sorry, I am kind of confused. Which services are crashing? Filebeat, Logstash, or Elasticsearch?

And is this issue related to ES crashing?

Hi,
Sorry for the confusion. That other post was mine, and it is resolved. I then started looking at pushing DHCP logs to Logstash, so I updated the .yml and .conf files as per the original link I provided. When I add that code into the elasticsearch.yml file, the Elasticsearch service fails to run.

Thanks for the clarification,

If I understand correctly, you added the above code to elasticsearch.yml, right?
If so, that code is not meant to be in elasticsearch.yml.

In order for Elasticsearch to correctly handle your DHCP data, you need to provide an index template.

Referring to the link you shared, that code is supposed to go in an index template, which you can create with either of these methods: the index template API (e.g. from Kibana Dev Tools) or the Index Management UI in Kibana.
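
For example, as a sketch assuming Elasticsearch 7.9 and the legacy _template API, the JSON from the old guide can be loaded from Kibana Dev Tools; note that in 7.x the mappings block no longer wraps in a type name like "dhcp", so that level has to be removed:

PUT _template/dhcp
{
  "order": 10,
  "index_patterns": ["dhcp-*"],
  "settings": {},
  "mappings": {
    "dynamic_templates": [
      {
        "strings_as_keyword": {
          "match_mapping_type": "string",
          "mapping": {
            "type": "keyword",
            "ignore_above": 1024
          }
        }
      }
    ],
    "properties": {}
  },
  "aliases": {}
}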

Hope this can help you!

Thanks, that helps. I'm still fairly new to ES, though, so I'm not really sure how to proceed: which file do I edit or create, and how does it tie in with Logstash and Elasticsearch?

Is your Kibana up and running? You can find the index templates in Kibana under Index Management.

Reading up on the documentation will help.
Elastic also provides some free fundamental training; perhaps you could watch it to get a better understanding of the Elastic Stack.

Awesome, I think I've got that now. One other thing: I am trying to set up the logstash.conf file for this DHCP output and also for Winlogbeat and Filebeat (DHCP). Would the output section of the logstash.conf file be like this:

output {
  if [type] == "dhcp" {
    elasticsearch {
      hosts => ["10.103.186.210:9200"]
      manage_template => false
      index => "dhcp-%{+YYYY.MM.dd}"
      document_type => "%{[@metadata][type]}"
    }
  }
  else if [type] == "log" {
    elasticsearch {
      hosts => ["10.103.186.210:9200"]
      manage_template => false
      index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
      document_type => "%{[@metadata][type]}"
    }
  }
}
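
One caveat, and this is an assumption about your Filebeat setup: recent Filebeat versions do not set a top-level "type" field on events by default, so the conditional if [type] == "dhcp" will never match unless you add that field yourself in filebeat.yml, for example:

filebeat.inputs:
  - type: log
    paths:
      # assumed Windows Server DHCP audit log location
      - "C:\\Windows\\System32\\dhcp\\DhcpSrvLog-*.log"
    fields:
      type: dhcp
    # place the custom field at the event root so Logstash sees [type]
    fields_under_root: true

Also note that document_type is deprecated in recent versions of the Logstash elasticsearch output and can simply be dropped.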
