Dear Logstash community,
I have challenged myself to capture my network data using TShark and make custom dashboards in Kibana.
I have the following setup:
- 1 Ubuntu 18.04 VM running Elasticsearch, Logstash and Kibana in Docker
- 1 Ubuntu 18.04 VM running TShark and Filebeat on the host
 
I run TShark with the -T ek flag and export the capture to a rolling CSV file.
Filebeat then ships the CSV file to Logstash, Logstash sends the events to Elasticsearch, and Kibana queries Elasticsearch.
My problem is: Kibana shows the Logstash logs as its main input, and each event contains one whole CSV capture line in a single field instead of separate columns. How do I fix this so that the parsed CSV fields become the main input?
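To illustrate, a single line of the CSV looks roughly like this (the values are made up, but the column order matches the filter below):

"2020-02-06 10:15:32.123456789","192.168.234.1","192.168.234.10","192.168.234.1","192.168.234.10","443","51432","TCP","60","443 > 51432 [ACK] Seq=1 Ack=1 Win=501 Len=0"

In Kibana that entire line ends up in the message field of one event, rather than being split into the columns I configured.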
This is my logstash.conf:
input {
  beats {
    port => 5044
  }
}
filter {
  csv {
    # split each capture line from the "message" field into named columns
    source => "message"
    columns => [ "col.Time", "col.Source", "col.Destination", "ip.src", "ip.dst", "tcp.srcport", "tcp.dstport", "col.Protocol", "ip.len", "col.Info" ]
  }

  mutate {
    convert => [ "ip.len", "integer" ]
  }

  date {
    # use the capture time as the event timestamp
    match => [ "col.Time", "yyyy-MM-dd HH:mm:ss.SSSSSSSSS" ]
    target => "@timestamp"
  }
}
  
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
    user => "elastic"
    password => "changeme"
  }
}
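For debugging the filter on its own, I figure a minimal stdin/stdout pipeline along these lines should work (this is only a sketch, not part of my setup):

input {
  stdin { }                       # paste a single CSV capture line here
}
filter {
  csv {
    source => "message"
    columns => [ "col.Time", "col.Source", "col.Destination", "ip.src", "ip.dst", "tcp.srcport", "tcp.dstport", "col.Protocol", "ip.len", "col.Info" ]
  }
}
output {
  stdout { codec => rubydebug }   # print the parsed fields to the console
}

Pasting one line from tshark.csv into that pipeline should show whether the columns are split correctly, independent of Filebeat.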
This is my filebeat.yml:
filebeat.modules:
- module: system
  syslog:
    enabled: false
  auth:
    enabled: true
    var.paths: ["/home/user/Documents/tsharkcap/tshark.csv"]
output.logstash:
  hosts: ["192.168.234.134:5044"]
I run Elasticsearch with its default config:
sudo docker run -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:7.5.2
I run Kibana with its default config:
sudo docker run --link docker-cont:elasticsearch -p 5601:5601 docker.elastic.co/kibana/kibana:7.5.2
Thanks in advance,
ELK4Life