Available fields in Kibana - parsing data for dashboard building

I've ingested a DDoS dataset from kaggle.com into Kibana, and it seems like not all of the available metadata is useful for creating a dashboard that visualizes the DDoS attacks. What am I missing?

The config file is as follows:

input {
    file {
        id => "KAGGLE_FILE_INPUT"
        path => ["/usr/share/logstash/Kaggle/*.csv"]
        mode => "read"
        exit_after_read => true
        sincedb_path => "/usr/share/logstash/config/Kaggle_file_input_watcher"
    }
}


# Parse each CSV row into named fields and normalize the timestamp

filter {
    csv {
        separator => ","
        columns => ["Unnamed: 0","Flow ID","Source IP","Source Port","Destination IP","Destination Port","Protocol","Timestamp","Flow Duration","Total Fwd Packets","Total Backward Packets","Total Length of Fwd Packets","Total Length of Bwd Packets","Fwd Packet Length Max","Fwd Packet Length Min","Fwd Packet Length Mean","Fwd Packet Length Std","Bwd Packet Length Max","Bwd Packet Length Min","Bwd Packet Length Mean","Bwd Packet Length Std","Flow Bytes/s","Flow Packets/s","Flow IAT Mean","Flow IAT Std","Flow IAT Max","Flow IAT Min","Fwd IAT Total","Fwd IAT Mean","Fwd IAT Std","Fwd IAT Max","Fwd IAT Min","Bwd IAT Total","Bwd IAT Mean","Bwd IAT Std","Bwd IAT Max","Bwd IAT Min","Fwd PSH Flags","Bwd PSH Flags","Fwd URG Flags","Bwd URG Flags","Fwd Header Length","Bwd Header Length","Fwd Packets/s","Bwd Packets/s","Min Packet Length","Max Packet Length","Packet Length Mean","Packet Length Std","Packet Length Variance","FIN Flag Count","SYN Flag Count","RST Flag Count","PSH Flag Count","ACK Flag Count","URG Flag Count","CWE Flag Count","ECE Flag Count","Down/Up Ratio","Average Packet Size","Avg Fwd Segment Size","Avg Bwd Segment Size","Fwd Header Length.1","Fwd Avg Bytes/Bulk","Fwd Avg Packets/Bulk","Fwd Avg Bulk Rate","Bwd Avg Bytes/Bulk","Bwd Avg Packets/Bulk","Bwd Avg Bulk Rate","Subflow Fwd Packets","Subflow Fwd Bytes","Subflow Bwd Packets","Subflow Bwd Bytes","Init_Win_bytes_forward","Init_Win_bytes_backward","act_data_pkt_fwd","min_seg_size_forward","Active Mean","Active Std","Active Max","Active Min","Idle Mean","Idle Std","Idle Max","Idle Min","SimillarHTTP","Inbound","Label"]
        remove_field => ["Fwd Header Length", "Fwd Header Length.1"]
    }

    date {
        match => [ "Timestamp", "MMM dd yyyy HH:mm:ss", "MMM  d yyyy HH:mm:ss", "ISO8601" ]
    }
}

output {
    # stdout { codec => rubydebug }
    elasticsearch {
        hosts => ["http://34.82.167.113:9200"]
        index => "kaggle-events"
    }
}



Hey there Mo,

I'm not sure I completely understand your question. Are you missing some columns? Additionally, you might check how your index pattern is set up to make sure the correct types are being used for each column.
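For example, the csv filter parses every column as a string unless you tell it otherwise, so numeric fields like Flow Duration or Flow Bytes/s won't work in sums, averages, or histograms. A minimal sketch of its convert option (only a handful of your columns shown; extend it to whatever fields you want to aggregate on):

filter {
    csv {
        # ... your existing separator / columns / remove_field settings ...
        # coerce numeric columns so Elasticsearch maps them as numbers
        convert => {
            "Source Port"      => "integer"
            "Destination Port" => "integer"
            "Flow Duration"    => "integer"
            "Flow Bytes/s"     => "float"
            "Flow Packets/s"   => "float"
        }
    }
}

Keep in mind that if the documents were already indexed as strings, you'll need to reindex (or delete the index and re-ingest) for the new types to take effect, since existing field mappings can't be changed in place.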

Hi Poffenberger, I'm a new user of Kibana, and since my knowledge is limited, it may be a little challenging for me to ask a specific question.

I want to create enterprise-level dashboards to identify DDoS attacks for analysis, but I am unsure whether I have all the metadata needed to achieve that.

Are time, source IP, destination IP, message, protocol, flow bytes/s, and flow ID enough data points to create dashboards for deeper analysis, or does Kibana require more data points?

dataset link: DDoS Dataset | Kaggle

Hmm, that seems like a domain-specific question that definitely depends on your use case. You might check out this article about log anomalies written by the Observability team: Inspect log anomalies | Observability Guide [7.13] | Elastic
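If part of the goal is a map of where attacks originate, one option (a sketch, and it only helps if your Source IP column contains publicly routable addresses rather than lab or private ranges) is to enrich events with the geoip filter before indexing:

filter {
    geoip {
        # look up the parsed source address against the bundled GeoLite2 database
        source => "Source IP"
        target => "source_geo"
    }
}

Because kaggle-events is a custom index, you'd likely also need an index template that maps source_geo.location as geo_point before ingesting; otherwise Kibana won't offer the field for coordinate map visualizations.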
