Issue in mapping of fields

I am getting NetFlow data from nprobe into Kibana. The template I am using when running nprobe is:

%IN_SRC_MAC %OUT_DST_MAC %INPUT_SNMP %OUTPUT_SNMP %SRC_VLAN %IPV4_SRC_ADDR %IPV4_DST_ADDR %L4_SRC_PORT %L4_DST_PORT %IPV6_SRC_ADDR %IPV6_DST_ADDR %IP_PROTOCOL_VERSION %PROTOCOL %L7_PROTO %IN_BYTES %IN_PKTS %OUT_BYTES %OUT_PKTS %FIRST_SWITCHED %LAST_SWITCHED %CLIENT_TCP_FLAGS %SERVER_TCP_FLAGS %CLIENT_NW_LATENCY_MS %SERVER_NW_LATENCY_MS %APPL_LATENCY_MS %OOORDER_IN_PKTS %OOORDER_OUT_PKTS %RETRANSMITTED_IN_PKTS %RETRANSMITTED_OUT_PKTS %SRC_FRAGMENTS %DST_FRAGMENTS %DNS_QUERY %HTTP_URL %HTTP_SITE %TLS_SERVER_NAME %BITTORRENT_HASH

But what I get in Kibana is that the field names are not properly mapped; they show up as just numbers. The values are right, but their field names are not correct. Can anyone help or suggest a solution?

Hi

this doesn't seem to be a Kibana problem; rather, you need to check how your data is inserted into Elasticsearch, so take a closer look at the nprobe configuration. Sorry I can't support you properly here, since I don't know nprobe. Just one question: do you use Logstash between nprobe and Elasticsearch?

Best,
Matthias

Hi

Thank you very much for your time, Matthias. Yes, it was an nprobe issue; I fixed it by adding "--json-labels" at the end of my nprobe command.

No, I'm not using Logstash, just telling nprobe to send flows directly to Elasticsearch like this:

sudo nprobe --zmq "tcp://*:5556" -i ens12 -n none -T "@NTOPNG@" --elastic "flows;nprobe-%Y.%m.%d;http://localhost:9200/_bulk" --json-labels
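For anyone hitting the same symptom: without `--json-labels`, nprobe exports flow fields keyed by their numeric NetFlow/IPFIX information element IDs, which is why Kibana showed "just numbers"; with the flag, the template names are used as JSON keys instead. A minimal sketch of that renaming (the ID → name pairs below are standard IPFIX information element IDs, included only for illustration; the `relabel` helper is hypothetical, not part of nprobe):

```python
# Illustration: a handful of standard IPFIX information element IDs and the
# nprobe template labels they correspond to.
ID_TO_LABEL = {
    "8": "IPV4_SRC_ADDR",   # sourceIPv4Address
    "12": "IPV4_DST_ADDR",  # destinationIPv4Address
    "7": "L4_SRC_PORT",     # sourceTransportPort
    "11": "L4_DST_PORT",    # destinationTransportPort
    "4": "PROTOCOL",        # protocolIdentifier
    "1": "IN_BYTES",        # octetDeltaCount
}

def relabel(flow: dict) -> dict:
    """Rename numeric element-ID keys to template labels, roughly what
    --json-labels makes nprobe do before indexing into Elasticsearch."""
    return {ID_TO_LABEL.get(key, key): value for key, value in flow.items()}

# What a flow document looks like without the flag (numeric keys)...
raw = {"8": "10.0.0.1", "12": "10.0.0.2", "7": 51234, "11": 443, "4": 6}
# ...and after relabeling (named keys, as Kibana expects).
print(relabel(raw))
# {'IPV4_SRC_ADDR': '10.0.0.1', 'IPV4_DST_ADDR': '10.0.0.2', 'L4_SRC_PORT': 51234, 'L4_DST_PORT': 443, 'PROTOCOL': 6}
```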

Best,
Ashraf


Interesting, thanks for sharing!
Best,
Matthias


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.