Help looking / indexing / extracting data

Hello everyone,

We have data like the following:

{
  "_index": "logstash-2017.04.12",
  "_type": "fluentd",
  "_id": "AVtiVXDVEWKEV5H6MuHL",
  "_score": null,
  "_source": {
    "eais_prod": "1 2017-04-12T13:23:30.275Z ip-4-0-6-78 - messageId-socket-a - Logs = ,,2017-04-12,13:23:29.922,0-37fcf132-1f83-11e7-9fb6-064267958b48,something.host.io,operation,200,OK,'9B24D5, 'DS',,INTERNET,,,,,,PROD",
    "@timestamp": "2017-04-12T09:23:30-04:00"
  },
  "fields": {
    "@timestamp": [
      1492003410000
    ]
  },
  "highlight": {
    "eais_prod": [
      "1 2017-04-12T13:23:30.275Z ip-4-0-6-78 - messageId-socket-a - Logs = ,,2017-04-12,13:23:29.922,0-37fcf132-1f83-11e7-9fb6-064267958b48,something.host.io,operation,200,OK,'9B24D5, 'DS',,INTERNET,,,,,,PROD"
    ]
  },
  "sort": [
    1492003410000
  ]
}

We need to split this line:

,,2017-04-12,13:23:29.922,0-37fcf132-1f83-11e7-9fb6-064267958b48,something.host.io,operation,200,OK,'9B24D5, 'DS',,INTERNET,124,,,,,PROD

and run a sum or other aggregate function on the field holding 124 in the CSV above.

We will have many records in this format, and we want to select the lines that contain INTERNET and then sum the values of the field that follows it, as sketched below.
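Once the CSV is split into real fields, what we want is roughly the query below. The field names channel and bytes are made up just to illustrate (we do not have such fields yet; channel would be the INTERNET field and bytes the one after it):

POST logstash-*/_search
{
  "size": 0,
  "query": {
    "term": { "channel": "INTERNET" }
  },
  "aggs": {
    "total_bytes": {
      "sum": { "field": "bytes" }
    }
  }
}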

Regards
Aditya

Can you show us what you see when you go to Kibana > Management > Index Patterns? It would help us understand how your data was parsed into fields.

(You can paste a screenshot of that page in this forum.)

What version of Kibana are you using?

It looks like you're loading the data with Logstash. You probably need to parse the data in Logstash before it goes into Elasticsearch; a rough sketch is below.
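If it is Logstash, something like this csv filter might be a starting point. This is untested, and the source field and column names are pure guesses from your sample line, so adjust them to your real layout:

filter {
  csv {
    # field holding the raw CSV payload -- a guess, check your events
    source => "message"
    separator => ","
    # placeholder column names, rename to what the fields really mean
    columns => ["f1", "f2", "log_date", "log_time", "request_id", "host",
                "operation", "status", "status_text", "code", "type", "f12",
                "channel", "bytes"]
  }
  mutate {
    # make the field after INTERNET numeric so it can be summed
    convert => { "bytes" => "integer" }
  }
}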

Thanks,
Lee

Thanks, Lee, for the update.
I am very new to the Elastic stack, and we use Fluentd instead of Logstash.

Our input from the application to Fluentd looks like this:

Logs = ,,2017-04-12,13:23:29.922,0-37fcf132-1f83-11e7-9fb6-064267958b48,something.host.io,operation,200,OK,'9B24D5, 'DS',,INTERNET,,,,,,PROD

We send that from Fluentd to Elasticsearch; I think we are using the default index.

Logstash*
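(That is, the indices are named logstash-*, which matches the _index in the document above.) If it helps, our Fluentd output section looks roughly like this; I am writing it from memory, so the details may be off:

<match **>
  @type elasticsearch
  # hypothetical host/port, ours differ
  host our-elastic-host
  port 9200
  # this is what produces the logstash-YYYY.MM.DD index names
  logstash_format true
</match>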

Until now we have just grepped these logs, but we want to improve on that. Our Kibana version is 5.2.2, and I think Elasticsearch is also the latest; we also have X-Pack installed.

Regards
Aditya

Hi Aditya,

I'm afraid I don't know Fluentd. You should get that incoming data parsed into fields instead of having everything go into "eais_prod" (at least, that's what it looks like from here). I'm not sure how to do that with Fluentd. Here's an issue where they discuss creating a mapping: https://github.com/uken/fluent-plugin-elasticsearch/issues/33
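For example, a mapping for the parsed fields on a 5.x index could look like the sketch below. This is only an illustration; channel and bytes are placeholder names I made up from your sample (the INTERNET field and the numeric field after it):

PUT logstash-2017.04.12/_mapping/fluentd
{
  "properties": {
    "channel": { "type": "keyword" },
    "bytes":   { "type": "integer" }
  }
}

Once the data is indexed with fields like that, the INTERNET filter and the sum you want become a simple term query plus a sum aggregation.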

This question might reach someone who can help you more if you post it in the Elasticsearch or Logstash forums.

Regards,
Lee

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.