Kibana, Discover: Field data loading is forbidden on srcip

Not needed anymore. I've come to understand it is "by design", whatever that means...

I could work around it by using "field".raw.

In any case, I'm still trying to figure out the "field format" thing. Let's say, for example: sent and rcvd.
I only have "string" and "URI" available as choices, as opposed to what the docs say. I should have bytes, IP and so on, but I don't.

What is the data type for these fields? Given that you only have "string" and "URI" available, I'd guess it's a string. The "bytes" formatter is only available on numbers.
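For reference, the "bytes" formatter only shows up once Elasticsearch maps the field as a number (and the "IP" formatter once it's mapped as ip). A sketch of the relevant mapping fragment, using field names from this thread; the exact structure depends on your template:

```
{
  "properties": {
    "srcip":    { "type": "ip" },
    "sentbyte": { "type": "long" },
    "rcvdbyte": { "type": "long" }
  }
}
```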

They are strings because I've used kv to parse the field names.

Everything defaulted to string. Now I need to fix it, but I'm kind of lost...

Yeah, you'd have to fix the parsing/mappings on the Logstash/Elasticsearch side and re-index your data. Not much you can do in Kibana until then :frowning:
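One way to do the Logstash side of that fix is to convert the numeric fields right after kv parses them, so new indices get numeric mappings from the start. A minimal sketch, assuming the kv filter and the field names mentioned in this thread:

```
filter {
  kv { }
  mutate {
    # kv emits everything as strings; force the counters to numbers
    convert => {
      "sentbyte" => "integer"
      "rcvdbyte" => "integer"
      "sentpkt"  => "integer"
      "rcvdpkt"  => "integer"
    }
  }
}
```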

Yep, I knew about the mapping part and the need to reindex. It's just that I don't know how I'm supposed to proceed.

I've downloaded the _mapping file using pretty=1 and then tried to edit it... Still, I didn't really know what I was supposed to do.

And after that, I'm completely lost... I mean, how do I create the new index and copy the old one over to the new one, all without losing any information?

You can reindex with this - https://gist.github.com/markwalkom/8a7201e3f6ea4354ae06
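That gist drives the reindex through Logstash, reading from an Elasticsearch input and writing to an Elasticsearch output. The general shape is roughly this; the hosts and index names below are placeholders, so check the gist for the exact options:

```
input {
  elasticsearch {
    hosts => "localhost"
    index => "logstash-2016.02.12"
  }
}
output {
  elasticsearch {
    hosts => "localhost"
    index => "logstash-2016.02.12-v2"
  }
}
```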

Thanks for the link!

I take it there won't be any downtime on my actual log platform, right?
Now there's still the mapping left to set up.

Do you have another trick up your sleeve :wink: ?

Just get the existing mapping and edit to your requirements.

Apparently it's a no-go. I did it, and now, even though it's not the same index, I get a message saying "Conflict: 7 fields have more than one ..."

I'm kind of lost. These are my indices right now. I had made a "custom" mapping that I have now deleted;
apparently I did something not "OK"...

yellow open   logstash-2016.02.06   5   1    3781874            0      3.3gb          3.3gb
yellow open   logstash-2016.01.27   5   1      76965            0     74.6mb         74.6mb
yellow open   logstash-2016.02.05   5   1    2987343            0      2.7gb          2.7gb
yellow open   logstash-2016.02.04   5   1    3978768            0      3.6gb          3.6gb
yellow open   logstash-2016.02.03   5   1    2913286            0      2.9gb          2.9gb
yellow open   logstash-2016.02.09   5   1    7351324            0      7.2gb          7.2gb
yellow open   logstash-2016.02.08   5   1    1604763            0      1.3gb          1.3gb
yellow open   logstash-2016.01.28   5   1     625022            0    681.1mb        681.1mb
yellow open   logstash-2016.02.07   5   1    3454373            0        3gb            3gb
yellow open   logstash-2016.01.29   5   1    4402864            0      4.8gb          4.8gb
yellow open   .kibana               1   1         17            5    106.5kb        106.5kb
yellow open   logstash-2016.01.30   5   1     303536            0    285.3mb        285.3mb
yellow open   logstash-2016.02.02   5   1    4068622            0      4.1gb          4.1gb
yellow open   logstash-2016.02.12   5   1    5031841            0      4.9gb          4.9gb
yellow open   logstash-2016.02.01   5   1    4893758            0        5gb            5gb
yellow open   logstash-2016.02.11   5   1    6964840            0      6.9gb          6.9gb
yellow open   logstash-2016.02.10   5   1    7723227            0      7.6gb          7.6gb

Now, the problem:

dstip      conflict
srcip      conflict
rcvdbyte   conflict
rcvdpkt    conflict
sentpkt    conflict
sentbyte   conflict

The mapping:

I have now deleted it, and I also deleted the index FGT-BACKFILL-*.

So... I'm REALLY sorry to ask, but what am I supposed to do now? I DON'T WANT to lose this data... (I'm trying to build a decent security log machine for audit.)

A "little" step by step would be greatly appreciated.

Thank you!

OK... I've officially messed it up... lol

http://pastebin.com/nCaaHwPj

I now only have these error logs...

Please...

Help me resolve this issue? From what I've read so far, it's not supposed to be "that bad of a problem". I just can't figure it out myself; not enough experience with the whole ELK stack...

Check your ES logs, it looks like you may have a bad mapping:

"error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse [srcip]", "caused_by"=>{"type"=>"number_format_exception"

Yes, I did check my logs, and found that I have an error in the mapping.

Still, when I pull my mapping, there's no "error" in it, from what I can see. PLUS, even if there is... how am I supposed to fix it? That's my actual problem.

Thank you for replying. Appreciated.

I've read that I'm supposed to re-index it.
Still, I have many indices created so far, and I need them ALL back. Plus, how do I make sure the corruption won't follow into the new indices?
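The usual way to stop the bad types from following you into new indices is an index template, so every new logstash-* index is created with the right field types before any document lands in it. A sketch for ES 2.x; the template name and the field list are my assumptions based on the conflicts in this thread:

```
curl -XPUT 'http://localhost:9200/_template/fortigate-types' -d '{
  "template": "logstash-*",
  "mappings": {
    "traffic": {
      "properties": {
        "srcip":    { "type": "ip" },
        "dstip":    { "type": "ip" },
        "sentbyte": { "type": "long" },
        "rcvdbyte": { "type": "long" },
        "sentpkt":  { "type": "long" },
        "rcvdpkt":  { "type": "long" }
      }
    }
  }
}'
```

The template only applies to indices created after it is installed; existing indices still need the reindex step.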

I have the whole day to figure it out...

What is the mapping set at for the srcip field?

When I pull it, it's set to string. But maybe I'm not pulling the right mapping?

The mapping is a few thousand lines.

curl -XGET http://localhost:9200/_mapping?pretty=1 | grep srcip
          "srcip" : {
          "srcip" : {
          "srcip" : {
          (same line repeated — roughly 50 matches in total, one per mapping that defines srcip)

It's a little hard to know exactly what each of them is set to, but let me check "again". Thank you.
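To see what each occurrence is mapped to, rather than just the field name, you can grep with a couple of lines of trailing context (same endpoint as above):

```
curl -XGET 'http://localhost:9200/_mapping?pretty=1' | grep -A 2 '"srcip"'
```

Any occurrence showing "type" : "long" instead of "string" (or "ip") points at the index carrying the conflicting mapping.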

There are three occurrences, starting at line 54613, linked to:

"traffic" : {
        "_all" : {
          "enabled" : true,
          "omit_norms" : true
        },
        "dynamic_templates" : [ {
          "message_field" : {
            "mapping" : {
              "fielddata" : {
                "format" : "disabled"
              },
              "index" : "analyzed",
              "omit_norms" : true,
              "type" : "string"
            },
            "match" : "message",
            "match_mapping_type" : "string"
          }
        },
"srcip" : {
            "type" : "long"

Then the utm type, and finally the event type's _all.

I don't know where they come from.
So, what am I supposed to do now? It appears there are a lot of duplicates out of nowhere?

I did some research...

The culprit appears to be logstash-2016.02.12.
Can I delete only this index, using:

curl -XDELETE 'http://localhost:9200/logstash-2016.02.12' ?

And then it will solve my problem?

It should, yeah.

Tested it, and it worked. Now back to square one.

I need to figure out how to get my backfill working now...

Still haven't figured it out yet...

For those interested in helping: since there's no way to "close" a topic, I'll post the link here.

Thank you!