gh0stid
(Alex)
February 10, 2016, 7:33pm
1
Not needed anymore. I've come to understand it is "by design", whatever that means.
I could work around it using "field".raw.
In any case, I'm still trying to figure out the "field format" thing. Let's say, for example: sent and rcvd.
I only have string and URI available as choices, as opposed to what the docs say; I should have bytes, IP, and so on, but I don't.
tbragin
(Tanya Bragin)
February 12, 2016, 4:57am
2
What is the data type for these fields? Given that you only have "string" and "URI" available, I'd guess it's a string. "bytes" formatter is only available on numbers.
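One quick way to confirm the underlying type is the field-mapping API (a sketch, assuming a local Elasticsearch 2.x node and Logstash's default index naming; the field name is illustrative):

```
curl -XGET 'http://localhost:9200/logstash-*/_mapping/field/sentbyte?pretty'
```

If the response shows `"type" : "string"`, the Bytes formatter won't be offered for that field in Kibana.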
gh0stid
(Alex)
February 12, 2016, 5:11am
3
They are strings because I've used kv to parse the field names.
Everything defaulted to string. Now I need to fix it, but I'm kind of lost...
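On the Logstash side, one common approach is to convert the numeric fields right after the kv filter so that new events get indexed with the right types (a sketch, assuming Logstash 2.x; the field names are taken from later posts in this thread and may not match the actual config):

```
filter {
  kv { }
  mutate {
    # coerce the kv-produced strings into numbers before indexing
    convert => {
      "sentbyte" => "integer"
      "rcvdbyte" => "integer"
      "sentpkt"  => "integer"
      "rcvdpkt"  => "integer"
    }
  }
}
```

Note this only affects newly indexed events; existing indices keep their old string mappings.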
tbragin
(Tanya Bragin)
February 12, 2016, 5:31am
4
Yeah, you'd have to fix the parsing/mappings on the Logstash/Elasticsearch side and re-index your data. There's not much you can do in Kibana until then.
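At the time of this thread (Elasticsearch 2.2, before the `_reindex` API arrived in 2.3), a common way to re-index was a Logstash pass reading from the old index and writing into a new one created with the corrected mapping (a sketch; the index names are illustrative):

```
input {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "logstash-2016.02.11"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "logstash-2016.02.11-fixed"
  }
}
```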
gh0stid
(Alex)
February 12, 2016, 5:48am
5
Yep, I knew about the mapping part and the need to re-index. It's just that I don't know how I'm supposed to proceed.
I've downloaded the _mapping file using pretty=1 and then tried to edit it... Still, I didn't really know what I was supposed to do.
And after that, I'm completely lost... I mean, how do I create the new index and copy the old one into it, all without losing any information?
warkolm
(Mark Walkom)
February 12, 2016, 8:03am
6
gh0stid
(Alex)
February 12, 2016, 1:45pm
7
Thanks for the link!
I take it there won't be any downtime on my actual log platform, right?
Now there's still the mapping left to set up.
Do you have another trick up your sleeve?
warkolm
(Mark Walkom)
February 12, 2016, 8:52pm
8
Just get the existing mapping and edit to your requirements.
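Concretely, that could mean pulling one index's mapping, editing the field types, and creating a new index with the corrected version (a sketch, assuming Elasticsearch 2.x; the index name, type name, and field list are illustrative, based on the conflicts reported later in this thread):

```
# dump the existing mapping so it can be edited
curl -XGET 'http://localhost:9200/logstash-2016.02.11/_mapping?pretty' > mapping.json

# create a new index whose mapping uses proper ip/numeric types
curl -XPUT 'http://localhost:9200/logstash-2016.02.11-fixed' -d '{
  "mappings": {
    "traffic": {
      "properties": {
        "srcip":    { "type": "ip" },
        "dstip":    { "type": "ip" },
        "sentbyte": { "type": "long" },
        "rcvdbyte": { "type": "long" },
        "sentpkt":  { "type": "long" },
        "rcvdpkt":  { "type": "long" }
      }
    }
  }
}'
```

An already-mapped field on an existing index can't be changed in place, which is why the data has to be re-indexed into the new index.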
gh0stid
(Alex)
February 12, 2016, 9:08pm
9
Apparently it's a no-go... I did it, and now, even though it's not the same index, I get a message saying "Conflict: 7 fields have more than one ....."
I'm kind of lost. These are my indices right now, and I had made a "custom" mapping that I have now deleted.
Apparently I did something not "OK"...
yellow open logstash-2016.02.06 5 1 3781874 0 3.3gb 3.3gb
yellow open logstash-2016.01.27 5 1 76965 0 74.6mb 74.6mb
yellow open logstash-2016.02.05 5 1 2987343 0 2.7gb 2.7gb
yellow open logstash-2016.02.04 5 1 3978768 0 3.6gb 3.6gb
yellow open logstash-2016.02.03 5 1 2913286 0 2.9gb 2.9gb
yellow open logstash-2016.02.09 5 1 7351324 0 7.2gb 7.2gb
yellow open logstash-2016.02.08 5 1 1604763 0 1.3gb 1.3gb
yellow open logstash-2016.01.28 5 1 625022 0 681.1mb 681.1mb
yellow open logstash-2016.02.07 5 1 3454373 0 3gb 3gb
yellow open logstash-2016.01.29 5 1 4402864 0 4.8gb 4.8gb
yellow open .kibana 1 1 17 5 106.5kb 106.5kb
yellow open logstash-2016.01.30 5 1 303536 0 285.3mb 285.3mb
yellow open logstash-2016.02.02 5 1 4068622 0 4.1gb 4.1gb
yellow open logstash-2016.02.12 5 1 5031841 0 4.9gb 4.9gb
yellow open logstash-2016.02.01 5 1 4893758 0 5gb 5gb
yellow open logstash-2016.02.11 5 1 6964840 0 6.9gb 6.9gb
yellow open logstash-2016.02.10 5 1 7723227 0 7.6gb 7.6gb
Now, the problem:
dstip conflict
srcip conflict
rcvdbyte conflict
rcvdpkt conflict
sentpkt conflict
sentbyte conflict
The mapping: I HAVE NOW DELETED IT, AND I'VE ALSO DELETED THE INDEX FGT-BACKFILL-*.
So... I'm REALLY sorry to ask, but what am I supposed to do now? I DON'T WANT to lose this data... (I'm trying to build a decent security log machine for auditing.)
A "little" step-by-step would be greatly appreciated.
Thank you!
gh0stid
(Alex)
February 12, 2016, 9:27pm
10
OK... I've officially f*cked it up... lol
http://pastebin.com/nCaaHwPj
I now only have these error logs...
gh0stid
(Alex)
February 15, 2016, 2:26pm
11
Please... help me resolve this issue?
From what I've read so far, it's not supposed to be "that bad of a problem". I just can't figure it out myself; I don't have enough experience with the whole ELK stack...
warkolm
(Mark Walkom)
February 15, 2016, 3:44pm
12
Check your ES logs, it looks like you may have a bad mapping;
error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse [srcip]", "caused_by"=>{"type"=>"number_format_exception"
gh0stid
(Alex)
February 15, 2016, 4:02pm
13
Yes, I did check my logs, and I found that I have an error in the mapping.
Still, when I pull my mapping, there's no "error" in it, from what I can see. PLUS, even if there is... how am I supposed to fix it? That's my actual problem.
Thank you for replying; appreciated.
I've read that I'm supposed to re-index.
Still, I have many indices created so far, and I need them ALL back. Plus, how do I make sure the corruption won't carry over into the new indices?
I have the whole day to figure it out...
warkolm
(Mark Walkom)
February 15, 2016, 4:14pm
14
What is the mapping set to for the srcip field?
gh0stid
(Alex)
February 15, 2016, 4:22pm
15
When I pull it, it's set to string. But maybe I'm not pulling the right mapping?
gh0stid
(Alex)
February 15, 2016, 4:37pm
16
The mapping is a few thousand lines.
curl -XGET http://localhost:9200/_mapping?pretty=1 |grep srcip
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 1795k  100 1795k    0     0  15.5M      0 --:--:-- --:--:-- --:--:-- 15.6M
"srcip" : {
"srcip" : {
"srcip" : {
(... the same line repeated dozens of times, once per index and mapping type; grep matches only the field name, not its type)
It's a little hard to know exactly what each of them is set to, but let me check "again". Thank you.
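To see each occurrence's type instead of just the key, grep can also print the lines that follow each match, and curl's `-s` flag silences the progress meter (a sketch):

```
curl -s -XGET 'http://localhost:9200/_mapping?pretty' | grep -A 2 '"srcip"'
```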
gh0stid
(Alex)
February 15, 2016, 4:45pm
17
There are three occurrences, starting at line 54613, linked to:
"traffic" : {
"_all" : {
"enabled" : true,
"omit_norms" : true
},
"dynamic_templates" : [ {
"message_field" : {
"mapping" : {
"fielddata" : {
"format" : "disabled"
},
"index" : "analyzed",
"omit_norms" : true,
"type" : "string"
},
"match" : "message",
"match_mapping_type" : "string"
}
},
"srcip" : {
"type" : "long"
Then the utm type, and finally the event type's _all.
I don't know where they come from.
So, what am I supposed to do now? It appears there are a lot of duplicates out of nowhere.
I did some research...
The culprit appears to be: logstash-2016.02.12
Can I delete only this index, using:
curl -XDELETE 'http://localhost:9200/logstash-2016.02.12'
And will that solve my problem?
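Before deleting an index, it may be worth confirming which indices actually define srcip differently; the field-mapping API can show the type per index (a sketch, assuming Elasticsearch 2.x):

```
curl -XGET 'http://localhost:9200/logstash-*/_mapping/field/srcip?pretty'
```

Indices where srcip comes back as "long" (or "ip") conflict with those where it is "string", and those are the fields Kibana flags.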
gh0stid
(Alex)
February 15, 2016, 6:11pm
19
Tested it, and it worked. Now back to square one.
I need to figure out how to get my backfill to work now...
gh0stid
(Alex)
February 18, 2016, 8:46pm
20
Still haven't figured it out yet...
For those interested in helping: since there's no "way" to "close" a topic, I'll post the link here.
Thank you!
It's been 5 days now that I've been playing around, and I still haven't figured out how to get it to work...
I have 20G of logs that need to be backfilled into the ELK stack, now upgraded to Kibana 4.4, Elasticsearch 2.2, etc.
Here's another roundup of the config setup:
I use a 10-*.conf to 49-*.conf input setup, plus 50-output.conf.
10-*.conf is working as intended, so I copied it to 11-*.conf and played around with it... no success.
Here's 11-*.conf:
input {
file {
path => ["/var/log/fortigate/fg.log"]
start_positi…
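For reference, a minimal file-input block for backfilling an existing log file might look like this (a sketch, assuming Logstash 2.x; the path comes from the post above, the other settings are illustrative):

```
input {
  file {
    path => ["/var/log/fortigate/fg.log"]
    # read from the top of the file instead of tailing only new lines
    start_position => "beginning"
    # don't remember the read position, so repeated test runs re-ingest the file
    sincedb_path => "/dev/null"
  }
}
```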