Logstash backfilling two types of logs, problem!

Hmm, I've tried completely removing 10-*.conf and going directly with 11-*.conf, and edited 50-*.conf so it goes straight through without any check.

Still nothing in logstash.log OR logstash.stdout, so apparently it has nothing to process. Yet the log file in the fortigate folder is 53 MB...

The reason I need those fields converted is to be able to edit them in the field settings to, say, BYTES, so I can do a scripted field like doc.rcvdbyte + sentbyte, as I saw in another blog.
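(For reference, such a scripted field would look roughly like this in Kibana, assuming both fields are mapped as numbers; Kibana 4 scripted fields use Lucene expressions:)

doc['rcvdbyte'].value + doc['sentbyte'].value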

Yes, it's quite clear why you want the fields to be numbers. I've already attempted to explain how you can address this. Mappings are explained in greater detail in the ES documentation.
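For reference, a minimal sketch of such an index template on ES 2.x; the template name and the choice of long are assumptions, and the byte counter field names come from the logs in this thread:

curl -XPUT 'localhost:9200/_template/fgt-backfill' -d '{
  "template": "fgt-backfill-*",
  "mappings": {
    "_default_": {
      "properties": {
        "sentbyte": { "type": "long" },
        "rcvdbyte": { "type": "long" }
      }
    }
  }
}'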

I'll take a look at it.

Any idea why I don't get anything indexed with my 11-*.conf?

Thank you.

P.S. For those interested:
https://jackhanington.com/blog/2014/12/11/create-a-custom-elasticsearch-template/

Any idea why I don't get anything indexed with my 11-*.conf?

Again, if you show us what the events look like I might be able to help.

I'm sorry, but there's absolutely no stdout log, no err log, no logfile at all.
Not sure I understand: what do you mean by "event"?

Simply nothing happens. No error whatsoever, and no index gets created under the name fgt-backfill-* or anything else.
I'm really sorry if I don't understand what you mean; sorry to make you waste time.

Oh, so nothing happens. When that's the case and a file is the input, the problem is usually that nothing new is being added to the file. start_position => beginning only matters when the file is new and unseen. To be able to rerun the same file through Logstash, set sincedb_path => "/dev/null" in the file input. This general topic is covered by the file input documentation, as well as a couple of times every week in this group.
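A minimal sketch of a file input set up for replaying a file (the path is the one used later in this thread):

input {
  file {
    path => ["/var/log/fortigate/fg.log"]
    # Read from the top when the file is first discovered...
    start_position => "beginning"
    # ...and throw away the recorded offsets so the same file can be rerun.
    sincedb_path => "/dev/null"
  }
}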

Tried it, and it's a negative.
Sorry to annoy you.

I give up... now I have 7 mapping conflicts, out of nowhere. I guess I'm too stupid to use this, lol.

Why use a grok filter at all? The whole string is a bunch of key/values. Just feed the message field straight to the kv filter.
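As demonstrated further down in this thread, the kv filter's defaults (split fields on whitespace, split key from value on "=") already match this format, so the minimal filter is just:

filter {
  kv { }
}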

It's been 5 days now that I've been playing around, and I still haven't figured out how to get it to work...

I have 20 GB of logs that need to be backfilled into the ELK stack... now upgraded to Kibana 4.4, Elasticsearch 2.2, etc.

Here's another roundup of the config setup:

I use 10-*.conf through 49-*.conf for inputs and filters, and 50-output.conf for outputs.

10-*.conf is working as intended, so I copied it to 11-*.conf and played around with it... no success.

Here's 11-*.conf:

input {
  file {
    path => ["/var/log/fortigate/fg.log"]
    start_position => "beginning"
    sincedb_path => "/tmp/sucemamarde1"
    type => "fgt-backfill"
  }
}
filter {
  #grok {
  #  match => [
  #    "message",
  #    "%{GREEDYDATA:kv}"
  #  ]
  #  remove_field => ["message"]
  #}
  kv {
    source => "message"
    field_split => " "
    value_split => "="
  }
  #date {
  #  match => ["itime", "UNIX_MS"]
  #  locale => "en"
  #}
  geoip {
    source => "dstip"
    database => "/opt/logstash/GeoLiteCity.dat"
  }
}

I tried a different setup, using grok to match only %{GREEDYDATA:kv}; according to the grok debugger it should match the log!

One example of the logs in fg.log:

"itime=1453486381 date=2016-01-22 time=13:13:01 devid=FG200D3913801116 logid=0001000014 type=traffic subtype=local level=notice vd=root srcip=192.168.0.7 srcport=137 srcintf="port1" dstip=192.168.0.255 dstport=137 dstintf="root" sessionid=781856124 status=deny policyid=0 dstcountry="Reserved" srccountry="Reserved" trandisp=noop service=137/udp proto=17 app=137/udp duration=0 sentbyte=0 rcvdbyte=0"

As Magnus Baeck said, absolutely everything should be parsed by the kv filter by itself; it's only a key/value chain! Still, it doesn't work...

Nothing relevant is output to logstash.log, .err, or stdout.

Here's my 50-output.conf:

output {
  #if [type] == "fgt-backfill" {
  if [path] == "/var/log/fortigate/fg.log" {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "fgt-backfill-%{+YYYY.MM.dd}"
    }
    stdout { codec => rubydebug }
  }
  else {
    elasticsearch {
      hosts => ["localhost:9200"]
    }
  }
  # DEBUG EVERYTHING
  #stdout { codec => rubydebug }
}

The fgt-backfill-* indices simply don't get created at all... I tried with if [path] == XXXX and with if [type] == "fgt-backfill".
Still a no go. The else branch works correctly (meaning my other inputs are working as intended).

Please help. I don't know where else I could ask for help, nor how I'm supposed to figure this out by myself...

According to the docs, the index doesn't have to be created PRIOR to Logstash indexing... so I'm lost.

One example of the logs in fg.log:

"itime=1453486381 date=2016-01-22 time=13:13:01 devid=FG200D3913801116 logid=0001000014 type=traffic subtype=local level=notice vd=root srcip=192.168.0.7 srcport=137 srcintf="port1" dstip=192.168.0.255 dstport=137 dstintf="root" sessionid=781856124 status=deny policyid=0 dstcountry="Reserved" srccountry="Reserved" trandisp=noop service=137/udp proto=17 app=137/udp duration=0 sentbyte=0 rcvdbyte=0"

Does the log message actually begin and end with a double quote? If yes you can use a mutate filter's gsub option to remove them (a grok filter could also do it).
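If they do, a sketch of the mutate variant:

filter {
  mutate {
    # Strip one leading and one trailing double quote from the raw message.
    gsub => ["message", '^"|"$', ""]
  }
}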

No.
Here's the result of a tailf:

tailf /var/log/fortigate/fg.log
itime=1453486381 date=2016-01-22 time=13:13:01 devid=FG200D3913801116 logid=0001000014 type=traffic subtype=local level=notice vd=root srcip=172.16.13.73 srcport=137 srcintf="wan2" dstip=172.16.15.255 dstport=137 dstintf="root" sessionid=781856107 status=deny policyid=0 dstcountry="Reserved" srccountry="Reserved" trandisp=noop service=137/udp proto=17 app=137/udp duration=0 sentbyte=0 rcvdbyte=0

The kv filter works just fine.

$ echo 'itime=1453486381 date=2016-01-22 time=13:13:01 devid=FG200D3913801116 logid=0001000014 type=traffic subtype=local level=notice vd=root srcip=172.16.13.73 srcport=137 srcintf="wan2" dstip=172.16.15.255 dstport=137 dstintf="root" sessionid=781856107 status=deny policyid=0 dstcountry="Reserved" srccountry="Reserved" trandisp=noop service=137/udp proto=17 app=137/udp duration=0 sentbyte=0 rcvdbyte=0' | /opt/logstash/bin/logstash -e 'input { stdin {} } filter { kv { } } output { stdout { codec => rubydebug } }'
Settings: Default pipeline workers: 2
Logstash startup completed
{
       "message" => "itime=1453486381 date=2016-01-22 time=13:13:01 devid=FG200D3913801116 logid=0001000014 type=traffic subtype=local level=notice vd=root srcip=172.16.13.73 srcport=137 srcintf=\"wan2\" dstip=172.16.15.255 dstport=137 dstintf=\"root\" sessionid=781856107 status=deny policyid=0 dstcountry=\"Reserved\" srccountry=\"Reserved\" trandisp=noop service=137/udp proto=17 app=137/udp duration=0 sentbyte=0 rcvdbyte=0",
      "@version" => "1",
    "@timestamp" => "2016-02-18T21:42:49.607Z",
          "host" => "hallonet",
         "itime" => "1453486381",
          "date" => "2016-01-22",
          "time" => "13:13:01",
         "devid" => "FG200D3913801116",
         "logid" => "0001000014",
          "type" => "traffic",
       "subtype" => "local",
         "level" => "notice",
            "vd" => "root",
         "srcip" => "172.16.13.73",
       "srcport" => "137",
       "srcintf" => "wan2",
         "dstip" => "172.16.15.255",
       "dstport" => "137",
       "dstintf" => "root",
     "sessionid" => "781856107",
        "status" => "deny",
      "policyid" => "0",
    "dstcountry" => "Reserved",
    "srccountry" => "Reserved",
      "trandisp" => "noop",
       "service" => "137/udp",
         "proto" => "17",
           "app" => "137/udp",
      "duration" => "0",
      "sentbyte" => "0",
      "rcvdbyte" => "0"
}
Logstash shutdown completed

To debug the problem of events not reaching ES, be systematic and remove everything that's not absolutely needed, like the kv filter and the elasticsearch output. Keep the stdout output. Are the messages coming through? Yes? Add the elasticsearch output back and try again. Do you get anything in ES? No? How do you know for sure? Maybe you're just looking at the wrong time interval. What's in the Logstash logs? It's not likely that Logstash has trouble submitting the events to ES yet doesn't log anything about it. Or are you not getting anything through to the stdout output? Then you have a sincedb problem.
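A stripped-down pipeline along those lines might look like this (file path from this thread; everything else removed):

input {
  file {
    path => ["/var/log/fortigate/fg.log"]
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}
output {
  # Nothing but a console dump: if events show up here, the input side works.
  stdout { codec => rubydebug }
}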

To debug the problem of events not reaching ES, be systematic and remove everything that's not absolutely needed, like the kv filter and the elasticsearch output. Keep the stdout output. Are the messages coming through?

Negative.

Do you get anything in ES? No? How do you know for sure? Maybe you're just looking at the wrong time interval.
Simple: my fgt-backfill-YYYY.MM.dd indices don't get created, according to my 50-output.conf.

What's in the Logstash logs?

My normal 10-network.conf traffic (syslog logs from /var/log/network.log), which works OK.

Or are you not getting anything through to the stdout output?

Nothing but the line that says:
Sending logs to logstash.log (something similar to that...)

Then you have a sincedb problem.

I tried using /dev/null as you told me in the past,
then tried a random /tmp/whateverfilename.
Still no good :\

My normal 10-network.conf traffic (syslog logs from /var/log/network.log), which works OK.

Why are you using 10-network.conf? Remove. All. Nonessential. Config files.

I tried using /dev/null as you told me in the past,
then tried a random /tmp/whateverfilename.
Still no good :\

If you increase logging verbosity with --verbose or --debug (don't remember what's required for what we're interested in; --verbose will probably do) Logstash will tell you which files are being monitored, which sincedb file is used, and the file offsets of files involved. The problem could be so simple as a typo in the filename pattern resulting in no files being matched, or that Logstash doesn't have permission to read the file. Read the logs.
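For example (the config path here is an assumption; point -f at whatever holds the configuration under test):

/opt/logstash/bin/logstash --debug -f /etc/logstash/conf.d/11-fgt-backfill.conf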

Why are you using 10-network.conf? Remove. All. Nonessential. Config files.

It's unfortunately now a production server; that's why 10-network.conf is still used / working.

If you increase logging verbosity with --verbose or --debug (don't remember what's required for what we're interested in; --verbose will probably do) Logstash will tell you which files are being monitored, which sincedb file is used, and the file offsets of files involved. The problem could be so simple as a typo in the filename pattern resulting in no files being matched, or that Logstash doesn't have permission to read the file. Read the logs.

Will give it a go, but fg.log has the same permissions as network.log.
I don't use a "pattern", since we already agreed I could simply kv-filter the whole thing.

Thanks again for the reply. I'll be back at the office in 9 hours.

It's unfortunately now a production server; that's why 10-network.conf is still used / working.

Then run a separate Logstash instance on the same machine or debug things on a different machine.

I don't use a "pattern", since we already agreed I could simply kv-filter the whole thing.

I was referring to the filename pattern in the file input.

Here's the debug log:

http://pastebin.com/YJerjPNQ

I don't see anything relevant in the debug log?
Sorry again to bother you :\

Thanks for the hint.

Here's the interesting part:

{:timestamp=>"2016-02-19T09:17:57.183000-0500", :message=>"_discover_file: /var/log/fortigate/fg.log: skipping because it was last modified more than 86400.0 seconds ago", :level=>:debug, :file=>"filewatch/watch.rb", :line=>"310", :method=>"_discover_file"}

Touch the file or adjust the ignore_older option (which defaults to 86400 seconds).
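In file input terms, that might look like this (the one-year value is just an example):

input {
  file {
    path => ["/var/log/fortigate/fg.log"]
    start_position => "beginning"
    sincedb_path => "/dev/null"
    # Files untouched for more than this many seconds are skipped; the default
    # of 86400 (one day) is what made Logstash ignore this backfill file.
    ignore_older => 31536000
    type => "fgt-backfill"
  }
}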

OK, now it works.

The indices get created, etc.

But the timestamp is not correct.
Here's the stdout result of a caught log:

"message" => "itime=1453472132 date=2016-01-22 time=09:15:32 devid=FG200D3913801116 logid=0001000014 type=traffic subtype=local level=notice vd=root srcip=172.16.14.41 srcport=137 srcintf="wan2" dstip=172.16.15.255 dstport=137 dstintf="root" sessionid=780860110 status=deny policyid=0 dstcountry="Reserved" srccountry="Reserved" trandisp=noop service=137/udp proto=17 app=137/udp duration=0 sentbyte=0 rcvdbyte=0",
"@version" => "1",
"@timestamp" => "1970-01-17T19:44:32.132Z",
"path" => "/var/log/fortigate/fg.log",
"host" => "localhost",
"type" => "traffic",
"tags" => [
[0] "_grokparsefailure"
],
"itime" => "1453472132",
"date" => "2016-01-22",
"time" => "09:15:32",
"devid" => "FG200D3913801116",
"logid" => "0001000014",
"subtype" => "local",
"level" => "notice",
"vd" => "root",
"srcip" => "172.16.14.41",
"srcport" => "137",
"srcintf" => "wan2",
"dstip" => "172.16.15.255",
"dstport" => "137",
"dstintf" => "root",
"sessionid" => "780860110",
"status" => "deny",
"policyid" => "0",
"dstcountry" => "Reserved",
"srccountry" => "Reserved",
"trandisp" => "noop",
"service" => "137/udp",
"proto" => "17",
"app" => "137/udp",
"duration" => "0",
"sentbyte" => "0",
"rcvdbyte" => "0"
}

Now I need to fix the timestamp somehow...
I'm already using:

date {
  match => ["itime", "UNIX_MS"]
  locale => "en"
}

Apparently... it's a no go :\ ?
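That 1970-01-17 @timestamp above is the telltale sign of epoch seconds being parsed as milliseconds: UNIX_MS expects milliseconds, while itime=1453472132 is in seconds (January 2016). A sketch of the likely fix:

date {
  # itime is epoch seconds, so use the UNIX pattern rather than UNIX_MS.
  match => ["itime", "UNIX"]
  locale => "en"
}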