Sure, there are other filters you could use (perhaps the csv filter if you make " | " the column separator) but I don't think it'll be much easier.
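For what it's worth, a rough sketch of that csv approach could look like the one below. The column names are made up, and I'm assuming here that the csv filter accepts a multi-character separator.
filter {
  csv {
    # " | " as the column separator, as discussed above
    separator => " | "
    # hypothetical column names; adjust to whatever your fields actually are
    columns => ["timestamp", "severity", "rest"]
  }
}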
Can I use the aggregate filter here?
What? No.
OK. If I use the csv filter, will it be too lengthy?
Can you suggest an appropriate filter for this? I'm getting confused now.
My suggestion is to use the grok filter as described earlier. Start with the very simplest expression, ^%{TIMESTAMP_ISO8601:timestamp}, and check whether that works. If yes, continue adding more to the expression, each time validating that it continues to work. Be systematic.
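For example, that very first step could be as small as this (assuming the log line ends up in the default message field):
filter {
  grok {
    # simplest possible expression: only the leading ISO8601 timestamp
    match => { "message" => "^%{TIMESTAMP_ISO8601:timestamp}" }
  }
}
If events come through without a _grokparsefailure tag, add the next piece to the expression and test again.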
OK, I will try it.
It's not working. It says "incorrect config file".
Please try to understand that it's impossible to help with the small amount of detail that you give. I need to see the exact configuration you tried (copy/paste the text and format what you paste as preformatted text using the </> toolbar button) and the exact error message.
The solution is working now. I'm able to parse the logs now.
Hi, I want to filter logs by the time taken by a particular query.
February 15th 2017, 18:03:37.133 2017-02-15T17:59:12.258+0530 I COMMAND [conn2] command narendra.$cmd command: delete { delete: "inventory", deletes: [ { q: { status: "A" }, limit: 0.0 } ], ordered: true } keyUpdates:0 writeConflicts:0 numYields:0 reslen:25 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 2 } }, Metadata: { acquireCount: { W: 1 } } } protocol:op_command 3ms
February 15th 2017, 18:03:37.133 2017-02-15T17:59:12.257+0530 I WRITE [conn2] remove narendra.inventory query: { status: "A" } ndeleted:7 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { w: 1 } }, Collection: { acquireCount: { w: 1 } } } 2ms
February 15th 2017, 18:03:37.128 2017-02-15T17:48:53.581+0530 I COMMAND [conn2] command narendra.inventory command: find { find: "inventory", filter: { status: { $in: [ "A", "D" ] } } } planSummary: COLLSCAN keysExamined:0 docsExamined:10 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:5 reslen:684 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 41ms
February 15th 2017, 18:03:37.127 2017-02-15T17:46:29.632+0530 I - [conn2] Creating profile collection: narendra.system.profile
February 15th 2017, 18:03:37.126 2017-02-15T17:46:29.632+0530 I COMMAND [conn2] command narendra.inventory command: insert { insert: "inventory", documents: [ { _id: ObjectId('58a4469db28fdcc74588e721'), item: "canvas", qty: 100.0, tags: [ "cotton" ], size: { h: 28.0, w: 35.5, uom: "cm" } } ], ordered: true } ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:25 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 1, W: 1 } }, Collection: { acquireCount: { W: 1 } } } protocol:op_command 206ms
February 15th 2017, 18:03:37.123 2017-02-15T17:45:57.812+0530 I COMMAND [conn2] command admin.system.users command: saslStart { saslStart: 1, mechanism: "SCRAM-SHA-1", payload: "xxx" } keyUpdates:0 writeConflicts:0 numYields:0 reslen:164 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } protocol:op_command 4ms
These are the filtered logs. Now the challenge is to sort them by time taken: queries that took less than 10ms and queries that took more than 10ms.
February 15th 2017, 18:03:37.133 2017-02-15T17:59:12.258+0530 I COMMAND [conn2] command narendra.$cmd command: delete { delete: "inventory", deletes: [ { q: { status: "A" }, limit: 0.0 } ], ordered: true } keyUpdates:0 writeConflicts:0 numYields:0 reslen:25 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 2 } }, Collection: { acquireCount: { w: 2 } }, Metadata: { acquireCount: { W: 1 } } } protocol:op_command 3ms
Can I split this message into different fields, with the timestamp in one field, the command in another field, the query in another field, and the time taken by the query in another field?
Yes, use the grok filter. The grok constructor web site should be helpful when creating the expression.
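To give you an idea, something along these lines might work for the mongod lines you posted (only a sketch: the field names are my own, the leading "February 15th ..." prefix is Kibana's display and not part of the file, and lines that don't end in an "Nms" duration, such as the CMD: drop and ACCESS lines, will not match and will get a _grokparsefailure tag):
filter {
  grok {
    match => {
      # timestamp, severity ("I"), component ("COMMAND", "WRITE", ...),
      # the connection context ("conn2"), the command/query text, and
      # the trailing duration in milliseconds, stored as an integer
      "message" => "^%{TIMESTAMP_ISO8601:timestamp} %{WORD:severity} %{WORD:component}\s+\[%{DATA:context}\] %{GREEDYDATA:body} %{NUMBER:query_time_ms:int}ms$"
    }
  }
}
Once query_time_ms is a number you can filter on it in Kibana (for example query_time_ms < 10 versus query_time_ms >= 10) or with a conditional in Logstash.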
filter {
  grok {
    match => { "message" => "%{DATA:timestamp} | %(COMMAND|NETWORK) %{GREEDYDATA:message}" }
  }
}
mutate {
  split => ["message"]
}
I used this file, but it's not working.
input {
  file {
    path => "C:/Data/log/mongolog.log"
    start_position => "beginning"
  }
}
filter {
  grok {
    match => { "message" => "[%{DATA:timestamp} | %(COMMAND|NETWORK) %{GREEDYDATA:message}]" }
  }
}
mutate { timestamp => { "@timestamp" => "timestamp","dd/MM/yy HH:mm:ss:Z" } }
mutate { command => { "Instruction" => "I COMMAND" } }
mutate { message => { "message" => "query" } }
mutate { query_time => { op_command => "time in ms" } }
date {
  match => ["timestamp","dd/MM/yy HH:mm:ss:Z"]
}
}
output {
  elasticsearch { hosts => ["localhost:9200"] index => "log" }
  stdout { codec => "rubydebug" }
}
I used this config file. It gives an error saying the Logstash pipeline aborted.
I will add the error.
Now it's showing a different error.
[incomplete paste of the error output: fragments of the config file are echoed back (the file input, the grok filter with the (COMMAND|NETWORK) pattern, the mutate, date, and elasticsearch/stdout sections), followed by the start of a parse error, :reason=>"Expe ... e 203) after "} ; the rest of the message is cut off]
As I've said before: Copy/paste the text and format what you paste as preformatted text using the </> toolbar button. Another piece of general advice is to use the preview pane on the right when posting: is the text I'm about to post complete and correctly formatted?
%(COMMAND|NETWORK)
This is incorrect. Remove the % sign.
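In grok, %{...} with curly braces refers to a named pattern, whereas a plain parenthesized group is ordinary regex alternation. Purely for illustration (the surrounding pieces are my guess at your log format, not a drop-in fix):
grok {
  # alternation without the % sign, captured into a field named "component"
  match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{WORD:severity} (?<component>COMMAND|NETWORK|WRITE|ACCESS) %{GREEDYDATA:rest}" }
}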
OK, I will correct it.
The format you gave me works, but it does not parse per command; it shows a successful grok parse for all the queries.
February 20th 2017, 10:45:29.087 2017-02-20T10:38:13.437+0530 I COMMAND [conn2] command narendra.inventory command: insert { insert: "inventory", documents: 5, ordered: true } ninserted:5 keyUpdates:0 writeConflicts:0 numYields:0 reslen:25 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, Database: { acquireCount: { w: 1, W: 1 } }, Collection: { acquireCount: { w: 1, W: 1 } } } protocol:op_command 165ms
February 20th 2017, 10:45:29.084 2017-02-20T10:35:21.892+0530 I COMMAND [conn4] command narendra.restaurants command: insert { insert: "restaurants", ordered: false, documents: 1000 } ninserted:1000 keyUpdates:0 writeConflicts:0 numYields:0 reslen:40 locks:{ Global: { acquireCount: { r: 17, w: 17 } }, Database: { acquireCount: { w: 16, W: 1 } }, Collection: { acquireCount: { w: 16, W: 1 } } } protocol:op_query 202ms
February 20th 2017, 10:45:29.084 2017-02-20T10:34:46.189+0530 I COMMAND [conn3] command narendra.grades command: insert { insert: "grades", ordered: false, documents: 287 } ninserted:287 keyUpdates:0 writeConflicts:0 numYields:0 reslen:40 locks:{ Global: { acquireCount: { r: 6, w: 6 } }, Database: { acquireCount: { w: 5, W: 1 } }, Collection: { acquireCount: { w: 5, W: 1 } } } protocol:op_query 196ms
February 20th 2017, 10:45:29.083 2017-02-20T10:33:40.970+0530 I COMMAND [conn2] CMD: drop narendra.users
February 20th 2017, 10:45:29.082 2017-02-20T10:33:20.437+0530 I COMMAND [conn2] CMD: drop narendra.restaurants
February 20th 2017, 10:45:29.075 2017-02-20T10:33:01.582+0530 I COMMAND [conn2] CMD: drop narendra.inventory
February 20th 2017, 10:45:29.072 2017-02-20T10:32:20.895+0530 I COMMAND [conn2] command narendra.grades command: drop { drop: "grades" } keyUpdates:0 writeConflicts:0 numYields:0 reslen:63 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, Database: { acquireCount: { W: 1 } } } protocol:op_command 104ms
February 20th 2017, 10:45:29.067 2017-02-20T10:31:59.015+0530 I COMMAND [conn2] CMD: drop narendra.[object Object]
February 20th 2017, 10:45:29.067 2017-02-20T10:32:20.790+0530 I COMMAND [conn2] CMD: drop narendra.grades
February 20th 2017, 10:45:29.066 2017-02-20T10:31:34.639+0530 I ACCESS [conn3] Successfully authenticated as principal deepak on narendra
February 20th 2017, 10:45:29.065 2017-02-20T10:31:34.637+0530 I ACCESS [conn4] Successfully authenticated as principal deepak on narendra
I want to skip those system logs and just keep the query logs.
Can you please help me write a filter for that?
I tried as per my understanding.