Can we show only query logs from MongoDB in Kibana through a Logstash filter?
How?
It'll be easier to answer if we know what both a query and a non-query log entry look like.
This question rings a bell. I think it was asked here a week or two ago.
2017-02-16T17:40:41.218+0530 I COMMAND [conn5] command nik.anisha appName: "MongoDB Shell" command: insert { insert: "anisha", documents: [ { _id: ObjectId('58a596c164efb79d0b586d7b'), name: "Anisha", tecnology: "spark-scala" } ], ordered: true } ninserted:1 keysInserted:1 numYields:0 reslen:29 locks:{ Global: { acquireCount: { r: 3, w: 3 } }, Database: { acquireCount: { w: 2, W: 1 } }, Collection: { acquireCount: { w: 2 } } } protocol:op_command 130ms
This should be a query log.
I am trying to split it, but the split filter gives the same result as before.
2017-02-16T17:06:02.998+0530 I CONTROL [initandlisten]
2017-02-16T17:06:02.998+0530 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.
2017-02-16T17:06:02.998+0530 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted.
2017-02-16T17:06:02.998+0530 I CONTROL [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2017-02-16T17:06:02.998+0530 I CONTROL [initandlisten]
2017-02-16T17:06:02.999+0530 I CONTROL [initandlisten]
2017-02-16T17:06:02.999+0530 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2017-02-16T17:06:02.999+0530 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2017-02-16T17:06:02.999+0530 I CONTROL [initandlisten]
2017-02-16T17:06:02.999+0530 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2017-02-16T17:06:02.999+0530 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2017-02-16T17:06:02.999+0530 I CONTROL [initandlisten]
2017-02-16T17:06:03.004+0530 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory '/data/db/diagnostic.data'
2017-02-16T17:06:03.005+0530 I NETWORK [thread1] waiting for connections on port 27017
2017-02-16T17:06:27.219+0530 I NETWORK [conn1] received client metadata from 127.0.0.1:43388 conn1: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "3.4.2" }, os: { type: "Linux", name: "CentOS Linux release 7.3.1611 (Core) ", architecture: "x86_64", version: "Kernel 3.10.0-514.6.1.el7.x86_64" } }
2017-02-16T19:56:06.038+0530 I CONTROL [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends
2017-02-16T19:56:06.038+0530 I NETWORK [signalProcessingThread] shutdown: going to close listening sockets...
2017-02-16T19:56:06.038+0530 I NETWORK [signalProcessingThread] closing listening socket: 7
2017-02-16T19:56:06.038+0530 I NETWORK [signalProcessingThread] closing listening socket: 8
2017-02-16T19:56:06.038+0530 I NETWORK [signalProcessingThread] removing socket file: /tmp/mongodb-27017.sock
2017-02-16T19:56:06.038+0530 I NETWORK [signalProcessingThread] shutdown: going to flush diaglog...
2017-02-16T19:56:06.038+0530 I FTDC [signalProcessingThread] Shutting down full-time diagnostic data capture
2017-02-16T19:56:08.777+0530 I STORAGE [signalProcessingThread] WiredTigerKVEngine shutting down
2017-02-16T19:56:09.762+0530 I STORAGE [signalProcessingThread] shutdown: removing fs lock...
2017-02-16T19:56:09.762+0530 I CONTROL [signalProcessingThread] now exiting
2017-02-16T19:56:09.762+0530 I CONTROL [signalProcessingThread] shutting down with code:0
I don't want to show these entries.
Okay, so set up a grok filter to parse each line into fields. The timestamp obviously goes into one field, CONTROL/STORAGE/NETWORK/etc into another, the thread name (or logger name?) into a third, and so on. The grok constructor web site can help you craft the grok expression.
grok {
  match => { "message" =>
    "%{TIMESTAMP_ISO8601:timestamp} %{MONGO3_SEVERITY:severity} %{MONGO3_COMPONENT:component}%{SPACE}(?:\[%{DATA:context}\])? %{GREEDYDATA:content}"
  }
}
date {
  match => ["timestamp", "ISO8601"]
  remove_field => "timestamp"
}
if [content] =~ "\d+ms$" {
  grok {
    match => { "content" =>
      "%{WORD:command} %{WORD:database}\.(?<object>\S+) (?<slow_query>.+?) %{NUMBER:duration:int}ms"
    }
    add_tag => "slow_query"
    remove_field => "content"
  }
}
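Applied to the COMMAND line quoted above, this filter should yield an event roughly like the following (a sketch, assuming the stock MONGO3_SEVERITY and MONGO3_COMPONENT patterns that ship with Logstash; the original message field is omitted and the slow_query value trimmed for brevity):
{
    "@timestamp" => 2017-02-16T12:10:41.218Z,
      "severity" => "I",
     "component" => "COMMAND",
       "context" => "conn5",
       "command" => "command",
      "database" => "nik",
        "object" => "anisha",
    "slow_query" => "appName: \"MongoDB Shell\" command: insert { ... } protocol:op_command",
      "duration" => 130,
          "tags" => ["slow_query"]
}
Note that the date filter is what turns the parsed timestamp field into @timestamp, and the second grok only fires on lines ending in a millisecond duration.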
I am a newbie and tried this configuration in Logstash, but Kibana is not showing the results; I can only see them on the command prompt, and it is showing all logs. I just want the COMMAND logs, with only the message field, sent to Kibana.
Please help me understand ELK.
I'm not sure your grok expressions are correct. Please use a stdout { codec => rubydebug } output and report its output so we can see exactly what an event looks like.
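In config terms that means an output section along these lines (a minimal sketch; the elasticsearch output can remain alongside it):
output {
  stdout { codec => rubydebug }
}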
This is my Logstash config:
input {
  file {
    path => "C:/Data/log/mongo.log"
    start_position => "beginning"
  }
}
filter {
  grok {
    match => { "message" =>
      "%{TIMESTAMP_ISO8601:@timestamp} %{MONGO3_SEVERITY:severity} %{MONGO3_COMPONENT:component}%{SPACE}(?:\[%{DATA:context}\])? %{GREEDYDATA:content}" }
  }
  date {
    match => ["timestamp", "ISO8601"]
    remove_field => "timestamp"
  }
  if [content] =~ "\d+ms$" {
    grok {
      match => { "content" =>
        "%{WORD:command} %{WORD:database}\.(?<object>\S+) (?<slow_query>.+?) %{NUMBER:duration:int}ms"
      }
      add_tag => "slow_query"
      remove_field => "content"
    }
  }
}
output {
  elasticsearch { hosts => ["localhost:9200"] index => "logstash" }
  stdout { codec => "rubydebug" }
}
The output is:
elds:0 reslen:40 locks:{ Global: { acquireCount: { r: 17, w: 17 } }, Database: { acquireCount: { w: 16, W: 1 } }, Collection: { acquireCount: { w: 16, W: 1 } } } protocol:op_query 169ms\r",
          "tags" => []
}
{
      "severity" => "I",
          "path" => "C:/Data/log/mongo.log",
     "component" => "COMMAND",
    "@timestamp" => 2017-02-23T12:22:19.922Z,
      "@version" => "1",
          "host" => "Admin-PC",
       "context" => "conn3",
       "message" => "2017-02-23T17:09:47.759+0530 I COMMAND [conn3] command local.zips command: insert { insert: "zips", ordered: false, documents: 1000 } ninserted:1000 keyUpdates:0 writeConflicts:0 numYields:0 reslen:40 locks:{ Global: { acquireCount: { r: 16, w: 16 } }, Database: { acquireCount: { w: 16 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 134572 } }, Collection: { acquireCount: { w: 16 } } } protocol:op_query 155ms\r",
       "content" => "command local.zips command: insert { insert: "zips", ordered: false, documents: 1000 } ninserted:1000 keyUpdates:0 writeConflicts:0 numYields:0 reslen:40 locks:{ Global: { acquireCount: { r: 16, w: 16 } }, Database: { acquireCount: { w: 16 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 134572 } }, Collection: { acquireCount: { w: 16 } } } protocol:op_query 155ms\r",
          "tags" => []
}
{
      "severity" => "I",
          "path" => "C:/Data/log/mongo.log",
     "component" => "COMMAND",
    "@timestamp" => 2017-02-23T12:22:19.927Z,
      "@version" => "1",
          "host" => "Admin-PC",
       "context" => "conn6",
       "message" => "2017-02-23T17:09:47.759+0530 I COMMAND [conn6] command local.zips command: insert { insert: "zips", ordered: false, documents: 1000 } ninserted:1000 keyUpdates:0 writeConflicts:0 numYields:0 reslen:40 locks:{ Global: { acquireCount: { r: 16, w: 16 } }, Database: { acquireCount: { w: 16 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 109191 } }, Collection: { acquireCount: { w: 16 } } } protocol:op_query 129ms\r",
       "content" => "command local.zips command: insert { insert: "zips", ordered: false, documents: 1000 } ninserted:1000 keyUpdates:0 writeConflicts:0 numYields:0 reslen:40 locks:{ Global: { acquireCount: { r: 16, w: 16 } }, Database: { acquireCount: { w: 16 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 109191 } }, Collection: { acquireCount: { w: 16 } } } protocol:op_query 129ms\r",
          "tags" => []
}
{
      "severity" => "I",
          "path" => "C:/Data/log/mongo.log",
     "component" => "COMMAND",
    "@timestamp" => 2017-02-23T12:22:19.927Z,
      "@version" => "1",
          "host" => "Admin-PC",
       "context" => "conn5",
       "message" => "2017-02-23T17:09:47.796+0530 I COMMAND [conn5] command local.zips command: insert { insert: "zips", ordered: false, documents: 1000 } ninserted:1000 keyUpdates:0 writeConflicts:0 numYields:0 reslen:40 locks:{ Global: { acquireCount: { r: 16, w: 16 } }, Database: { acquireCount: { w: 16 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 136172 } }, Collection: { acquireCount: { w: 16 } } } protocol:op_query 193ms\r",
       "content" => "command local.zips command: insert { insert: "zips", ordered: false, documents: 1000 } ninserted:1000 keyUpdates:0 writeConflicts:0 numYields:0 reslen:40 locks:{ Global: { acquireCount: { r: 16, w: 16 } }, Database: { acquireCount: { w: 16 }, acquireWaitCount: { w: 1 }, timeAcquiringMicros: { w: 136172 } }, Collection: { acquireCount: { w: 16 } } } protocol:op_query 193ms\r",
          "tags" => []
}
{
      "severity" => "I",
          "path" => "C:/Data/log/mongo.log",
     "component" => "CONTROL",
    "@timestamp" => 2017-02-23T12:57:54.016Z,
      "@version" => "1",
          "host" => "Admin-PC",
       "context" => "thread1",
       "message" => "2017-02-23T18:27:53.088+0530 I CONTROL [thread1] Ctrl-C signal\r",
       "content" => "Ctrl-C signal\r",
          "tags" => []
}
{
      "severity" => "I",
          "path" => "C:/Data/log/mongo.log",
     "component" => "CONTROL",
    "@timestamp" => 2017-02-23T12:57:54.019Z,
      "@version" => "1",
          "host" => "Admin-PC",
       "context" => "consoleTerminate",
       "message" => "2017-02-23T18:27:53.088+0530 I CONTROL [consoleTerminate] got CTRL_C_EVENT, will terminate after current cmd ends\r",
       "content" => "got CTRL_C_EVENT, will terminate after current cmd ends\r",
          "tags" => []
}
{
      "severity" => "I",
          "path" => "C:/Data/log/mongo.log",
     "component" => "FTDC",
    "@timestamp" => 2017-02-23T12:57:54.020Z,
      "@version" => "1",
          "host" => "Admin-PC",
       "context" => "consoleTerminate",
       "message" => "2017-02-23T18:27:53.088+0530 I FTDC [consoleTerminate] Shutting down full-time diagnostic data capture\r",
       "content" => "Shutting down full-time diagnostic data capture\r",
          "tags" => []
}
{
      "severity" => "I",
          "path" => "C:/Data/log/mongo.log",
     "component" => "CONTROL",
    "@timestamp" => 2017-02-23T12:57:54.020Z,
      "@version" => "1",
          "host" => "Admin-PC",
       "context" => "consoleTerminate",
       "message" => "2017-02-23T18:27:53.094+0530 I CONTROL [consoleTerminate] now exiting\r",
       "content" => "now exiting\r",
          "tags" => []
}
{
      "severity" => "I",
          "path" => "C:/Data/log/mongo.log",
     "component" => "NETWORK",
    "@timestamp" => 2017-02-23T12:57:54.021Z,
      "@version" => "1",
          "host" => "Admin-PC",
       "context" => "consoleTerminate",
       "message" => "2017-02-23T18:27:53.094+0530 I NETWORK [consoleTerminate] shutdown: going to close listening sockets...\r",
       "content" => "shutdown: going to close listening sockets...\r",
          "tags" => []
}
{
      "severity" => "I",
          "path" => "C:/Data/log/mongo.log",
     "component" => "NETWORK",
    "@timestamp" => 2017-02-23T12:57:54.021Z,
      "@version" => "1",
          "host" => "Admin-PC",
       "context" => "consoleTerminate",
       "message" => "2017-02-23T18:27:53.095+0530 I NETWORK [consoleTerminate] closing listening socket: 404\r",
       "content" => "closing listening socket: 404\r",
          "tags" => []
}
{
      "severity" => "I",
          "path" => "C:/Data/log/mongo.log",
     "component" => "NETWORK",
    "@timestamp" => 2017-02-23T12:57:54.022Z,
      "@version" => "1",
          "host" => "Admin-PC",
       "context" => "consoleTerminate",
       "message" => "2017-02-23T18:27:53.095+0530 I NETWORK [consoleTerminate] shutdown: going to flush diaglog...\r",
       "content" => "shutdown: going to flush diaglog...\r",
          "tags" => []
}
To skip all events from components other than COMMAND, add this conditional filter:
if [component] != "COMMAND" {
  drop { }
}
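Placement matters here: the conditional goes at the top level of the filter block, after grok has populated the component field, never inside another plugin's braces. A minimal sketch combining it with the earlier grok filter:
filter {
  grok {
    match => { "message" =>
      "%{TIMESTAMP_ISO8601:timestamp} %{MONGO3_SEVERITY:severity} %{MONGO3_COMPONENT:component}%{SPACE}(?:\[%{DATA:context}\])? %{GREEDYDATA:content}" }
  }
  # Keep only COMMAND events; everything else is dropped here.
  if [component] != "COMMAND" {
    drop { }
  }
}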
input {
  file {
    path => "C:/Data/log/mongodb.log"
    start_position => "beginning"
  }
}
filter {
  grok {
    match => { "message" =>
      "%{TIMESTAMP_ISO8601:@timestamp} %{MONGO3_SEVERITY:severity} %{MONGO3_COMPONENT:component}%{SPACE}(?:\[%{DATA:context}\])? %{GREEDYDATA:content}" }
    if [ component ] != "COMMAND" {
      drop { }
    }
  }
  mutate {
    remove_field => [ "message" ]
  }
}
output {
  elasticsearch { hosts => ["localhost:9000"] index => "log_mongo" }
  stdout { codec => "rubydebug" }
}
Is this correct?
I did not get any logs after this...
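That config has a few problems that would keep Logstash from starting or indexing: the conditional sits inside the grok block, which is a syntax error; field references are written without spaces ([component], not [ component ]); grok should capture into a plain timestamp field and let a date filter set @timestamp; and Elasticsearch's default HTTP port is 9200, not 9000. A corrected sketch under those assumptions:
input {
  file {
    path => "C:/Data/log/mongodb.log"
    start_position => "beginning"
  }
}
filter {
  grok {
    match => { "message" =>
      "%{TIMESTAMP_ISO8601:timestamp} %{MONGO3_SEVERITY:severity} %{MONGO3_COMPONENT:component}%{SPACE}(?:\[%{DATA:context}\])? %{GREEDYDATA:content}" }
  }
  # Conditionals live at the top level of the filter block,
  # and field references take no spaces.
  if [component] != "COMMAND" {
    drop { }
  }
  # Set @timestamp from the parsed log time.
  date {
    match => ["timestamp", "ISO8601"]
    remove_field => "timestamp"
  }
  mutate {
    remove_field => [ "message" ]
  }
}
output {
  # Elasticsearch listens on 9200 by default.
  elasticsearch { hosts => ["localhost:9200"] index => "log_mongo" }
  stdout { codec => "rubydebug" }
}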
This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.