I need to parse MongoDB logs in Elasticsearch using Logstash, and I only want to index the commands that were fired in MongoDB. Here is a sample of my MongoDB log:
2017-02-14T14:03:11.569+0530 I CONTROL [main] Hotfix KB2731284 or later update is not installed, will zero-out data files
2017-02-14T14:03:11.569+0530 I CONTROL [initandlisten] MongoDB starting : pid=1584 port=27017 dbpath=C:\data\db\ 64-bit host=Admin-PC
2017-02-14T14:03:11.569+0530 I CONTROL [initandlisten] targetMinOS: Windows Vista/Windows Server 2008
2017-02-14T14:03:11.569+0530 I CONTROL [initandlisten] db version v3.2.1
2017-02-14T14:03:11.569+0530 I CONTROL [initandlisten] git version: a14d55980c2cdc565d4704a7e3ad37e4e535c1b2
2017-02-14T14:03:11.569+0530 I CONTROL [initandlisten] allocator: tcmalloc
2017-02-14T14:03:11.569+0530 I CONTROL [initandlisten] modules: none
2017-02-14T14:03:11.569+0530 I CONTROL [initandlisten] build environment:
2017-02-14T14:03:11.569+0530 I CONTROL [initandlisten] distarch: x86_64
2017-02-14T14:03:11.569+0530 I CONTROL [initandlisten] target_arch: x86_64
2017-02-14T14:03:11.569+0530 I CONTROL [initandlisten] options: { systemLog: { destination: "file", path: "C:\Data\log\mongo.log" } }
2017-02-14T14:03:11.569+0530 I - [initandlisten] Detected data files in C:\data\db\ created by the 'wiredTiger' storage engine, so setting the active storage engine to 'wiredTiger'.
2017-02-14T14:03:11.569+0530 W - [initandlisten] Detected unclean shutdown - C:\data\db\mongod.lock is not empty.
2017-02-14T14:03:11.569+0530 W STORAGE [initandlisten] Recovering data from the last clean checkpoint.
2017-02-14T14:03:11.569+0530 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=4G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),checkpoint=(wait=60,log_size=2GB),statistics_log=(wait=0),
2017-02-14T14:03:12.696+0530 I NETWORK [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
2017-02-14T14:03:12.696+0530 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory 'C:/data/db/diagnostic.data'
2017-02-14T14:03:12.697+0530 I NETWORK [initandlisten] waiting for connections on port 27017
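For reference, a minimal Logstash pipeline that reads a log like this could look like the sketch below. The log path comes from the systemLog options shown above (written with forward slashes for the file input); the sincedb path, Elasticsearch host, and index name are illustrative assumptions rather than settings from the original config.

input {
  file {
    # Path from the mongod systemLog options above; the file input expects forward slashes on Windows.
    path => "C:/Data/log/mongo.log"
    start_position => "beginning"
    # Assumption: discard the sincedb so the whole file is re-read on each run (Windows stand-in for /dev/null).
    sincedb_path => "NUL"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]   # assumed local Elasticsearch
    index => "mongodb-logs"       # hypothetical index name
  }
}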
Okay, then you can use a grok expression that only matches the COMMAND and NETWORK messages. The grok filter will fail for all other messages, and those events will get a _grokparsefailure tag. Those messages can then be dropped. This should be a reasonable starting point for a grok expression:
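For example, a filter along these lines; the field names (timestamp, severity, component, context, body) are illustrative choices, not required names:

filter {
  grok {
    # Only lines whose component is COMMAND or NETWORK will match;
    # every other line is tagged with _grokparsefailure.
    match => {
      "message" => "%{TIMESTAMP_ISO8601:timestamp} %{WORD:severity} (?<component>COMMAND|NETWORK)%{SPACE}\[%{DATA:context}\] %{GREEDYDATA:body}"
    }
  }
  # Drop everything that failed to match, as described above.
  if "_grokparsefailure" in [tags] {
    drop { }
  }
}

With the conditional drop in place, only the COMMAND and NETWORK events reach the output.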
If I want the start_position to begin from the 10th log line, what should I put in the config file?
The above solution is running, but Kibana still shows all of the logs.
Please just tell me the starting position to use if I want the logs from the 10th line onward.