Multiline config for filebeat

Hi Steffens,

Thank you very much for the help. I have posted my Filebeat config below:

filebeat:
  prospectors:
    - input_type: log
      paths:
        - /Users/test/wso2.log
      multiline:
        pattern: '^TID:'
        negate: true
        match: after

Log lines:

TID: [0] [BAM] [2015-11-27 23:51:19,549] ERROR {org.wso2.carbon.hadoop.hive.jdbc.storage.db.DBOperation} - Failed to write data to database {org.wso2.carbon.hadoop.hive.jdbc.storage.db.DBOperation}
org.h2.jdbc.JdbcSQLException: NULL not allowed for column "CONSUMERKEY"; SQL statement:
INSERT INTO API_RESPONSE_SUMMARY_DAY (time,resourcepath,context,servicetime,total_response_count,version,tzoffset,consumerkey,epoch,userid,apipublisher,api) VALUES (?,?,?,?,?,?,?,?,?,?,?,?) [90006-140]
at org.h2.message.DbException.getJdbcSQLException(DbException.java:327)
at org.h2.message.DbException.get(DbException.java:167)
at org.h2.message.DbException.get(DbException.java:144)
at org.h2.table.Column.validateConvertUpdateSequence(Column.java:294)
at org.h2.table.Table.validateConvertUpdateSequence(Table.java:621)
at org.h2.command.dml.Insert.insertRows(Insert.java:116)
at org.h2.command.dml.Insert.update(Insert.java:82)
at org.h2.command.CommandContainer.update(CommandContainer.java:70)
at org.h2.command.Command.executeUpdate(Command.java:199)
at org.h2.jdbc.JdbcPreparedStatement.executeUpdateInternal(JdbcPreparedStatement.java:141)
at org.h2.jdbc.JdbcPreparedStatement.executeUpdate(JdbcPreparedStatement.java:127)
at org.wso2.carbon.hadoop.hive.jdbc.storage.db.DBOperation.insertData(DBOperation.java:175)
at org.wso2.carbon.hadoop.hive.jdbc.storage.db.DBOperation.writeToDB(DBOperation.java:63)
at org.wso2.carbon.hadoop.hive.jdbc.storage.db.DBRecordWriter.write(DBRecordWriter.java:35)
at org.apache.hadoop.hive.ql.exec.FileSinkOperator.processOp(FileSinkOperator.java:589)
at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:467)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:758)
at org.apache.hadoop.hive.ql.exec.SelectOperator.processOp(SelectOperator.java:84)
at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:467)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:758)
at org.apache.hadoop.hive.ql.exec.SelectOperator.processOp(SelectOperator.java:84)
at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:467)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:758)
at org.apache.hadoop.hive.ql.exec.GroupByOperator.forward(GroupByOperator.java:964)
at org.apache.hadoop.hive.ql.exec.GroupByOperator.processAggr(GroupByOperator.java:781)
at org.apache.hadoop.hive.ql.exec.GroupByOperator.processOp(GroupByOperator.java:707)
at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:467)
at org.apache.hadoop.hive.ql.exec.ExecReducer.reduce(ExecReducer.java:248)
at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:518)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:419)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:257)

Expected output:

The entire log entry above, starting from the TID: line through the end of the stack trace, should be shipped as a single event.
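The grouping the settings above aim for can be sketched in a few lines. This is not Filebeat's actual code, just an illustration of how `pattern: '^TID:'`, `negate: true`, and `match: after` combine: a line that does NOT match the pattern is appended to (after) the previous event, so each TID: line starts a new event and the stack trace sticks to it.

```python
import re

PATTERN = re.compile(r'^TID:')

def group_multiline(lines, negate=True):
    """Toy model of Filebeat's multiline grouping with match: after."""
    events = []
    for line in lines:
        matched = bool(PATTERN.match(line))
        # negate: true flips the meaning: non-matching lines are
        # continuations of the previous event.
        is_continuation = (not matched) if negate else matched
        if is_continuation and events:
            events[-1] += "\n" + line   # match: after -> append to previous
        else:
            events.append(line)         # a new event begins here
    return events

sample = [
    "TID: [0] [BAM] [2015-11-27 23:51:19,549] ERROR ... Failed to write data",
    "org.h2.jdbc.JdbcSQLException: NULL not allowed for column ...",
    "at org.h2.message.DbException.getJdbcSQLException(DbException.java:327)",
    "TID: [0] [BAM] next entry",
]
print(len(group_multiline(sample)))  # 2 -> one event per TID: line
```

The sample lines are abbreviated from the log above; the point is that the exception line and the `at ...` frames end up inside the first event rather than as separate ones.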

Thanks,

Kasi