Multiline config for filebeat

What would the multiline config be for the following stack trace? I have tried everything and nothing seems to work.
Any help is much appreciated.

Stack Trace:

TID: [0] [BAM] [2015-11-27 23:51:19,549] ERROR {org.wso2.carbon.hadoop.hive.jdbc.storage.db.DBOperation} -  Failed to write data to database {org.wso2.carbon.hadoop.hive.jdbc.storage.db.DBOperation}
org.h2.jdbc.JdbcSQLException: NULL not allowed for column "CONSUMERKEY"; SQL statement:
INSERT INTO API_RESPONSE_SUMMARY_DAY (time,resourcepath,context,servicetime,total_response_count,version,tzoffset,consumerkey,epoch,userid,apipublisher,api) VALUES (?,?,?,?,?,?,?,?,?,?,?,?) [90006-140]
        at org.h2.message.DbException.getJdbcSQLException(DbException.java:327)
        at org.h2.message.DbException.get(DbException.java:167)
        at org.h2.message.DbException.get(DbException.java:144)
        at org.h2.table.Column.validateConvertUpdateSequence(Column.java:294)
        at org.h2.table.Table.validateConvertUpdateSequence(Table.java:621)
        at org.h2.command.dml.Insert.insertRows(Insert.java:116)
        at org.h2.command.dml.Insert.update(Insert.java:82)
        at org.h2.command.CommandContainer.update(CommandContainer.java:70)
        at org.h2.command.Command.executeUpdate(Command.java:199)
        at org.h2.jdbc.JdbcPreparedStatement.executeUpdateInternal(JdbcPreparedStatement.java:141)

Can you share what you have tried so far?

I tried to format your log using 3 backticks before and after it. Can you confirm the formatted log is correct (especially the newlines)? You can check and fix the formatting by clicking the edit button (the pencil one).

Just seeing the exception is only half of the story, as one doesn't want multiline to combine normal logs by accident. Maybe you can add some more context here by including additional log output.

Assuming all log lines start with 'TID: ' (thread ID?), one can try this:

multiline:
  negate: true
  pattern: '^TID:'
  match: after
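
In other words, here are the same settings annotated (with negate: true and match: after, consecutive lines that do not match the pattern are appended to the previous line that does match):

multiline:
  negate: true      # invert the match: lines NOT starting with 'TID:' are continuations
  pattern: '^TID:'  # a line matching this starts a new event
  match: after      # continuation lines are appended after the matching line

Applied to your trace, the 'TID: ...' line would start each event and the 'at org...' frames would be merged into it.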

Which tool is generating the log? With some more context, maybe we can add this as an example to our docs.

Thank you very much for the help, steffens. To answer your question, I have given more info below:

The stack trace is more of the same; the exception is generated by one of our internal apps.

rufflin: I did try the following:

multiline:
  pattern: "TID:^"
  negate: true
  match: after

with various combinations, and it did not join the lines together.

steffens, I will try your solution and update the post.

Thanks,

Kasi

steffens:

I am still seeing them as single entries; please let me know if you need more info.

Thanks,

Kasi

I don't understand. The config I gave should combine all the lines into one event.

Check out this script applying the regex to your logs: Go Playground - The Go Programming Language
You can play with the pattern, negate and content variables. With match: after, every output line starting with false indicates a new event, and lines starting with true are merged into the multiline event.

What's your filebeat config?

Can you post some more log lines to give some more context?

Can you give an example of expected input and expected output?

Hi Steffens,

Thank you very much for the help. I have posted my Filebeat config below:

filebeat:
  prospectors:
    - input_type: log
      paths:
        - /Users/test/wso2.log

multiline:
    negate: true
    pattern: '^TID:'
    match: after

Log lines:

TID: [0] [BAM] [2015-11-27 23:51:19,549] ERROR {org.wso2.carbon.hadoop.hive.jdbc.storage.db.DBOperation} - Failed to write data to database {org.wso2.carbon.hadoop.hive.jdbc.storage.db.DBOperation}
org.h2.jdbc.JdbcSQLException: NULL not allowed for column "CONSUMERKEY"; SQL statement:
INSERT INTO API_RESPONSE_SUMMARY_DAY (time,resourcepath,context,servicetime,total_response_count,version,tzoffset,consumerkey,epoch,userid,apipublisher,api) VALUES (?,?,?,?,?,?,?,?,?,?,?,?) [90006-140]
        at org.h2.message.DbException.getJdbcSQLException(DbException.java:327)
        at org.h2.message.DbException.get(DbException.java:167)
        at org.h2.message.DbException.get(DbException.java:144)
        at org.h2.table.Column.validateConvertUpdateSequence(Column.java:294)
        at org.h2.table.Table.validateConvertUpdateSequence(Table.java:621)
        at org.h2.command.dml.Insert.insertRows(Insert.java:116)
        at org.h2.command.dml.Insert.update(Insert.java:82)
        at org.h2.command.CommandContainer.update(CommandContainer.java:70)
        at org.h2.command.Command.executeUpdate(Command.java:199)
        at org.h2.jdbc.JdbcPreparedStatement.executeUpdateInternal(JdbcPreparedStatement.java:141)
        at org.h2.jdbc.JdbcPreparedStatement.executeUpdate(JdbcPreparedStatement.java:127)
        at org.wso2.carbon.hadoop.hive.jdbc.storage.db.DBOperation.insertData(DBOperation.java:175)
        at org.wso2.carbon.hadoop.hive.jdbc.storage.db.DBOperation.writeToDB(DBOperation.java:63)
        at org.wso2.carbon.hadoop.hive.jdbc.storage.db.DBRecordWriter.write(DBRecordWriter.java:35)
        at org.apache.hadoop.hive.ql.exec.FileSinkOperator.processOp(FileSinkOperator.java:589)
        at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:467)
        at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:758)
        at org.apache.hadoop.hive.ql.exec.SelectOperator.processOp(SelectOperator.java:84)
        at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:467)
        at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:758)
        at org.apache.hadoop.hive.ql.exec.SelectOperator.processOp(SelectOperator.java:84)
        at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:467)
        at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:758)
        at org.apache.hadoop.hive.ql.exec.GroupByOperator.forward(GroupByOperator.java:964)
        at org.apache.hadoop.hive.ql.exec.GroupByOperator.processAggr(GroupByOperator.java:781)
        at org.apache.hadoop.hive.ql.exec.GroupByOperator.processOp(GroupByOperator.java:707)
        at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:467)
        at org.apache.hadoop.hive.ql.exec.ExecReducer.reduce(ExecReducer.java:248)
        at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:518)
        at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:419)
        at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:257)

Expected output:

The same as above: starting from the TID line, the entire log entry should go as a single event.

Thanks,

Kasi

Also, in the output from Filebeat that appears in the Logstash console, I am seeing the following:
"message" => "\tat org.h2.jdbc.JdbcPreparedStatement.executeUpdateInternal(JdbcPreparedStatement.java:141)",

It looks like there is a tab at the beginning of the line; I do not know if that will make a difference.

Thanks,

Kasi

It seems like the indentation of your multiline config is not correct. It should be under a prospector, but at the moment it is at the top level.

I reformatted your post above to make it visible.
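
With the multiline section nested under the prospector, your config would look like this (same paths and options, just re-indented; a sketch, not tested against your setup):

filebeat:
  prospectors:
    - input_type: log
      paths:
        - /Users/test/wso2.log
      multiline:
        negate: true
        pattern: '^TID:'
        match: after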

Thanks. I checked my config and it is indented correctly; the lines just lost their formatting when I copied them.

Can you try what happens if you use pattern: ^TID: instead of pattern: '^TID:'?
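
That is, the multiline section under the prospector would read (assuming the nesting fix above; whether dropping the quotes changes anything depends on how the YAML value is parsed):

multiline:
  negate: true
  pattern: ^TID:
  match: after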