Filebeat-loaded record fails with a mapping error, whereas the same record works from stdin

Just getting started with ES and I've hit a snag.

I have Logstash being fed data by Filebeat on server1.
Logstash then sends the data to Elasticsearch on server2.

If I set Logstash to read the following line from stdin, it loads successfully:

2016-12-02_17:08:01.541 [transaction-2] INFO web.engine.TransactionHandler - Transaction Completed : SOCKETID=52290,TXNREFERENCE=1000000000000503,CLIENTID=10000000,RESPONSECODE=00,RESPONSETEXT=APPROVED,DURATION=78,TRANSACTIONTYPE=PURCHASE,INTERFACE=CREDITCARD

If I have Filebeat read the same record from a file, it throws the following error in Elasticsearch:

[webpay_tran_track-2016.12.02/nWK4cPgJSTC2zV18vLJrtA]]], type [["log", "PURCHASE"]]
org.elasticsearch.indices.InvalidTypeNameException: mapping type name [["log", "PURCHASE"]] should not include ',' in it

    at org.elasticsearch.index.mapper.MapperService.merge(MapperService.java:296) ~[elasticsearch-5.0.1.jar:5.0.1]
    at org.elasticsearch.index.mapper.MapperService.merge(MapperService.java:277) ~[elasticsearch-5.0.1.jar:5.0.1]
    at org.elasticsearch.cluster.metadata.MetaDataMappingService$PutMappingExecutor.applyRequest(MetaDataMappingService.java:323) ~[elasticsearch-5.0.1.jar:5.0.1]
    at org.elasticsearch.cluster.metadata.MetaDataMappingService$PutMappingExecutor.execute(MetaDataMappingService.java:241) ~[elasticsearch-5.0.1.jar:5.0.1]
    at org.elasticsearch.cluster.service.ClusterService.runTasksForExecutor(ClusterService.java:555) [elasticsearch-5.0.1.jar:5.0.1]
    at org.elasticsearch.cluster.service.ClusterService$UpdateTask.run(ClusterService.java:896) [elasticsearch-5.0.1.jar:5.0.1]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:451) [elasticsearch-5.0.1.jar:5.0.1]
    at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:238) [elasticsearch-5.0.1.jar:5.0.1]
    at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:201) [elasticsearch-5.0.1.jar:5.0.1]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_111]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_111]
    at java.lang.Thread.run(Thread.java:745) [?:1.8.0_111]

[2016-12-05T16:31:02,540][DEBUG][o.e.a.b.TransportShardBulkAction] [zzmGnbh] [webpay_tran_track-2016.12.02][2] failed to execute bulk item (index) index {[webpay_tran_track-2016.12.02][["log", "PURCHASE"]][AVjNdsOuDmAdf7iQ47rE], source[{"clientid":"10000000","offset":1061,"resptext":"APPROVED","input_type":"log","txnref":"1000000000000503","source":"C:\webpay\logs\tran_track_engine_temp.log","socketval":"52290","message":"2016-12-02_17:08:01.541 [transaction-2] INFO webpay.engine.TransactionHandler - Transaction Completed : SOCKETID=52290,TXNREFERENCE=1000000000000503,CLIENTID=10000000,RESPONSECODE=00,RESPONSETEXT=APPROVED,DURATION=78,TRANSACTIONTYPE=PURCHASE,INTERFACE=CREDITCARD","type":["log","PURCHASE"],"respcode":"00","interface":"CREDITCARD","tags":["beats_input_codec_plain_applied"],"duration":"78","@timestamp":"2016-12-02T06:08:01.541Z","@version":"1","beat":{"hostname":"sydwpayapp01","name":"sydwpayapp01","version":"5.0.1"},"host":"sydwpayapp01","time":"2016-12-02_17:08:01.541"}]}

The Logstash filter for this is:

filter {
  grok {
    patterns_dir => ["./patterns"]
    match => { "message" => "%{TRANTRACK_DATE:time} %{NOTSPACE} %{NOTSPACE} %{NOTSPACE} - %{NOTSPACE} Completed :\s\w*=(%{WORD:socketval})?,\w*=(%{WORD:txnref})?,\w*=(%{WORD:clientid})?,\w*=(%{WORD:respcode})?,\w*=(%{MULTI_WORD:resptext})?,\w*=(%{WORD:duration})?,\w*=(%{TRAN_WITH_SUBTYPE:type})?,\w*=(%{WORD:interface})?" }
    overwrite => ["message"]
  }
  if [message] =~ "Transaction Start" {
    drop {}
  }
  date {
    match => ["time", "yyyy-MM-dd_HH:mm:ss.SSS"]
  }
}

I tested this on a single server and it worked fine.

Any ideas?

Hey,

the problem is not the message failing to be processed, but the type of the document, which is being set to ["log", "PURCHASE"] and triggering the error. Did you configure anything for the document_type setting in your Filebeat configuration?
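
For illustration, here is where the array comes from: Filebeat stamps each event with a type of "log" by default, and your grok pattern also captures into a field named type (%{TRAN_WITH_SUBTYPE:type}). When grok writes to a field that already holds a value, Logstash keeps both, so type ends up as ["log", "PURCHASE"]. A minimal sketch of one way around the collision (txn_type is a made-up field name, not anything standard) is to capture the transaction type into a field of its own:

filter {
  grok {
    patterns_dir => ["./patterns"]
    # Same pattern as above, but the transaction type lands in txn_type
    # (a placeholder name) instead of colliding with Filebeat's "type" field.
    match => { "message" => "%{TRANTRACK_DATE:time} %{NOTSPACE} %{NOTSPACE} %{NOTSPACE} - %{NOTSPACE} Completed :\s\w*=(%{WORD:socketval})?,\w*=(%{WORD:txnref})?,\w*=(%{WORD:clientid})?,\w*=(%{WORD:respcode})?,\w*=(%{MULTI_WORD:resptext})?,\w*=(%{WORD:duration})?,\w*=(%{TRAN_WITH_SUBTYPE:txn_type})?,\w*=(%{WORD:interface})?" }
    overwrite => ["message"]
  }
}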

--Alex

My Filebeat config is:

filebeat.prospectors:
- input_type: log
  paths:
    - C:\webpay\logs\tran_track_engine_temp.log

output.logstash:
  # The Logstash hosts
  hosts: ["localhost:5043"]
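
So to answer the question above: nothing is set for document_type, which I gather means Filebeat is using its default type of "log" (the first half of the array). For reference, in Filebeat 5.x it can be overridden per prospector, roughly like this (webpay_tran is just an example value, not my config):

filebeat.prospectors:
- input_type: log
  paths:
    - C:\webpay\logs\tran_track_engine_temp.log
  # Filebeat 5.x option: overrides the default event type of "log"
  document_type: webpay_tran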

Think I've sorted this now. In my Logstash elasticsearch output plugin I added:

document_type => "%{[@metadata][type]}"
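
In context, the output section now looks something like this (the host and index name here are placeholders, not copied from my real config):

output {
  elasticsearch {
    hosts => ["server2:9200"]                    # placeholder host
    index => "webpay_tran_track-%{+YYYY.MM.dd}"  # daily index, matching the one in the error
    # Use the type Filebeat recorded in @metadata rather than the
    # event's "type" field, which grok had turned into an array.
    document_type => "%{[@metadata][type]}"
  }
}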

Found this at: https://www.elastic.co/guide/en/beats/filebeat/current/logstash-output.html

In retrospect, it seems an obvious place to look (RTFM).

It seems to load now.
