Elasticsearch crashes on filebeat pipeline

I'd like some input on why our Elasticsearch cluster is crashing. Below is an excerpt of the stack trace; the full trace exceeds the message body limit, so I can't paste it all (how do I upload the full log file?).

[2018-06-20T03:52:58,913][ERROR][o.e.b.ElasticsearchUncaughtExceptionHandler] [ip-172-31-7-59.ap-southeast-2.compute.internal] fatal error in thread [elasticsearch[ip-172-31-7-59.ap-southeast-2.compute.internal][masterService#updateTask][T#1]], exiting
java.lang.StackOverflowError: null
at java.util.HashMap.hash(Unknown Source) ~[?:1.8.0_171]
at java.util.LinkedHashMap.get(Unknown Source) ~[?:1.8.0_171]
at org.elasticsearch.index.mapper.ObjectMapper$TypeParser.parseNested(ObjectMapper.java:214) ~[elasticsearch-6.3.0.jar:6.3.0]
at org.elasticsearch.index.mapper.ObjectMapper$TypeParser.parse(ObjectMapper.java:168) ~[elasticsearch-6.3.0.jar:6.3.0]
.......
.......
at org.elasticsearch.index.mapper.ObjectMapper$TypeParser.parseProperties(ObjectMapper.java:278) ~[elasticsearch-6.3.0.jar:6.3.0]
at org.elasticsearch.index.mapper.ObjectMapper$TypeParser.parseObjectOrDocumentTypeProperties(ObjectMapper.java:199) ~[elasticsearch-6.3.0.jar:6.3.0]
at org.elasticsearch.index.mapper.RootObjectMapper$TypeParser.parse(RootObjectMapper.java:131) ~[elasticsearch-6.3.0.jar:6.3.0]
at org.elasticsearch.index.mapper.DocumentMapperParser.parse(DocumentMapperParser.java:112) ~[elasticsearch-6.3.0.jar:6.3.0]
at org.elasticsearch.index.mapper.DocumentMapperParser.parse(DocumentMapperParser.java:92) ~[elasticsearch-6.3.0.jar:6.3.0]
at org.elasticsearch.index.mapper.MapperService.parse(MapperService.java:736) ~[elasticsearch-6.3.0.jar:6.3.0]
at org.elasticsearch.cluster.metadata.MetaDataMappingService$PutMappingExecutor.applyRequest(MetaDataMappingService.java:264) ~[elasticsearch-6.3.0.jar:6.3.0]
at org.elasticsearch.cluster.metadata.MetaDataMappingService$PutMappingExecutor.execute(MetaDataMappingService.java:230) ~[elasticsearch-6.3.0.jar:6.3.0]
at org.elasticsearch.cluster.service.MasterService.executeTasks(MasterService.java:630) ~[elasticsearch-6.3.0.jar:6.3.0]
at org.elasticsearch.cluster.service.MasterService.calculateTaskOutputs(MasterService.java:267) ~[elasticsearch-6.3.0.jar:6.3.0]
at org.elasticsearch.cluster.service.MasterService.runTasks(MasterService.java:197) ~[elasticsearch-6.3.0.jar:6.3.0]
at org.elasticsearch.cluster.service.MasterService$Batcher.run(MasterService.java:132) ~[elasticsearch-6.3.0.jar:6.3.0]
at org.elasticsearch.cluster.service.TaskBatcher.runIfNotProcessed(TaskBatcher.java:150) ~[elasticsearch-6.3.0.jar:6.3.0]
at org.elasticsearch.cluster.service.TaskBatcher$BatchedTask.run(TaskBatcher.java:188) ~[elasticsearch-6.3.0.jar:6.3.0]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:625) ~[elasticsearch-6.3.0.jar:6.3.0]
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:244) ~[elasticsearch-6.3.0.jar:6.3.0]
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:207) ~[elasticsearch-6.3.0.jar:6.3.0]
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) ~[?:1.8.0_171]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) ~[?:1.8.0_171]
at java.lang.Thread.run(Unknown Source) [?:1.8.0_171]
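
If I'm reading the trace correctly, the overflow happens inside ObjectMapper.TypeParser while the master node applies a put-mapping request, which would suggest the mapping of the affected index has become very deeply nested. As a quick check I dump the current mapping like this (assuming the default filebeat-* index naming, which may differ in your setup):

# assumes the default filebeat-* index naming
GET filebeat-*/_mapping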

This fatal error occurs when ingesting logs into Elasticsearch through an ingest pipeline. The crash only happens when parsing logs from one specific app; for other apps it works fine. Below is the pipeline, which does some fairly simple parsing. Can you point out why this is happening?

"standard_mule_log_parser": {
"description": "Pipeline that parses standard format logs and splits log message into fields",
"processors": [
{
"grok": {
"field": "message",
"patterns": [
"%{TIMESTAMP_ISO8601:log-timestamp} %{GRABBETWEENBRACKETS:threadname} %{LOGLVL:loglevel} %{UNTILNEXTSPACECHAR:classname} %{MULTILINE:message}"
],
"pattern_definitions": {
"GRABBETWEENBRACKETS": """[(.)]""",
"LOGLVL": "(INFO|ERROR|WARN|DEBUG|TRACE)",
"UNTILNEXTSPACECHAR": """(\s
[^\s]+)""",
"MULTILINE": "(?m).*"
}
}
},
{
"gsub": {
"field": "message",
"pattern": "-",
"replacement": "",
"ignore_missing": true
}
},
{
"kv": {
"field": "message",
"field_split": ", ",
"value_split": ": ",
"ignore_failure": true
}
},
{
"trim": {
"field": "message",
"ignore_missing": true
}
},
{
"trim": {
"field": "classname",
"ignore_missing": true
}
},
{
"rename": {
"field": " System",
"target_field": "System",
"ignore_missing": true
}
},
{
"grok": {
"field": "source",
"patterns": [
"%{APPNAME:app_name}"
],
"pattern_definitions": {
"APPNAME": "(test|test1)"
}
}
},
{
"convert": {
"field": "Elapsed",
"type": "integer",
"ignore_missing": true
}
},
{
"date": {
"field": "log-timestamp",
"formats": [
"yyyy-MM-dd HH:mm:ss,SSS"
],
"timezone": "Pacific/Auckland"
}
}
]
}
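
For reference, this is roughly how I exercise the pipeline with the simulate API; the sample document below (the source path and the log line itself) is made up for illustration and is not from the real app:

# sample document only; the source path and message values are made up
POST _ingest/pipeline/standard_mule_log_parser/_simulate
{
  "docs": [
    {
      "_source": {
        "source": "/var/log/apps/test.log",
        "message": "2018-06-20 03:52:58,913 [main] INFO com.example.MyClass System: test, Elapsed: 120"
      }
    }
  ]
}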

Could you post the complete stack trace, by any chance?
