Kafka can collect Java stack logs but Elasticsearch cannot. How to fix?

Kafka can collect Java stack logs but Elasticsearch cannot. How to fix?

It's not clear what you are asking, sorry.

The log below is collected into Kafka and can also be transferred to Logstash:

| offset: 6686403225 isValid: true crc: null keySize: -1 valueSize: 2976 CreateTime: 1683261945722 baseOffset: 6686403220 lastOffset: 6686403233 baseSequence: -1 lastSequence: -1 producerEpoch: -1 partitionLeaderEpoch: 2 batchSize: 5985 magic: 2 compressType: SNAPPY position: 19496109 sequence: -1 headerKeys: payload: {"agent":{"version":"7.14.0","id":"xxx","hostname":"filebeat-filebeat-l7j7n","ephemeral_id":"xxx","name":"filebeat-filebeat-l7j7n","type":"filebeat"},"datacenter":"xxx","xxx":"xxx","xxx":"xxx","xxx":"xxx","@timestamp":"2023-05-05T04:45:43.633Z","time":"2023-05-05 12:45:43.633","logger":"main.java.com.gw.datacenter.order.service.OrderServiceImpl","uuid":"xxx","stackTrace":"java.lang.Exception: JDBOrderRecords status error8006\n\tat main.java.com.gw.common.system.parse.JDBOrderHandle.getJDBOrderRecords(JDBOrderHandle.java:148)\n\tat main.java.com.gw.datacenter.order.service.OrderServiceImpl.insertOrder4JDB(OrderServiceImpl.java:201)\n\tat main.java.com.gw.datacenter.order.service.OrderServiceImpl$$FastClassBySpringCGLIB$$5d1c0a15.invoke()\n\tat org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:218)\n\tat org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:779)\n\tat org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)\n\tat org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:750)\n\tat org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97)\n\tat org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)\n\tat org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:750)\n\tat org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:692)\n\tat main.java.com.gw.datacenter.order.service.OrderServiceImpl$$EnhancerBySpringCGLIB$$1ff87b87.insertOrder4JDB()\n\tat main.java.com.gw.common.system.timer.Order4JDBTimer.lambda$execute$0(Order4JDBTimer.java:98)\n\tat java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)\n\tat java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.base/java.lang.Thread.run(Thread.java:829)\n","tags":["beats_input_codec_plain_applied"],"thread":"pool-6-thread-107","ecs":{"version":"1.10.0"},"stack":"main.java.com.gw.datacenter.order.service.OrderServiceImpl:227[insertOrder4JDB]","level":"ERROR","xxx":"uat","log.type":"non-tracing","message":"JDBOrderRecords status error8006","fields":{"tags":"xxx"},"container":{"runtime":"docker","id":"xxx","image":{"name":"xxx"}},"xxx":"xxx"}

But the log does not show up when it goes from Logstash to Elasticsearch.

What is inserting data into Kafka? Is it in exactly this format there?

What does your Logstash config look like?

Are there any errors in the Logstash logs?

Can you output the final event processed by Logstash to file so we can see what the fully processed event looks like?
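
Something like this added to your outputs would do it; the path here is just an example, so point it at any location Logstash can write to:

output {
  # debug sketch: write every fully processed event to a local file
  # so we can see exactly what Logstash is trying to send to Elasticsearch
  file {
    path => "/var/log/logstash/debug/all-events-%{+YYYY.MM.dd}.log"
    codec => rubydebug
  }
}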

The Logstash configuration for Kafka is below:

input {
  kafka {
    bootstrap_servers => "kafka-ip:port"
    topics => ["xxxx","xxxx"]
    codec => json
    session_timeout_ms => "30000"
    max_poll_records => "200"
    max_poll_interval_ms => "600000"
    fetch_min_bytes => "1"
    request_timeout_ms => "305000"
    auto_offset_reset => "latest"
    group_id => "xxxxxxx"
  }
}

output {
  if [fields][tags] in ["xxxx","xxxxx"] {
    if "_dateparsefailure" not in [tags] and "_grokparsefailure" not in [tags] and "_timestampfailure" not in [tags] {
      elasticsearch {
        hosts => ["es-ip:port","es-ip:port"]
        index => "uat%{[k8s.container]}%{+YYYY.MM.dd}"
        user => "xxxx"
        password => "xxxxx"
        ssl => true
        ssl_certificate_verification => true
        cacert => "/etc/logstash/ssl/ca.pem"
      }
    } else {
      file {
        path => "/var/log/logstash/error/uat%{[tags]}_error-%{+YYYY.MM.dd}.log"
        codec => rubydebug
      }
    }
  }
}

This is the error in Logstash:

elasticsearch - Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:id=>"2023-05-05T05:52:40.099Z%{fingerprint}", :index=>"%{[fields][tags]}-2023.05", :routing=>nil}, {"message"=>""stack":"main.java.com.gw.datacenter.order.service.OrderServiceImpl:227[insertOrder4JDB]","time":"2023-05-05 12:45:38.560","xxx":"xxx","xxx":"xxx","fields":{"tags":"xxx"},"@timestamp":"2023-05-05T04:45:38.560Z","xxx":"uat","tags":["beats_input_codec_plain_applied"],"datacenter":"xxx","thread":"pool-6-thread-107","uuid":"xxx","message":"JDBOrderRecords status error8006","container":{"image":{"name":"img.xxx"},"id":"xxx","runtime":"docker"},"logger":"main.java.com.gw.datacenter.order.service.OrderServiceImpl","agent":{"hostname":"filebeat-filebeat-l7j7n","type":"filebeat","version":"7.14.0","name":"filebeat-filebeat-l7j7n","ephemeral_id":"xxx","id":"xxx"},"k8s.node":"xxx","level":"ERROR","ecs":{"version":"1.10.0"},"xxx":"xxx","log.type":"non-tracing","stackTrace":"java.lang.Exception: JDBOrderRecords status error8006\n\tat main.java.com.gw.common.system.parse.JDBOrderHandle.getJDBOrderRecords(JDBOrderHandle.java:148)\n\tat main.java.com.gw.datacenter.order.service.OrderServiceImpl.insertOrder4JDB(OrderServiceImpl.java:201)\n\tat main.java.com.gw.datacenter.order.service.OrderServiceImpl$$FastClassBySpringCGLIB$$5d1c0a15.invoke()\n\tat org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:218)\n\tat org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:779)\n\tat org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)\n\tat org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:750)\n\tat org.springframework.aop.interceptor.ExposeInvocationInterceptor.invoke(ExposeInvocationInterceptor.java:97)\n\tat org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)\n\tat org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.proceed(CglibAopProxy.java:750)\n\tat org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:692)\n\tat main.java.com.gw.datacenter.order.service.OrderServiceImpl$$EnhancerBySpringCGLIB$$1ff87b87.insertOrder4JDB()\n\tat main.java.com.gw.common.system.timer.Order4JDBTimer.lambda$execute$0(Order4JDBTimer.java:98)\n\tat java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)\n\tat java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.base/java.lang.Thread.run(Thread.java:829)\n"", "@timestamp"=>2023-05-05T05:52:40.099Z, "path"=>"/usr/share/logstash/bin/bb.txt", "@version"=>"1", "host"=>"xxx"}], :response=>{"index"=>{"_index"=>"%{[fields][tags]}-2023.05", "_type"=>"_doc", "_id"=>"2023-05-05T05:52:40.099Z%{fingerprint}", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"object mapping for [host] tried to parse field [host] as object, but found a concrete value"}}}}

You have a mapping conflict. In the index you are indexing this into, the host field is defined as an object, but in the event shown it is a string.

You will need to change the structure of the event, as you cannot have multiple mappings for the same field.

All the "xxx" is actually us who edited the actual text before posting here to avoid leaking confidential info. Any other possibilities as to why the log cannot go into Elasticsearch?

The error message is very clear. The data (the host field should be an object, not a string) does not match what is already in the index and therefore needs to be changed.

Try changing "host": "xxx" to "host": {"name":"xxx"} or something similar. You may want to check the mappings to see how the host field is structured in other documents.
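
If it is easier to do that in Logstash than at the source, a mutate filter along these lines is one way to sketch it (this assumes the plain string currently in host should end up under host.name; check your index mapping for the exact structure):

filter {
  # sketch: move the plain-string "host" value into [host][name] so the
  # event matches an index where "host" is mapped as an object
  mutate {
    rename => { "host" => "[host][name]" }
  }
}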

We already tried "host": {"name":"xxx"}, but it still doesn't show in Elasticsearch.

We tried configuring Logstash to output the log both to Elasticsearch and to a file on the local machine.
The local file is created normally, but the log still does not appear in Elasticsearch.

Please show what you changed and what the result was. Did the error message change?

Hi @Alwyn_Tiu,

Can you share those logs as code or text snippets rather than images as you did previously?

The problem is solved. We found out that there was an "ignore_above: 1500" setting in the configuration. After we adjusted it to 3500, the log now shows up in Elasticsearch.

We'd like to ask: is ignore_above a default restriction on the length of a log? And are there other restrictions that block logs above a configured length and then record an error message?
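
For reference, a Logstash-side alternative to raising ignore_above would be to cut the long field down before it is indexed, for example with the truncate filter; using stackTrace as the long field here is only an assumption:

filter {
  # assumed sketch: truncate the long field in Logstash so it stays under
  # the keyword ignore_above limit in the index mapping
  truncate {
    fields => ["stackTrace"]
    length_bytes => 1500
  }
}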
