My Logstash pipeline reads data from Kafka and sends it to Elasticsearch. I'm now running into a problem: when the target index is frozen in Elasticsearch, Logstash throws an exception, stops consuming the next Kafka offset, and keeps throwing the same exception over and over. Is there any configuration I can change so that Logstash ignores these exceptions and the Kafka consumer offset keeps advancing?
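For context, the pipeline looks roughly like the sketch below. This is only a minimal illustration of the setup described above; the broker address, topic, consumer group, Elasticsearch host, and index name are placeholders, not my real settings.

```
input {
  kafka {
    bootstrap_servers => "localhost:9092"   # placeholder Kafka broker address
    topics            => ["my-topic"]       # placeholder topic name
    group_id          => "logstash"         # placeholder consumer group
    auto_offset_reset => "latest"
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]      # placeholder Elasticsearch host
    index => "my-index"                     # placeholder index name
  }
}
```

Would something like enabling the dead letter queue (dead_letter_queue.enable in logstash.yml) together with the dlq_custom_codes option on the elasticsearch output be the right direction for routing these rejected events aside, or is there a different setting intended for this case? I'm not sure which response code the frozen-index rejection comes back with, so that part is just a guess.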