Some logs are not being ingested into an Elasticsearch index

Hi,

I am experiencing a weird problem with Elasticsearch. For one particular index, some logs are being ingested and some are not. All other indices are working perfectly fine.

I thought the problem was with index.mapping.total_fields.limit, so I changed it to 2000 using:

PUT my_index/_settings
{
  "index.mapping.total_fields.limit": 2000
}
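To confirm the setting took effect, I ran a quick check (assuming Kibana Dev Tools syntax; my_index stands in for the real index name):

# Returns the index settings as flattened dotted keys
GET my_index/_settings?flat_settings=true

The response lists "index.mapping.total_fields.limit": "2000", so the new limit is in place.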

I don't have an index template. Do I need one? Please let me know what the issue could be.
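For reference, this is roughly what I imagine such a template would look like (just a sketch, assuming a composable index template on Elasticsearch 7.8+; my_logs_template and the index pattern are made-up names):

# Apply the raised field limit to every new index matching the pattern
PUT _index_template/my_logs_template
{
  "index_patterns": ["my_index*"],
  "template": {
    "settings": {
      "index.mapping.total_fields.limit": 2000
    }
  }
}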

Thanks

Too few details.

  1. How do you feed data into ES? Logstash?
  2. Can you identify the logs that are not ingested?

Yes, with Logstash.

Syslog -> Logstash1 -> Logstash2 -> Elasticsearch

Yes, I can. The format of the missing and non-missing logs is exactly the same.

I think there is a serious issue with this particular index, but I don't see any errors in the Elasticsearch log files.

What about the Logstash logs?

Maybe the format is the same, but the values cause the problem.
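For example, if a field is mapped as a numeric type and one of the missing logs carries text in that field, Elasticsearch rejects the document with a mapper_parsing_exception. You can compare the mapped types against the actual values in the missing logs (my_index is a placeholder):

# Shows the type of every mapped field in the index
GET my_index/_mapping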

There are no errors in the Logstash logs either.

What else can I do to find the issue?

I would try manually inserting one of the non-ingested docs into ES and checking the response.
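Something like this, assuming Kibana Dev Tools syntax; the body here is a made-up example, so paste the actual contents of one missing event instead:

# Index a single document and let ES report exactly why it fails
POST my_index/_doc
{
  "message": "paste one of the missing log events here"
}

If the document is rejected, the response contains the precise reason, e.g. a mapper_parsing_exception or a total-fields-limit error.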

Okay, but why would the index perform so slowly if the documents are not ingested? Shouldn't there be some underlying index-related issue?
