Debugging Elasticsearch output in Logstash

Hello.

We have an ELK Stack v7.5.1 deployed in a k8s cluster.
We have Filebeat gathering k8s/docker container logs.
I've confirmed via the stdout output that Filebeat is shipping the needed logs and that Logstash is receiving them.
But I'm not able to find them in Kibana.
My Logstash output config is as follows:

output {
  stdout { codec => rubydebug }
  elasticsearch {
    hosts => ["${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT}"]
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
  }
}
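The index name should resolve to daily filebeat-YYYY.MM.dd indices, so a quick way to confirm they are being created (assuming the default beat name of filebeat) is to list them from dev tools:

GET _cat/indices/filebeat-*?v&s=index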

I enabled debug-level logging but I am not seeing any errors in the Elasticsearch or Logstash logs.
Can someone point me in the right direction to find out the problem?

Thanks!

Hi Jezreel,

Welcome to the Elastic community! :slight_smile:

Could you go to dev tools and check if you are seeing any logs for your index?

GET <index_name>/_search
{
    "query": {
        "match_all": {}
    }
}

Just wanted to see if it is a timestamp issue.
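For example, sorting the newest documents to the top shows whether their @timestamp values are actually current; if they are not, the documents will fall outside the time range selected in Kibana and will not show up in Discover:

GET <index_name>/_search
{
  "size": 5,
  "sort": [
    { "@timestamp": { "order": "desc" } }
  ]
}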

Hi,

Thanks for responding.
Yes, I am able to see logs.
Since the logs are being collected by Filebeat, these are logs from all deployed containers.
We just discovered that the logs of one particular application container are not being sent.
The application is generating at least 50-100 log lines per minute, and I can see them being printed out in Logstash, but I am unable to find them in Kibana.
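To check whether those documents are indexed at all, a query along these lines should confirm it (assuming the events carry the usual Filebeat Kubernetes metadata field kubernetes.container.name; substitute the actual container name):

GET filebeat-*/_search
{
  "size": 5,
  "query": {
    "match": {
      "kubernetes.container.name": "<container_name>"
    }
  }
}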

By the way, what do you mean by timestamp issue?

Hmmm, after your response, @NerdSec, I started investigating the timestamps in the Logstash logs.
Using grep, I am seeing that the value of the @timestamp field is not the current date and time.
Some values are even months old.
I thought the @timestamp field was the date and time when the event was received from Filebeat?

Hi,

Yes, it is the timestamp of the container/local system on which the agent is installed. Could you verify that the system time is synced with NTP?

Hi @NerdSec.

So I've traced this. It seems that Filebeat is sending an incorrect @timestamp value, which is then passed on to Logstash and Elasticsearch. I believe this is the cause of the problem.
For various reasons, we are not able to add parsing configuration to Filebeat at this time.
I am thinking of using an ingest pipeline to add/set a different timestamp field, but I'm having trouble implementing it.

How do I access the date generated by my Painless script in the set processor?

Anyways, thanks for helping me find the cause of the problem.

So I just found out that the .now() function is not supported.
Any suggestions for a workaround?

Tried the following:

PUT _ingest/pipeline/indexed_at
{
  "description": "Adds indexed_at timestamp to documents",
  "processors": [
    {
      "script": {
        "lang": "painless",
        "source": """
          ctx._source.indexed_at = ZonedDateTime.ofInstant(Instant.now(), ZoneId.of('Z'));
          """
      }
    }
  ]
}
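One workaround I'm looking at instead of the script is the ingest metadata timestamp, which a set processor can read via {{_ingest.timestamp}} (not tested on our 7.5.1 cluster yet):

PUT _ingest/pipeline/indexed_at
{
  "description": "Adds indexed_at timestamp to documents",
  "processors": [
    {
      "set": {
        "field": "indexed_at",
        "value": "{{_ingest.timestamp}}"
      }
    }
  ]
}

The pipeline could then be attached to the elasticsearch output in Logstash with its pipeline option (pipeline => "indexed_at").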

This should never happen. Are you sure the local time on the server is correct? What is the output of the date command in the terminal?

For the ingest pipeline, I recommend opening a new issue in the Elasticsearch section of the forum. :slight_smile:

Yeah, I checked the output of the date command on both the pod and the EC2 instance, and it is correct.
But when I checked the stdout of Filebeat, the @timestamp field is off.
Some dates are even a few months old.

Sample grep'd logs (screenshot attached showing the stale @timestamp values).
