Streamed Logs not loaded into Kibana

Hi there,

I am attempting to stream logs from AWS Lambda into Elasticsearch/Kibana.

I am doing so via a POST to the Elasticsearch endpoint plus the index and document type (e.g. POST https://example-es-domain.com/alblogs/alb-access-logs/).

I have verified that the lambda is sending out the request by sending to requestbin, and have verified that the information contained in each doc is correct.

However, I can't see any of the information sent to Elasticsearch being loaded into Kibana, and was wondering if I was missing something.

The full flow for the operation is:

ALB -> S3 -> AWS Lambda -> Elasticsearch
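For context, the Lambda stage of that flow might look roughly like the sketch below: it parses a single ALB access-log line into a document to index. The field positions follow the published ALB access-log format; the S3 read and the POST to Elasticsearch are omitted, and all names here are illustrative, not taken from the actual Lambda.

```python
# Hypothetical sketch of the parsing step inside the Lambda.
# Field order follows the ALB access-log entry format.
import shlex

def parse_alb_log_line(line):
    # shlex.split keeps the quoted "request" and "user_agent" fields intact
    fields = shlex.split(line)
    return {
        "type": fields[0],
        "timestamp": fields[1],          # e.g. 2019-08-06T15:54:46.701974Z
        "elb": fields[2],
        "client": fields[3],
        "target": fields[4],
        "elb_status_code": fields[8],
        "request": fields[12],
    }

sample = ('https 2019-08-06T15:54:46.701974Z app/my-alb/50dc6c495c0c9188 '
          '10.0.0.1:46532 10.0.1.5:80 0.000 0.001 0.000 200 200 34 366 '
          '"GET https://example.com:443/ HTTP/1.1" "curl/7.58.0" '
          'ECDHE-RSA-AES128-GCM-SHA256 TLSv1.2')

doc = parse_alb_log_line(sample)
```

Each resulting dict would then be POSTed to the Elasticsearch index endpoint.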

There are a lot of possible things that could be going wrong, but it seems most likely to me that something in your network configuration isn't allowing the request through to Elasticsearch. I'd start by looking at your router configuration.

Hi Chris,

Thanks for getting back to me.

I'm using the integrated AWS service, and after doing some troubleshooting set the access to open, so as far as I know there shouldn't be any issues communicating with Elasticsearch.

Have you verified that the documents are in your Elasticsearch index? You could use the CAT endpoints: https://www.elastic.co/guide/en/elasticsearch/reference/current/cat-indices.html

Have you created an Index Pattern mapping in Kibana?

Hi Wylie,

The index I am attempting to create doesn't exist in Elasticsearch, so I'm assuming the documents aren't being ingested by Elasticsearch. As such I can't create an Index Pattern.

I've just tried a manual POST via Postman to the url, and have now gotten this error:

{
    "error": {
        "root_cause": [
            {
                "type": "mapper_parsing_exception",
                "reason": "failed to parse field [timestamp] of type [date] in document with id 'TOBDa2wBWM5KlmuOLUeH'"
            }
        ],
        "type": "mapper_parsing_exception",
        "reason": "failed to parse field [timestamp] of type [date] in document with id 'TOBDa2wBWM5KlmuOLUeH'",
        "caused_by": {
            "type": "illegal_argument_exception",
            "reason": "Invalid format: \"2019-08-06T15:54:46.701974Z\" is malformed at \".701974Z\""
        }
    },
    "status": 400
}

So this looks to be the reason why the documents aren't being accepted into Elasticsearch.

The timestamp field being passed in the request is

"timestamp": "2019-08-06T15:54:46.701974Z",

Could you help me to understand what's wrong with this format? I believe "YYYY-MM-DDTHH:mm:ss.ZZ" format should be supported?

I also tried creating the index and mapping the format by running the following before inserting the documents, but am still getting the same error:

PUT alb-access-logs-
{
  "mappings": {
    "alb-access-logs": {
      "properties": {
        "timestamp": {
          "type": "date",
          "format": "yyyy-MM-dd'T'HH:mm:ss.ZZ"
        }
      }
    }
  }
}

Your dates look like they are in the date_nanos format, not in date format: https://www.elastic.co/guide/en/elasticsearch/reference/7.x/date_nanos.html

You have two options: map the field as date_nanos to keep the full precision, or map it as a plain date and lose the sub-millisecond part.

I believe date_nanos is supported in Discover and Visualize, but other products will round to the nearest millisecond on display.
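If you go the date_nanos route, a minimal mapping might look like the following sketch (index and field names taken from this thread; note the typeless mapping syntax, since date_nanos requires Elasticsearch 7.0 or later). The default format for date_nanos already accepts up to nine fractional-second digits, so no explicit format parameter should be needed:

```json
PUT alb-access-logs
{
  "mappings": {
    "properties": {
      "timestamp": {
        "type": "date_nanos"
      }
    }
  }
}
```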

Thanks Wylie.

I'd like to keep the data I'm trying to load the same if possible, so I've tried to set up an index with the timestamp field mapped to type date_nanos:

PUT nanos-alb-access-logs-
{
  "mappings": {
    "alb-access-logs": {
      "properties": {
        "timestamp": {
          "type": "date_nanos",
          "format": "yyyy-MM-dd'T'HH:mm:ss.ZZ"
        }
      }
    }
  }
}

However I'm getting an error saying that I need a handler on the field timestamp? Could you help me to format my console command correctly?

{
  "error": {
    "root_cause": [
      {
        "type": "mapper_parsing_exception",
        "reason": "No handler for type [date_nanos] declared on field [timestamp]"
      }
    ],
    "type": "mapper_parsing_exception",
    "reason": "Failed to parse mapping [alb-access-logs]: No handler for type [date_nanos] declared on field [timestamp]",
    "caused_by": {
      "type": "mapper_parsing_exception",
      "reason": "No handler for type [date_nanos] declared on field [timestamp]"
    }
  },
  "status": 400
}

Are you running Elasticsearch 7.0 or greater?

I have just checked and I'm using 6.7, so I guess I will need to look at changing the data I'm importing.

Thanks for all your help.

You could try the following:

PUT alb-access-logs
{
  "mappings": {
    "properties": {
      "timestamp": {
        "type": "date"
      }
    }
  }
}

Test with demo data

POST alb-access-logs/_doc
{
  "timestamp": "2019-08-06T15:54:46.701974Z"
}

You will lose the microseconds for sorting (milliseconds are kept), but since _source equals your input, the textual information is kept.
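To make the precision trade-off concrete, here's a small Python sketch using the timestamp from this thread: a date field indexes the value as epoch milliseconds, while _source keeps the exact string you sent.

```python
from datetime import datetime, timezone

raw = "2019-08-06T15:54:46.701974Z"

# Parse the ISO-8601 string (the trailing 'Z' means UTC)
dt = datetime.strptime(raw, "%Y-%m-%dT%H:%M:%S.%fZ").replace(tzinfo=timezone.utc)

# A `date` field indexes epoch milliseconds, so .701974 becomes .701
# for sorting and range queries...
epoch_millis = int(dt.timestamp() * 1000)

# ...but _source still returns the exact string you sent.
print(epoch_millis % 1000)  # 701 -- the trailing 974 microseconds are gone from the index
print(raw)                  # unchanged in _source
```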

GET alb-access-logs/_search
{
  "sort" : [
      {"timestamp" : {"order" : "asc", "mode" : "avg"}}
   ]
}

I'm in a hurry and only tested with 7.3, but it should work with 6.7 also.

You will lose the microseconds for sorting

That's not a concern for me, so if I can get this working with just the "date" type format that would be a good outcome for me.

I just gave this a go but unfortunately got the following error:

{
  "error": {
    "root_cause": [
      {
        "type": "mapper_parsing_exception",
        "reason": "Root mapping definition has unsupported parameters:  [timestamp : {type=date}]"
      }
    ],
    "type": "mapper_parsing_exception",
    "reason": "Failed to parse mapping [properties]: Root mapping definition has unsupported parameters:  [timestamp : {type=date}]",
    "caused_by": {
      "type": "mapper_parsing_exception",
      "reason": "Root mapping definition has unsupported parameters:  [timestamp : {type=date}]"
    }
  },
  "status": 400
}

Ah, the request needs to be adapted for ES 6.7! I'll come back to you.

Thanks!

Please try the following, tested locally on 6.8. I've used your index + type naming, which in your case are identical.

PUT a mapping for your index

PUT alb-access-logs
{
  "mappings": {
    "alb-access-logs": {
      "properties": {
        "timestamp": {
          "type": "date"
        }
      }
    }
  }
}

POST an example

POST alb-access-logs/alb-access-logs
{
  "timestamp": "2019-08-06T15:54:46.701974Z"
}

Let's check the results:

GET alb-access-logs/_search
{
  "sort" : [
      {"timestamp" : {"order" : "asc", "mode" : "avg"}}
   ]
}

this should work

This is now working, thank you!

I am going to test with Lambda and will update with the results there.

