Ingest pipeline drops the @timestamp field while loading Elasticsearch's server and slow logs using Filebeat

The ingest pipeline drops the @timestamp field while loading Elasticsearch's server and slow logs using Filebeat:

  "filebeat-7.9.0-elasticsearch-server-pipeline" : {
"description" : "Pipeline for parsing elasticsearch server logs",
"processors" : [
  {
    "rename" : {
      "field" : "@timestamp",
      "target_field" : "event.created"
    }
  },

So the issue is: I loaded the default index template, ingest pipeline, and index pattern.

When I go to Discover, I do not see any logs for dataset:elasticsearch. This is because the index pattern uses @timestamp as its time field, but the logs ingested via Filebeat and the elasticsearch ingest pipeline do not have that field, since it gets renamed to event.created.

To keep the data ECS-compliant, the @timestamp field should not be renamed.
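
One way to confirm this (the index pattern below is just an example) is to search for documents that were indexed without @timestamp:

# hypothetical index pattern; adjust to your setup
GET filebeat-7.9.0-*/_search
{
  "size": 1,
  "query": {
    "bool": {
      "must_not": { "exists": { "field": "@timestamp" } }
    }
  }
}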

Let me know if this is a bug or expected behavior.

Hi!

Are you running Filebeat or Elastic Agent? I ask because datasets are introduced with Agent and Ingest Management.

C.

@ChrsMark I am running Filebeat. And as per ECS it should have @timestamp, shouldn't it?

Hi,

@timestamp should be in the document. Here is the flow of all the timestamps.

  1. ES writes a timestamp in the log
  2. Filebeat reads the log entry and records that time as @timestamp
  3. Filebeat sends the log entry to the ingest node
  4. The ingest node records the time as _ingest.timestamp
  5. The ingest node runs pipeline.yml
  6. pipeline.yml copies _ingest.timestamp to event.ingested (time from step 4)
  7. pipeline.yml copies @timestamp to event.created (time from step 2)
  8. pipeline.yml starts pipeline-plaintext.yml or pipeline-json.yml, depending on whether it is a plaintext or JSON log
  9. pipeline-json.yml or pipeline-plaintext.yml copies the timestamp from step 1 to @timestamp

So I'm wondering if pipeline-json.yml or pipeline-plaintext.yml are running. Can you share an example log entry?
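
A quick way to check is to simulate the pipeline against a sample document. This is just a sketch; the message and @timestamp below are made up, so substitute a real line from your server log:

# example only: replace the message with a real log line from your server
POST _ingest/pipeline/filebeat-7.9.0-elasticsearch-server-pipeline/_simulate?verbose=true
{
  "docs": [
    {
      "_source": {
        "@timestamp": "2020-08-25T10:15:31.000Z",
        "message": "[2020-08-25T10:15:30,123][INFO ][o.e.n.Node] [node-1] initialized"
      }
    }
  ]
}

The verbose output shows the document after every processor, so you can see exactly where @timestamp, event.created and elasticsearch.server.timestamp end up.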

@Lee_Hinman it's not working as expected. There is no error as such. If you refer to my post, there is a block in the pipeline that renames the @timestamp field, and that's why @timestamp does not make it into Elasticsearch.
So is that expected behavior?

If you aren't getting @timestamp that would not be expected behavior. Can you share an example log entry?

But the rename is expected behavior. The @timestamp should be set by getting the value from the original ES log entry. So after the block that you mention there should be 2 pipeline processors:

processors:
  - rename:
      field: '@timestamp'
      target_field: event.created
  - grok:
      field: message
      patterns:
        - ^%{CHAR:first_char}
      pattern_definitions:
        CHAR: .
  - pipeline:
      if: ctx.first_char != '{'
      name: '{< IngestPipeline "pipeline-plaintext" >}'
  - pipeline:
      if: ctx.first_char == '{'
      name: '{< IngestPipeline "pipeline-json" >}'

and then in those 2 pipelines there are lines like:

- date:
    if: ctx.event.timezone == null
    field: elasticsearch.server.timestamp
    target_field: '@timestamp'
    formats:
      - yyyy-MM-dd'T'HH:mm:ss,SSS
    on_failure:
      - append:
          field: error.message
          value: '{{ _ingest.on_failure_message }}'
- date:
    if: ctx.event.timezone != null
    field: elasticsearch.server.timestamp
    target_field: '@timestamp'
    formats:
      - yyyy-MM-dd'T'HH:mm:ss,SSS
    timezone: '{{ event.timezone }}'
    on_failure:
      - append:
          field: error.message
          value: '{{ _ingest.on_failure_message }}'

And elasticsearch.server.timestamp should be populated by the GROK above that:

      LOG_HEADER: \[%{TIMESTAMP_ISO8601:elasticsearch.server.timestamp}\]\[%{LOGLEVEL:log.level}%{SPACE}\]\[%{DATA:elasticsearch.component}%{SPACE}\](%{SPACE})?(\[%{DATA:elasticsearch.node.name}\])?(%{SPACE})?
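
For reference, a made-up server log line whose header this pattern would match looks like:

  [2020-08-25T10:15:30,123][INFO ][o.e.n.Node] [node-1] initialized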

@Lee_Hinman I don't see a grok if I get the pipeline by running:
GET _ingest/pipeline/filebeat-7.9.0-elasticsearch-server-pipeline

  "filebeat-7.9.0-elasticsearch-server-pipeline" : {
"description" : "Pipeline for parsing elasticsearch server logs",
"processors" : [
  {
    "rename" : {
      "field" : "@timestamp",
      "target_field" : "event.created"
    }
  },
  {
    "grok" : {
      "pattern_definitions" : {
        "CHAR" : "."
      },
      "field" : "message",
      "patterns" : [
        "^%{CHAR:first_char}"
      ]
    }
  },
  {
    "pipeline" : {
      "name" : "filebeat-7.9.0-elasticsearch-server-pipeline-plaintext",
      "if" : "ctx.first_char != '{'"
    }
  },
  {
    "pipeline" : {
      "if" : "ctx.first_char == '{'",
      "name" : "filebeat-7.9.0-elasticsearch-server-pipeline-json"
    }
  },
  {
    "script" : {
      "params" : {
        "minutes_unit" : "m",
        "seconds_unit" : "s",
        "milliseconds_unit" : "ms",
        "ms_in_one_s" : 1000,
        "ms_in_one_m" : 60000
      },
      "lang" : "painless",
      "source" : """if (ctx.elasticsearch.server.gc != null && ctx.elasticsearch.server.gc.observation_duration != null) {
  if (ctx.elasticsearch.server.gc.observation_duration.unit == params.seconds_unit) {
ctx.elasticsearch.server.gc.observation_duration.ms = ctx.elasticsearch.server.gc.observation_duration.time * params.ms_in_one_s;
  }
  if (ctx.elasticsearch.server.gc.observation_duration.unit == params.milliseconds_unit) {
ctx.elasticsearch.server.gc.observation_duration.ms = ctx.elasticsearch.server.gc.observation_duration.time;
  }
  if (ctx.elasticsearch.server.gc.observation_duration.unit == params.minutes_unit) {
ctx.elasticsearch.server.gc.observation_duration.ms = ctx.elasticsearch.server.gc.observation_duration.time * params.ms_in_one_m;
  }
} if (ctx.elasticsearch.server.gc != null && ctx.elasticsearch.server.gc.collection_duration != null) {
  if (ctx.elasticsearch.server.gc.collection_duration.unit == params.seconds_unit) {
ctx.elasticsearch.server.gc.collection_duration.ms = ctx.elasticsearch.server.gc.collection_duration.time * params.ms_in_one_s;
  }
  if (ctx.elasticsearch.server.gc.collection_duration.unit == params.milliseconds_unit) {
ctx.elasticsearch.server.gc.collection_duration.ms = ctx.elasticsearch.server.gc.collection_duration.time;
  }
  if (ctx.elasticsearch.server.gc.collection_duration.unit == params.minutes_unit) {
ctx.elasticsearch.server.gc.collection_duration.ms = ctx.elasticsearch.server.gc.collection_duration.time * params.ms_in_one_m;
  }
}"""
    }
  },
  {
    "set" : {
      "field" : "event.kind",
      "value" : "event"
    }
  },
  {
    "set" : {
      "value" : "database",
      "field" : "event.category"
    }
  },
  {
    "script" : {
      "lang" : "painless",
      "source" : """def errorLevels = ['FATAL', 'ERROR']; if (ctx?.log?.level != null) {
  if (errorLevels.contains(ctx.log.level)) {
ctx.event.type = 'error';
  } else {
ctx.event.type = 'info';
  }
}"""
    }
  },
  {
    "set" : {
      "field" : "host.name",
      "value" : "{{elasticsearch.node.name}}",
      "ignore_empty_value" : true
    }
  },
  {
    "set" : {
      "value" : "{{elasticsearch.node.id}}",
      "ignore_empty_value" : true,
      "field" : "host.id"
    }
  },
  {
    "remove" : {
      "field" : [
        "elasticsearch.server.gc.collection_duration.time",
        "elasticsearch.server.gc.collection_duration.unit",
        "elasticsearch.server.gc.observation_duration.time",
        "elasticsearch.server.gc.observation_duration.unit"
      ],
      "ignore_missing" : true
    }
  },
  {
    "remove" : {
      "ignore_missing" : true,
      "field" : [
        "elasticsearch.server.timestamp",
        "elasticsearch.server.@timestamp"
      ]
    }
  },
  {
    "remove" : {
      "field" : [
        "first_char"
      ]
    }
  }
],
"on_failure" : [
  {
    "set" : {
      "field" : "error.message",
      "value" : "{{ _ingest.on_failure_message }}"
    }
  }
]
  } 

This is all a default install and setup: I installed Filebeat, ran filebeat setup (to load the default index pattern and template), and then started sending data to Elasticsearch.
So where exactly should I see the grok in the pipeline?

Try:

GET _ingest/pipeline/filebeat-7.9.0-elasticsearch-server-pipeline-plaintext

and

GET _ingest/pipeline/filebeat-7.9.0-elasticsearch-server-pipeline-json

Pipelines can call other pipelines, and this one does that with these two snippets:

  {
    "pipeline" : {
      "name" : "filebeat-7.9.0-elasticsearch-server-pipeline-plaintext",
      "if" : "ctx.first_char != '{'"
    }
  },
  {
    "pipeline" : {
      "if" : "ctx.first_char == '{'",
      "name" : "filebeat-7.9.0-elasticsearch-server-pipeline-json"
    }
  },

Thanks @Lee_Hinman. So I can see there is a grok to do the parsing, but I still don't understand why the logs are not getting parsed.

I am using all default settings; is there a way you can test this locally and see if it works?

Typical setup:

  1. Install Filebeat
  2. Set up Filebeat by running filebeat setup
  3. Install ES on the same server
  4. Enable the elasticsearch module in Filebeat and start sending data

As this is all a default setup, I expected it to work out of the box.
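
For reference, with a default deb/rpm install those steps look roughly like this (the service manager and paths are assumptions on my side):

  filebeat modules enable elasticsearch   # enables modules.d/elasticsearch.yml
  filebeat setup -e                       # loads the default index template and Kibana index pattern
  systemctl start filebeat                # the module's ingest pipelines are installed when Filebeat connects to Elasticsearch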

Can you give an example entry from your server or slowlog? Just one line should do it.
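
For example, something like this (the path assumes a package install with the default cluster name):

  tail -n 1 /var/log/elasticsearch/elasticsearch.log   # default package-install path; adjust to your setup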

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.