How to concatenate two fields using add fields processor in filebeat

I have two fields: one carrying a date value and the other carrying a time value.
I would like to have a single field with the date and time values concatenated.

Could you please suggest?

Any help on this please?

Please provide samples so we can help.

What does the date part look like? What does the time part look like?

What kind of fields are they in?

What do you want as a result?
Do you want a date field or just a string?

If you want us to help, you need to provide a bit more information.

Okay sure @stephenb

I have the below html code where I used dissect processor to decode the value

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http:// xxx.dtd">

<span id="date">05/06/2022</span><span id="time"> 12:03:43</span>

After using the dissect processor, I have the target field "test", with "05/06/2022" dissected into the date field and "12:03:43" into the time field. So now I have fields like test.date = "05/06/2022" and test.time = "12:03:43".
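For reference, a filebeat dissect configuration along these lines could produce those fields. This is only a sketch: the tokenizer pattern and the source field name `message` are assumptions, not the poster's actual config.

```yaml
processors:
  - dissect:
      # Assumed tokenizer matching the <span> line shown above
      tokenizer: '<span id="date">%{date}</span><span id="time"> %{time}</span>'
      field: "message"        # assumed source field
      target_prefix: "test"   # nests the results as test.date / test.time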

Now my concern is I would like to have a single field like below:

test.timestamp= "05/06/2022 12:03:43"

Are you using filebeat processor or ingest pipeline / processor?

I am using filebeat processor @stephenb

In that case I think you would use the script processor and hand-code the appending; I am not an expert on that.
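For completeness, the script-processor approach mentioned above might look roughly like the following sketch. It assumes the fields have already been dissected into test.date and test.time, and the target field name test.timestamp is taken from the poster's desired output; treat it as untested.

```yaml
processors:
  - script:
      lang: javascript
      source: >
        function process(event) {
          var d = event.Get("test.date");
          var t = event.Get("test.time");
          if (d != null && t != null) {
            // Concatenate date and time into the desired single field
            event.Put("test.timestamp", d + " " + t);
          }
        }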

Perhaps you should take a look at doing these types of parsing in the Ingest Pipeline. I think there are several advantages.

  1. With the ingest pipeline, the logic / parsing is in a central place in Elasticsearch and can be changed as needed without redeploying changes to the filebeat.ymls, and changes take effect immediately whether you have 1 filebeat or 1000.

  2. There is a richer set of processors available; for example, there is an append processor that would make this easy.

  3. I also think the ingest pipelines are a bit easier to "code" in once you get used to it... just a thought.

You could also just leave the dissect in filebeat and do the appending in an ingest pipeline if you wanted.

Okay, got you @stephenb.
If I would like to use the append processor in the ingest pipeline, could you please suggest how it can append the values by providing an example?


Ohh darn!! You would not use the append processor; that is for arrays. Apologies, I did not read closely... let me recheck and get back.

It is probably just a set processor; let me look and provide an example! :slight_smile:


Here you go

PUT _ingest/pipeline/discuss-append
{
  "processors": [
    {
      "set": {
        "field": "test.full_date",
        "value": "{{{test.date}}} {{{test.time}}}"
      }
    }
  ]
}

POST _ingest/pipeline/discuss-append/_simulate
{
  "docs": [
    {
      "_source": {
        "test": {
          "date": "05/06/2022",
          "time": "12:03:43"
        }
      }
    }
  ]
}

Result:

{
  "docs" : [
    {
      "doc" : {
        "_index" : "_index",
        "_id" : "_id",
        "_source" : {
          "test" : {
            "date" : "05/06/2022",
            "time" : "12:03:43",
            "full_date" : "05/06/2022 12:03:43"
          }
        },
        "_ingest" : {
          "timestamp" : "2022-06-07T16:03:09.612510643Z"
        }
      }
    }
  ]
}

Sure, thank you for this! I will try executing it and get back to you.

I got the response @stephenb .
But how do I search this newly created field "test.full_date" in an index like filebeat, if I have to utilize this field in Kibana?

You need to add the pipeline to the Elasticsearch output section of filebeat.yml. This will execute the pipeline and create the new field at ingest time.

This will add the field to the documents / index at ingest time, then the field will be available in kibana.
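For example, the relevant part of filebeat.yml could look like this (the hosts value is a placeholder):

```yaml
output.elasticsearch:
  hosts: ["localhost:9200"]   # placeholder host
  pipeline: discuss-append    # run the ingest pipeline on every event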

You should also create a mapping for this field if you want it to be of type date.
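Such a mapping could look like the following sketch. Note that the date format is an assumption: the sample "05/06/2022" could be MM/dd or dd/MM, so adjust the pattern accordingly; the index name filebeat-example is also a placeholder.

```
PUT filebeat-example
{
  "mappings": {
    "properties": {
      "test": {
        "properties": {
          "full_date": {
            "type": "date",
            "format": "MM/dd/yyyy HH:mm:ss"
          }
        }
      }
    }
  }
}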

Thank you @stephenb

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.