Customizing the message field for the ECS logging method of Java applications

Hi Team,

I am using the latest Filebeat version, 7.7.1, to ship data from my Java application's log file to Elasticsearch using the ECS logging method.

The data is available in Elasticsearch, and I am able to track down the logs in APM for every single request.

But I need to create a lot of visualization dashboards based on the message field, and at the moment the message field holds only a single value per document. A sample JSON document is included below.

{
  "_index": "filebeat-7.7.1-2020.06.15",
  "_type": "_doc",
  "_id": "aDyjt3IBIpD3R6JVoauZ",
  "_version": 1,
  "_score": null,
  "_source": {
    "@timestamp": "2020-06-15T11:00:47.633Z",
    "service.name": "OrchestrationCreateUpdateOKTA",
    "transaction.id": "1400411cb4bc2971",
    "trace.id": "633a6276ce996e51287a5285dc214ab6",
    "input": {
      "type": "log"
    },
    "host": {
      "name": "xxxxxx"
    },
    "message": "Userid:agent003",
    "process.thread.name": "http-nio-9020-exec-7",
    "log": {
      "offset": 812950,
      "file": {
        "path": "D:\\applications\\xxxx\\cxxxxx.log.json"
      }
    },
    "agent": {
      "type": "filebeat",
      "ephemeral_id": "ffa95d46-3a9c-4179-ba45-f3eba2adc261",
      "hostname": "xxxxxx",
      "id": "87980650-d5d3-4741-9e1d-4a5b0a7e3319",
      "version": "7.7.1"
    },
    "ecs": {
      "version": "1.5.0"
    },
    "log.level": "INFO",
    "log.logger": "xxxxxxxx",
    "event.dataset": "xxxxxxxx.log"
  },
  "fields": {
    "suricata.eve.timestamp": [
      "2020-06-15T11:00:47.633Z"
    ],
    "@timestamp": [
      "2020-06-15T11:00:47.633Z"
    ]
  },
  "sort": [
    1592218847633
  ]
}

For a single trace ID, Filebeat creates an individual JSON document for each log line in the log file, for example:
"message": "Userid:agent003"

But in my case I want to combine all the log messages for a single trace.id into one JSON document. It could be in another index, or in the same index, like below:

"message": {
  "Userid": "agent003",
  "eventtype": "create",
  "response": "success"
}

If the data were structured like this, it would be easy for me to query based on fields and create visualizations from the available data. Could someone please guide me on this?

Note: Currently, while creating visualizations, the message field is not available for Terms or Significant Terms aggregations. Kindly suggest.

Hi,

How do you import the logs - do you use Logstash? In that case the aggregate filter could be what you want. This filter enables you to combine data from multiple events based on a shared identifier (in your case: the trace ID), so you could aggregate the message fields.
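A rough sketch of what such a pipeline could look like - please treat this as untested, and note that the exact field reference depends on whether trace.id arrives as a flat key or as a nested [trace][id] object after ingestion:

```
filter {
  aggregate {
    # Group events by trace ID (adjust to "%{[trace.id]}" if the field is flat)
    task_id => "%{[trace][id]}"
    code => "
      map['messages'] ||= []
      map['messages'] << event.get('message')
    "
    # Emit one combined event per trace ID once no new events arrive for it
    push_map_as_event_on_timeout => true
    timeout => 30
    timeout_task_id_field => "trace.id"
  }
}
```

One thing to keep in mind: the aggregate filter only works correctly with a single pipeline worker (`-w 1`), otherwise events for the same trace ID may be processed out of order.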

I should note that I haven't used the filter myself yet!

Best regards
Wolfram

Hi @Wolfram_Haussig,

Here I am using Filebeat, so I have to implement the above-mentioned logic using Filebeat. Any idea how to achieve this with Filebeat?
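For example, could something like Filebeat's dissect processor at least split each message line into its own queryable fields? A rough sketch of what I mean in filebeat.yml (not tested; it assumes every message follows the Userid:agent003 key:value pattern shown above):

```
processors:
  - dissect:
      # Split "Userid:agent003" into two fields
      tokenizer: "%{key}:%{value}"
      field: "message"
      # Parsed fields land under "dissect.key" / "dissect.value"
      target_prefix: "dissect"
```

That would still leave open the part about combining all messages of one trace.id into a single document, though.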

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.