Elasticsearch ingest node, Filebeat processors, or Logstash to add fields to my logs

I am running a Dockerized Elastic Stack on a single AWS EC2 instance, with Packetbeat, Filebeat, Metricbeat, Kibana, and Elasticsearch. I've been reading the documentation over the past week, and I need help determining whether I need to move to Logstash or an ingest node to add fields to my container logs, or whether there is a way to format the logs within my Docker apps so that Filebeat ships them with the necessary fields already in place. My web API runs in Docker, and I want some information pulled out of its logs and turned into fields. For example:
User tswift has logged in
User pebbles accessed the endpoint createHospital
A new hospital MyNewHospital was created
For line 1, I would like user to be pulled out as a field.
For line 2, I would like user and endpoint to be pulled out as fields.
For line 3, I would like hospital to be pulled out as a field.
I know this is possible with Logstash and grok filters, but is there a way to do this with Filebeat or an ingest node? Or is there a way to format my logs so they don't need extra processing?
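
Based on the docs, I think an ingest pipeline with a grok processor would look roughly like this. The pipeline name app-logs and the target field names user, endpoint, and hospital are just placeholders I picked, and I have not actually tried this yet:

PUT _ingest/pipeline/app-logs
{
  "description": "Sketch: pull user, endpoint, and hospital out of the example log lines",
  "processors": [
    {
      "grok": {
        "field": "message",
        "patterns": [
          "User %{USERNAME:user} accessed the endpoint %{WORD:endpoint}",
          "User %{USERNAME:user} has logged in",
          "A new hospital %{WORD:hospital} was created"
        ],
        "ignore_failure": true
      }
    }
  ]
}

If I understand the docs correctly, Filebeat could then be told to send events through this pipeline via the pipeline setting on its Elasticsearch output.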


With the default config, sending Beats data straight to Elasticsearch, you are probably already using ingest pipelines if you are using the Beats modules. You can verify with a GET /_nodes/stats/ingest call. If so, you might be able to add your own processors to the supplied ingest pipeline.

Here is what I get back from that call:

{
  "_nodes": {
    "total": 1,
    "successful": 1,
    "failed": 0
  },
  "cluster_name": "docker-cluster",
  "nodes": {
    "_M3xRqdIRRS4AsK4NZbSHg": {
      "timestamp": 1607371764725,
      "name": "f4d0e0bc40ea",
      "transport_address": "XXXX:9300",
      "host": "XXXX",
      "ip": "XXXX:9300",
      "roles": [
        "data",
        "data_cold",
        "data_content",
        "data_hot",
        "data_warm",
        "ingest",
        "master",
        "ml",
        "remote_cluster_client",
        "transform"
      ],
      "attributes": {
        "ml.machine_memory": "16624467968",
        "xpack.installed": "true",
        "transform.node": "true",
        "ml.max_open_jobs": "20"
      },
      "ingest": {
        "total": {
          "count": 0,
          "time_in_millis": 0,
          "current": 0,
          "failed": 0
        },
        "pipelines": {
          "xpack_monitoring_6": {
            "count": 0,
            "time_in_millis": 0,
            "current": 0,
            "failed": 0,
            "processors": [
              {
                "script": {
                  "type": "script",
                  "stats": {
                    "count": 0,
                    "time_in_millis": 0,
                    "current": 0,
                    "failed": 0
                  }
                }
              },
              {
                "gsub": {
                  "type": "gsub",
                  "stats": {
                    "count": 0,
                    "time_in_millis": 0,
                    "current": 0,
                    "failed": 0
                  }
                }
              }
            ]
          },
          "xpack_monitoring_7": {
            "count": 0,
            "time_in_millis": 0,
            "current": 0,
            "failed": 0,
            "processors": []
          }
        }
      }
    }
  }
}
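
In case it helps anyone else reading this, the simulate API looks like the easiest way to test a pipeline against sample messages before pointing Filebeat at it. Something like the following, where app-logs is still just my placeholder pipeline name from above:

POST _ingest/pipeline/app-logs/_simulate
{
  "docs": [
    { "_source": { "message": "User tswift has logged in" } },
    { "_source": { "message": "User pebbles accessed the endpoint createHospital" } },
    { "_source": { "message": "A new hospital MyNewHospital was created" } }
  ]
}

The response should show each document with the extracted fields added, which makes it easy to iterate on the grok patterns before any real data flows through.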

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.