Filebeat ILM duplicating index when rolling from hot to warm

We are shipping server events using Filebeat and metrics using Metricbeat to our Elastic Cloud cluster, which has a hot node and a warm node. The cluster is set up with no redundancy, i.e. just a single hot node and a single warm node.

Both Metricbeat and Filebeat use the same ILM policy: roll over at 40 GB, then move from hot to warm. Both indices are rolling over OK, but for some reason the Filebeat index gets duplicated, and its storage grows to 80 GB once it is on warm. I also see its status as "green", which usually means the index is replicated.

How do I prevent the Filebeat index from being replicated when it rolls over?
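One way to see where the extra copy comes from is to check the replica setting and the shard allocation of the rolled-over index. A minimal sketch in Kibana Dev Tools syntax, assuming an index name like `filebeat-000001` (substitute your actual rolled-over index):

    # Show the configured primary and replica counts for the index
    GET filebeat-000001/_settings?filter_path=*.settings.index.number_of_shards,*.settings.index.number_of_replicas

    # Show each shard copy (p = primary, r = replica), its state, size, and node
    GET _cat/shards/filebeat-*?v&h=index,shard,prirep,state,store,node

If `number_of_replicas` is 1 and a replica shard shows up as `STARTED`, the 80 GB is the primary plus its replica rather than a duplicated index.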

Moving this to the Elasticsearch category, since I believe the relevant configuration uses Elasticsearch directly with no Cloud intervention. (Also, this forum is the public support for the Cloud infrastructure, which is deployable separately as "ECE", not for the service itself.)

Thanks. It is just bizarre that ILM is assigning a replica to the Filebeat index when rolling over, whereas Metricbeat is rolling over as expected, since I don't have any replica nodes in my Cloud deployment.

OK, looking at the rolled-over indices more closely, two shards are successfully allocated for the Filebeat index but only one for the Metricbeat one. I'm still not sure why the total goes to 80 GB when rolling over from hot to warm.
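For reference, output like the excerpts below can be pulled from the index stats API, restricted to doc and store metrics. A sketch, again assuming a placeholder index name:

    GET filebeat-000001/_stats/docs,store

The `primaries` section counts only primary shards, while `total` counts primaries plus any allocated replicas, so comparing the two reveals whether a replica is taking up space.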

Filebeat:

    {
      "_shards": {
        "total": 2,
        "successful": 2,
        "failed": 0
      },
      "stats": {
        "uuid": "trI2LaqtQ92gIjm8zBglPw",
        "primaries": {
          "docs": {
            "count": 152068325,
            "deleted": 0
          },
          "store": {
            "size_in_bytes": 43264273311
          }
        },
        "total": {
          "docs": {
            "count": 304136650,
            "deleted": 0
          },
          "store": {
            "size_in_bytes": 86528546619
          }
        }
      }
    }

Metricbeat:

    {
      "_shards": {
        "total": 2,
        "successful": 1,
        "failed": 0
      },
      "stats": {
        "uuid": "9o3tcrG_SW2pjr6rp7nU-Q",
        "primaries": {
          "docs": {
            "count": 43300277,
            "deleted": 16
          },
          "store": {
            "size_in_bytes": 42972823831
          }
        },
        "total": {
          "docs": {
            "count": 43300277,
            "deleted": 16
          },
          "store": {
            "size_in_bytes": 42972823831
          }
        }
      }
    }
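The stats suggest both indices are configured with one replica (`"total": 2` shards each), but the Filebeat replica gets allocated (`"successful": 2`, doubling the docs and store under `total`) while the Metricbeat replica stays unassigned (`"successful": 1`). If the goal is a single copy, the replica count can be dropped explicitly when the index moves to warm. A sketch using the ILM `allocate` action, assuming a hypothetical policy name `filebeat-policy` and a `data: warm` node attribute (substitute the names from your deployment):

    PUT _ilm/policy/filebeat-policy
    {
      "policy": {
        "phases": {
          "hot": {
            "actions": {
              "rollover": { "max_size": "40gb" }
            }
          },
          "warm": {
            "actions": {
              "allocate": {
                "number_of_replicas": 0,
                "require": { "data": "warm" }
              }
            }
          }
        }
      }
    }

Alternatively, setting `"index.number_of_replicas": 0` in the index template would prevent new indices from getting a replica in the first place.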

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.