Parallel Iterations of Bulk

Hi!

I'm trying to add iterations to parallel bulk operations. Each operation is ~100K docs and finishes too soon; I want to keep the load running for a longer period of time.
I tried iterations, and also target-throughput, and also time-period, but with no success.

It just finishes 1 iteration and that's it; it doesn't keep running over and over.
How can I achieve that?

{
  "version": 2,
  "title": "a",
  "description": "a",
  "indices": [
    {
      "name": "a1",
      "body": "mapping.json"
    },
    {
      "name": "a2",
      "body": "mapping.json"
    },
    {
      "name": "a3",
      "body": "mapping.json"
    },
    {
      "name": "a4",
      "body": "mapping.json"
    },
    {
      "name": "a5",
      "body": "mapping.json"
    },
    {
      "name": "a6",
      "body": "mapping.json"
    },
    {
      "name": "a7",
      "body": "mapping.json"
    },
    {
      "name": "a8",
      "body": "mapping.json"
    },
    {
      "name": "a9",
      "body": "mapping.json"
    }
  ],
  "corpora": [
    {
      "name": "data1",
      "documents": [
        {
          "source-file": "data.json",
          "target-index": "a1",
          "document-count": 100117
        }
      ]
    },
    {
      "name": "data2",
      "documents": [
        {
          "source-file": "data.json",
          "target-index": "a2",
          "document-count": 100117
        }
      ]
    },
    {
      "name": "data3",
      "documents": [
        {
          "source-file": "data.json",
          "target-index": "a3",
          "document-count": 100117
        }
      ]
    },
    {
      "name": "data4",
      "documents": [
        {
          "source-file": "data.json",
          "target-index": "a4",
          "document-count": 100117
        }
      ]
    },
    {
      "name": "data5",
      "documents": [
        {
          "source-file": "data.json",
          "target-index": "a5",
          "document-count": 100117
        }
      ]
    },
    {
      "name": "data6",
      "documents": [
        {
          "source-file": "data.json",
          "target-index": "a6",
          "document-count": 100117
        }
      ]
    },
    {
      "name": "data7",
      "documents": [
        {
          "source-file": "data.json",
          "target-index": "a7",
          "document-count": 100117
        }
      ]
    },
    {
      "name": "data8",
      "documents": [
        {
          "source-file": "data.json",
          "target-index": "a8",
          "document-count": 100117
        }
      ]
    },
    {
      "name": "data9",
      "documents": [
        {
          "source-file": "data.json",
          "target-index": "a9",
          "document-count": 100117
        }
      ]
    }
  ],
  "schedule": [
    {
      "parallel": {
        "iterations": 10000000,
        "tasks": [
          {
            "name": "bulk1",
            "clients": 3,
            "target-throughput": 50,
            "operation": {
              "operation-type": "bulk",
              "corpora": "data1",
              "bulk-size": 100
            }
          },
          {
            "name": "bulk2",
            "clients": 3,
            "operation": {
              "operation-type": "bulk",
              "corpora": "data2",
              "bulk-size": 100
            }
          },
          {
            "name": "bulk3",
            "clients": 3,
            "operation": {
              "operation-type": "bulk",
              "corpora": "data3",
              "bulk-size": 100
            }
          },
          {
            "name": "bulk4",
            "clients": 3,
            "operation": {
              "operation-type": "bulk",
              "corpora": "data4",
              "bulk-size": 100
            }
          },
          {
            "name": "bulk5",
            "clients": 3,
            "operation": {
              "operation-type": "bulk",
              "corpora": "data5",
              "bulk-size": 100
            }
          },
          {
            "name": "bulk6",
            "clients": 3,
            "operation": {
              "operation-type": "bulk",
              "corpora": "data6",
              "bulk-size": 100
            }
          },
          {
            "name": "bulk7",
            "clients": 3,
            "operation": {
              "operation-type": "bulk",
              "corpora": "data7",
              "bulk-size": 100
            }
          },
          {
            "name": "bulk8",
            "clients": 3,
            "operation": {
              "operation-type": "bulk",
              "corpora": "data8",
              "bulk-size": 100
            }
          },
          {
            "name": "bulk9",
            "clients": 3,
            "operation": {
              "operation-type": "bulk",
              "corpora": "data9",
              "bulk-size": 100
            }
          }
        ]
      }
    }
  ]
}

I tried upgrading to 1.3.0, same deal: it does only 1 iteration, even though the data file includes only raw data without metadata IDs (from my understanding, esrally parses the data and adds a metadata header with an ID to each request).

I did some searching and found similar cases, where you said that esrally doesn't support more than 1 iteration on bulk operations and suggested people duplicate the corpora or make bigger files. I hope there is now a more elegant way to handle this.

Would love to get some help!
Thank you all very much!

Hello,

As described in the docs, iterations inside the parallel element is defined as:

iterations (optional, defaults to 1): Allows to define a default value for all tasks of the parallel element.

So this just propagates the iterations property down to the individual tasks, and it only takes effect for operations that support it. A bulk operation makes a single pass over its corpus and then terminates, which is why your schedule stops after one round. See the docs and this reply.
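
To make the propagation concrete, here is a minimal sketch (the operation name index-op is hypothetical). This:

"parallel": {
  "iterations": 100,
  "tasks": [
    {
      "name": "bulk1",
      "operation": "index-op"
    },
    {
      "name": "bulk2",
      "operation": "index-op"
    }
  ]
}

is just shorthand for setting "iterations": 100 on bulk1 and bulk2 individually; and since a bulk operation terminates after a single pass over its corpus, neither form makes it repeat.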

If you want to rerun bulk operations you can use a Jinja2 for loop together with a comma joiner; here is an example from another track, and a minimal sketch below.
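
For illustration, a sketch of that approach applied to the first corpus from your track (this assumes Rally renders the track file as a Jinja2 template, which it does by default; the repetition count of 10 is arbitrary):

"corpora": [
  {
    "name": "data1",
    "documents": [
      {% set comma = joiner(",") %}
      {% for n in range(10) %}
      {{ comma() }}
      {
        "source-file": "data.json",
        "target-index": "a1",
        "document-count": 100117
      }
      {% endfor %}
    ]
  }
]

After rendering, this is exactly as if you had written the documents entry 10 times by hand, so Rally ingests the same file 10 times in sequence.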

Rgs,
Dimitris

@dliappis Thank you very much! I ended up generating bigger and bigger files.

A question about the Jinja solution: when I duplicate the same file, is there any lag/delay between each time it reads the same file again? Continuity is very important for my tests, and this way I can save a lot of disk space.

Also a question about target-throughput: in my use case I have around 10 concurrent bulk operations. If I set a target-throughput for each operation, will it slow down the whole thing? Because that is a lot of timers/sleeps running concurrently across 10+ operations, each with multiple clients.

@dliappis
I have tried playing with target-throughput, and it seems to be connected/related to bulk-size. I've tried setting target-throughput to 1, to 1000, and to not using it at all, and the Median Throughput stays the same, for a parallel with a single bulk operation and just 1 client.

When you use a Jinja2 for loop there will be the same (small) delay between tasks as the delay you see between explicitly defined operations in a regular track.

Tasks inside the parallel element are independent and can each have their own clients and target-throughput. There is a large number of examples of the parallel element in this part of the documentation that I suggest you take a look at.

This sounds normal. target-throughput is not a property that will "accelerate" the execution of bulk by automatically increasing clients; if you've specified 1 client, Rally will stick to 1 client while trying to achieve the specified target-throughput. If target-throughput is smaller than what 1 client can achieve, Rally will pause the schedule as required to honor it, but it won't automatically add clients to reach a higher throughput. You need to scale the number of clients/bulk size yourself (and read up on sizing Elasticsearch) if the target-throughput can't be achieved.
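
To illustrate (an editor's sketch, not part of the original reply; verify against your Rally version): per the Rally docs, target-throughput is measured in operations, i.e. bulk requests, per second across all clients of the task, while the bulk metrics are reported in docs/s. So to aim at roughly 50 docs/s you could combine 1 request per second with a bulk-size of 50 (the task name bulk-throttled is hypothetical; data1 is a corpus from the track above):

{
  "name": "bulk-throttled",
  "clients": 1,
  "target-throughput": 1,
  "operation": {
    "operation-type": "bulk",
    "corpora": "data1",
    "bulk-size": 50
  }
}

Here 1 bulk request/s × 50 docs per request ≈ 50 docs/s, provided a single client can sustain that rate; if it cannot, Rally will fall short rather than add clients.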

Regarding the Jinja question, I didn't mean how Jinja works. I meant: when I generate a track that uses the same file over and over again in the same corpus, will it affect performance? For example:

{
  "documents": [
    {
      "source-file": "data.json",
      "target-index": "a3",
      "document-count": 100117
    },
    {
      "source-file": "data.json",
      "target-index": "a3",
      "document-count": 100117
    },
    {
      "source-file": "data.json",
      "target-index": "a3",
      "document-count": 100117
    },
    {
      "source-file": "data.json",
      "target-index": "a3",
      "document-count": 100117
    },
    {
      "source-file": "data.json",
      "target-index": "a3",
      "document-count": 100117
    },
    {
      "source-file": "data.json",
      "target-index": "a3",
      "document-count": 100117
    }
  ]
}

Will there be any lag/performance impact between each of these documents entries as it goes through them?

Second, regarding target-throughput: I was only able to get it working with search operations; with bulk, no success. In my opinion it seems that bulk-size somehow dominates the throughput. Still, I wasn't able to get any results with target-throughput, and I wasn't trying to accelerate results; on the contrary, I was trying to limit them to a lower number that meets the requirements of my test.
I was trying to hit an exact number, for example 50 docs/s, but even with target-throughput set to 1 I was still getting a huge throughput of 600-900 docs/s in the test results.

Another question I had in mind: for parallel operations, is there a way to know when a task completed? If I have multiple ones, I want to know which finishes earlier, so I can balance them (in terms of data file sizes; each index has a different doc size in the source files I'm using).
