My request looks like the following; each daily index has 5 shards and 1 replica.
POST /pbxcdr-2015.09.01%2Cpbxcdr-2015.09.02%2Cpbxcdr-2015.09.03%2Cpbxcdr-2015.09.04%2Cpbxcdr-2015.09.05%2Cpbxcdr-2015.09.06%2Cpbxcdr-2015.09.07%2Cpbxcdr-2015.09.08%2Cpbxcdr-2015.09.09%2Cpbxcdr-2015.09.10%2Cpbxcdr-2015.09.11%2Cpbxcdr-2015.09.12%2Cpbxcdr-2015.09.13%2Cpbxcdr-2015.09.14%2Cpbxcdr-2015.09.15%2Cpbxcdr-2015.09.16%2Cpbxcdr-2015.09.17%2Cpbxcdr-2015.09.18%2Cpbxcdr-2015.09.19%2Cpbxcdr-2015.09.20%2Cpbxcdr-2015.09.21%2Cpbxcdr-2015.09.22%2Cpbxcdr-2015.09.23%2Cpbxcdr-2015.09.24%2Cpbxcdr-2015.09.25%2Cpbxcdr-2015.09.26%2Cpbxcdr-2015.09.27%2Cpbxcdr-2015.09.28%2Cpbxcdr-2015.09.29%2Cpbxcdr-2015.09.30%2Cpbxcdr-2015.10.01/cdr/_search?search_type=scan&scroll=30s&size=500&ignore_unavailable=1&allow_no_indices=1
{"query":{"filtered":{"query":{"bool":{"must":[{"match":{"dom.domain":"www.google.com"}}]}},"filter":{"range":{"@timestamp":{"gte":"2015-09-01T00:00:00-07:00","lte":"2015-09-30T23:59:59-07:00"}}}}}}
Specifying the scroll_id the way the docs say to makes the client issue a GET request, and the long scroll_id eventually truncates the request URI. I removed that parameter and passed the scroll_id as the body instead; the client then issues a POST request, and the problem is solved.
try {
    // Execute a scroll request
    $response = $this->elastic->scroll(
        array(
            //"scroll_id" => $scroll_id, // the previously obtained _scroll_id (truncates the URI on GET)
            "scroll" => "15s",           // the same timeout window
            "body"   => $scroll_id       // pass the _scroll_id in the body instead, forcing a POST
        )
    );
} catch (\Exception $e) {
    var_dump($e->getMessage());
}
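For completeness, the full loop looks roughly like this. This is a sketch under my own assumptions about response handling (the hit-processing part is just a placeholder): keep feeding the _scroll_id returned by each response into the next scroll() call until a batch comes back with no hits.

$scroll_id = $response['_scroll_id']; // from the initial scan search

while (true) {
    $response = $this->elastic->scroll(
        array(
            "scroll" => "15s",       // keep the scroll context open for another 15s
            "body"   => $scroll_id   // pass the id in the body so the request stays a POST
        )
    );

    // Stop once a batch comes back empty
    if (count($response['hits']['hits']) === 0) {
        break;
    }

    foreach ($response['hits']['hits'] as $hit) {
        // process $hit['_source'] here
    }

    $scroll_id = $response['_scroll_id']; // the id can change between calls, so always take the latest
}

Because the id travels in the body, the request URI stays short no matter how many indices and shards contribute to the _scroll_id.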
Adding this here for anyone else who encounters this.