Why is my buckets' doc_count limited to 3000?

This is my DSL:

{
    "query": {
        "match_all": {}
    },
    "aggs": {
        "range": {
            "date_range": {
                "field": "@timestamp",
                "format": "yyy.MM.dd.HH.mm.ss",
                "ranges": [
                    {"from": "2014.08.12.09.18.45", "to": "2014.08.12.09.20.50"}
                ]
            },
            "aggs": {
                "over_time": {
                    "date_histogram": {
                        "field": "@timestamp",
                        "interval": "1s",
                        "format": "yyy.MM.dd.HH.mm.ss"
                    },
                    "aggs": {
                        "total_sent": {
                            "sum": {"field": "bytes_sent"}
                        }
                    }
                }
            }
        }
    }
}
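
For reference, one way this request might be run from the shell, with the body above saved as query.json; the index name logstash-2014.08.12 and the search_type=count parameter (which suppresses the hits) are assumptions, not taken from the original post:

    curl -XGET 'http://localhost:9200/logstash-2014.08.12/_search?search_type=count&pretty' -d @query.json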

This is my result:

{
    "took": 6,
    "timed_out": false,
    "_shards": {
        "total": 96,
        "successful": 96,
        "failed": 0
    },
    "hits": {
        "total": 258002,
        "max_score": 0.0,
        "hits": []
    },
    "aggregations": {
        "range": {
            "buckets": [
                {
                    "key": "2014.08.12.09.18.45-2014.08.12.09.20.50",
                    "from": 1.407835125E12,
                    "from_as_string": "2014.08.12.09.18.45",
                    "to": 1.40783525E12,
                    "to_as_string": "2014.08.12.09.20.50",
                    "doc_count": 12000,
                    "over_time": {
                        "buckets": [
                            {
                                "key_as_string": "2014.08.12.09.18.45",
                                "key": 1407835125000,
                                "doc_count": 3000,
                                "total_sent": {"value": 6.6126308E7}
                            },
                            {
                                "key_as_string": "2014.08.12.09.18.47",
                                "key": 1407835127000,
                                "doc_count": 3000,
                                "total_sent": {"value": 9.286586E7}
                            },
                            {
                                "key_as_string": "2014.08.12.09.18.49",
                                "key": 1407835129000,
                                "doc_count": 3000,
                                "total_sent": {"value": 1.21316184E8}
                            },
                            {
                                "key_as_string": "2014.08.12.09.18.51",
                                "key": 1407835131000,
                                "doc_count": 3000,
                                "total_sent": {"value": 8.3529544E7}
                            }
                        ]
                    }
                }
            ]
        }
    }
}
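
To spell out the arithmetic behind my question: the four populated histogram buckets account for the entire range,

    4 buckets x 3000 docs/bucket = 12000 docs = the range's doc_count

so even with a 1s interval, only every other second is populated and each populated bucket holds exactly 3000 documents.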

My software flow is (a sketch of the Logstash leg follows):
Nginx ==pipe==> syslog-ng ==udp==> Logstash ==es_river==> Elasticsearch
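
A minimal sketch of the Logstash side of that flow; the UDP port and the use of the plain elasticsearch output (rather than the river the flow actually uses) are assumptions for illustration:

    input {
      udp {
        port => 5514            # port syslog-ng forwards to (assumed)
      }
    }
    output {
      elasticsearch {
        host => "localhost"     # Elasticsearch node (assumed; the real flow goes through an ES river)
      }
    }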

How can I break the 3000 docs/sec limit?

I am not sure I understand: is your question about aggregations or about indexing speed?

--
Adrien Grand

Indexing speed.
