I have 57,000 documents in one index (1 shard and 1 replica). The total size of the index is 1.2 GB, but my log documents are very long: a single document has 120-150k characters or more.
Elasticsearch is using 8 GB of RAM, and the server has a 100 Mbps network connection. I need some help with querying.
1) My search time is 100 ms, but Kibana takes around 30,000 ms to respond. Why?
2) I am trying to get logs from a URL like http://192.xxx.xx.xx:9200/sql_test/_search?q=response:"OK", but the page takes a very long time to load.
3) If I try to get 10,000 documents, I get an error saying my documents are bigger than 2 GB. How can I retrieve documents bigger than 2 GB, especially in Kibana?
Hi,
this will be a limitation of how Kibana works. Huge documents are not really well supported in Kibana. The only way I can see for you to work with them is via queries sent directly to Elasticsearch, without a UI.
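For example, a direct _search request lets you cap the number of hits and pull back only the fields you care about. This is a minimal sketch: the index name sql_test comes from your URL, but the field names response and @timestamp are assumptions you will need to adapt to your own mapping:

```
curl -s -H 'Content-Type: application/json' \
  'http://192.xxx.xx.xx:9200/sql_test/_search' -d '
{
  "size": 100,
  "_source": ["response", "@timestamp"],
  "query": {
    "match": { "response": "OK" }
  }
}'
```

Because only those two small fields are returned, the response stays tiny even when the underlying documents are 150k characters each.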
You can use source filters, under index pattern management, to exclude large fields from the response. That will significantly speed up responses, since the large fields will be dropped from Elasticsearch _search responses.
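You can get the same effect per request with source filtering in the query body. In this sketch, message is a placeholder for whatever your large log field is actually called:

```
curl -s -H 'Content-Type: application/json' \
  'http://192.xxx.xx.xx:9200/sql_test/_search' -d '
{
  "_source": { "excludes": ["message"] },
  "query": { "match_all": {} }
}'
```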
For SQL statements, request only the columns you actually need and avoid columns that hold large field values.
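For example, through the Elasticsearch SQL endpoint (available with X-Pack from 6.3 onward), selecting a couple of small columns keeps the response small. The column names @timestamp and response are assumptions based on your index:

```
curl -s -H 'Content-Type: application/json' \
  'http://192.xxx.xx.xx:9200/_sql?format=txt' \
  -d '{"query": "SELECT \"@timestamp\", response FROM sql_test LIMIT 100"}'
```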