A question from a newbie about index structure

Hi there!

I am completely new to Elasticsearch, so I want to ask about a few things that are unclear to me.

My first question is about the number of indices.
Basically, I am going to pull a lot of different data from different sources through REST queries and put it into our brand new Elasticsearch instance. So should I divide the data among many indices (one per source) or put it all together into one big index? My goal is to make the saved data retrievable as quickly as possible. Both approaches have pluses as well as pitfalls, so I need to lay out the initial structure of my base the right way. Would you be so kind as to explain which way is preferable in my case?

Another question is about the index structure. In both cases our team wants the indices to be formed in a common way, with similar sets of fields. I suggest that all the external metrics we receive should be put into one common array, like this:

**Input:**

    "metric1": "Green"
    "metric2": 7

**After the filter:**

    "metrics": [{"name": "metric1", "value": "Green"}, {"name": "metric2", "value": 7}]

My colleagues, on the other hand, intend to add the prefix "field_" to each metric name to make them a bit more uniform, something like this:

**Input:**

    "metric1": "Green"
    "metric2": 7

**After the filter:**

    "field_metric1": "Green"
    "field_metric2": 7

Which way would be the best for getting the data conveniently out of the indices and into analysis tools like Grafana?

Thanks in advance
Andrii

It depends. I would group similarly structured data if it is also related (e.g. coming from a similar source).

Elasticsearch will cope, you don't need to worry too much about that :wink:
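
As a rough sketch of that grouping (the index names below are made up just for illustration), you could keep one index per source and still search them all at once with an index wildcard:

    # one index per source (the names are hypothetical)
    curl -X POST "localhost:9200/metrics-sourcea/_doc" -H 'Content-Type: application/json' -d '{"metric1": "Green", "metric2": 7}'
    curl -X POST "localhost:9200/metrics-sourceb/_doc" -H 'Content-Type: application/json' -d '{"metric1": "Red", "metric2": 3}'

    # search across all of the metric indices in one request
    curl -X GET "localhost:9200/metrics-source*/_search?pretty"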

Commonly you'd do something like:

    {
       "green": 7,
       "red": 5
    }
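
If you are going to graph this over time in Grafana, you would typically also add a time field; the `@timestamp` name below is just a common convention, not something from your setup:

    {
       "@timestamp": "2019-05-20T12:00:00Z",
       "green": 7,
       "red": 5
    }

Grafana generally aggregates more easily on plain fields like these than on a generic name/value array, which Elasticsearch flattens unless it is mapped as `nested`.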
