Storing time series data in Elasticsearch when there are many property fields (about 60,000 fields)

I want to store 60,000 point values every 500 milliseconds in Elasticsearch. There are two ways to design the Elasticsearch mapping:

1. One pointName/one timestamp per doc, like this:

_id    timestamp      pointName  value
uuid1  1582130490000  p1         1
uuid2  1582130490000  p2         2
uuid3  1582130490500  p1         x

2. One timestamp with 60,000 pointNames in one doc, like this:

_id    timestamp       p1  p2  p3  p4 ... p60000
uuid1  1582130490000   1   2   3   4  ... x
uuid2  1582130490500   1   2   3   4  ... x
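To make the difference concrete, here is a minimal Python sketch (index names and helper functions are hypothetical) of how one 500 ms snapshot would be serialized into a `_bulk` payload under each design:

```python
import json

def bulk_rows(timestamp, values, index="points-row"):
    """Option 1: one doc per point -- len(values) small docs per snapshot."""
    lines = []
    for name, value in values.items():
        lines.append(json.dumps({"index": {"_index": index}}))
        lines.append(json.dumps({"timestamp": timestamp,
                                 "pointName": name,
                                 "value": value}))
    return "\n".join(lines) + "\n"

def bulk_columns(timestamp, values, index="points-col"):
    """Option 2: one wide doc per snapshot -- one field per point."""
    doc = {"timestamp": timestamp, **values}
    return (json.dumps({"index": {"_index": index}}) + "\n"
            + json.dumps(doc) + "\n")

values = {"p1": 1, "p2": 2, "p3": 3}
row_payload = bulk_rows(1582130490000, values)   # 2 * N bulk lines
col_payload = bulk_columns(1582130490000, values)  # always 2 bulk lines
```

With 60,000 points per snapshot, option 1 produces 120,000 bulk lines per snapshot while option 2 produces 2, but option 2 forces a 60,000-field mapping.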

I have tested the write/query performance of both approaches above. Aggregate query performance is fine in both cases, but batch write performance is too slow: it takes about 1500 milliseconds to write 10,000 points with 3 Elasticsearch clusters.

Does Elasticsearch have poor write performance once an index has more than 1000 fields? How should I design the Elasticsearch mapping to store 60,000 points every 500 milliseconds with good write performance?


What is the size of your Elasticsearch cluster? What is the specification of the nodes? What type of storage are you using? Have you followed the guidelines around tuning for indexing speed in the docs?

Thank you for your reply!
I'm just testing the write speed of the two mapping designs. I'm not sure whether it's better to store the data column-wise (one timestamp with 60,000 pointNames per doc) or row-wise (one pointName/one timestamp per doc). Can you give me some suggestions on a mapping design for storing 60,000 points?

As that is a lot of fields, I would probably go with option 1. Many small documents may also spread the load better across the cluster.
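For option 1, a sketch of the index body might look like the following. The mapping fields come from the table above; the two settings shown are the usual indexing-speed levers from the tuning guidelines (longer refresh interval, replicas disabled during the initial bulk load), and the exact values are assumptions to adjust for your cluster:

```python
import json

# Hypothetical index body for the "one point per doc" design.
index_body = {
    "settings": {
        "refresh_interval": "30s",   # fewer refreshes -> faster bulk writes
        "number_of_replicas": 0,     # re-enable replicas after the bulk load
    },
    "mappings": {
        "properties": {
            "timestamp": {"type": "date"},
            "pointName": {"type": "keyword"},  # exact-match field for aggregations
            "value": {"type": "float"},
        }
    },
}
print(json.dumps(index_body, indent=2))
```

With only three mapped fields, this design also stays far below any per-index field-count limits, regardless of how many distinct pointNames exist.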

OK, thank you so much.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.