ES document compression and node configuration

Good Day

I am building a proof of concept of a big data store for my company. I'm leaning towards ES and so far it works well. I've built all the components with Node.js using Elastic's npm package, elasticsearch.js.
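For context, this is roughly how the client is wired up (a minimal sketch using the legacy elasticsearch.js client; the host and log level are placeholders, not my real config):

```js
// Minimal sketch of the elasticsearch.js client setup.
// The host and log level are placeholders.
const elasticsearch = require('elasticsearch');

const client = new elasticsearch.Client({
  host: 'localhost:9200',
  log: 'error'
});
```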

This is what I'm dealing with: every 5 minutes I will be ingesting a data file that consists of 50,000 rows of data, which is broken out into 50,000 documents. After indexing a few files to test, the index appears to grow at a rate of about 50 MB every 5 minutes.
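For reference, each file is indexed with the bulk API, roughly like this (a sketch; the index and type names are placeholders, and with 50,000 rows per file it probably makes sense to split the bulk request into smaller chunks):

```js
// Sketch: index one ingested file (an array of row objects) with the bulk API.
// 'ingest-data' and 'row' are placeholder index/type names.
function indexFile(rows) {
  const body = [];
  rows.forEach(function (row) {
    body.push({ index: { _index: 'ingest-data', _type: 'row' } });
    body.push(row);
  });
  return client.bulk({ body: body });
}
```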

Is there a way to compress the data without hindering performance too much?
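One option I'm considering is the best_compression index codec; here's a sketch of how I'd set it, assuming that is the relevant setting (happy to be corrected):

```js
// Sketch: create the index with the best_compression codec.
// 'ingest-data' is a placeholder index name; not sure this is the right approach.
client.indices.create({
  index: 'ingest-data',
  body: {
    settings: {
      index: { codec: 'best_compression' }
    }
  }
});
```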

Also, I'm breaking the data file into 50,000 individual documents. I'm new to ES and the whole NoSQL world. Am I doing this right? Should I instead combine the 50,000 rows into one document? I need to be able to query and pull out a particular row from the file, which is why I broke it out into 50,000 documents, so I can query per row.
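To illustrate, the per-row lookup I need is something like this (file_id and row_id are hypothetical fields I would add to each document):

```js
// Sketch: pull out one particular row from a given file.
// 'file_id' and 'row_id' are hypothetical fields added to each document.
client.search({
  index: 'ingest-data',
  body: {
    query: {
      bool: {
        filter: [
          { term: { file_id: 'file-2016-06-01-1200' } },
          { term: { row_id: 12345 } }
        ]
      }
    }
  }
}).then(function (resp) {
  console.log(resp.hits.hits);
});
```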

Also, for a production environment, how many cores and how much RAM would you recommend per node? I'm thinking of starting with 6 nodes.

Any insight would be much appreciated.

I think so.

Impossible to say without significant testing on your side first. Maybe 3 would be enough, BTW.
But you need to test that.

It depends on so many factors, like replicas, retention, hardware, queries...
