Hi team,
I am evaluating Elasticsearch for its multivariate anomaly detection capabilities on data from manufacturing systems. I have a comma-separated dataset, but whenever I try to upload the full file (around 600k rows) I get a timeout message. If I upload a much smaller chunk of the same dataset (around 3k rows) it uploads successfully, but I don't seem to be able to upload the full dataset and cannot find any way to change the timeout value.
Considering that any real-life evaluation is impossible on such a small dataset, can I do anything to avoid the file upload timeout?
Alternatively, can I pull a larger amount of data directly from an Azure SQL database, or push the dataset via the API? I've sketched below, after the table, roughly what I had in mind for the API route.
My data structure looks like the following:
time | v1 | v2 | v3 | v4 | v5 | v6 | v7 | v8 | v9 | v10 | v11 |
---|---|---|---|---|---|---|---|---|---|---|---|
2000-01-01 00:00:00Z | -2 | 1.52 | 10.16 | 0 | 0 | 0 | -15.96 | -22.49 | -11.18 | -13.62 | 92.98 |
2000-01-01 00:00:01Z | -2 | 1.52 | 10.16 | 0 | 0 | 0 | -15.96 | -22.49 | -11.18 | -13.62 | 92.98 |
2000-01-01 00:00:02Z | -2 | 1.52 | 10.13 | 0 | 0 | 0 | -15.96 | -22.49 | -11.18 | -13.62 | 92.98 |
2000-01-01 00:00:03Z | -2 | 1.52 | 10.13 | 0 | 0 | 0 | -15.96 | -22.49 | -11.18 | -13.62 | 92.98 |
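
If the API route is viable, I assume I'd first create an index with an explicit mapping, so that `time` is parsed as a date and `v1`–`v11` as numerics. Here is a minimal sketch using the official Python client; the index name `sensor-data` and the date format string are just my guesses based on the sample rows above:

```python
# Minimal sketch: create the target index with an explicit mapping.
# The index name "sensor-data" and the date format are assumptions
# derived from my sample rows (e.g. "2000-01-01 00:00:00Z").
from elasticsearch import Elasticsearch

es = Elasticsearch("https://localhost:9200", api_key="<api-key>")

mappings = {
    "properties": {
        "time": {"type": "date", "format": "yyyy-MM-dd HH:mm:ssX"},
        # v1..v11 are all plain numeric sensor readings
        **{f"v{i}": {"type": "float"} for i in range(1, 12)},
    }
}
es.indices.create(index="sensor-data", mappings=mappings)
```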
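And for the upload itself, something like this is what I had in mind: streaming the CSV through the Bulk API with the Python client's `bulk` helper, so the 600k rows go over in many small requests instead of one big file upload. The file path, host, and credentials are placeholders:

```python
# Minimal sketch: stream the CSV into Elasticsearch via the Bulk API.
import csv

from elasticsearch import Elasticsearch
from elasticsearch.helpers import bulk

es = Elasticsearch("https://localhost:9200", api_key="<api-key>")

def generate_actions(path):
    """Yield one bulk action per CSV row, casting the v* columns to float."""
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            doc = {"time": row["time"]}
            doc.update({k: float(v) for k, v in row.items() if k != "time"})
            yield {"_index": "sensor-data", "_source": doc}

# bulk() batches the generator into chunks of 5,000 actions per request,
# so no single request is large enough to hit an upload timeout.
success, failed = bulk(es, generate_actions("dataset.csv"),
                       chunk_size=5000, stats_only=True)
print(f"indexed={success} failed={failed}")
```

Would something along those lines be the recommended approach for a dataset of this size?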
Looking forward to your thoughts.
Nouman