I am trying to load a CSV file of 22,000 rows into Data Visualizer and it fails with the error: "File could not be read. Unexpected token < in JSON at position 0." But when I remove 3,000 rows the file imports without a problem. I have tried with several files and the pattern is the same. The files do not exceed 3 MB.
Do I have to change any parameter in my kibana.yml?
Hi Thodoris,
I imagine the < character is coming from an HTML-based error response being returned by the server rather than the expected JSON.
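For illustration (just a sketch of the symptom, not Kibana's actual code): when a proxy or gateway failure returns an HTML page instead of JSON, parsing the body produces exactly that message in V8-based runtimes, though the wording varies by JavaScript engine.

```typescript
// A failed request often returns an HTML error page (e.g. a 502/504
// from a proxy) instead of the expected JSON payload.
const htmlErrorBody = "<html><body>504 Gateway Time-out</body></html>";

try {
  JSON.parse(htmlErrorBody);
} catch (e) {
  // In V8: "Unexpected token < in JSON at position 0"
  console.error((e as SyntaxError).message);
}
```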
Which version of Elasticsearch are you using?
Can you supply a sample of the data? Just a few typical lines will be fine.
Let me follow up on this one! I downloaded the CSV file you mentioned:
WHO-COVID-19-global-data.csv
sha1 621e103a66b15af0f9e5bc93e0f8ddea4c0e3d3f
22374 rows
Tested on stack version 7.6.0, both Elasticsearch and Kibana, using default settings for kibana.yml and elasticsearch.yml on a single-node cluster. I managed to upload the file and create an index without overriding any settings.
Could you please confirm whether your file and stack versions are the same as those above? More details on your Elasticsearch and Kibana configurations might also be useful for troubleshooting the issue.
I've experimented with the WHO COVID dataset on a cloud deployment running 7.6. I could not get a timeout when analysing the data, but I did occasionally see one when trying to import it.
So I suspect it is a problem with the amount of data being sent over the network.
Even though the file is not large, it still needs to be chopped into chunks small enough to be sent to the Kibana server.
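If the import keeps timing out, one workaround is to pre-split the file and upload the pieces separately. A minimal sketch in Node/TypeScript (the file name and chunk size are placeholder choices for illustration, not anything Kibana requires):

```typescript
import { readFileSync, writeFileSync } from "fs";

// Split a CSV into smaller files, repeating the header row in each
// chunk so every piece can be uploaded to Data Visualizer on its own.
const ROWS_PER_CHUNK = 5000; // arbitrary size for illustration

const lines = readFileSync("WHO-COVID-19-global-data.csv", "utf8")
  .split(/\r?\n/)
  .filter((line) => line.length > 0);

const header = lines[0];
const rows = lines.slice(1);

for (let i = 0; i < rows.length; i += ROWS_PER_CHUNK) {
  const chunk = [header, ...rows.slice(i, i + ROWS_PER_CHUNK)].join("\n");
  writeFileSync(`chunk-${i / ROWS_PER_CHUNK}.csv`, chunk);
}
```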