I have been trying to bulk insert a JSON file into Elasticsearch via Python (I'm very new to Elastic). I had to transform the data a little before putting it into Elasticsearch, so in the end I write the result out as NDJSON and try to bulk insert it with the following code:
from elasticsearch import Elasticsearch, helpers

es = Elasticsearch()  # connection details omitted here

with open("/Users/mariska/Documents/jsontestje14.json") as json_file:
    body = json_file.read()

helpers.bulk(es, actions=body, index='jsononfagun6', doc_type='kenteken')
This yields the error:
Compressor detection can only be called on some xcontent bytes or compressed xcontent bytes
I've tried numerous ways of changing the format of the file so that Elasticsearch will accept it, but with no success. It currently looks like this (an excerpt, since the real file has many more entries):
{
  "Kenteken": "WSFT54",
  "Voertuigsoort": "Aanhangwagen",
  "Merk": "GS",
  "Handelsbenaming": "AC-2000 AC",
  "Vervaldatum APK": "19/10/2018",
  "Datum tenaamstelling": "19/09/2005",
  "Bruto BPM": "nan",
  "Inrichting": "open laadvloer",
  "Aantal zitplaatsen": "nan",
  "Eerste kleur": "N.v.t.",
  "Tweede kleur": "N.v.t.",
  "Aantal cilinders": "nan",
  "Cilinderinhoud": "nan",
  "Massa ledig voertuig": "5580.0",
  "Toegestane maximum massa voertuig": "20000.0",
  "Massa rijklaar": "nan",
  "Maximum massa trekken ongeremd": "nan",
  "Maximum trekken massa geremd": "nan",
  "Retrofit roetfilter": "nan",
  "Zuinigheidslabel": "nan",
  "Datum eerste toelating": "19/09/2005",
  "Datum eerste afgifte Nederland": "19/09/2005",
  "Wacht op keuren": "Geen verstrekking in Open Data",
  "Catalogusprijs": "nan",
  "WAM verzekerd": "N.v.t.",
  "Maximale constructiesnelheid (brom/snorfiets)": "nan"
}
The file contains several of these objects, separated by newlines. It seems that every individual letter of every string gets parsed as a separate item, but I can't figure out why. My current guess is sketched below. Hopefully someone can help!
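Specifically, I suspect that because body is a plain string, helpers.bulk iterates over it character by character (iterating a Python string yields single characters), which would match the letter-by-letter behaviour. Here is a minimal sketch of the shape I think the actions should have instead, assuming the file really does contain one JSON object per line (the sample above is pretty-printed; the index and doc type names are just the ones from my code):

import json
from elasticsearch import Elasticsearch, helpers

es = Elasticsearch()  # connection details omitted here

def generate_actions(path):
    # Parse each NDJSON line into a dict so that helpers.bulk
    # receives one action per document, not one per character.
    with open(path) as ndjson_file:
        for line in ndjson_file:
            if line.strip():
                yield {
                    "_index": "jsononfagun6",
                    "_type": "kenteken",
                    "_source": json.loads(line),
                }

helpers.bulk(es, generate_actions("/Users/mariska/Documents/jsontestje14.json"))

Is that roughly the right approach, or is there a way to hand the raw NDJSON file to the bulk API directly?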