Dear forum,
I am new here and have been playing around with the Elasticsearch basics.
I am using the great FSCrawler to simplify ingestion of files into Elastic.
Issue: when using FSCrawler, how can I get the data from the CSV fields properly entered into the corresponding typed fields that I created in Elastic?
The next phase would be to read it using Kibana, which seems straightforward.
My mapping in Elastic now has 14 fields of type "text" and "integer", but all the data from the CSV ends up in FSCrawler's default "content" field, so I realize I'm missing one last little touch to get this working.
Below is part of the mappings under the docs index (some of my 14 fields excluded; the content field is at the end...).
Is there a simple solution to this by:
- customizing the FSCrawler configuration/defaults?
- or is Logstash needed? If so, could someone give simple guidelines for using it together with FSCrawler? (Something like the rough sketch below?)
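
For reference, this is roughly what I imagine a Logstash pipeline could look like, using the csv filter. It is untested; the file path, index name, and the column list (taken from my CSV headers) are just placeholders:

input {
  file {
    # hypothetical path where the CSV files live
    path => "/data/csv/*.csv"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}

filter {
  csv {
    separator => ","
    # column names matching my mapping (only a few shown here)
    columns => ["AUTHORIZED staff", "ACTUAL staff", "Sick", "On leave", "Others", "Total on duty", "Total unavailable"]
  }
  mutate {
    # make sure the numeric columns are indexed as integers
    convert => {
      "AUTHORIZED staff" => "integer"
      "ACTUAL staff" => "integer"
      "Sick" => "integer"
    }
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "docs"
  }
}

Would something along these lines be the recommended way, or can FSCrawler alone do it?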
Thx for any assistance,
Johan
{
  "docs": {
    "mappings": {
      "my-type": {
        "properties": {
          "ACTUAL staff": {
            "type": "integer"
          },
          "AUTHORIZED staff": {
            "type": "integer"
          },
          "On leave": {
            "type": "integer"
          },
          "Others": {
            "type": "integer"
          },
          "Sick": {
            "type": "integer"
          },
          "Total on duty": {
            "type": "integer"
          },
          "Total unavailable": {
            "type": "integer"
          },
          "content": {
            "type": "text",
            "fields": {
              "keyword": {
                "type": "keyword",
                "ignore_above": 256
[...]