Logstash importing .csv data into Elasticsearch

Hi
I use Logstash to import data from a CSV file. The import succeeds, but it creates extra columns that do not exist in my data.
My config file is:
    input {
      file {
        path => "C:/Data/booking/booking.csv"
        start_position => "beginning"
        #sincedb_path => "/dev/null"
      }
    }
    filter {
      csv {
        separator => "|"
        columns => ["hotelId","HotelName","City"]
        remove_field => [ "host", "message", "path", "@timestamp", "@version" ]
      }
    }
    output {
      elasticsearch {
        hosts => "localhost"
        index => "booking"
        document_type => "hotel"
      }
      stdout {}
    }
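
As a side note, to inspect what Logstash actually emits I swap the plain stdout for the rubydebug codec (a standard option of the stdout output), which pretty-prints every field of each event; this is how I noticed the extra columns:

    output {
      stdout { codec => rubydebug }
    }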

And here is my index info:
    {
      "booking" : {
        "aliases" : { },
        "mappings" : {
          "hotel" : {
            "properties" : {
              "City" : {
                "type" : "text",
                "fields" : {
                  "keyword" : {
                    "type" : "keyword",
                    "ignore_above" : 256
                  }
                }
              },
              "HotelName" : {
                "type" : "text",
                "fields" : {
                  "keyword" : {
                    "type" : "keyword",
                    "ignore_above" : 256
                  }
                }
              },
              "column4" : {
                "type" : "text",
                "fields" : {
                  "keyword" : {
                    "type" : "keyword",
                    "ignore_above" : 256
                  }
                }
              },
              "column5" : {
                "type" : "text",
                "fields" : {
                  "keyword" : {
                    "type" : "keyword",
                    "ignore_above" : 256
                  }
                }
              },
              "column6" : {
                "type" : "text",
                "fields" : {
                  "keyword" : {
                    "type" : "keyword",
                    "ignore_above" : 256
                  }
                }
              },
              "column7" : {
                "type" : "text",
                "fields" : {
                  "keyword" : {
                    "type" : "keyword",
                    "ignore_above" : 256
                  }
                }
              },
              "column8" : {
                "type" : "text",
                "fields" : {
                  "keyword" : {
                    "type" : "keyword",
                    "ignore_above" : 256
                  }
                }
              },
              "hotelId" : {
                "type" : "text",
                "fields" : {
                  "keyword" : {
                    "type" : "keyword",
                    "ignore_above" : 256
                  }
                }
              }
            }
          }
        },
        "settings" : {
          "index" : {
            "creation_date" : "1543999132912",
            "number_of_shards" : "5",
            "number_of_replicas" : "1",
            "uuid" : "_Xq0aXnZSFyOD3pEYj2nCg",
            "version" : {
              "created" : "6050199"
            },
            "provided_name" : "booking"
          }
        }
      }
    }

What are column4 ... column8?
How can I handle them?
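
In case it helps, my current guess is that each line of the file has more `|`-separated values than the three names in `columns`, and the csv filter then auto-generates names (column4, column5, ...) for the leftover values. Below is a minimal sketch of the workaround I am considering; the `extra*` names are placeholders I made up, and it assumes those values can safely be dropped:

    filter {
      csv {
        separator => "|"
        columns => ["hotelId","HotelName","City","extra1","extra2","extra3","extra4","extra5"]
        # drop the placeholder columns together with the metadata fields
        remove_field => [ "extra1", "extra2", "extra3", "extra4", "extra5", "host", "message", "path", "@timestamp", "@version" ]
      }
    }

Even then, I suppose I would have to delete and recreate the booking index so the old column4 ... column8 fields disappear from the mapping.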
