Hello, I am a foreign developer. Please bear with me if my translation is awkward.
My pipeline is Filebeat >> Logstash >> Elasticsearch >> Kibana.
Logstash reads and parses a csv file. The upstream process creates the csv file with a header row, and then data rows are appended line by line.
The header of the csv file is "no, item, cherry, apple", and every value after the header row is numeric, for example:

no, item, cherry, apple
1, 2, 3, 4
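So the event I want indexed should hold integers, not strings (the fruit names here are just from my example):

{ "no": 1, "item": 2, "cherry": 3, "apple": 4 }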
The csv filter parses every value as text, but I want all of the column values converted to integers when reading. "no" and "item" are fixed, but "cherry" and "apple" change from file to file, so I cannot hard-code the column names in the csv filter. What method should I use? Should I use the ruby filter? I do not know Ruby yet.
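From the docs I guess a ruby filter could loop over the event fields and convert any numeric string, maybe something like this (untested, just my guess; the regex only accepts whole numbers):

ruby {
  code => '
    event.to_hash.each do |name, value|
      next if name.start_with?("@")   # leave @timestamp and @version alone
      if value.is_a?(String) && value =~ /\A-?\d+\z/
        event.set(name, value.to_i)   # "3" becomes 3
      end
    end
  '
}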
Here is my config now. Please let me know what I need to add or edit.
input {
  beats {
    host => "localhost"
    port => 5044
  }
}

filter {
  csv {
    separator => ","
    # with autodetect_column_names, the header row supplies the column names and is skipped
    skip_header => true
    autodetect_column_names => true
  }
  mutate {
    remove_field => ["@version", "path", "host", "tags", "offset", "agent"]
    # convert => ["%{field}", "integer"]   # tried this, but I do not know the column names in advance
  }
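  # For the two fixed columns a direct convert would work, but not for the dynamic ones:
  # mutate {
  #   convert => { "no" => "integer" "item" => "integer" }
  # }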
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }

  date {
    match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
    locale => "ko"
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "blank"
    manage_template => false
  }
}
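While testing, I also switch the output to stdout so I can check the field types (integers print without quotes in rubydebug):

output {
  stdout { codec => rubydebug }
}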