Hi everyone,
I have the following problem. Every day I generate a dashboard from a CSV file, which is itself regenerated daily. One of the fields I include in the CSV is "last_login_date", which I use to extract certain information and display it in a graph.
An example CSV file contains the following, with fields separated by ",":
user-id,date,last_login_date,assigned_quota,used_quota,number_files,number_shares,number_uploads,number_downloads
usernameXXXX,2024-02-07 10:05:21,2024-02-07 10:04:55,26843545600,23223408,151,2,3,531
usernameYYYY,2024-02-07 10:05:21,2024-02-07 10:04:55,26843545600,23223408,151,2,3,531
In Logstash I have the following configuration to process the data coming from that CSV file:
input {
  file {
    path => "/etc/logstash/conf.d/daily/*.csv"
    start_position => "beginning"
    # a /dev/null sincedb means the files are re-read in full on every restart
    sincedb_path => "/dev/null"
  }
}
filter {
  csv {
    # skip the header line that the daily export writes at the top of each file
    skip_header => true
    separator => ","
    columns => [
      "user-id",
      "date",
      "last_login_date",
      "assigned_quota",
      "used_quota",
      "number_files",
      "number_shares",
      "number_uploads",
      "number_downloads"
    ]
    remove_field => ["message"]
  }
  date {
    match => [ "date", "yyyy-MM-dd HH:mm:ss" ]
    target => "@timestamp"
  }
  date {
    # parse last_login_date in place so it is sent as a real timestamp
    match => [ "last_login_date", "yyyy-MM-dd HH:mm:ss" ]
    target => "last_login_date"
  }
  mutate {
    # cast the numeric columns so they are indexed as numbers, not strings
    convert => {
      "assigned_quota"   => "integer"
      "used_quota"       => "integer"
      "number_files"     => "integer"
      "number_shares"    => "integer"
      "number_uploads"   => "integer"
      "number_downloads" => "integer"
    }
  }
}
output {
  elasticsearch {
    hosts => ["elk.mydomain.local:9200"]
    index => "nextcloud-usage-%{+YYYY.MM.dd}"
  }
  stdout { codec => rubydebug }
}
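For reference, a parsed event from the first sample row comes out on stdout roughly like this (a sketch: the exact formatting and the host/path metadata fields, which I have omitted, depend on the Logstash version, and the timestamps assume the server runs in UTC):

{
          "user-id" => "usernameXXXX",
             "date" => "2024-02-07 10:05:21",
       "@timestamp" => 2024-02-07T10:05:21.000Z,
  "last_login_date" => 2024-02-07T10:04:55.000Z,
   "assigned_quota" => 26843545600,
       "used_quota" => 23223408,
     "number_files" => 151,
    "number_shares" => 2,
   "number_uploads" => 3,
 "number_downloads" => 531,
         "@version" => "1"
}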
As you can see, the "last_login_date" field goes through a date filter so that it keeps the format that comes from the CSV.
In Kibana, when the file has been processed and I look at the generated index, I see a warning sign on the "last_login_date" field.
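If the actual mapping is useful, I can run something like this in Kibana Dev Tools and post the result (a sketch; the index pattern matches the output section above):

GET nextcloud-usage-*/_mapping/field/last_login_date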
I don't know how to solve it. Can someone help me see where the problem is?
Thanks in advance