I am using InfluxDB and Logstash to filter and forward data to InfluxDB, but I have a problem forwarding the timestamp in my measurements.
I have read many posts saying I should set allow_time_override = true and supply the time myself, but I cannot get it to work: when I try to override the time, no data arrives in my database, and I see no errors in the Logstash log.
This is the influxdb output from my Logstash configuration:
influxdb {
    host => "${INFLUXDB_HOST}"
    port => "${INFLUXDB_PORT}"
    user => "admin"
    password => "${INFLUXDB_PASS}"
    db => "recap"
    allow_time_override => true
    measurement => "vm.network"
    send_as_tags => ["hostname", "infrastructure_provider", "datacentre_location", "devicename", "metric_layer", "test_id", "machine_uuid"]
    data_points => {
        hostname => "%{[tags][hostname]}"
        infrastructure_provider => "%{[tags][infrastructure_provider]}"
        datacentre_location => "%{[tags][datacentre_location]}"
        devicename => "%{[tags][devicename]}"
        metric_layer => "%{[tags][metric_layer]}"
        out_packets => "%{[fields][out_packets]}"
        out_bytes => "%{[fields][out_bytes]}"
        out_errors => "%{[fields][out_errors]}"
        out_drops => "%{[fields][out_drops]}"
        in_packets => "%{[fields][in_packets]}"
        in_bytes => "%{[fields][in_bytes]}"
        in_errors => "%{[fields][in_errors]}"
        in_drops => "%{[fields][in_drops]}"
        test_id => "%{[tags][test_id]}"
        machine_uuid => "%{[tags][machine_uuid]}"
        time => "%{[time]}"
    }
    coerce_values => {
        out_packets => "integer"
        out_bytes => "integer"
        out_errors => "integer"
        out_drops => "integer"
        in_packets => "integer"
        in_bytes => "integer"
        in_errors => "integer"
        in_drops => "integer"
    }
}
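In case the plugin really only accepts seconds, milliseconds or microseconds, one workaround I was considering is converting the field with a ruby filter before the output (a sketch, untested; it assumes [time] arrives as a top-level integer nanosecond value as in the event below):

filter {
    ruby {
        # convert nanoseconds to microseconds with integer division
        code => "event.set('time', event.get('time') / 1000)"
    }
}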
The script that sends the data formats it as JSON. Here is an example of the output that is sent to Logstash:
"headers" => {
"content_type" => "application/json",
"request_path" => "/",
"request_uri" => "/",
"http_host" => "xxx:yy",
"http_connection" => "keep-alive",
"http_accept" => "/",
"content_length" => "496",
"http_accept_encoding" => "gzip, deflate",
"http_version" => "HTTP/1.1",
"request_method" => "PUT",
"http_user_agent" => "python-requests/2.19.1"
},
"time" => 1530283609242054954,
"measurement" => "vm.network",
"tags" => {
"metric_layer" => "virtual",
"test_id" => "1a4daeb6-fe61-40f4-8429-76481faaa484",
"infrastructure_provider" => "test",
"machine_uuid" => "d1516d65-595f-48e0-8736-61774ac2f5ec",
"devicename" => "VirtualFunctionEthernet0/6/0",
"datacentre_location" => "xxx",
"hostname" => "test"
},
"host" => "xxx",
"@timestamp" => 2018-06-29T14:46:49.300Z,
"@version" => "1"
The time is sent as nanoseconds since 1970-01-01 00:00:00 (epoch), so it is a very large number. As I understood it, that should be fine, since InfluxDB's default precision for time is nanoseconds. But I got worried when I read this on the Logstash page about allow_time_override:
https://www.elastic.co/guide/en/logstash/2.3/plugins-outputs-influxdb.html#plugins-outputs-influxdb-allow_time_override
"Setting this to true allows you to explicitly set the time column yourself
Note: time must be an epoch value in either seconds, milliseconds or microseconds"
Does that mean I need to convert the nanoseconds to microseconds?
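If the conversion is needed, it could also be done in the sending script before the PUT request. A minimal sketch, using the example timestamp from the event above:

```python
# Example timestamp from the event above, in nanoseconds since the epoch
ts_ns = 1530283609242054954

# The plugin docs say time must be in seconds, milliseconds or
# microseconds, so drop the last three digits with integer division
ts_us = ts_ns // 1000

print(ts_us)  # 1530283609242054
```

The resulting value would then be sent as "time" in the JSON payload instead of the nanosecond value.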