@timestamp in Kibana doesn't match browser time (local time)


(張皓翔) #1

Hi everyone, I send NetFlow data to ES through nprobe (a NetFlow collector), but I found that the @timestamp doesn't match the browser time. It stays on 2018/1/20,
for example January 20th 2018, 16:46:00.000.
I think the @timestamp should match the local time, but it differs from the real time by about a month.


How can I adjust the @timestamp?

thank you in advance!


(Christian Dahlqvist) #2

Where is the @timestamp field in the documents set? What is the date/time of that host? If it is derived based on a field, what is the content of that field? Can you show a full event that has this problem?


(張皓翔) #3



I didn't modify the @timestamp field.
This is the nprobe command:
nprobe -T "%IPV4_SRC_ADDR %L4_SRC_PORT %IPV4_DST_ADDR %L4_DST_PORT %PROTOCOL %IN_BYTES %OUT_BYTES %FIRST_SWITCHED %LAST_SWITCHED %IN_PKTS %OUT_PKTS %IP_PROTOCOL_VERSION %APPLICATION_ID %L7_PROTO_NAME %ICMP_TYPE %SRC_IP_COUNTRY %DST_IP_COUNTRY %APPL_LATENCY_MS" --redis localhost --collector-port 5556 --elastic "nProbe;nprobe;http://164.14.124.14:9200/_bulk" -b2 -V 9 -i none -n none --json-labels -t 60
The NetFlow data is then stored in Elasticsearch.


(Christian Dahlqvist) #4

What does your Logstash config look like? Also, can you please copy and paste a full event in JSON form, rather than providing a screenshot of a partial event?


(張皓翔) #5

I didn't use Logstash; I only send NetFlow data to ES through nprobe.
This is my complete doc:

{
  "_index": "nprobe",
  "_type": "nProbe",
  "_id": "AWG79tQxbibg1pFGtH5I",
  "_version": 1,
  "_score": null,
  "_source": {
    "IPV4_SRC_ADDR": "61.221.181.46",
    "L4_SRC_PORT": 80,
    "IPV4_DST_ADDR": "163.19.179.175",
    "L4_DST_PORT": 35767,
    "PROTOCOL": 6,
    "IN_BYTES": 42,
    "OUT_BYTES": 0,
    "FIRST_SWITCHED": 1516503810,
    "LAST_SWITCHED": 1516503810,
    "IN_PKTS": 1,
    "OUT_PKTS": 0,
    "IP_PROTOCOL_VERSION": 4,
    "APPLICATION_ID": "0",
    "L7_PROTO_NAME": "HTTP",
    "ICMP_TYPE": 0,
    "SRC_IP_COUNTRY": "",
    "DST_IP_COUNTRY": "",
    "APPL_LATENCY_MS": 0,
    "@version": "1",
    "@timestamp": "2018-01-22T05:22:51.000Z",
    "NPROBE_IPV4_ADDRESS": "163.19.163.230"
  },
  "fields": {
    "@timestamp": [
      1516598571000
    ]
  },
  "sort": [
    1516598571000
  ]
}
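As a quick sanity check, the epoch fields in the document above can be decoded, assuming FIRST_SWITCHED/LAST_SWITCHED are UTC epoch seconds and the "fields" @timestamp is epoch milliseconds:

from datetime import datetime, timezone

# FIRST_SWITCHED / LAST_SWITCHED are epoch seconds
print(datetime.fromtimestamp(1516503810, tz=timezone.utc))
# 2018-01-21 03:03:30+00:00

# the "fields" @timestamp is epoch milliseconds
print(datetime.fromtimestamp(1516598571000 / 1000, tz=timezone.utc))
# 2018-01-22 05:22:51+00:00 -- matches the _source @timestamp

So the flow times and the indexed @timestamp agree with each other to within a day; both point at late January.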

(Christian Dahlqvist) #6

If you are not using Logstash, it sounds like nprobe might be setting the @timestamp field incorrectly.


(張皓翔) #7

Does ES have a default _timestamp?
Or is there no way to adjust the @timestamp?

thank you !


(Christian Dahlqvist) #8

Elasticsearch does not assign any default timestamp, so I suspect it comes from the source. You can process documents in Elasticsearch using an ingest node pipeline, and you should be able to assign the timestamp at which the document is indexed into Elasticsearch, if you feel this would be more appropriate than the data coming from the probe. If data is delayed coming into Elasticsearch, this would however be misleading.
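A minimal sketch of such a pipeline (the pipeline name and the choice of @timestamp as the target field are assumptions here, not something tested against your setup):

PUT _ingest/pipeline/index_time
{
  "description": "overwrite @timestamp with the ingest time",
  "processors": [
    {
      "set": {
        "field": "@timestamp",
        "value": "{{_ingest.timestamp}}"
      }
    }
  ]
}

Note that the pipeline only runs for documents that are indexed through it; it does not rewrite data already in the index.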


(張皓翔) #9

OK, I got it!

Could you teach me how to implement the pipeline setting?
This is the command:

PUT _ingest/pipeline/timestamp
{
  "description": "describe pipeline",
  "processors": [
    {
      "set": {
        "field": "timestamp",
        "value": "{{_ingest.timestamp}}"
      }
    }
  ]
}

but the timestamp field still doesn't appear in my docs.

thank you in advance!


(Christian Dahlqvist) #10

Your client application would need to be changed to specify the pipeline when indexing the document, so this may not work in your case.


(張皓翔) #11

Could you give me an example?

thank you :grinning:


(Christian Dahlqvist) #12

Have a look in the docs I linked to earlier.
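For illustration, indexing a single document through a pipeline looks roughly like this (the index, type, id, and pipeline name here are placeholders):

PUT nprobe/nProbe/my-id?pipeline=timestamp
{
  "foo": "bar"
}

The key part is the pipeline query parameter; the client doing the indexing (here nprobe, posting to _bulk) would have to include it in the request URL.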


(張皓翔) #13

I referred to the doc you mentioned earlier.
I want to update all existing data (about 10 thousand documents), or at least have incoming data get the timestamp field.
This is my command:

PUT _ingest/pipeline/timestamp
{
  "description": "describe pipeline",
  "processors": [
    {
      "set": {
        "field": "timestamp",
        "value": "{{_ingest.timestamp}}"
      }
    }
  ]
}

PUT logstash-2018.02.22/_doc/my-id?pipeline=my_pipeline_id
{
  "foo": "bar"
}

I am not sure how to use this command here, because I want all the data to have the timestamp field.
So my idea was something like this:
PUT logstash-*/*/*?pipeline=timestamp
but that apparently doesn't work.
thank you :slight_smile:


(Christian Dahlqvist) #14

For this to work, the application indexing into Elasticsearch would need to indicate which pipeline to use, as per the example, which naturally may not be possible in your case. Sorry for not considering that in the first place.


(張皓翔) #15

Does "as per the example" mean one doc = one command?
So it's impossible?
If I have 10 thousand documents, do I have to run the command 10 thousand times?


(Christian Dahlqvist) #16

The pipeline needs to be specified when the document is indexed, which is why it probably will not work here. I think the best approach is to fix this where the timestamp is generated.


(system) #17

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.