Trying to Grok a field from String to Int

Hi,

I have a log as such:

{
"recordType":"MT",
"callingNumber":"5555555",
"callingImsi":"",
"callingMsc":"",
"billable":"",
"calledNumber":"5555555",
"calledImsi":"425240787504111",
"calledMsc":"972723800111",
"msgSubmissionTime":"1483952084757",
"clientId":"Me",
"gmt1":"-5",
"msgDeliveryTime":"1483952084859",
"originatingProtocol":"SMPP",
"gmt2":"-5",
"campignId":"",
"channel":"",
"destinationProtocol":"MAP",
"terminationCause":"UNEXPECTED_DATA_VALUE",
"transactionId":"0632095307",
"msgLength":"0",
"concatenated":"FALSE",
"concatenatedFrom":"1",
"sequence":"0",
"priority":"",
"deferred":"",
"numOfAttemp":"0"
}

I am trying to get "msgSubmissionTime" to be read as an integer. The approach I found while searching was:

grok {
  match => ["message", "\msgSubmissionTime\(%{INT:epoch}-0500\)\\/"]
}

This does not work for me... I've been stuck on this for quite a while, so any help would be great.

P.S.

My end result is to change this field to my timestamp.

So you want to convert the msgSubmissionTime field to an integer? Use the mutate filter's convert option, not the grok filter.
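And since the end goal is the timestamp: the msgSubmissionTime values look like epoch milliseconds, so Logstash's date filter can map that field straight onto @timestamp. A minimal sketch, assuming the JSON line has already been parsed into separate fields (e.g. with a json filter):

filter {
  date {
    # UNIX_MS parses epoch-millisecond values like "1483952084757"
    match  => ["msgSubmissionTime", "UNIX_MS"]
    target => "@timestamp"
  }
}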


I know there is a lot of material on how to do such a simple thing, but I have tried to use mutate unsuccessfully in the past. Is there any chance you could show me the right syntax for this situation?

The mutate filter's documentation contains an example of how to do exactly what you want to do.

Hi,

I just tried opening the mutate filter's documentation and using it.

What I used from the documentation is:

filter {
  mutate {
    convert => { "fieldname" => "integer" }
  }
}

My whole Logstash configuration is:

input {
  beats {
    port => 5044
  }
}

filter {
  mutate {
    convert => { "msgSubmissionTime" => "integer" }
  }
}

output {
  elasticsearch {
    hosts => "192.168.1.114:9200"
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
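A quick way to verify whether the conversion actually reaches Elasticsearch is to inspect the field's mapping in the resulting index (the index name here is hypothetical):

GET filebeat-2017.03.16/_mapping

If msgSubmissionTime still shows up as a string there, the problem is in the index mapping rather than in the Logstash filter.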

The Filebeat log after this is:

2017-03-16T09:33:14+02:00 INFO Harvester started for file: /var/log/smsc/check.log
2017-03-16T09:33:39+02:00 INFO Non-zero metrics in the last 30s: filebeat.harvester.running=1 registar.states.current=1 filebeat.harvester.open_files=1 publish.events=1 registrar.states.update=1 registrar.writes=1 filebeat.harvester.started=1
2017-03-16T09:34:03+02:00 ERR Failed to publish events caused by: write tcp 192.168.1.114:50191->192.168.1.100:5044: write: connection reset by peer
2017-03-16T09:34:03+02:00 INFO Error publishing events (retrying): write tcp 192.168.1.114:50191->192.168.1.100:5044: write: connection reset by peer
2017-03-16T09:34:09+02:00 INFO Non-zero metrics in the last 30s: libbeat.logstash.published_but_not_acked_events=7 libbeat.logstash.publish.write_bytes=2631 libbeat.logstash.published_and_acked_events=7 libbeat.logstash.publish.write_errors=1 libbeat.logstash.call_count.PublishEvents=2 libbeat.logstash.publish.read_bytes=24 registrar.states.update=7 registrar.writes=1 libbeat.publisher.published_events=7 publish.events=7

Also in Kibana I get: [screenshot]

I don't understand what I'm doing wrong with something that is supposed to be simple. :frowning:

I would be happy for any help.

I found the reason for this problem; as presumed, it was a newbie's mistake:

I found in the documentation that I can't change an existing field that already exists in prior indices without going through a procedure (that I don't yet know how to do):

Although you can add new types to an index, or add new fields to a type, you can’t add new analyzers or make changes to existing fields. If you were to do so, the data that had already been indexed would be incorrect and your searches would no longer work as expected.
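For anyone hitting the same wall: the usual way out is to reindex into a new index whose mapping defines the field the way you want, or to let the next daily index pick up a corrected template. A sketch of the reindex route (available from Elasticsearch 5.0), with hypothetical index names; the destination index should be created with the corrected mapping before running it:

POST _reindex
{
  "source": { "index": "filebeat-2017.03.16" },
  "dest":   { "index": "filebeat-2017.03.16-fixed" }
}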

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.