Timestamp from logfile lacks millisec

I have logbeat installed on my windows nodes to forward logfiles to logstash. In logstash, I grab the date/time from the logline and insert it into @timestamp. This is all working, but since the date/time in my loglines has no milliseconds, I get multiple events with identical @timestamp values. So when I display them in kibana, the order gets messed up. How can I enter some kind of incremental number in the milliseconds part of the date to keep them in correct order?
Here is my logstash filter:
filter {
  grok {
    match => [
      "message",
      "[\[]%{WORD:logLevel}[\]] %{TIMESTAMP_ISO8601:logTimestamp} %{NUMBER:logThread} (?m)%{GREEDYDATA:logMessage}"
    ]
  }
  mutate {
    replace => { "message" => "%{logMessage}" }
  }
  date {
    match => [ "logTimestamp", "yyyy-MM-dd HH:mm:ss" ]
    timezone => "Europe/Brussels"
  }
  mutate {
    remove_field => [ "logMessage", "logTimestamp" ]
  }
}

And a piece of the logfile:
[T] 2016-05-26 14:23:13 129 Request end from clienthost=xxxxxxxxx: Login 00:00:00.0091245
[T] 2016-05-26 14:23:13 55 Request begin from clienthost=xxxxxxxxx: GetUserInfo
[D] 2016-05-26 14:23:13 55 Get information in session xxxxxxxxx for user xxxxxxxxx
[T] 2016-05-26 14:23:13 55 Request end from clienthost=xxxxxxxxx: GetUserInfo 00:00:00.0017817
[T] 2016-05-26 14:23:13 129 Request begin from clienthost=xxxxxxxxx: Logout
[D] 2016-05-26 14:23:13 129 Close session xxxxxxxxx of user xxxxxxxxx
[T] 2016-05-26 14:23:13 129 Using db connection string: Data Source=xxxxxxxxx
[T] 2016-05-26 14:23:13 129 Execute statement: xxxxxxxxx
[T] 2016-05-26 14:23:13 129 Rows changed: xxxxxxxxx
[I] 2016-05-26 14:23:13 129 Session closed for user xxxxxxxxx: xxxxxxxxx
[S] 2016-05-26 14:23:13 129 Write protocol entry xxxxxxxxx
[T] 2016-05-26 14:23:13 129 Request end from clienthost=xxxxxxxxx: Logout 00:00:00.0045593

As you see, I get a lot of loglines in the same second and they get out-of-order in kibana.

Thanks for your tips.

I have logbeat installed on my windows nodes

Do you mean Filebeat?

How can I enter some kind of incremental number in the milliseconds part of the date to keep them in correct order?

If you indeed meant Filebeat, I believe it includes a @metadata field with the current offset in the file. You could also use the fact that Logstash populates the @timestamp field with the current time when a message is received (millisecond resolution), so if you keep that field around you can use it as the sort key.
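
For instance, something along the lines of the sketch below might work. It assumes Filebeat ships the byte offset of each line in a field named offset (the exact field name can differ between Beats versions) and copies it into an indexed field (log_offset is just a name picked for this example) that you can add as a secondary sort in Kibana after @timestamp:

filter {
  # Sketch: keep the file offset around as an indexed field so it can be
  # used as a tiebreaker when several events share the same second.
  # Assumes the Filebeat event carries the byte offset in a field named
  # "offset"; adjust the name to whatever your Beats version sends.
  if [offset] {
    mutate {
      add_field => { "log_offset" => "%{offset}" }
    }
    # Separate mutate block so the field exists before it is converted.
    mutate {
      convert => { "log_offset" => "integer" }
    }
  }
}

With that field indexed, sorting on @timestamp first and log_offset second should keep lines from the same file in write order.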

I did indeed mean filebeat.

Thanks for the tip about the @metadata field. I'll see what I can do with it.

I know logstash sets the @timestamp, but then loglines from different logfiles don't get interleaved but grouped together, making it more difficult to see which lines from the different logs belong together.
If I just replace @timestamp with the timestamp from the logfile, they don't show up in the correct order.
Maybe with the @metadata I can resolve the problem.

Thanks!

I know logstash sets the @timestamp, but then loglines from different logfiles don't get interleaved but grouped together, making it more difficult to see which lines from the different logs belong together.

The problem of interleaving logs has nothing to do with what timestamp you use. If you don't want to look at more than one log then use a query that selects exactly which log you want to look at.

I'm sorry, I don't think I made myself clear.

I want the loglines to be interleaved. I have multiple services making calls to each other and writing to their own logfiles. I want to combine all these logfiles to be able to follow a process going from one service to the other.
If I use the default @timestamp, I first get a batch of loglines from one service, then from another one. If I overwrite it with the timestamps from my logfiles, they get interleaved (like I want), but multiple lines from the same second get mixed up.
With the @metadata I can get logs from one service in the correct order, but the order across the multiple services will still not be accurate.
It would be best to get more accurate timestamps written in the logs.
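
Maybe something like the sketch below could work as a workaround in the meantime: a ruby filter that folds an incrementing counter into the sub-second part of @timestamp. This is only a sketch; it assumes a single pipeline worker (otherwise the counter isn't shared and the order is no longer guaranteed), and it uses the event['field'] syntax of Logstash 2.x (newer versions use event.get / event.set instead).

filter {
  ruby {
    # Sketch: give events that share the same second an artificial,
    # increasing millisecond value so they keep their arrival order.
    # Assumes one pipeline worker (-w 1); the counter also wraps at 1000,
    # so more than 1000 events per second would still collide.
    init => "@seq = 0"
    code => "
      @seq = (@seq + 1) % 1000
      event['@timestamp'] = LogStash::Timestamp.new(event['@timestamp'].time + (@seq / 1000.0))
    "
  }
}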

Thanks for your time, Magnus.