This interval creates too many buckets to show in the selected time range

Hello guys.
I have set up Elasticsearch and Kibana to receive some syslogs from our on-prem server.
When I go to the Discover section and set the time range to refresh every second, I get this warning:
This interval creates too many buckets to show in the selected time range, so it has been scaled to 10 minutes

But the syslogs coming from the on-prem servers arrive within milliseconds of each other.
In my Logstash I can see that the timestamp is refreshing every minute at best, which means I am losing some syslogs.

Can anyone enlighten me on this issue and help me understand how I can make the refresh work on a millisecond basis, please?

Hi @Hamza_El_Aouane,

Can I ask how you are setting the time range "to refresh every second"?

Are you selecting "Second" in this dropdown?

If so, you should know that it only controls the granularity of the chart: it will attempt to present 1 bar per second. When the selected time period is too long, 1-second buckets create too many splits in the graph, and that's why that warning is shown.

If you want the list in Discover to be refreshed (i.e., to bring in more recent data) every second, you can use the Time Picker to do so.

@afharo
Thank you so much for your reply. Yes, I did set the refresh rate to 1 second as you said, but I am still getting the same issue.
Furthermore, in the quick select I chose "Last 15 minutes" and refresh every second, but my data still refreshes only every 10 minutes, so I can see the data only after 10 minutes, and the data it shows has about 1 second between each value.

Randomly, I start getting this error:

Error: Batch request failed with status 0
    at search_interceptor_EnhancedSearchInterceptor.handleSearchError (http://localhost:5601/39457/bundles/plugin/data/data.plugin.js:8:174193)
    at t.selector (http://localhost:5601/39457/bundles/plugin/dataEnhanced/dataEnhanced.plugin.js:2:29905)
    at t.error (http://localhost:5601/39457/bundles/kbn-ui-shared-deps/kbn-ui-shared-deps.js:436:94299)
    at t._error (http://localhost:5601/39457/bundles/kbn-ui-shared-deps/kbn-ui-shared-deps.js:436:134260)
    at t.error (http://localhost:5601/39457/bundles/kbn-ui-shared-deps/kbn-ui-shared-deps.js:21:58213)
    at t._error (http://localhost:5601/39457/bundles/kbn-ui-shared-deps/kbn-ui-shared-deps.js:21:58519)
    at t.error (http://localhost:5601/39457/bundles/kbn-ui-shared-deps/kbn-ui-shared-deps.js:21:58213)
    at t._error (http://localhost:5601/39457/bundles/kbn-ui-shared-deps/kbn-ui-shared-deps.js:21:58519)
    at t.error (http://localhost:5601/39457/bundles/kbn-ui-shared-deps/kbn-ui-shared-deps.js:21:58213)
    at t._error (http://localhost:5601/39457/bundles/kbn-ui-shared-deps/kbn-ui-shared-deps.js:436:134260)

This is an example of the data for the last 15 minutes with a refresh rate of every 30 seconds:
Screenshot 2021-05-14 at 13.37.35

As you can see, it doesn't refresh every 30 seconds but every 2 minutes.

I think I'm not clear on what you mean by refresh.

Here are some key concepts I think that will help here:

  • Discover is supposed to show all the entries in your index individually. It doesn't apply any aggregations to your data (i.e., grouping the items per timeframe).
  • Selecting Refresh every in the Time Picker only changes how frequently Kibana automatically fetches the data from Elasticsearch.
  • On the chart, changing the granularity from Auto to any other value only affects the bar chart on top, not the table below.

In any case, the Refresh options in Kibana only affect how often the view fetches the data from Elasticsearch; if hundreds of entries occurred in that period, they'll all be shown, because Kibana does not store any information itself. All the info is always retrieved from Elasticsearch.

If you think that you might be dropping syslog entries on their way to Elasticsearch, I think you need to look at the configuration in Logstash/Elasticsearch.

@afharo
I am so sorry if I didn't make my point clear, but I am totally new to ELK.
My expectation is to see all the syslogs coming in, in real time. Let me explain in detail what I mean.

The on-prem server sends tons of syslogs on a specific port. I configured logstash.conf to listen on that port. Now, if the on-prem server sends hundreds of syslog entries in 1 second, I would like to see all of that data displayed down to the millisecond.

But what I am seeing now is that, even if the on-prem server sends hundreds of syslog entries a second, in my Kibana I see only 3 hits every 5 minutes (as I showed in the attachment).

Please, and I do apologize for the basic question, can you point me to what I should do to see all the syslog data streamed in real time?

I hope I have made my point and my aim clear this time.

Hi @Hamza_El_Aouane,

Thank you! I understand it much more clearly now. It definitely looks to me like there might be some misconfiguration in Logstash, and it might be dropping many entries. I've changed the category of this discuss entry so my colleagues from Logstash can help.

In preparation for that, it might be helpful if you could share your Logstash configuration (feel free to redact anything you feel is PII).

@afharo I really appreciate your help. Sure thing, here is my logstash.conf

input {
  syslog {
    port => 3014
    codec => cef
    syslog_field => "syslog"
    grok_pattern => "<%{POSINT:priority}>%{SYSLOGTIMESTAMP:timestamp} CUSTOM GROK HERE"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "logstash_index"
  }
}

This is the configuration I am using.

I might need to ask you for some more info and help regarding Kibana, if possible. In my Kibana terminal, I keep getting this error after I run kibana.bat:

[error][dataEnhanced][data_enhanced][plugins] Error while deleting async_search FlZiaEVnWUhaUXZHdHJOY2FRRlpybWcfV2ZleUdrUFhTbjJTbG10UkdJeHM3dzoxMTE1OTIzMg==: resource_not_found_exception

[error][client][connection] Error: write ECONNABORTED
    at afterWriteDispatched (internal/stream_base_commons.js:156:25)
    at writeGeneric (internal/stream_base_commons.js:147:3)
    at Socket._writeGeneric (net.js:785:11)
    at Socket._write (net.js:797:8)
    at writeOrBuffer (internal/streams/writable.js:358:12)
    at Socket.Writable.write (internal/streams/writable.js:303:10)
    at ServerResponse._writeRaw (_http_outgoing.js:349:17)
    at ServerResponse._send (_http_outgoing.js:325:15)
    at ServerResponse.end (_http_outgoing.js:820:10)
    at PassThrough.onend (internal/streams/readable.js:684:10)
    at Object.onceWrapper (events.js:421:28)
    at PassThrough.emit (events.js:315:20)
    at endReadableNT (internal/streams/readable.js:1327:12)
    at processTicksAndRejections (internal/process/task_queues.js:80:21)

Any hint about this, please? So sorry for the thousands of questions.

I think you're probably after an experience closer to the "Logs" application, where you can stream the logs as they're being received.

Could you give that a try?

@Lukas thank you so much for your help and time.
I followed your instructions and pointed the Logs app to my Logstash data.
But I am having the same issue as in Discover: my logs get refreshed every 2 minutes rather than in real time, and I get a lot of those "failed to find message" errors.

@Lukas I am pretty sure the error is coming from my logstash.conf and how it is configured:

input {
  syslog {
    port => 3014
    codec => cef
    syslog_field => "syslog"
    grok_pattern => "<%{POSINT:priority}>%{SYSLOGTIMESTAMP:timestamp} CUSTOM GROK HERE"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "logstash_index"
  }
}

Any advice on how I can configure this to receive any and all data coming through port 3014?

@Hamza_El_Aouane I suspect what is happening here is that the text CUSTOM GROK HERE in the grok_pattern is stopping the grok match, which means that the priority and timestamp fields won't be populated correctly. The CUSTOM GROK HERE should be replaced with whatever matches the format of your log entries.

Do you have any error logs in your Logstash instance? If you search for documents in Elasticsearch, do you see documents with the tag _grokparsefailure_sysloginput? This tag gets added when grok matching fails while a grok_pattern is specified in the syslog input.
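If it helps, one quick way to check for those tags as events flow through is to temporarily add a stdout output next to your elasticsearch one; here is a minimal sketch that just reuses the output section from your config:

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "logstash_index"
  }
  # temporary, for debugging: print each event (including its tags) to the Logstash console
  stdout { codec => rubydebug }
}

Once you've confirmed what the events look like, you can remove the stdout block again.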

What does your log line look like as it comes from syslog? For example, a log line that looked like:

<134>2021-05-14T18:20:00.000Z atypical CEF:0|Company Name|Application Name|Application Version Number|632|Syslog Configuration Updated|3|src=192.168.0.1 suser=user@example.com target=TARGET msg=No Description KeyValueOne=kv1 KeyValueTwo=12345 

would populate these fields correctly with a grok pattern that looked like:

grok_pattern => "<%{POSINT:priority}>%{TIMESTAMP_ISO8601:timestamp} atypical"

but it would not populate the priority and timestamp fields, and would tag the event with _grokparsefailure_sysloginput, when using the grok_pattern that you specified in your config.
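To spell out how that example pattern lines up against the example line (this is just an annotated restatement of the pattern above, not new configuration):

# <134>                     matched by <%{POSINT:priority}>       (priority captured as 134)
# 2021-05-14T18:20:00.000Z  matched by %{TIMESTAMP_ISO8601:timestamp}
# atypical                  literal text that must appear right after the timestamp
grok_pattern => "<%{POSINT:priority}>%{TIMESTAMP_ISO8601:timestamp} atypical"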

Hi @RobBavey, thank you very much for your help. I am really desperate here and don't know what to try anymore.

I looked into every folder and file in my Elasticsearch installation, but nothing: no file called _grokparsefailure_sysloginput.

I changed the logstash.conf as you suggested:

input {
  syslog {
    port => 3014
    codec => cef
    syslog_field => "syslog"
    grok_pattern => "<%{POSINT:priority}>%{TIMESTAMP_ISO8601:timestamp} atypical"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "logstash_index"
  }
}

The syslog data still comes in every 2 or 10 minutes, even when I am streaming the logs live.

Sorry, what exactly do you mean by this point?

<134>2021-05-14T18:20:00.000Z atypical CEF:0|Company Name|Application Name|Application Version Number|632|Syslog Configuration Updated|3|src=192.168.0.1 suser=user@example.com target=TARGET msg=No Description KeyValueOne=kv1 KeyValueTwo=12345

If you mean the fields that I am supposed to see in my syslogs, these are the ones I am seeing:

Screenshot 2021-05-14 at 21.01.52

My syslog lines look exactly the same as in the screenshot I sent previously; nothing is different.

Attached you can see the terminal output from running my logstash.conf.

Meanwhile, in the Kibana terminal I am getting this error, but it is not affecting my syslogs.

@Hamza_El_Aouane The atypical was an example; you shouldn't need that in your grok_pattern. "<%{POSINT:priority}>%{SYSLOGTIMESTAMP:timestamp}" is likely sufficient.

@RobBavey so I removed the atypical and restarted Logstash again. Everything seems to be fine, except for the fact that the logs I am receiving are still not in real time. They keep coming every 2 minutes, even when I am in Observability => Logs => Elasticsearch logs.

@RobBavey @Lukas I did some more digging, and as Kibana kept crashing on me due to the error shown in the previous screenshot, I narrowed the problem down to Logstash.

@RobBavey in one of your previous requests you asked me about the tags. Now I can see them, and yes, there is a grok parse failure on all the entries.

As you can see in the screenshot, I have that tag on all my entries.

I followed your advice about the logstash.conf, and this is how it looks now:

input {
  syslog {
    port => 3014
    codec => cef
    syslog_field => "syslog"
    grok_pattern => "<%{POSINT:priority}>%{TIMESTAMP_ISO8601:timestamp}"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "logstash_index"
  }
}

I know I am being annoying with this issue, but I really don't know how to deal with it anymore.

@Hamza_El_Aouane Are you still seeing the failure tags on new events, i.e. is the grok parse failure still happening?

If it is still happening, we'll need to figure out the appropriate grok pattern that will enable your logs to be ingested. Can you describe your setup (how is data being pushed into Logstash?) and post a sample log line (removing any sensitive information)?

Hi @RobBavey, thank you so much for your reply.
Yes, the problem still persists; I am still getting the grok parse failure.

On-prem we have ArcSight, and some proxy logs are pushed to my virtual machine (Windows Server 2016).

My logstash.conf looks as follows:

input {
  syslog {
    port => 3014
    codec => cef
    syslog_field => "syslog"
    grok_pattern => "<%{POSINT:priority}>%{TIMESTAMP_ISO8601:timestamp}"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "logstash_index"
  }
}

In my terminal, this is the data I am receiving (I had to cover sensitive data).

Is there any more info that you need? Please let me know, and thank you so much for your help once again.

Thanks for the info. How are you pushing the logs from ArcSight to your Logstash node? I'm not 100% familiar with ArcSight, but looking at your message I don't see any syslog content, so the tcp input might be more appropriate. What happens if you do the following:

Replace

  syslog {
    port => 3014
    codec => cef
    syslog_field => "syslog"
    grok_pattern => "<%{POSINT:priority}>%{TIMESTAMP_ISO8601:timestamp}"
  }

with

  tcp {
    codec => cef { delimiter => "\r\n" }
    port => 3014
  }
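
For reference, with that change the whole pipeline would look roughly like this; it's only a sketch that keeps the elasticsearch output exactly as in your existing config:

input {
  tcp {
    # decode CEF events, delimited by CRLF, arriving on port 3014
    codec => cef { delimiter => "\r\n" }
    port => 3014
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "logstash_index"
  }
}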