Difficulty displaying all hits in kibana

Hello,

I send logs to Logstash continuously, but they are not displayed continuously in Kibana.
It shows me only 8 hits over the last 7 days, even though logs have been sent continuously for those 7 days.
Can you help me solve this problem?
You will find the screenshot attached.

What do you get when you run this in Dev Tools? GET INDEX-NAME/_count

If it returns 8, then that is all you are receiving, and you need to check your logs and ingestion method.

If it returns what you expect then something is being filtered on your display.
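If the count is higher than what Discover shows, a quick way to narrow it down is to count only the documents inside the time window Kibana is displaying. A sketch for Dev Tools, assuming the documents carry a `@timestamp` date field:

```
GET INDEX-NAME/_count
{
  "query": {
    "range": {
      "@timestamp": { "gte": "now-7d" }
    }
  }
}
```

If this number is much lower than the unfiltered count, the documents exist but their timestamps fall outside the displayed range.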

When I run GET INDEX-NAME/_count in Dev Tools, I get 132. This corresponds to all the logs I have injected into this index, including the 8 hits.
It therefore does not return what I expect.
The logs have been sent continuously for 7 days, so I should see the total of all hits for those 7 days.
When I filter on the last 15 minutes, for example, I only have one hit, which is not normal, as you can see in the attached screenshot B.
And the logs arrive continuously in Logstash, as you can see in screenshot C.

If you have already verified that the logs are updating frequently, the next step is to look at how you are ingesting them: Logstash, Beats, or something else? Then I would check the configuration to ensure nothing is being filtered out there.

The logs arrive in Logstash in JSON format.
I am using Logstash, and the configuration is as follows:

input {
  tcp {
    port => "5140"
    codec => json
    type => "syslog"
  }
}

filter {
  grok {
    match => { "message" => "%{SYSLOG5424PRI:syslog_index}-\s*%{SYSLOGHOST:syslog_hostname} %{GREEDYDATA:syslog_message}" }
  }
  json {
    source => "syslog_message"
  }
}

output {
  stdout { codec => rubydebug }

  elasticsearch {
    hosts => ["https://xxxxx:9200", "https://xxxxxxx:9200"]
    user => "xxxxxxx"
    password => "xxxxxxxxxx"
    cacert => "/etc/logstash/certs/ca.crt"
    index => "jstest-%{+YYYY.MM.dd}"
    action => "index"
  }
}
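Worth noting: when grok or the json filter fails to parse an event, Logstash does not drop it; it tags the event with `_grokparsefailure` or `_jsonparsefailure` (the default tag names) and indexes it anyway. A sketch of an output conditional that routes tagged failures to stdout so they become visible:

```
output {
  if "_grokparsefailure" in [tags] or "_jsonparsefailure" in [tags] {
    stdout { codec => rubydebug }
  }
}
```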

Below are the logs I receive in Logstash:

<01>-.hostname.{"name":"DefaultProfile","version":"1.0","isoTimeFormat":"yyyy-MM-dd'T'HH:mm:ss.SSSZ","type":"Event","category":"deny","protocolID":"6","sev":"4","src":"10.66.7.32","dst":"192.168.2.111","srcPort":"63298","dstPort":"445","relevance":"5","credibility":"5","startTimeEpoch":"1609264894432","startTimeISO":"2020-12-29T19:01:34.432+01:00","storageTimeEpoch":"1609264894432","storageTimeISO":"2020-12-29T19:01:34.432+01:00","deploymentID":"5c15c102-a647-11ea-8226-00505601062b","devTimeEpoch":"1609264893000","devTimeISO":"2020-12-29T19:01:33.000+01:00","srcPreNATPort":"0","dstPreNATPort":"0","srcPostNATPort":"0","dstPostNATPort":"0","hasIdentity":"false","payload":"<189>timestamp=1609264893.devname="DCL0001FW".devid="FG100FTK20004077".vd="VPN-PARTNER".date=2020-12-29.time=19:01:33.logid="000000001".type="traffic".subtype="forward".level="notice".eventtime=1609264893808550963.tz="+0100".srcip=10.66.7.32.srcport=63298.srcintf="To-GCP".srcintfrole="undefined".dstip=192.168.2.111.dstport=445.dstintf="To-DATALOG_PPD".dstintfrole="undefined".srccountry="Reserved".dstcountry="Reserved".sessionid=2062428193.proto=6.action="deny".policyid=0.policytype="policy".service="SMB".trandisp="noop".duration=0.sentbyte=0.rcvdbyte=0.sentpkt=0.vpn="To-GCP".vpntype="ipsec-static".appcat="unscanned".crscore=30.craction=131072.crlevel="high"\n","eventCnt":"1","hasOffense":"false","domainID":"4","domainName":"Decathlon","eventName":"Firewall.Deny","lowLevelCategory":"Firewall.Deny","highLevelCategory":"Access","eventDescription":"Firewall.Deny","protocolName":"tcp","logSource":"FortiGate.@.192.168.0.3","srcNetName":"Net-10-172-192.Net_10_0_0_0","dstNetName":"Net-10-172-192.Net_192_168_0_0","logSourceType":"Fortinet.FortiGate.Security.Gateway","logSourceGroup":"Other","logSourceIdentifier":"192.168.0.3"}
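A quick way to see why this line cannot go through a plain json codec on its own: the JSON body is preceded by a syslog-style prefix, so it only parses after the prefix is stripped, which is what the grok + json filter pair is meant to achieve. A minimal sketch with an abbreviated body (the real line is much longer):

```python
import json

# The line QRadar sends is a syslog-style prefix followed by a JSON body
# (abbreviated here; the real body is much longer).
line = '<01>-.hostname.{"name": "DefaultProfile", "category": "deny"}'

# A plain json parse of the whole line fails because of the prefix.
try:
    json.loads(line)
    parsed_whole_line = True
except json.JSONDecodeError:
    parsed_whole_line = False
print(parsed_whole_line)  # False

# Stripping everything before the first "{" -- which is what the grok +
# json filter pair is meant to achieve -- makes the body parseable.
body = line[line.index("{"):]
event = json.loads(body)
print(event["category"])  # deny
```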

Looks like nothing there would filter out records.

The next step is to go back further: what is sending data to port 5140?

FortiGate logs are sent to the QRadar SIEM. QRadar then parses the logs into JSON format and sends them to Logstash through port 5140.
Can you help me improve my parser to filter records?

What I would do next is run the configuration below and watch the data come in. Is it coming in as fast as you expect it to?

If not then it needs to be addressed on the QRadar side.

input {
 tcp {
  port => "5140"
  codec => json
 }
}
output {
 stdout { codec => rubydebug }
}
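If you want to exercise the TCP path itself without involving QRadar, you can simulate both ends. A minimal sketch (plain Python stand-ins, not Logstash or QRadar themselves): a listener plays the role of the tcp input, and a client sends one newline-delimited JSON event the way QRadar would:

```python
import json
import socket
import threading

# Stand-in for Logstash's tcp input: accept one connection and parse one
# newline-delimited JSON event. An ephemeral port (0) keeps the sketch
# self-contained; the thread's real pipeline listens on 5140.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
received = []

def listen():
    conn, _ = server.accept()
    with conn, conn.makefile() as reader:
        received.append(json.loads(reader.readline()))

t = threading.Thread(target=listen)
t.start()

# Stand-in for QRadar: send one JSON event terminated by a newline.
client = socket.socket()
client.connect(("127.0.0.1", port))
client.sendall(b'{"eventName": "Firewall.Deny", "sev": "4"}\n')
client.close()

t.join()
server.close()
print(received[0]["eventName"])  # Firewall.Deny
```

Sending a hand-crafted event like this to the real pipeline (port 5140) and watching for it in the rubydebug output separates "Logstash isn't receiving" from "QRadar isn't sending".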

I removed the filter, using the configuration below. Logs arrive in Logstash but are not parsed:

input {
  tcp {
    port => "5140"
    codec => json
  }
}

output {
  stdout { codec => rubydebug }
  elasticsearch {
    hosts => ["https://xxxxxx:9200", "https://xxxxxx:9200"]
    index => "ectest-%{+YYYY.MM.dd}"
    user => "xxxxx"
    cacert => '/etc/logstash/certs/ca.crt'
    password => "xxxxxxxx"
    document_id => "%{[@metadata][fingerprint]}"
  }
}
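One detail stands out in this config: `document_id => "%{[@metadata][fingerprint]}"` references a field that nothing in this pipeline sets. When Logstash cannot resolve a `%{...}` reference, it uses the literal string as the value, so every event would get the same `_id` and overwrite the previous document, which fits seeing only one or two hits. Either remove the `document_id` line or populate the field with a `fingerprint` filter, e.g. (a sketch; the source field and key are illustrative):

```
filter {
  fingerprint {
    source => "message"
    target => "[@metadata][fingerprint]"
    method => "SHA256"
    key    => "any-static-key"   # illustrative; with a key set, the hash is an HMAC
  }
}
```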

Here is the output in kibana:

This would lead me to believe that the issue is prior to logstash. I don't have any experience in QRadar but I would check there.

I removed the filter, using the configuration below. Logs arrive in Logstash, but as you can see I only got 2 hits:

input {
  tcp {
    port => "5140"
    codec => json
  }
}

output {
  stdout { codec => rubydebug }
  elasticsearch {
    hosts => ["https://xxxxxx:9200", "https://xxxxxx:9200"]
    index => "ectest-%{+YYYY.MM.dd}"
    user => "xxxxx"
    cacert => '/etc/logstash/certs/ca.crt'
    password => "xxxxxxxx"
    document_id => "%{[@metadata][fingerprint]}"
  }
}

Here is the output in kibana:

The logs still arrive continuously, but as you can see I only got 2 hits.
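If events are being indexed repeatedly under the same `_id`, the replaced versions show up as deleted documents in the index stats. A quick check from Dev Tools:

```
GET ectest-*/_stats/docs
```

If `docs.deleted` keeps growing while `docs.count` stays at 2, events are arriving but overwriting each other rather than being lost.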

From what I can see, Logstash is processing all the logs it receives. Are you able to show that more logs are being sent to Logstash than are being processed? If so, we need to identify which ones aren't in the index to understand what is happening to them.

Only the FortiGate logs are sent, and they are sent continuously.

Below are the logs I receive in Logstash (the same sample shown earlier).

Based on these logs as received in Logstash, I configured the filter below to parse them:

filter {
  grok {
    match => { "message" => "%{SYSLOG5424PRI:syslog_index}-\s*%{SYSLOGHOST:syslog_hostname} %{GREEDYDATA:syslog_message}" }
  }
  json {
    source => "syslog_message"
  }
}

But the problem is that I only get one hit, while I should have several hits in Kibana, since the FortiGate logs are sent continuously.