Synchronization problem between Logstash and Kibana

Hello everyone,

I want to analyze several different kinds of logs, which come from several Beats agents.
Currently I have two grok patterns to process them.
Here they are:
    match => { "message" => "(?<REQ_TIME>%{YEAR}/%{MONTHNUM}/%{MONTHDAY} %{TIME}) %{WORD:VCS} %{LOGLEVEL:logLevel} %{GREEDYDATA:logMessage}"}
    match => { "message" => "%{DATE_EU:mytimestamp} %{TIME:temps} %{WORD:serveur} %{GREEDYDATA:logMessage}"}

The groks are correct, I tested them in the Grok Debugger :slight_smile:

My problem is the following:
At the start of my project I had just one pattern, and in Kibana I could filter the data and build a chart based on the "logLevel" field. That let me count the "error" logs, for example.

Now I would like to do the same with the "serveur" field, but it is not created in Kibana; it's as if it didn't exist, even though it is in my grok... :pensive:

Do you think you can help me?
I'm available if you need more information :smiley:

Hugo

Hello Hugo !!!

Could you please provide the logstash pipeline config?
I ask because when you use two different grok patterns to match against, you have to use an array, for example:

    grok {
            match => {"message" =>    [
                            "%{TIMESTAMP_ISO8601:timestampIn} %{TIMESTAMP_ISO8601:timestampOut}%{SPACE}%{IP:IPV4}",
                            "%{TIMESTAMP_ISO8601:timestampIn} %{TIMESTAMP_ISO8601:timestampOut}%{SPACE}(?<sessionsID>[a-zA-Z0-9._-]+)%{SPACE}%{IP:IPV4}"
                        ]
            }
        }
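
Applied to the two patterns from your first post, that would look roughly like this (just a sketch, not your actual config):

    grok {
      match => { "message" => [
          "(?<REQ_TIME>%{YEAR}/%{MONTHNUM}/%{MONTHDAY} %{TIME}) %{WORD:VCS} %{LOGLEVEL:logLevel} %{GREEDYDATA:logMessage}",
          "%{DATE_EU:mytimestamp} %{TIME:temps} %{WORD:serveur} %{GREEDYDATA:logMessage}"
        ]
      }
    }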

You can also check in the Kibana Discover section, filtering the docs, whether any documents carry the "_grokparsefailure" tag, which is added when the grok does not match.
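
For example, in the Discover search bar, a query like this shows only the documents that failed to parse:

    tags : "_grokparsefailure"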

With that we can start helping you.

Kind regards
Ale

Thanks for your reply,
I changed my Logstash filter as you suggested, and in Kibana I added a filter: tags is "_grokparsefailure". I got a lot of results.

So I think there is a problem with my grok, but it compiles in the grok debugger...

This is my pipelines.yml :
    - pipeline.id: main
      path.config: "/etc/logstash/conf.d/*.conf"

and this is my logstash.conf :

These groks are meant to match two kinds of messages.
Could you please share one example document that has the "_grokparsefailure" tag?
That tag means the match you use in the grok filter is not working, i.e. you have another kind of message that does not match...

But my problem is... why does Logstash parse this file?

The logs in gc.log look like this:
    [2021-02-12T12:27:32.922+0000][7923][safepoint ] Safepoint "Cleanup", Time since last: 2000532842 ns, Reaching safepoint: 180616 ns, At safepoint: 9000 ns, Total: 189616 ns
and the file that I would like to parse is:

2017/07/27 18:02:37 VCS INFO V-16-6-15015 (rsxlXXXX) hatrigger:/opt/VRTSvcs/bin/triggers/resfault is not a trigger scripts directory or can not be executed
2017/07/27 18:02:38 VCS INFO V-16-1-50159 User fired command: hares -clear XXXXXXXX_edatdbe  from node rsxlXXXXX
02/11/2021 03:18:54 ANSXXXXX TDPO Linux86-64 ANXXXXX TDP for Oracle: (XXXXX): =>() AXXXXXX The object /rXXXXXXXX_centric/ /k7vmsjds_1_1.bck was not found on the IXX Spectrum Protect Server
02/11/2021 03:21:54 XXXXXXXXI DIAG: sessSendVerb: Error sending Verb, rc: -50
02/11/2021 03:21:54 AXXXXXXE Session rejected: TCP/IP connection failure.
02/11/2021 03:21:54 XXXXXXXX Session rejected: TCP/IP connection failure.

I parse this file with the logstash.conf I showed you earlier.

I tried the grok patterns and the only one that works is the second, so try leaving only that one for now...
I am a bit confused by the gc.log (garbage collector) of Elasticsearch. Could you please show the whole message? When a _grokparsefailure happens, the document in Kibana has a full "message" field with the raw message.

Here is the Kibana message:

and I'm confused, because in the Grok Debugger my first pattern works on this:
    2017/07/27 18:02:37 VCS INFO V-16-6-15015 (rsxlXXXX) hatrigger:/opt/VRTSvcs/bin/triggers/resfault is not a trigger scripts directory or can not be executed
and my second pattern works on this kind of log:

> 02/11/2021 03:21:54 AXXXXXXE Session rejected: TCP/IP connection failure

There is the problem: you are grokking some of the logs but also trying to grok the GC logs, and those (as you can see in the message field) do not match your patterns.
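
If the GC logs should not go through this pipeline at all, one option is to exclude them in the Filebeat input. A minimal sketch, assuming a hypothetical log path (adapt it to yours):

    filebeat.inputs:
      - type: log
        paths:
          - /var/log/myapp/*.log      # hypothetical path, adapt to your logs
        exclude_files: ['gc\.log$']   # skip Elasticsearch's garbage-collector log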

I just understood one thing.
I changed the index in my logstash.conf.
I created a new index pattern in Kibana, and Kibana offered me a choice of time field.

When I chose "REQ_TIME" I got the VCS filters, the serveur filters, and the other information from the grok.

But now I can't find any logs any more, lol! I have the filters but not the logs.
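
(For "REQ_TIME" to be usable as a time field it also has to be parsed as a date. A minimal sketch of a Logstash date filter for the YYYY/MM/DD format shown in the samples above, placed after the grok; the target field is an assumption:)

    date {
      match  => [ "REQ_TIME", "yyyy/MM/dd HH:mm:ss" ]
      target => "@timestamp"   # assumed target; the Kibana index pattern would then use @timestamp
    }

Note that the sample VCS logs are from 2017, so the Kibana time picker has to cover that range for those documents to show up.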

Well, I think that first of all you need to segment the data by its origin.

First, you can take a look at this blog post, which teaches you how to split the processing into separate Logstash pipelines so you can adapt the filtering to each source.

Why? Because, from what I have seen so far, you have about 3 different origins, is that right? (See the pipelines.yml sketch after the list.)

  • Logs that start with YYYY/MM/DD (VCS)
  • Logs that start with MM/DD/YYYY
  • GC logs (garbage collector)
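
A minimal pipelines.yml sketch for that separation (the pipeline ids and file names are assumptions, adapt them):

    - pipeline.id: vcs
      path.config: "/etc/logstash/conf.d/vcs.conf"
    - pipeline.id: tdp
      path.config: "/etc/logstash/conf.d/tdp.conf"
    - pipeline.id: gc
      path.config: "/etc/logstash/conf.d/gc.conf"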

More:
If you are using just a single Filebeat to collect the data: unfortunately, as far as I know, running multiple outputs in Filebeat is not supported, so one easy and good option is to create separate services that run Filebeat with different config files, for example (Linux only, please adapt it):

Copy the original filebeat service into a new one:

root@laboratory:~# cd /lib/systemd/system
root@laboratory:/lib/systemd/system# cp -aR filebeat.service filebeat-vcs.service
root@laboratory:/lib/systemd/system# ls -lhtra filebeat*
-rw-r--r-- 1 root root 616 Jun 14 20:19 filebeat.service
-rw-r--r-- 1 root root 637 Aug 10 09:23 filebeat-vcs.service

Adapt the new file so it runs Filebeat with a new config file:

[Unit]
Description=Filebeat sends log files to Logstash or directly to Elasticsearch.
Documentation=https://www.elastic.co/products/beats/filebeat
Wants=network-online.target
After=network-online.target

[Service]

Environment="BEAT_LOG_OPTS="
Environment="BEAT_CONFIG_OPTS=-c /etc/filebeat/filebeat-vcs.yml"
Environment="BEAT_PATH_OPTS=-path.home /usr/share/filebeat -path.config /etc/filebeat -path.data /var/lib/filebeat/vcs -path.logs /var/log/filebeat/vcs"
ExecStart=/usr/share/filebeat/bin/filebeat -environment systemd $BEAT_LOG_OPTS $BEAT_CONFIG_OPTS $BEAT_PATH_OPTS
Restart=always

[Install]
WantedBy=multi-user.target

Now, in /etc/filebeat/filebeat-vcs.yml, you can add another input file and point the output to the new Logstash pipeline.
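
A minimal filebeat-vcs.yml sketch, assuming the VCS logs live under a path like /var/log/vcs and that the dedicated Logstash pipeline listens on its own beats port (both are assumptions to adapt):

    filebeat.inputs:
      - type: log
        paths:
          - /var/log/vcs/*.log          # hypothetical path to the VCS logs

    output.logstash:
      hosts: ["localhost:5045"]         # hypothetical port of the dedicated pipeline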

Once you have the source data separated, you can better adapt the filtering and send the output to Elasticsearch in different indices, which makes troubleshooting and data management easier.
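
For example, on the Logstash side each pipeline could write to its own index (hosts and index name are assumptions):

    output {
      elasticsearch {
        hosts => ["localhost:9200"]
        index => "vcs-logs-%{+YYYY.MM.dd}"   # hypothetical per-source index name
      }
    }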
