Hello,
I use Filebeat to collect data from Wazuh (a HIDS) and ship the alerts to Logstash.
Logstash then forwards the data to Elasticsearch, and everything usually works fine.
However, sometimes after being away for a few days, I check Kibana and find parsing errors, i.e. events tagged "_jsonparsefailure".
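For context, my Logstash pipeline decodes the incoming Beats traffic as JSON; it looks roughly like the sketch below (reconstructed from memory, so the port, tag, and index pattern may not be exact). As far as I understand, the json codec is what tags an event with _jsonparsefailure when the received line is not valid JSON.

input {
  beats {
    port => 5044
    # decode each received line as JSON; on failure the event is
    # kept as plain text and tagged with _jsonparsefailure
    codec => "json"
  }
}

filter {
  mutate {
    add_tag => ["wazuh"]
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "wazuh-alerts-%{+YYYY.MM.dd}"
  }
}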
Here is one of the offending documents as shown in Kibana:
{
  "_index": "wazuh-alerts-",
  "_id": "YZ8ufttsrh",
  "_version": 1,
  "_score": null,
  "_source": {
    "@version": "1",
    "message": "Jan 3 05:51:14 wazuh-manager filebeat[769437]: 2023-01-03T05:51:14.358Z#011INFO#011[input.harvester]#011log/harvester.go:340#011File is inactive. Closing because close_inactive of 5m0s reached.#011{\"input_id\": \"c5e42179-0a6f-4988-b4f3-c0edcfb1b6dc\", \"source\": \"/var/log/syslog\", \"state_id\": \"native::525914-1804\", \"finished\": false, \"os_id\": \"525914-1804\", \"old_source\": \"/var/log/syslog\", \"old_finished\": true, \"old_os_id\": \"525914-1804\", \"harvester_id\": \"25d29d53-8432-431a-b4af-12579e7f8549\"}Jan 3 05:51:15 wazuh-manager filebeat[769437]: 2023-01-03T05:51:15.369Z#011INFO#011[input.harvester]#011log/harvester.go:309#011Harvester started for paths: [/var/log/messages* /var/log/syslog*]#011{\"input_id\": \"c5e42179-0a6f-4988-b4f3-c0dc\", \"source\": \"/var/log/syslog\", \"state_id\": \"native::525914-1804\", \"finished\": false, \"os_id\": \"525914-1804\", \"old_source\": \"/var/log/syslog\", \"old_finished\": true, \"old_os_id\": \"525914-1804\", \"harvester_id\": \"bbdbb8a9-f915-4587-aae5-ef2b76fd49f0\"}Jan 3 05:51:17 wazuh-manager filebeat[769437]: 2023-01-03T05:51:17.372Z#011ERROR#011[logstash]#011logstash/async.go:280#011Failed to publish events caused by: write tcp 192.168.1.18:35676->192.168.1.23:5044: write: connection reset by peerJan 3 05:51:17 wazuh-manager filebeat[769437]: 2023-01-03T05:51:17.372Z#011INFO#011[publisher]#011pipeline/retry.go:219#011retryer: send unwait signal to consumerJan 3 05:51:17 wazuh-manager filebeat[769437]: 2023-01-03T05:51:17.372Z#011INFO#011[publisher]#011pipeline/retry.go:223#011 doneJan 3 05:51:18 wazuh-manager filebeat[769437]: 2023-01-03T05:51:18.946Z#011ERROR#011[publisher_pipeline_output]#011pipeline/output.go:180#011failed to publish events: write tcp 192.168.1.18:35676->192.168.1.16:5044: write: connection reset by peerJan 3 05:51:18 wazuh-manager filebeat[769437]: 2023-01-03T05:51:18.946Z#011INFO#011[publisher_pipeline_output]#011pipeline/output.go:143#011Connecting to backoff(async(tcp://192.168.1.16:5044))Jan 3 05:51:18 wazuh-manager filebeat[769437]: 2023-01-03T05:51:18.946Z#011INFO#011[publisher]#011pipeline/retry.go:219#011retryer: send unwait signal to consumerJan 3 05:51:18 wazuh-manager filebeat[769437]: 2023-01-03T05:51:18.946Z#011INFO#011[publisher]#011pipeline/retry.go:223#011 doneJan 3 05:51:18 wazuh-manager filebeat[769437]: 2023-01-03T05:51:18.946Z#011INFO#011[publisher_pipeline_output]#011pipeline/output.go:151#011Connection to backoff(async(tcp://192.168.1.16:5044)) establishedJan 3 05:52:01 wazuh-manager CRON[963977]: pam_unix(cron:session): session opened for user root by (uid=0)Jan 3 05:52:01 wazuh-manager CRON[963978]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)Jan 3 05:52:01 wazuh-manager CRON[963977]: pam_unix(cron:session): session closed for user rootJan 3 05:52:03 wazuh-manager filebeat[769437]: 2023-01-03T05:52:03.046Z#011INFO#011[input.harvester]#011log/harvester.go:309#011Harvester started for paths: [/var/log/auth.log* /var/log/secure*]#011{\"input_id\": \"f9c11fja-825c-5606-a9aa-782d29f4d8de\", \"source\": \"/var/log/auth.log\", \"state_id\": \"native::527100-1804\", \"finished\": false, \"os_id\": \"65050-5855\", \"old_source\": \"/var/log/auth.log\", \"old_finished\": true, \"old_os_id\": \"527100-1804\", \"harvester_id\": \"d546427-4ggd-4ey8-9927-3a69cydbd3\"}",
    "@timestamp": "2023-01-03T05:53:10.377572603Z",
    "tags": [
      "_jsonparsefailure",
      "wazuh"
    ],
    "type": "wazuh-alerts"
  },
  "fields": {
    "@timestamp": [
      "2023-01-03T05:53:10.377Z"
    ]
  },
  "sort": [
    1672725190377
  ]
}
I have received a similar message about a hundred times. From what I can see, the message field above contains plain syslog lines (Filebeat's own log output) rather than a JSON alert from alerts.json. I would first like to know how to fix this parsing problem before tackling the connection errors visible in the message. Why is the parsing going wrong, and what should I do?
Here is my Filebeat configuration file:
###################### Filebeat Configuration Example #########################
# ============================== Filebeat inputs ===============================
filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

# log is an input for collecting log messages from files.
- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - "/var/ossec/logs/alerts/alerts.json"
    #- c:\programdata\elasticsearch\logs\*

  fields_under_root: true
  document_type: json
  json.message_key: log
  json.keys_under_root: true
  json.overwrite_keys: true
  fields:
    beat.type: wazuh_alerts
# ============================== Filebeat modules ==============================
filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: true

  # Period on which files under path should be checked for changes
  reload.period: 15s
# ------------------------------ Logstash Output -------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["192.168.1.16:5044"]
  username: "logstash"
  password: "xx"
# ================================= Processors =================================
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
# modules
filebeat.modules:
  - module: wazuh
    alerts:
      enabled: true

  - module: system
    syslog:
      enabled: true
setup.template.json.enabled: true
setup.template.json.path: '/etc/filebeat/wazuh-template.json'
setup.template.json.name: 'wazuh'
setup.template.overwrite: true
setup.ilm.enabled: false
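
In case it helps narrow things down, the change I am considering (completely untested, and I am not sure it is the right direction) is to drop the manual log input, let the Wazuh module read alerts.json on its own, and temporarily disable the system module so that plain syslog lines no longer travel through the same Logstash pipeline as the JSON alerts:

filebeat.modules:
  - module: wazuh
    alerts:
      enabled: true

  # temporarily disabled while debugging, so Filebeat's own syslog
  # output does not reach the JSON-decoding Logstash pipeline
  - module: system
    syslog:
      enabled: false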