I'm starting to develop a project with the ESP32. The project serves a web interface to control outputs on the ESP32. I want to collect the logs and process them through the ELK stack, but I haven't been successful so far; the logs only show up in the serial terminal.
What would be the best strategy to collect and process the logs so that I can later visualize them in the Kibana interface?
You can use Logstash (LS) as a data receiver, as described here for the Raspberry Pi, and on the Logstash side you can use the http or tcp input plugin.
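If you take the Logstash route, the ESP32 can ship each log event itself as a small JSON POST over WiFi. Below is a minimal Arduino sketch of that idea; the WiFi credentials and the Logstash address (192.168.1.100:8080) are placeholders, and it assumes an http input plugin is listening on that port. The tcp input works the same way if you prefer a raw WiFiClient sending newline-delimited JSON.

```cpp
#include <WiFi.h>
#include <HTTPClient.h>

// Placeholders: substitute your own WiFi credentials and the address
// where the Logstash http input plugin is listening.
const char* WIFI_SSID    = "your-ssid";
const char* WIFI_PASS    = "your-password";
const char* LOGSTASH_URL = "http://192.168.1.100:8080";

// POST one log event to Logstash as a small JSON document.
void sendLog(const String& level, const String& message) {
  if (WiFi.status() != WL_CONNECTED) return;

  HTTPClient http;
  http.begin(LOGSTASH_URL);
  http.addHeader("Content-Type", "application/json");
  String payload = "{\"level\":\"" + level + "\",\"message\":\"" + message +
                   "\",\"device\":\"esp32-web-ui\"}";
  int status = http.POST(payload);
  Serial.printf("Logstash responded with HTTP %d\n", status);
  http.end();
}

void setup() {
  Serial.begin(115200);
  WiFi.begin(WIFI_SSID, WIFI_PASS);
  while (WiFi.status() != WL_CONNECTED) delay(500);
  sendLog("info", "ESP32 booted and connected to WiFi");
}

void loop() {
  sendLog("info", "heartbeat"); // periodic test event
  delay(10000);
}
```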
Another option is to skip Logstash and send the data directly to Elasticsearch, either through an ingest pipeline or by posting (bulk) data to the ES APIs in JSON format.
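For the direct-to-Elasticsearch route, the same HTTP approach works against the document APIs. This is only a sketch: the host, the esp32-logs index name, and the assumption that ES security is disabled are placeholders to adapt (with security enabled you would also need an Authorization header).

```cpp
#include <HTTPClient.h>

// Sketch: index one event straight into Elasticsearch.
// Assumes WiFi is already connected and ES runs at 192.168.1.100:9200
// with security disabled; host and index name are placeholders.
void indexToElasticsearch(const String& message) {
  HTTPClient http;
  // POST to /<index>/_doc lets ES auto-generate the document ID.
  http.begin("http://192.168.1.100:9200/esp32-logs/_doc");
  http.addHeader("Content-Type", "application/json");
  String payload = "{\"message\":\"" + message + "\",\"device\":\"esp32\"}";
  int status = http.POST(payload); // expect 201 Created on success
  Serial.printf("ES responded with HTTP %d\n", status);
  http.end();
}
```

If you batch many events per request, the _bulk endpoint with newline-delimited JSON is cheaper than one request per document.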
If I use static sample data in JSON to test the connection, what files will I need to configure?
I have already configured logstash.yml and logstash.conf.
logstash.yml
# Batch size and delay for event processing
pipeline.batch.size: 500
pipeline.batch.delay: 5
# Determines how Logstash buffers events
pipeline.buffer.type: direct # Or 'heap', based on your requirements
pipeline.ordered: false
# Activate the "dead letter queue" (DLQ) feature for handling failed events
dead_letter_queue.enable: true
# Path to the dead letter queue
#dead_letter_queue.path: "/var/lib/logstash/dead_letter_queue"
# Queue settings (optional)
#queue.type: "memory" # Options are "memory" or "persisted"
#queue.memory.size: 1024mb # Only if using memory queue
# Paths for the configuration files
path.config: "/tmp/logstash-test/logstash.conf"
# Path for the log files
path.logs: "/var/log/logstash"
# Logging level
log.level: "info"
# Options: debug, info, warn, error, fatal
logstash.conf
input {
  file {
    path => "/tmp/test.log"
    start_position => "beginning"
    sincedb_path => "/dev/null"
    codec => json
  }
}
filter {
  json {
    source => "message"
    target => "json_data" # This will store the parsed JSON under "json_data"
  }
}
output {
  # Assumption: Elasticsearch on localhost:9200 with security disabled;
  # adjust hosts/index to your setup.
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "esp32-logs-%{+YYYY.MM.dd}"
  }
  stdout { codec => rubydebug } # echo events to the console while testing
}
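With both files in place you can exercise the pipeline without the ESP32: append a sample JSON line such as {"device":"esp32","output":"relay1","state":"on"} to /tmp/test.log, watch it echoed by the stdout output, and confirm it lands in the esp32-logs-* index, which you can then add as a data view in Kibana. (The index name here matches the one assumed in the output section above.)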