Apache Logs Not Parsing with Filebeat Ingestion Pipeline

I have installed Filebeat version 8.10.2 on an Ubuntu server and configured it to send Apache access and error logs to Logstash. While the logs are displayed in Kibana, they are not being parsed by their default ingest pipelines, which breaks the data visualizations on the dashboard.

To set up Filebeat, I followed the instructions provided in the official documentation, along with additional articles for more specific guidance:

  1. Filebeat Installation and Configuration

I also followed these three articles to load the necessary components for proper log processing:

  1. Load Index Template Manually
  2. Load Kibana Dashboards
  3. Load Ingest Pipelines

Additionally, I referred to this Logstash documentation.

Here are the commands I executed for better clarity:


filebeat setup --index-management -E output.logstash.enabled=false -E output.elasticsearch.username="elastic" -E output.elasticsearch.password="KkXXNnUzgahp" -E 'output.elasticsearch.hosts=["http://X.X.X.X:9200"]'


filebeat setup -e -E output.logstash.enabled=false -E output.elasticsearch.username="elastic" -E output.elasticsearch.password="zgXahkXNnUp" -E 'output.elasticsearch.hosts=["http://X.X.X.X:9200"]' -E setup.kibana.host=http://X.X.X.X:5601


filebeat setup --pipelines --modules apache --force-enable-module-filesets

I assumed that after loading the default ingest pipelines, the Apache error and access logs would be parsed without any additional configuration in Logstash. However, the logs are still not being parsed as expected after applying the default ingest pipeline.
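As a sanity check (this step is my suggestion, not part of the original instructions), you can confirm the Apache pipelines were actually loaded by querying Elasticsearch from the Kibana Dev Tools console; the wildcard below assumes the usual filebeat-&lt;version&gt;-apache-* naming:

```text
GET _ingest/pipeline/filebeat-8.10.2-apache-*
```

If this returns an empty object, the setup command did not load the pipelines into the cluster that Logstash is writing to.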

Here is the detail of the log:


Here is the Logstash config file:

input {
  beats {
    port => 5044
  }
}
output {
  if [@metadata][pipeline] {
    elasticsearch {
      hosts => "http://x.x.x.x:9200"
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}"
      action => "create"
      pipeline => "%{[@metadata][pipeline]}"
      user => "elastic"
      password => "XXzgahpNnU"
    }
  } else {
    elasticsearch {
      hosts => "http://x.x.x.x:9200"
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}"
      action => "create"
      user => "elastic"
      password => "XXzgahpNnU"
    }
  }
}

Despite following the steps and configurations above, the logs are not being parsed correctly, as shown in the provided images. Expert guidance is needed.

Hi @huzaifa224

My recommendation is to always get the integration working directly from

Filebeat -> Elasticsearch

first, and then try to put Logstash in the middle.

Also, instead of running all those separate setup commands, it's much easier and less error-prone to just run

filebeat setup -e

which loads all the assets in one command.

So I would clean up all the indices and run Filebeat directly to Elasticsearch first and see if that works.

Once that works then try logstash in the middle.

Another issue could be if your Apache logs are in a non-standard format.

I also tried to send data directly to Elasticsearch, which was working fine, but I need to use Logstash in between, and it gives issues when I use it. Will the command "filebeat setup -e" load the index template, dashboards, and pipelines?
The Apache logs are in standard format.

I think that the ingest pipeline is breaking right here.

A lot of Elasticsearch ingest pipelines will try to rename the message field to event.original. Since they do not expect the source message to already have an event.original field, this processor fails and the ingest pipeline does not complete.
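For illustration only, a rename processor of the kind involved might look like the sketch below (not the exact Filebeat pipeline definition); rename fails with an error when the target field already exists on the incoming document:

```json
{
  "rename": {
    "field": "message",
    "target_field": "event.original",
    "ignore_missing": true
  }
}
```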

From version 8.x, Logstash comes with ECS compatibility enabled by default, which makes some inputs create the field event.original, and this in turn breaks the ingest pipelines.

This is a known issue.

You can try the following:

  • set pipeline.ecs_compatibility to disabled in logstash.yml, or for the specific pipeline in pipelines.yml
  • add the setting enrich => none to your beats input.

This will disable ECS compatibility and tell Logstash not to add any extra fields to the source event; this way the ingest pipelines should work as expected.
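As a minimal sketch of the first option (the file location is an assumption; /etc/logstash is the usual path for package installs):

```yaml
# /etc/logstash/logstash.yml -- disable ECS compatibility for all pipelines
pipeline.ecs_compatibility: disabled
```

Alternatively, leave the global setting alone and add enrich => none inside the beats { ... } input block of your pipeline config.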

