Creating a dashboard for Apache access logs using Filebeat

Hey, I want to create a dashboard for Apache access logs using Filebeat. I have 11 nodes on staging in total: 7 Elasticsearch nodes (3 master nodes, 2 coordination nodes, 2 data nodes), 3 Kafka nodes, and one Kibana/Logstash node. I have set up Filebeat on one of the Kafka nodes, with the Elasticsearch coordination nodes as its output. Using this Filebeat configuration I am able to create an index named filebeat on Elasticsearch, and it is shown in Index Management in the Kibana UI. The thing is, after many attempts I am not able to create the dashboard. Following is my configuration and the error I am facing:

```
###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all
# the supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html
#
# For more available modules and options, please see the
# filebeat.reference.yml sample configuration file.

# ============================== Filebeat inputs ===============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

# filestream is an input for collecting log messages from files.
- type: log

  # Unique ID among all inputs, an ID is required.
  #id: my-filestream-id

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - "/etc/testinglogs/access-log.11"
    #- c:\programdata\elasticsearch\logs*

  #logging:
  #  files:
  #    path: /var/log/filebeat
  #    name: filebeat.log

  # Exclude lines. A list of regular expressions to match. It drops the lines
  # that are matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines
  # that are matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the
  # files that are matching any regular expression from the list. By default,
  # no files are dropped.
  #prospector.scanner.exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

# ============================== Filebeat modules ==============================

#filebeat.config.modules:
  # Glob pattern for configuration loading
  #path: /etc/filebeat/modules.d/*.yml

  # Set to true to enable config reloading
  #reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

# ======================= Elasticsearch template setting =======================

#setup.template.settings:
#  index.number_of_shards: 1
#  index.codec: best_compression
#  _source.enabled: false

# ================================== General ===================================

# The name of the shipper that publishes the network data. It can be used to
# group all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging

# ================================= Dashboards =================================

# These settings control loading the sample dashboards to the Kibana index.
# Loading the dashboards is disabled by default and can be enabled either by
# setting the options here or by using the `setup` command.
#setup.dashboards.enabled: true

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For
# released versions, this URL points to the dashboard archive on the
# artifacts.elastic.co website.
#setup.dashboards.url:

# =================================== Kibana ===================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana
# API. This requires a Kibana endpoint configuration.
setup.kibana:
  host: "https://#.#.#.#:5601"
  ssl.enabled: true
  ssl.verification_mode: none
  #protocol: "https"
  #ssl.certificate_authorities:
  #  - /etc/filebeat/kibana/ca/ca.crt
  username: "manifest"
  password: "manifest"

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By
  # default, the Default Space will be used.
  #space.id:

# =============================== Elastic Cloud ================================

# These settings simplify using Filebeat with the Elastic Cloud
# (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

# ================================== Outputs ===================================

# Configure what output to use when sending the data collected by the beat.

#output.file:
#  path: "/etc/testinglogs/testing1"
#  filename: "testing1"
#  overwrite_keys: true

# ---------------------------- Elasticsearch Output ----------------------------

output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["https://#.#.#.#:9200", "https://#.#.#.#:9200"]
  #index: "mylogs-%{+yyyy.MM.dd}"
  username: "manifest"
  password: "manifest"
  ssl.enabled: true
  ssl.verification_mode: none
  #ssl.certificate_authorities:
  #  - /etc/filebeat/ca/ca.crt

  # Protocol - either `http` (default) or `https`.
  #protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"

# ------------------------------ Logstash Output -------------------------------

#output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

# ================================= Processors =================================

processors:
  - add_host_metadata: ~
    #when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

# ================================== Logging ===================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publisher", "service".
#logging.selectors: ["*"]

# ============================= X-Pack Monitoring ==============================

# Filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster. This requires xpack monitoring to be enabled in Elasticsearch. The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#monitoring.enabled: false

# Sets the UUID of the Elasticsearch cluster under which monitoring data for
# this Filebeat instance will appear in the Stack Monitoring UI. If
# output.elasticsearch is enabled, the UUID is derived from the Elasticsearch
# cluster referenced by output.elasticsearch.
#monitoring.cluster_uuid:

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well.
# Note that the settings should point to your Elasticsearch *monitoring* cluster.
# Any setting that is not set is automatically inherited from the Elasticsearch
# output configuration, so if you have the Elasticsearch output configured such
# that it is pointing to your Elasticsearch monitoring cluster, you can simply
# uncomment the following line.
#monitoring.elasticsearch:

# ============================== Instrumentation ===============================

# Instrumentation support for the filebeat.
#instrumentation:
  # Set to true to enable instrumentation of filebeat.
  #enabled: false

  # Environment in which filebeat is running on (eg: staging, production, etc.)
  #environment: ""

  # APM Server hosts to report instrumentation results to.
  #hosts:
  #  - http://localhost:8200

  # API Key for the APM Server(s).
  # If api_key is set then secret_token will be ignored.
  #api_key:

  # Secret token for the APM Server(s).
  #secret_token:

# ================================= Migration ==================================

# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true

#filebeat.modules:
#- module: wazuh
#  alerts:
#    enabled: true
#  archives:
#    enabled: false
#setup.template.json.enabled: true
#setup.template.json.path: /etc/filebeat/wazuh-template.json
#setup.template.json.name: wazuh
#setup.template.overwrite: true
#setup.ilm.enabled: false
```

My Apache access log file is on this node under the above-mentioned directory, and my Kibana URL is https://#.#.#.# (without a port).

Following is the error I am facing:

```
2023-05-09T11:48:17.365+0530 ERROR instance/beat.go:1026 Exiting: error connecting to Kibana: fail to get the Kibana version: HTTP GET request to https://#.#.#.#:5601/api/status fails: fail to execute the HTTP GET request: Get "https://#.#.#.#:5601/api/status": Forbidden. Response:
```

I have tested my Elasticsearch connection by commenting out the Kibana part: I am able to create the index on Elasticsearch and it is shown on the Kibana UI. The Elasticsearch cluster is fine and its status is green.


Hi @kriti_dabas

First, I tried to better format your post (which no one else will take the time to do); it was very hard to read and poorly formatted. In general, the harder a post is to read and understand, the less likely it is to get answers. If you take a little extra time to make the post readable, perhaps someone will take the time to answer.

Even with that, all the filebeat.yml indenting is incorrect, so it is hard to tell if it is valid YAML.
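For comparison, here is a minimal filebeat.yml skeleton with the indentation Filebeat expects (the hosts, paths, and credentials below are placeholders, not your actual values):

```yaml
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /var/log/apache2/access.log   # placeholder path

setup.kibana:
  host: "https://kibana-host:5601"    # placeholder host
  username: "user"
  password: "pass"
  ssl.verification_mode: none

output.elasticsearch:
  hosts: ["https://es-host:9200"]     # placeholder host
  username: "user"
  password: "pass"
  ssl.verification_mode: none
```

Note that everything belonging to an input or an output is indented under it; a top-level key at column 0 ends the previous section.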

Now, what version are you on? Please answer this.

What documentation are you following? Please answer this.

And if I understand correctly, the key issue is that you can send data but you are not getting the dashboards etc.

Did you read the filebeat quick start guide?

What command did you run when you got that error?
You should always show the command plus the error.

Have you tried running setup? This is the command that loads the dashboards, templates, mappings, etc., which you need to do BEFORE you start sending data; otherwise the data will not be properly parsed.

```
filebeat setup -e
```
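In other words, the usual order is roughly this (a sketch; how you start the service may differ on your system):

```shell
# one-time setup: loads the index template, ILM policy, and dashboards
# (needs both Elasticsearch and Kibana reachable from this host)
filebeat setup -e

# only then start shipping data
sudo systemctl start filebeat
```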

So, all that said: if you go to the server where Filebeat is running and run

```
curl -k -v https://ipofkibana:5601/api/status
```

What is the result?


Versions:

1. Filebeat: 7.17.9
2. Kibana: 7.17.2
3. Elasticsearch master nodes: 7.17.5 (3 nodes)
   Elasticsearch data nodes: 7.17.5 / 7.17.8 (2 nodes)
   Elasticsearch coordination nodes: 7.17.8 (2 nodes)

For reference I read this document: ELK Stack Example Apache Logs Import · GitHub.
But as I have an Elasticsearch cluster with 11 nodes on staging, I wasn't able to follow it properly.
I googled about it and followed the methods shown on the Elastic community site.

Yes, my key issue is that I am able to send data (it shows on the Kibana UI) but I am not able to create dashboards.

Yes I have read the quick start guide

I ran this command: `filebeat setup -e`

```
ERROR instance/beat.go:1026 Exiting: couldn't connect to any of the configured Elasticsearch hosts. Errors: [error connecting to Elasticsearch at https://#.#.#.#:9200: Get "https://#.#.#.#:9200": Forbidden]
Exiting: couldn't connect to any of the configured Elasticsearch hosts. Errors: [error connecting to Elasticsearch at https://#.#.#.#:Forbidden]
```

Then I ran this command: `filebeat setup --dashboards`

```
ERROR instance/beat.go:1026 Exiting: error connecting to Kibana: fail to get the Kibana version: HTTP GET request to https://#.#.#.#:5601/api/status fails: fail to execute the HTTP GET request: Get "https://#.#.#.#:5601/api/status": Forbidden. Response: .
```

Then I ran this command: `sudo filebeat -e -c /etc/filebeat/filebeat.yml`

```
ERROR [reload] cfgfile/list.go:108 Error creating runner from config: failed to create input: Can only start an input when all related states are finished:
```

I ran this command: `curl -k -v https://kibanaIP/api/status`

```
* Uses proxy env variable https_proxy == 'http://col:col123@#.#.#.#:8080'
*   Trying #.#.#.#...
* TCP_NODELAY set
* Connected to #.#.#.# (#.#.#.#) port 8080 (#0)
* allocate connect buffer!
* Establish HTTP proxy tunnel to kibanaIP:443
* Proxy auth using Basic with user 'col'
> CONNECT kibanaIP:443 HTTP/1.1
> Host: kibanaIP:443
> Proxy-Authorization: Basic YW5hczphbmFzNDU2
> User-Agent: curl/7.61.1
> Proxy-Connection: Keep-Alive
>
< HTTP/1.1 503 Service Unavailable
< Server: squid/4.15
< Mime-Version: 1.0
< Date: Fri, 12 May 2023 05:59:44 GMT
< Content-Type: text/html;charset=utf-8
< Content-Length: 3593
< X-Squid-Error: ERR_CONNECT_FAIL 65
< Vary: Accept-Language
< Content-Language: en
<
* Received HTTP code 503 from proxy after CONNECT
* CONNECT phase completed!
* Closing connection 0
curl: (56) Received HTTP code 503 from proxy after CONNECT
```
I have also tested Filebeat following the quick start guide (Filebeat quick start: installation and configuration | Filebeat Reference | Elastic). It is not working.

I have also enabled the Apache module (/etc/filebeat/modules.d/apache.yml), as there were many examples of it on Elastic Discuss.


Again, your post is still very hard to read. :frowning:

The first set of docs you linked to are incredibly old and I would not use them. It says they're for version 6 and only tested up to 7.5; that's years old.

It looks like you have proxies and they're getting in the way / not functioning as you wish...

I really can't help you with the proxies. It looks like the ports are wrong; it looks like it's trying to use 443... You're going to need to straighten that out or take them out of the equation.

Basically your beats can't connect with Kibana because the proxy is in the way / not configured correctly is what it looks like to me
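One quick way to take the proxy out of the equation when testing from the Filebeat host (the IP is a placeholder): curl reads `http_proxy`/`https_proxy` from the environment, and `--noproxy` tells it to ignore them.

```shell
# ignore proxy env vars for every host, for this one request
curl -k -v --noproxy '*' https://#.#.#.#:5601/api/status

# or clear them for the current shell session before running filebeat setup
unset http_proxy https_proxy
```

If curl succeeds without the proxy but fails with it, the proxy configuration, not Filebeat, is the problem.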


Thank you so much, Stephen. If it is a proxy error, I am trying to fix it.
But were you able to spot any other error? I have posted the output of the commands you asked for.
I am new here, sorry for the inconvenience. I will try to post accordingly; I am learning to do so.
Do I need to add the Kibana URL in /etc/environment?
How do I configure /etc/environment for Kibana to connect with Filebeat? My Kibana has nginx in front of it as well.


It is hard for me to tell with the proxy

I would only follow our documentation / blogs not just random docs on Google.

The version of the docs is important. There were lots of changes between 6.x, 7.x (before 7.10), 7.11+, and 8.x.

Apologies, but I don't know what that means, again with the proxies you are going to need to find your proxy expert... I am certainly not that.


CAN YOU SUGGEST DOCUMENTATION OR BLOGS RELATED TO MY SETUP AS EXPLAINED IN MY POST?
THIS IS THE OUTPUT OF THE COMMAND NOW:

```
curl -k -v https://#.#.#.#:5601/api/status
*   Trying #.#.#.#...
* TCP_NODELAY set
* connect to #.#.#.# port 5601 failed: Connection refused
* Failed to connect to #.#.#.# port 5601: Connection refused
* Closing connection 0
curl: (7) Failed to connect to #.#.#.# port 5601: Connection refused
```

AND NOW THERE ARE NO PROXY SETTINGS IN MY SETUP; I HAVE COMMENTED ALL OF THEM OUT AND THERE IS NO FIREWALL RUNNING.

Hi @kriti_dabas

I fixed your post to format the code; hit the edit button so you can see what I did.

Also, is there a reason you are typing in all caps? That is equivalent to yelling at us all...

So that is still a connectivity issue or perhaps SSL.

Is Kibana actually running on https?

Try

```
curl -k -v http://#.#.#.#:5601/api/status
```

To format code, put 3 backticks (```) on the line before and the line after.

Apologies I have not been able to help.

Generally we start from the beginning and make sure Elasticsearch and Kibana are properly configured.

Perhaps you can share the kibana.yml.

Can you reach Kibana through a browser?

Perhaps open a new topic with a very descriptive title like "Connecting to Kibana through an nginx proxy", and perhaps someone who's an expert in that can help you.

Thank you for your help and, more importantly, for guiding me on how to post with a proper format. I am learning from the best, I assume.
I am able to access my Kibana through any browser using the URL mentioned in the Filebeat YAML file, and Kibana is running on https. I will share my kibana.yml file. I hope you can give me some lead. I appreciate your help.

Clarification: you only need to add the 3 backticks before and after a code snippet, not around all normal text.

Something does not make sense.

If you can access Kibana through a browser then you should be able to access it through curl ... Unless your browser has proxy settings for something.

The results of the curl above show that there is no connectivity... That's something I think you are going to have to figure out on your own, or ask somebody who knows your network, laptop, or server to help you with.

I can only tell you that if the curl does not work, then the Filebeat setup will not work either.

Okay. Thank you

Kibana configuration:

```
server.publicBaseUrl: "https://kibana ip"
elasticsearch.hosts:
  - "https://ec1:9200"
  - "https://ec2:9200"
elasticsearch.username: "manifest"
elasticsearch.password: "manifest"
elasticsearch.ssl.certificateAuthorities: [ "/etc/kibana/certs/ca/ca.crt" ]
elasticsearch.ssl.verificationMode: full
logging.rotate:
  enabled: true
  everyBytes: 10485760
logging.dest: /var/log/kibana/kibana.log
xpack.encryptedSavedObjects.encryptionKey: "Security"
xpack.security.encryptionKey: "Security"
```


You appear to be missing:

```
server.host: 0.0.0.0
```

From the docs on server.host: this setting specifies the host of the back end server. To allow remote users to connect, set the value to the IP address or DNS name of the Kibana server. Use 0.0.0.0 to make Kibana listen on all IPs (public and private). Default: "localhost"

Without it, Kibana is only available on localhost.
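Concretely, assuming the rest of your kibana.yml stays exactly as you posted it, the top would look like this (a sketch; the redacted URL is kept as-is):

```yaml
# listen on all interfaces so nginx and remote Beats can reach Kibana;
# the default "localhost" only accepts connections from the same machine
server.host: "0.0.0.0"
server.publicBaseUrl: "https://kibana ip"
```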

Whenever I add server.host, I am not able to access Kibana through its URL; it shows "502 Bad Gateway nginx/1.14.1". Below is my nginx configuration:

```
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;

    server_name kibanaip;
    ssl_certificate /etc/ssl/certs/kibana.crt;
    ssl_certificate_key /etc/ssl/private/kibana.key;

    location / {
        proxy_pass http://127.0.0.1:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

server {
    listen 80;
    listen [::]:80;
    server_name kibanaip;
    return 301 https://kibanaip;
}
```

Apologies... I would open a new thread specifically on this topic, with a title like "Kibana and nginx proxy issues".

Perhaps someone else will see it; this thread's title says "dashboard", so no one who knows proxies will look at it.

I really can't help you further; I have installed Kibana literally hundreds of times, but I do not use a proxy...
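I can't test your setup, but one thing worth checking, since you said earlier that Kibana itself is running on https: the `proxy_pass` in your config speaks plain http to 127.0.0.1:5601, which would fail against an HTTPS listener and could produce exactly this kind of 502. A sketch of the adjusted location block, assuming a self-signed Kibana certificate:

```nginx
location / {
    proxy_pass https://127.0.0.1:5601;   # match Kibana's own scheme
    proxy_ssl_verify off;                # Kibana's cert is self-signed
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
}
```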

Okay.

Hey Stephen, my Filebeat is now able to connect with Kibana; I have resolved the issue, and there are logs under the index named filebeat. There are also plenty of dashboards shown on my Kibana UI, but how am I supposed to add the index pattern named filebeat to all these dashboards?

Please share your latest version of the filebeat.yml

I'm also not clear: are you saying you have dashboards but no data in them?

Which dashboard do you expect to have data in it?

What kind of logs are you shipping?

Are you using modules?

Can you provide screenshots?

The more information you provide, the more we can try to help.