Error in Filebeat when sending logs to Kibana

Hello,
I have set up Elasticsearch (6.7.1) and Kibana on my local machine.
I have installed Filebeat and Logstash on a VM for testing, to ship logs from the VM to my local machine.

Filebeat config:

#=========================== Filebeat inputs =============================
setup.template.overwrite: true
filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
   # - /var/log/*.log
   - C:\Test\*.*
    #- c:\programdata\elasticsearch\logs\*
#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["VMIP:5044"]
#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  hosts: ["http://localmachineIP:9200"]

Logstash config:

input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["http://localmachineIP:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    #user => "elastic"
    #password => "changeme"
  }
}

I am getting this error in Filebeat logs:

error loading C:\Program Files\Filebeat\kibana\7\dashboard\osquery-rootkit.json: blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];. Response: {"objects":[{"id":"6ec10290-f4aa-11e7-8647-534bb4c21040-ecs","type":"visualization",

The Filebeat folder has full read and write permissions.
I do not have enough disk space. Can this be the reason?
Any help would be appreciated.
Thanx.

Your index is probably locked because you don't have enough disk space. Try freeing up some space and then manually resetting the index lock from the Dev Tools Console:

PUT /your-index/_settings
{
  "index.blocks.read_only_allow_delete": null
}

See the documentation about disk-based shard allocation if you want to know why this happens.
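
To see how close each node is to the watermarks, you can check the per-node disk usage from the same console, and if more than one index got locked you can clear the block on all of them in one call, for example:

GET _cat/allocation?v

PUT /_all/_settings
{
  "index.blocks.read_only_allow_delete": null
}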

Hello,
I freed up some space on the C drive of my VM and installed Elasticsearch and Kibana on a new machine.
I have added the IP of the new machine to filebeat.yml:
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  hosts: ["http://NewMachineIP:5601"]

It is still giving me this error:
INFO    [publisher]    pipeline/module.go:97    Beat name: VM_HBOOTWALA
2019-04-16T10:53:32.034+1000    INFO    kibana/client.go:118    Kibana url: http://localhost:5601
2019-04-16T10:53:34.047+1000    ERROR    instance/beat.go:802    Exiting: error connecting to Kibana: fail to get the Kibana version: HTTP GET request to http://localhost:5601/api/status fails: fail to execute the HTTP GET request: Get http://localhost:5601/api/status: dial tcp 127.0.0.1:5601: connectex: No connection could be made because the target machine actively refused it.. Response:

Is it still because of disk space? I have 6.5 GB free on the C drive of my VM.
What am I doing wrong?
Thanx.

In filebeat.yml:

setup.kibana:
  host: "https://xxxxxxx:5601"

You also need to run filebeat setup.

You mean filebeat setup?
Do I install Filebeat again, or just stop and start the service?
Thanx.

Getting the same error.

Under setup.kibana it is host:, not hosts:.
Try host: "http://NewMachineIP:5601"

It worked with host: IP:5601.

So now I can see the index pattern filebeat-* in Kibana, but there are no logs when I click on Discover. I tried changing the time range as well, but no results.
Technically, I should see a logstash index, since I am pushing logs from Filebeat to Logstash and then to Elasticsearch.

I can see the logs now in Kibana under the filebeat index.
Just curious: why is there no logstash index, only filebeat?

You are using filebeat -> es, not filebeat -> logstash -> es, so there is no logstash* index.

The output events are written to the Filebeat index because you've set index to use the name passed in the Beats metadata: index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}". For a more detailed explanation, see the section about versioned beats indices in the docs. If you're planning to use the pre-built Beats dashboards, you generally do want to use this setting.

If you don't specify the index setting in the elasticsearch output stage, the name defaults to logstash-%{+YYYY.MM.dd}.
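
To make that concrete, here is roughly how the two variants of the output stage compare (hosts taken from your config above; the example index names assume Filebeat 6.7.1 and the date from your logs):

output {
  elasticsearch {
    hosts => ["http://localmachineIP:9200"]
    # resolves per event to e.g. filebeat-6.7.1-2019.04.16
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
}

output {
  elasticsearch {
    hosts => ["http://localmachineIP:9200"]
    # no index setting: events end up in logstash-2019.04.16 style indices
  }
}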

I had just mentioned index => "logstash" in the Logstash config file; that was the reason for my confusion about seeing the filebeat-* index in Kibana.

Use filebeat setup; it will add the index...
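
Because your output goes through Logstash, filebeat setup cannot load the index template and dashboards through that output, so the usual approach is to point it straight at Elasticsearch and Kibana just for the setup run. On Windows that would be something along these lines (install path and IP assumed from this thread):

PS> cd 'C:\Program Files\Filebeat'
PS> .\filebeat.exe setup -e -E output.logstash.enabled=false -E output.elasticsearch.hosts=['NewMachineIP:9200'] -E setup.kibana.host=NewMachineIP:5601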
