Elasticsearch index name is ".kibana" instead of "filebeat-*"

Hi,
I've just set up Elasticsearch (6.0.0), Filebeat (6.0.1) and Kibana (6.0.0) under Java 8.
Kibana looks for an index named "filebeat-*", but when I check the index names in Elasticsearch, all I see is something like ".kibana".

I changed my filebeat.yml to this:

output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]
  # hosts: ["127.0.0.1:9200"]
  template:
  # Template name. By default the template name is filebeat
  name: "filebeat"

but I still see ".kibana" as the Elasticsearch index name.
On the other side, when I start Kibana, it looks for an index named "filebeat-*",
but I can't change it to something like ".kibana*".
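Worth noting: in Filebeat 6.x the template settings moved out of the output section, so an `output.elasticsearch.template` block like the one above is ignored. A minimal sketch of the 6.x equivalent, assuming the default filebeat names:

```yaml
# Filebeat 6.x: template name and pattern are configured under setup.template.*,
# not under output.elasticsearch
setup.template.name: "filebeat"
setup.template.pattern: "filebeat-*"
```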

Could you please help?

Thanks

Hi @telkhayat,

Could you please share the full filebeat.yml file here?

Regards,
Harsh bajaj

Hi @harshbajaj16
I've changed almost nothing... thanks in advance.

# configuration file.

#=========================== Filebeat prospectors =============================

filebeat.prospectors:

# Each - is a prospector. Most options can be set at the prospector level, so
# you can use different prospectors for various configurations.
# Below are the prospector specific configurations.

- type: log

  # Change to true to enable this prospector configuration.
  # enabled: false
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    #- /var/log/*.log
    #- c:\programdata\elasticsearch\logs\*
    - c:\programdata\ngnix\logs\access*.log

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

  ### Multiline options

  # Multiline can be used for log messages spanning multiple lines. This is common
  # for Java stack traces or C-line continuation.

  # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
  #multiline.pattern: ^\[

  # Defines if the pattern set under pattern should be negated or not. Default is false.
  #multiline.negate: false

  # Match can be set to "after" or "before". It is used to define if lines should be appended to a pattern
  # that was (not) matched before or after, or as long as a pattern is not matched based on negate.
  # Note: After is the equivalent to previous and before is the equivalent to next in Logstash
  #multiline.match: after


#============================= Filebeat modules ===============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 3
  #index.codec: best_compression
  #_source.enabled: false

#================================ General =====================================
# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging


#============================== Dashboards =====================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here, or by using the `-setup` CLI flag or the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  host: "localhost:5601"
  # host: "127.0.0.1:5601"

#============================= Elastic Cloud ==================================

# These settings simplify using filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]
  # hosts: ["127.0.0.1:9200"]
  template:
  # Template name. By default the template name is filebeat
  name: "filebeat"
  # Path to template file
  # path: "filebeat.template.json"
  
  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
#output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

#================================ Logging =====================================

# Sets log level. The default log level is info.
# Available log levels are: critical, error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]

Hi @telkhayat,

It seems OK. Please share the Filebeat logs from when you start the Filebeat service.
Log path: /var/log/filebeat/filebeat on Linux (on Windows, under the Filebeat install directory, e.g. C:\Program Files\Filebeat\logs)
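If Filebeat is installed on Windows (as the `c:\programdata` paths above suggest), a PowerShell sketch for tailing that log, with the path assumed to be the default install location:

```powershell
# Show the last 50 lines of the Filebeat log
# (default Windows install path assumed)
Get-Content "C:\Program Files\Filebeat\logs\filebeat" -Tail 50
```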

Regards,
Harsh Bajaj

Hi @harshbajaj16,
Thanks for your reply... here is my filebeat log.
2019-04-25T11:16:44+02:00 INFO Home path: [C:\Program Files\Filebeat] Config path: [C:\Program Files\Filebeat] Data path: [C:\Program Files\Filebeat\data] Logs path: [C:\Program Files\Filebeat\logs]
2019-04-25T11:16:44+02:00 INFO Metrics logging every 30s
2019-04-25T11:16:44+02:00 INFO Beat UUID: 5a7bafed-49b2-427d-87a5-c9589d16bf84
2019-04-25T11:16:44+02:00 INFO Setup Beat: filebeat; Version: 6.0.1
2019-04-25T11:16:44+02:00 INFO Elasticsearch url: http://localhost:9200
2019-04-25T11:16:44+02:00 INFO Beat name: DEGETEL-L0348
2019-04-25T11:16:44+02:00 INFO Elasticsearch url: http://localhost:9200
2019-04-25T11:16:44+02:00 INFO Connected to Elasticsearch version 6.0.0
2019-04-25T11:16:44+02:00 INFO Template already exists and will not be overwritten.


It seems to me that Filebeat didn't load the index template. Running "http://127.0.0.1:9200/_cat/indices?v" I got:
health status index   uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   .kibana x6mcfowxRm2SDbGydqyutw   1   1         86           52    213.7kb        213.7kb
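If it helps narrow things down, you can also ask Elasticsearch directly whether the filebeat index template was ever loaded (assuming the cluster is on 127.0.0.1:9200 as above):

```shell
# List all index templates; a loaded Filebeat template appears here
# with an index pattern of filebeat-*
curl -s "http://127.0.0.1:9200/_cat/templates?v"

# Fetch the filebeat template itself; a 404 means it was never loaded
curl -s "http://127.0.0.1:9200/_template/filebeat"
```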

And on the other side, Filebeat doesn't harvest any data:
2019/04/25 09:57:15.715344 metrics.go:39: INFO Non-zero metrics in the last 30s: beat.memstats.gc_next=4194304 beat.memstats.memory_alloc=1571296 beat.memstats.memory_total=8912144 filebeat.harvester.open_files=0 filebeat.harvester.running=0 libbeat.config.module.running=0 libbeat.pipeline.clients=2 libbeat.pipeline.events.active=0 registrar.states.current=0 



Kibana is looking for an index named filebeat-*, and no data was loaded.

Hi @telkhayat,

I found a version difference between Filebeat and Elasticsearch in the Filebeat logs: Filebeat is 6.0.1 while Elasticsearch is 6.0.0.

Maybe this is the cause.

Regards,
Harsh Bajaj

Thanks @harshbajaj16
Indeed, I'm using Elasticsearch 6.0.1, Filebeat 6.0.1 and Kibana 6.0.1 under Java 8.
Eventually I managed to solve this issue:
I hadn't noticed that my filebeat.yml path wasn't the right one.
Thanks for your help.
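In case anyone else hits this: you can take the config path out of the equation by pointing Filebeat at the file explicitly and running it in the foreground. A PowerShell sketch, with paths assumed to be the default Windows install location:

```powershell
cd "C:\Program Files\Filebeat"

# Filebeat 6.x can validate the config before running it
.\filebeat.exe test config -c "C:\Program Files\Filebeat\filebeat.yml"

# -e logs to stderr so you can watch startup, -c pins the config file
.\filebeat.exe -e -c "C:\Program Files\Filebeat\filebeat.yml"
```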


filebeat.prospectors:

# Each - is a prospector. Most options can be set at the prospector level, so
# you can use different prospectors for various configurations.
# Below are the prospector specific configurations.

- type: log

  # Change to true to enable this prospector configuration.
  # enabled: false
  enabled: true
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    # - /var/log/*.log
    #- c:\programdata\elasticsearch\logs\*
    - C:\ProgramData\ngnix\logs\*

Hi @telkhayat,

I'm not able to understand: if you were using the same versions, why are the Filebeat logs showing a connection to Elasticsearch 6.0.0?

Hi @harshbajaj16

Yeah, you're right... my Elasticsearch version is 6.0.0, and Kibana as well.
And thanks, it's working now.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.