Filebeat on CentOS 8?

I'm trying to start Filebeat on a CentOS 8 server.
My ELK server is on version 6.8.3.

Here is my test:
Filebeat 7.6.1 installed by RPM.
The config file is OK.
Filebeat starts correctly, but I don't see any logs in Elasticsearch/Kibana.

I have removed 7.6.1 and installed 6.8.3 (the same version as my ELK server); the installation and config files are good, but I can't start Filebeat.

When I look at the product compatibility matrix, I don't see CentOS 8 listed for any Filebeat version.

I'm wondering if my problem is really the product version, or if it's something else.

Has somebody already successfully sent logs to ELK with Filebeat on a CentOS 8 server?

Thank you for your reply.

If it is not in the support matrix, it is not supported. However, that does not mean it does not work on CentOS 8; it just means that we are not testing it.

Could you please share your configuration formatted using </> and the debug logs (./filebeat -e -d "*")?

Thank you for your reply.

Here is the debug log. I see a lot of lines, particularly these:

2020-03-19T08:04:49.847-0400    DEBUG   [elasticsearch] elasticsearch/client.go:523     Bulk item insert failed (i=41, status=500): {"type":"string_index_out_of_bounds_exception","reason":"String index out of range: 0"}
2020-03-19T08:04:49.847-0400    DEBUG   [elasticsearch] elasticsearch/client.go:523     Bulk item insert failed (i=42, status=500): {"type":"string_index_out_of_bounds_exception","reason":"String index out of range: 0"}
2020-03-19T08:04:49.847-0400    DEBUG   [elasticsearch] elasticsearch/client.go:523     Bulk item insert failed (i=43, status=500): {"type":"string_index_out_of_bounds_exception","reason":"String index out of range: 0"}
2020-03-19T08:04:45.903-0400    ERROR   pipeline/output.go:121  Failed to publish events: temporary bulk send failure
2020-03-19T08:04:45.903-0400    INFO    pipeline/output.go:95   Connecting to backoff(elasticsearch(http://P_ADRESS:9200))
2020-03-19T08:04:45.903-0400    DEBUG   [elasticsearch] elasticsearch/client.go:733     ES Ping(url=http://IP_ADRESS:9200)
2020-03-19T08:04:45.903-0400    INFO    [publisher]     pipeline/retry.go:173   retryer: send wait signal to consumer
2020-03-19T08:04:45.903-0400    INFO    [publisher]     pipeline/retry.go:175     done
2020-03-19T08:04:45.906-0400    DEBUG   [elasticsearch] elasticsearch/client.go:756     Ping status code: 200
2020-03-19T08:04:45.906-0400    INFO    elasticsearch/client.go:757     Attempting to connect to Elasticsearch version 6.8.3
2020-03-19T08:04:45.906-0400    DEBUG   [elasticsearch] elasticsearch/client.go:775     GET http://IP_ADRESS:9200/_license?human=false  <nil>
2020-03-19T08:04:45.948-0400    DEBUG   [license]       licenser/check.go:31    Checking that license covers %sBasic
2020-03-19T08:04:45.948-0400    INFO    [license]       licenser/es_callback.go:50      Elasticsearch license: Basic
2020-03-19T08:04:45.949-0400    DEBUG   [elasticsearch] elasticsearch/client.go:775     GET http://IP_ADDRESS:9200/_cat/templates/filebeat  <nil>
2020-03-19T08:04:46.058-0400    DEBUG   [multiline]     multiline/multiline.go:175      Multiline event flushed because timeout reached.

Here is the config file:


filebeat.inputs:

- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/*.log

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#============================= Template =============================
setup.template.name: "filebeat"
setup.template.pattern: "filebeat-*"

#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["IP_ADRESS:9200"]

  # Protocol - either `http` (default) or `https`.
  #protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  #username: "elastic"
  #password: "changeme"

  index: "filebeat-test-%{[beat.version]}-%{+yyyy.MM.dd}"

Even if I remove the setup.template.pattern and the custom index name, it doesn't work.

Filebeat starts, but I see nothing on the ELK server.
I'm using version 7.6.1, installed by yum.

Have you run ./filebeat setup? Also, could you please provide more context after the multiline debug log?

I didn't execute ./filebeat setup.

I googled the error "String index out of range: 0" when using the include_fields processor, and I have some news. I have been testing for two hours now; it's a very strange issue.

When I use 7.6.1, if I change %{[beat.version]} to %{[agent.version]}, Filebeat sends logs to ELK.
I see the new index name filebeat-test-7.6.1-2020.03.19 and the number of documents increasing,
but I can't search the data for my server:
in the "Discover" section, I don't see my server.
I have a mapping conflict on the agent version field, but why can't I find my server?
I already have a mapping conflict on another index, and there I could still search despite the warning.
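For reference, in Filebeat 7.x the field is %{[agent.version]} (beat.version no longer exists). A sketch of matching output settings — the hosts placeholder and the "filebeat-test" template name/pattern are my assumptions to match the custom index prefix, not something from the default config:

```yaml
# Sketch for Filebeat 7.x: beat.version was renamed to agent.version.
# With a custom index, setup.template.name/pattern must match the prefix.
output.elasticsearch:
  hosts: ["IP_ADDRESS:9200"]
  index: "filebeat-test-%{[agent.version]}-%{+yyyy.MM.dd}"
setup.template.name: "filebeat-test"
setup.template.pattern: "filebeat-test-*"
```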

Other test:
I removed Filebeat 7.6.1 and deleted the directories /usr/share/filebeat/ and /var/lib/filebeat/.
I installed Filebeat 6.8.3, and now I can start it.
It creates the index (for the moment it's filebeat-test-2020.03.19), but I have no documents :frowning:
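To check whether any documents are really arriving, one quick way is to query the index directly from Kibana Dev Tools (the filebeat-test prefix is the custom index name from my config):

```
GET _cat/indices/filebeat-test-*?v
GET filebeat-test-*/_count
```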

When I run ./filebeat -e -d "*"
I don't see any errors; I think it is sending logs to ELK. Here is an example:

2020-03-19T15:47:14.800+0100    DEBUG   [publish]       pipeline/processor.go:309       Publish event: {
  "@timestamp": "2020-03-19T14:47:14.796Z",
  "@metadata": {
    "beat": "filebeat",
    "type": "doc",
    "version": "6.8.3",
    "pipeline": "filebeat-6.8.3-system-syslog-pipeline"
  },
  "beat": {
    "name": "server",
    "hostname": "server",
    "version": "6.8.3"
  },
  "log": {
    "file": {
      "path": "/var/log/messages-20200301"
    }
  },
  "prospector": {
    "type": "log"
  },
  "input": {
    "type": "log"
  },
  "fileset": {
    "module": "system",
    "name": "syslog"
  },
  "event": {
    "dataset": "system.syslog"
  },
  "host": {
    "containerized": false,
    "architecture": "x86_64",
    "os": {
      "name": "CentOS Linux",
      "codename": "Core",
      "platform": "centos",
      "version": "8 (Core)",
      "family": "redhat"
    },
    "id": "f42914dcfdb74052bd6026f96c682efc",
    "name": "server"
  },
  "source": "/var/log/messages-20200301",
  "offset": 11903077,
  "message": "Feb 27 09:28:00 tlinf006 influxd[3833]: [httpd] - metricsesxuser [27/Feb/2020:09:28:00 -0500] \"POST /write?db=metricsesx HTTP/1.1\" 204 0 \"-\" \"Telegraf/1.12.4\" 5b084eee-596d-11ea-b586-000c29a29d52 976"
}

Sometimes, I get this:

2020-03-19T15:47:14.804+0100    DEBUG   [publish]       pipeline/client.go:201  Pipeline client receives callback 'onDroppedOnPublish' for event: %+v{2020-03-19 15:47:14.800702215 +0100 CET m=+12.736744087 {"pipeline":"filebeat-6.8.3-system-syslog-pipeline"} {"beat":{"hostname":"server_name","name":"server_name","version":"6.8.3"},"event":{"dataset":"system.syslog"},"fileset":{"module":"system","name":"syslog"},"host":{"architecture":"x86_64","containerized":false,"id":"f42914dcfdb74052bd6026f96c682efc","name":"server_name","os":{"codename":"Core","family":"redhat","name":"CentOS Linux","platform":"centos","version":"8 (Core)"}},"input":{"type":"log"},"log":{"file":{"path":"/var/log/messages-20200223"}},"message":"Feb 20 08:50:00 tlinf006 influxd[3833]: [httpd] - metricsesxuser [20/Feb/2020:08:50:00 -0500] \"POST /write?db=metricsesx HTTP/1.1\" 204 0 \"-\" \"Telegraf/1.12.4\" e327b549-53e7-11ea-91fd-000c29a29d52 1332","offset":11808824,"prospector":{"type":"log"},"source":"/var/log/messages-20200223"} { false 0xc42026e340 /var/log/messages-20200223 11809035 2020-03-19 15:47:05.239440054 +0100 CET m=+3.175481921 -1ns log map[] 100902053-64768}}
2020-03-19T15:47:14.804+0100    DEBUG   [publish]       pipeline/client.go:201  Pipeline client receives callback 'onDroppedOnPublish' for event: %+v{2020-03-19 15:47:14.797354791 +0100 CET m=+12.733396655 null {"log":{"file":{"path":"/var/log/messages-20200308"}},"message":"Mar  5 08:13:40 tlinf006 influxd[3833]: [httpd] - metricsesxuser [05/Mar/2020:08:13:40 -0500] \"POST /write?db=metricsesx HTTP/1.1\" 204 0 \"-\" \"Telegraf/1.12.4\" 218e9488-5ee3-11ea-9521-000c29a29d52 1161","offset":11731751,"source":"/var/log/messages-20200308"} { false 0xc42026e680 /var/log/messages-20200308 11731962 2020-03-19 15:47:05.240800594 +0100 CET m=+3.176842470 -1ns log map[] 100943419-64768}}

And when I look at the ELK server, I have a mapping conflict. How is that possible? I have other agents running Filebeat 6.8.3, so I don't understand.

Is it due to the previous installation of 7.6.1?

Thank you for your help.

I have noticed the same issue with Filebeat 7.6.1 on Windows against a dev environment on 6.8.3.

I removed that Filebeat and installed 6.8.3, but there was no way to start it as a Windows service (error 1067). From the command line, it was OK.
I resolved this issue by deleting the directory called "registry": C:\ProgramData\filebeat\registry

Then I was able to start it as a Windows service.

So I think I have to do the same thing with Filebeat on the Linux server.
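A sketch of the equivalent reset on Linux, assuming an RPM install (paths may differ on your system); note that deleting the registry makes Filebeat re-read all files from the beginning:

```shell
# Stop the service, remove the registry written by the newer version, restart.
# "|| true" keeps the script going if the unit is not installed on this box.
systemctl stop filebeat 2>/dev/null || true
rm -rf /var/lib/filebeat/registry   # default data path for RPM installs
systemctl start filebeat 2>/dev/null || true
```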

But I have one problem left: the mapping conflict. I want to solve it first because, at the moment, due to the 7.6.1 version, no more data is coming in!!!

When I look at my indices in Index Management, the mapping version for today's indices is 7.6.1.
I have deleted every Filebeat 7.6.1 installation and stopped Filebeat on each server.
I removed all of today's indices and refreshed the index pattern: no more mapping conflict.

Now I only have servers with Filebeat 6.8.3 in my stack.
But when I start one of these Filebeats, the index is created with mapping version 7.6.1 again!
No data comes into this index, and the mapping conflict is back.

How can I solve this? Do I need to delete my index pattern? What would the consequences be?

I found the solution :slight_smile:

I removed the index template named "filebeat-7.6.1" and restarted each agent (some of them didn't find the template otherwise), and then it was OK!!

I don't remember everything, but I think that when I installed Filebeat 7.6.1 for the first time, it was with the default parameters, so it created a new template; the following indices were then created with this new template, hence this strange behaviour.
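For anyone hitting the same thing: the leftover template can be listed and removed from Kibana Dev Tools. The template name below is the one from my case; check the output of GET first and adjust:

```
GET _cat/templates/filebeat*?v
DELETE _template/filebeat-7.6.1
```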

Now, all good :slightly_smiling_face:

Have a nice week-end !

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.