Message: Invalid Frame Type, received: 84 and 69 in Logstash


(de Bellabre Yves) #1

Hello,

I'm running an ELK Docker container that receives the logs of a PostgreSQL server via Filebeat. Logstash spits out the error below every time Filebeat sends it an event. Both machines run Red Hat 7; the firewalls are disabled, and so is SELinux.

  • Filebeat 5.1.1
  • Elasticsearch 5.1.1
  • Logstash 5.1.1
  • Kibana 5.1.1

Below is my Logstash configuration for the input and the output.
Does anyone have an idea why this error occurs?
Thanks

$ cat 02-beats-input.conf
input {
  beats {
    port => 5044
    ssl => false
    ssl_certificate => "/etc/pki/tls/certs/logstash-beats.crt"
    ssl_key => "/etc/pki/tls/private/logstash-beats.key"
  }
}

$ cat 30-output.conf
output {
  elasticsearch {
    hosts => ["localhost"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}

The Filebeat config (this is where the database logs are):

filebeat.prospectors:
- input_type: log
  encoding: utf-8
  paths:
    - /base/pocpg/data/pg_log/*.log
  include_lines: ['^ERR', '^WARN', 'FATAL']
- input_type: log
  paths:
    - "/var/log/apache2/*"
  fields:
    apache: true
  fields_under_root: true
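
A quick editorial aside on the include_lines filter above: a line is shipped if it matches any of the listed regular expressions, and an unanchored pattern such as 'FATAL' matches anywhere in the line. A rough Python sketch of that behaviour (an approximation only — Filebeat actually uses Go's regexp engine, not Python's):

```python
import re

# Patterns taken from the prospector configuration above.
include_lines = [r'^ERR', r'^WARN', r'FATAL']
patterns = [re.compile(p) for p in include_lines]

def kept(line):
    # A line is kept if ANY pattern matches; re.search matches anywhere,
    # so only the ^-anchored patterns are tied to the start of the line.
    return any(p.search(line) for p in patterns)

print(kept('ERROR:  relation does not exist'))              # kept via '^ERR'
print(kept('< ... >FATAL:  database "opm" does not exist')) # kept via 'FATAL'
print(kept('LOG:  duration: 13.939 ms'))                    # dropped
```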

The Filebeat registry file:

$ cat /var/lib/filebeat/registry
[{"source":"/base/pocpg/data/pg_log/postgresql-Fri.log","offset":1702453,"FileStateOS":{"inode":1076082378,"device":64775},"timestamp":"2017-01-13T15:04:09.410183375+01:00","ttl":-1},{"source":"/base/pocpg/data/pg_log/postgresql-Mon.log","offset":3336008,"FileStateOS":{"inode":1076082381,"device":64775},"timestamp":"2017-01-13T14:35:34.479778435+01:00","ttl":-1},{"source":"/base/pocpg/data/pg_log/postgresql-Sat.log","offset":3334943,"FileStateOS":{"inode":1076082379,"device":64775},"timestamp":"2017-01-13T14:35:34.47977876+01:00","ttl":-1},{"source":"/base/pocpg/data/pg_log/postgresql-Sun.log","offset":3330375,"FileStateOS":{"inode":1076082380,"device":64775},"timestamp":"2017-01-13T14:35:34.479779316+01:00","ttl":-1},{"source":"/base/pocpg/data/pg_log/postgresql-Thu.log","offset":3354031,"FileStateOS":{"inode":1076082375,"device":64775},"timestamp":"2017-01-13T14:35:34.479779636+01:00","ttl":-1},{"source":"/base/pocpg/data/pg_log/postgresql-Tue.log","offset":3381376,"FileStateOS":{"inode":1076082382,"device":64775},"timestamp":"2017-01-13T14:35:34.479779898+01:00","ttl":-1},{"source":"/base/pocpg/data/pg_log/postgresql-Wed.log","offset":3370121,"FileStateOS":{"inode":1076082358,"device":64775},"timestamp":"2017-01-13T14:35:34.47978017+01:00","ttl":-1}]

Logstash logs:

$ tail -f logstash.stdout
[2017-01-13T13:41:57,058][ERROR][org.logstash.beats.BeatsHandler] Exception: org.logstash.beats.BeatsParser$InvalidFrameProtocolException: Invalid Frame Type, received: 69
[2017-01-13T13:41:57,058][ERROR][org.logstash.beats.BeatsHandler] Exception: org.logstash.beats.BeatsParser$InvalidFrameProtocolException: Invalid Frame Type, received: 84
[2017-01-13T13:42:57,061][ERROR][org.logstash.beats.BeatsHandler] Exception: org.logstash.beats.BeatsParser$InvalidFrameProtocolException: Invalid Frame Type, received: 69
[2017-01-13T13:42:57,063][ERROR][org.logstash.beats.BeatsHandler] Exception: org.logstash.beats.BeatsParser$InvalidFrameProtocolException: Invalid Frame Type, received: 84
[2017-01-13T13:43:57,065][ERROR][org.logstash.beats.BeatsHandler] Exception: org.logstash.beats.BeatsParser$InvalidFrameProtocolException: Invalid Frame Type, received: 69
[2017-01-13T13:43:57,065][ERROR][org.logstash.beats.BeatsHandler] Exception: org.logstash.beats.BeatsParser$InvalidFrameProtocolException: Invalid Frame Type, received: 84
[2017-01-13T13:44:57,068][ERROR][org.logstash.beats.BeatsHandler] Exception: org.logstash.beats.BeatsParser$InvalidFrameProtocolException: Invalid Frame Type, received: 69
[2017-01-13T13:44:57,068][ERROR][org.logstash.beats.BeatsHandler] Exception: org.logstash.beats.BeatsParser$InvalidFrameProtocolException: Invalid Frame Type, received: 84
[2017-01-13T13:45:57,071][ERROR][org.logstash.beats.BeatsHandler] Exception: org.logstash.beats.BeatsParser$InvalidFrameProtocolException: Invalid Frame Type, received: 69
[2017-01-13T13:45:57,071][ERROR][org.logstash.beats.BeatsHandler] Exception: org.logstash.beats.BeatsParser$InvalidFrameProtocolException: Invalid Frame Type, received: 84

(David Pilato) #2

Please format your post with </> to make it more readable.

What is Filebeat's output configuration?


(de Bellabre Yves) #3

Thanks for your quick reply.
I'm not very good at formatting posts :confused:. I'll try again.


(David Pilato) #4

I edited your message to show you.

I also edited your reply, because it wasn't right.

Please post your Filebeat output as a follow-up reply: you actually hadn't included it.


(de Bellabre Yves) #5

I've put it in DEBUG mode. I'm restarting Filebeat.

For the record, my test data set produces FATAL error messages in the Postgres logs every 2 minutes:

    < 2017-01-13 16:20:01.106 CET >FATAL:  database "opm" does not exist
    < 2017-01-13 16:20:15.209 CET >FATAL:  database "MigrationDB" does not exist
    < 2017-01-13 16:20:41.274 CET >LOG:  duration: 13.939 ms  statement: BEGIN;SET statement_timeout=30000;COMMIT;SELECT


$ tail -f /var/log/filebeat/filbeat

It is publishing correctly:

2017-01-13T16:10:05+01:00 DBG  Check file for harvesting: /base/pocpg/data/pg_log/postgresql-Fri.log
2017-01-13T16:10:05+01:00 DBG  Update existing file for harvesting: /base/pocpg/data/pg_log/postgresql-Fri.log,     offset: 1857436
2017-01-13T16:10:05+01:00 DBG  Harvester for file is still running: /base/pocpg/data/pg_log/postgresql-Fri.log
2017-01-13T16:10:05+01:00 DBG  Check file for harvesting: /base/pocpg/data/pg_log/postgresql-Mon.log
2017-01-13T16:10:05+01:00 DBG  Update existing file for harvesting: /base/pocpg/data/pg_log/postgresql-Mon.log, offset: 3336008
2017-01-13T16:10:05+01:00 DBG  File didn't change: /base/pocpg/data/pg_log/postgresql-Mon.log
2017-01-13T16:10:05+01:00 DBG  Check file for harvesting: /base/pocpg/data/pg_log/postgresql-Sat.log
2017-01-13T16:10:05+01:00 DBG  Update existing file for harvesting: /base/pocpg/data/pg_log/postgresql-Sat.log, offset: 3334943
2017-01-13T16:10:05+01:00 DBG  File didn't change: /base/pocpg/data/pg_log/postgresql-Sat.log
2017-01-13T16:10:05+01:00 DBG  Prospector states cleaned up. Before: 7, After: 7
2017-01-13T16:10:05+01:00 DBG  1 events out of 1 events sent to logstash. Continue sending
2017-01-13T16:10:05+01:00 DBG  send completed
2017-01-13T16:10:05+01:00 DBG  Events sent: 1
2017-01-13T16:10:05+01:00 DBG  Processing 1 events
2017-01-13T16:10:05+01:00 DBG  Registrar states cleaned up. Before: 7, After: 7
2017-01-13T16:10:05+01:00 DBG  Write registry file: /var/lib/filebeat/registry
2017-01-13T16:10:05+01:00 DBG  Registry file updated. 7 states written.
2017-01-13T16:10:05+01:00 DBG  End of file reached: /base/pocpg/data/pg_log/postgresql-Fri.log; Backoff now.
2017-01-13T16:10:09+01:00 DBG  End of file reached: /base/pocpg/data/pg_log/postgresql-Fri.log; Backoff now.
2017-01-13T16:10:10+01:00 DBG  Flushing spooler because of timeout. Events flushed: 0
2017-01-13T16:10:15+01:00 DBG  Flushing spooler because of timeout. Events flushed: 0
2017-01-13T16:10:15+01:00 DBG  Start next scan
2017-01-13T16:10:15+01:00 DBG  Prospector states cleaned up. Before: 0, After: 0
2017-01-13T16:10:15+01:00 DBG  Run prospector
2017-01-13T16:10:15+01:00 DBG  Start next scan
2017-01-13T16:10:15+01:00 DBG  Check file for harvesting: /base/pocpg/data/pg_log/postgresql-Fri.log
2017-01-13T16:10:15+01:00 DBG  Update existing file for harvesting: /base/pocpg/data/pg_log/postgresql-Fri.log, offset: 1857436
2017-01-13T16:10:15+01:00 DBG  Harvester for file is still running: /base/pocpg/data/pg_log/postgresql-Fri.log
2017-01-13T16:10:15+01:00 DBG  Check file for harvesting: /base/pocpg/data/pg_log/postgresql-Mon.log
2017-01-13T16:10:15+01:00 DBG  Update existing file for harvesting: /base/pocpg/data/pg_log/postgresql-Mon.log, offset: 3336008
2017-01-13T16:10:15+01:00 DBG  File didn't change: /base/pocpg/data/pg_log/postgresql-Mon.log
2017-01-13T16:10:15+01:00 DBG  Check file for harvesting: /base/pocpg/data/pg_log/postgresql-Sat.log
2017-01-13T16:10:15+01:00 DBG  Update existing file for harvesting: /base/pocpg/data/pg_log/postgresql-Sat.log, offset: 3334943
2017-01-13T16:10:15+01:00 DBG  File didn't change: /base/pocpg/data/pg_log/postgresql-Sat.log
2017-01-13T16:10:15+01:00 DBG  Check file for harvesting: /base/pocpg/data/pg_log/postgresql-Sun.log
2017-01-13T16:10:15+01:00 DBG  Update existing file for harvesting: /base/pocpg/data/pg_log/postgresql-Sun.log, offset: 3330375

(David Pilato) #6

Sorry, I wasn't clear.

I meant that we can't see the complete Filebeat configuration. Could you please post it in full?


(de Bellabre Yves) #7

Here it is:

cat /etc/filebeat/filebeat.yml
###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.full.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

#=========================== Filebeat prospectors =============================

filebeat.prospectors:
- input_type: log
  encoding: utf-8
  paths:
    - /base/pocpg/data/pg_log/*.log
  include_lines: ['^ERR', '^WARN', 'FATAL']
- input_type: log
  paths:
    - "/var/log/apache2/*"
  fields:
    apache: true
  fields_under_root: true

# Each - is a prospector. Most options can be set at the prospector level, so
# you can use different prospectors for various configurations.
# Below are the prospector specific configurations.

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ["^DBG"]

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ["^ERR", "^WARN"]

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #exclude_files: [".gz$"]

  # Optional additional fields. These field can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

  ### Multiline options

  # Multiline can be used for log messages spanning multiple lines. This is common
  # for Java Stack Traces or C-Line Continuation

  # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
  #multiline.pattern: ^\[

  # Defines if the pattern set under pattern should be negated or not. Default is false.
  #multiline.negate: false

  # Match can be set to "after" or "before". It is used to define if lines should be append to a pattern
  # that was (not) matched before or after or as long as a pattern is not matched based on negate.
  # Note: After is the equivalent to previous and before is the equivalent to to next in Logstash
  #multiline.match: after


#================================ General =====================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging

#================================ Outputs =====================================

# Configure what outputs to use when sending the data collected by the beat.
# Multiple outputs may be used.

#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  # hosts: ["localhost:9200"]

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["srisvm-pocoracle:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

#================================ Logging =====================================

# Sets log level. The default log level is info.
# Available log levels are: critical, error, warning, info, debug
logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]

(David Pilato) #8

In the Filebeat output there is:

output.logstash:
  hosts: ["srisvm-pocoracle:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
  #ssl.certificate: "/etc/pki/client/cert.pem"
  #ssl.key: "/etc/pki/client/cert.key"

This shows that Filebeat is sending to Logstash in clear text, not over TLS.

But in the Logstash beats config it says the opposite:

input {
  beats {
    port => 5044
    ssl => false
    ssl_certificate => "/etc/pki/tls/certs/logstash-beats.crt"
    ssl_key => "/etc/pki/tls/private/logstash-beats.key"
  }
}

These need to be made consistent.
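
For reference, a minimal consistent pair with TLS off on both sides might look like this (an editorial sketch, reusing the host name from the thread):

```conf
# 02-beats-input.conf (Logstash side) — no ssl_* options at all
input {
  beats {
    port => 5044
  }
}

# filebeat.yml (Filebeat side) — plain TCP, every ssl.* option left out
output.logstash:
  hosts: ["srisvm-pocoracle:5044"]
```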


(de Bellabre Yves) #9

I don't understand. If SSL is disabled on both sides, that should be fine, no? The way it's written, it looks to me like the certificates aren't being used (false on one side, and # commenting them out on the other). Where does my interpretation go wrong?
Thanks


(David Pilato) #10

Oops. I hadn't noticed the ssl => false.

To be safe, you can put just:

input {
  beats {
    port => 5044
  }
}

And instead of sending to Elasticsearch as the output, could you just use stdout with a debug codec?


(de Bellabre Yves) #11
[quote="dadoonet, post:10, topic:71528"]
stdout with a debug codec
[/quote]

I set this:

cat 02-beats-input.conf
input {
  beats {
    port => 5044
  }
}

root@f963667ac0e3:/etc/logstash/conf.d# cat 30-output.conf
output {
  stdout { codec => rubydebug }
}

I restarted Filebeat and Logstash just in case. Here's the output. Does this mean it's Elasticsearch that isn't accepting Logstash's output?

$ tail -f logstash.stdout
             "version" => "5.1.1"
        },
              "host" => "srisvm-pocpostgre.isocel.info",
            "source" => "/base/pocpg/data/pg_log/postgresql-Fri.log",
           "message" => "< 2017-01-13 17:10:15.362 CET >FATAL:  database \"MigrationDB\" does not exist",
              "type" => "log",
              "tags" => [
            [0] "beats_input_codec_plain_applied"
        ]
    }

(David Pilato) #12

So that's good news.

Now let's look at what you have on the Logstash output side:

output {
  elasticsearch {
    hosts => ["localhost"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}

Can you also post the Elasticsearch startup logs? Is there any particular configuration?


(David Pilato) #13

But I still have a doubt about what you're doing.

Because this trace:

[2017-01-13T13:41:57,058][ERROR][org.logstash.beats.BeatsHandler] Exception: org.logstash.beats.BeatsParser$InvalidFrameProtocolException: Invalid Frame Type, received: 69

shows a problem in the communication between Filebeat and Logstash.
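
A small editorial aside on those frame-type values: 69 and 84 are printable ASCII codes, which is consistent with plain text, rather than binary Beats/Lumberjack frames, arriving on port 5044 — for instance some other process or probe connecting to that port:

```python
# The "Invalid Frame Type" bytes from the Logstash log, decoded as ASCII.
for byte in (69, 84):
    print(byte, '->', chr(byte))
# 69 -> E
# 84 -> T
```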

Incidentally, why are you using Logstash?


(de Bellabre Yves) #14

Nothing special in Elasticsearch (I used a container because I had the same error when I installed the three products myself).
I'm new to these products, but if you have an idea for a product other than Logstash, I'm all ears.

I'll post the Elasticsearch log on the forum.

Regards,

Yves de Bellabre


(de Bellabre Yves) #15

Elasticsearch log from the last restart:
[2017-01-13T16:07:23,027][INFO ][o.e.n.Node               ] initialized
[2017-01-13T16:07:23,028][INFO ][o.e.n.Node               ] [Hu2MefO] starting ...
[2017-01-13T16:07:23,200][INFO ][o.e.t.TransportService   ] [Hu2MefO] publish_address {172.17.0.2:9300}, bound_addresses {[::]:9300}
[2017-01-13T16:07:23,205][INFO ][o.e.b.BootstrapCheck     ] [Hu2MefO] bound or publishing to a non-loopback or non-link-local address, enforcing bootstrap checks
[2017-01-13T16:07:26,274][INFO ][o.e.c.s.ClusterService   ] [Hu2MefO] new_master {Hu2MefO}{Hu2MefOkSA-WBwzchEIVRg}{97oiJMqlSCeG_qHrscV2Nw}{172.17.0.2}{172.17.0.2:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)
[2017-01-13T16:07:26,290][INFO ][o.e.h.HttpServer         ] [Hu2MefO] publish_address {172.17.0.2:9200}, bound_addresses {[::]:9200}
[2017-01-13T16:07:26,290][INFO ][o.e.n.Node               ] [Hu2MefO] started
[2017-01-13T16:07:26,446][INFO ][o.e.g.GatewayService     ] [Hu2MefO] recovered [2] indices into cluster_state
[2017-01-13T16:07:26,756][INFO ][o.e.c.r.a.AllocationService] [Hu2MefO] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[filebeat-2017.01.13][0]] ...]).
^C
root@f963667ac0e3:/var/log/elasticsearch# cat  elasticsearch.log
[2017-01-13T13:35:34,494][INFO ][o.e.c.m.MetaDataCreateIndexService] [Hu2MefO] [filebeat-2017.01.13] creating index, cause [auto(bulk api)], templates [], shards [5]/[1], mappings []
[2017-01-13T13:35:34,621][INFO ][o.e.c.m.MetaDataMappingService] [Hu2MefO] [filebeat-2017.01.13/Pen3dvVdQ-2BXI9Htf3_Fw] create_mapping [log]
[2017-01-13T16:03:05,152][INFO ][o.e.n.Node               ] [Hu2MefO] stopping ...
[2017-01-13T16:03:05,255][INFO ][o.e.n.Node               ] [Hu2MefO] stopped
[2017-01-13T16:03:05,255][INFO ][o.e.n.Node               ] [Hu2MefO] closing ...
[2017-01-13T16:03:05,264][INFO ][o.e.n.Node               ] [Hu2MefO] closed
[2017-01-13T16:03:16,229][INFO ][o.e.n.Node               ] [] initializing ...
[2017-01-13T16:03:16,296][INFO ][o.e.e.NodeEnvironment    ] [Hu2MefO] using [1] data paths, mounts [[/var/lib/elasticsearch (/dev/mapper/vg-root)]], net usable_space [487.6gb], net total_space [495.7gb], spins? [possibly], types [xfs]
[2017-01-13T16:03:16,296][INFO ][o.e.e.NodeEnvironment    ] [Hu2MefO] heap size [1.9gb], compressed ordinary object pointers [true]
[2017-01-13T16:03:16,306][INFO ][o.e.n.Node               ] node name [Hu2MefO] derived from node ID [Hu2MefOkSA-WBwzchEIVRg]; set [node.name] to override
[2017-01-13T16:03:16,307][INFO ][o.e.n.Node               ] version[5.1.1], pid[64], build[5395e21/2016-12-06T12:36:15.409Z], OS[Linux/3.10.0-327.el7.x86_64/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/1.8.0_111/25.111-b14]

(de Bellabre Yves) #16
[2017-01-13T16:03:16,943][INFO ][o.e.p.PluginsService     ] [Hu2MefO] loaded module [aggs-matrix-stats]
[2017-01-13T16:03:16,943][INFO ][o.e.p.PluginsService     ] [Hu2MefO] loaded module [ingest-common]
[2017-01-13T16:03:16,943][INFO ][o.e.p.PluginsService     ] [Hu2MefO] loaded module [lang-expression]
[2017-01-13T16:03:16,943][INFO ][o.e.p.PluginsService     ] [Hu2MefO] loaded module [lang-groovy]
[2017-01-13T16:03:16,943][INFO ][o.e.p.PluginsService     ] [Hu2MefO] loaded module [lang-mustache]
[2017-01-13T16:03:16,944][INFO ][o.e.p.PluginsService     ] [Hu2MefO] loaded module [lang-painless]
[2017-01-13T16:03:16,944][INFO ][o.e.p.PluginsService     ] [Hu2MefO] loaded module [percolator]
[2017-01-13T16:03:16,944][INFO ][o.e.p.PluginsService     ] [Hu2MefO] loaded module [reindex]
[2017-01-13T16:03:16,944][INFO ][o.e.p.PluginsService     ] [Hu2MefO] loaded module [transport-netty3]
[2017-01-13T16:03:16,944][INFO ][o.e.p.PluginsService     ] [Hu2MefO] loaded module [transport-netty4]
[2017-01-13T16:03:16,944][INFO ][o.e.p.PluginsService     ] [Hu2MefO] no plugins loaded
[2017-01-13T16:03:18,697][INFO ][o.e.n.Node               ] initialized
[2017-01-13T16:03:18,697][INFO ][o.e.n.Node               ] [Hu2MefO] starting ...
[2017-01-13T16:03:18,855][INFO ][o.e.t.TransportService   ] [Hu2MefO] publish_address {172.17.0.2:9300}, bound_addresses {[::]:9300}
[2017-01-13T16:03:18,860][INFO ][o.e.b.BootstrapCheck     ] [Hu2MefO] bound or publishing to a non-loopback or non-link-local address, enforcing bootstrap checks
[2017-01-13T16:03:21,921][INFO ][o.e.c.s.ClusterService   ] [Hu2MefO] new_master {Hu2MefO}{Hu2MefOkSA-WBwzchEIVRg}{M38mY2DMRtqt7JuCcSFoww}{172.17.0.2}{172.17.0.2:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)
[2017-01-13T16:03:21,936][INFO ][o.e.h.HttpServer         ] [Hu2MefO] publish_address {172.17.0.2:9200}, bound_addresses {[::]:9200}
[2017-01-13T16:03:21,936][INFO ][o.e.n.Node               ] [Hu2MefO] started
[2017-01-13T16:03:22,098][INFO ][o.e.g.GatewayService     ] [Hu2MefO] recovered [2] indices into cluster_state
[2017-01-13T16:03:22,394][INFO ][o.e.c.r.a.AllocationService] [Hu2MefO] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[filebeat-2017.01.13][3]] ...]).
[2017-01-13T16:07:14,071][INFO ][o.e.n.Node               ] [Hu2MefO] stopping ...
[2017-01-13T16:07:14,250][INFO ][o.e.n.Node               ] [Hu2MefO] stopped
[2017-01-13T16:07:14,251][INFO ][o.e.n.Node               ] [Hu2MefO] closing ...
[2017-01-13T16:07:14,260][INFO ][o.e.n.Node               ] [Hu2MefO] closed
[2017-01-13T16:07:20,433][INFO ][o.e.n.Node               ] [] initializing ...
[2017-01-13T16:07:20,503][INFO ][o.e.e.NodeEnvironment    ] [Hu2MefO] using [1] data paths, mounts [[/var/lib/elasticsearch (/dev/mapper/vg-root)]], net usable_space [487.6gb], net total_space [495.7gb], spins? [possibly], types [xfs]
[2017-01-13T16:07:20,503][INFO ][o.e.e.NodeEnvironment    ] [Hu2MefO] heap size [1.9gb], compressed ordinary object pointers [true]
[2017-01-13T16:07:20,517][INFO ][o.e.n.Node               ] node name [Hu2MefO] derived from node ID [Hu2MefOkSA-WBwzchEIVRg]; set [node.name] to override
[2017-01-13T16:07:20,521][INFO ][o.e.n.Node               ] version[5.1.1], pid[64], build[5395e21/2016-12-06T12:36:15.409Z], OS[Linux/3.10.0-327.el7.x86_64/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/1.8.0_111/25.111-b14]
[2017-01-13T16:07:21,222][INFO ][o.e.p.PluginsService     ] [Hu2MefO] loaded module [aggs-matrix-stats]
[2017-01-13T16:07:21,222][INFO ][o.e.p.PluginsService     ] [Hu2MefO] loaded module [ingest-common]
[2017-01-13T16:07:21,222][INFO ][o.e.p.PluginsService     ] [Hu2MefO] loaded module [lang-expression]
[2017-01-13T16:07:21,222][INFO ][o.e.p.PluginsService     ] [Hu2MefO] loaded module [lang-groovy]
[2017-01-13T16:07:21,222][INFO ][o.e.p.PluginsService     ] [Hu2MefO] loaded module [lang-mustache]
[2017-01-13T16:07:21,222][INFO ][o.e.p.PluginsService     ] [Hu2MefO] loaded module [lang-painless]
[2017-01-13T16:07:21,222][INFO ][o.e.p.PluginsService     ] [Hu2MefO] loaded module [percolator]
[2017-01-13T16:07:21,222][INFO ][o.e.p.PluginsService     ] [Hu2MefO] loaded module [reindex]
[2017-01-13T16:07:21,222][INFO ][o.e.p.PluginsService     ] [Hu2MefO] loaded module [transport-netty3]
[2017-01-13T16:07:21,222][INFO ][o.e.p.PluginsService     ] [Hu2MefO] loaded module [transport-netty4]
[2017-01-13T16:07:21,223][INFO ][o.e.p.PluginsService     ] [Hu2MefO] no plugins loaded
[2017-01-13T16:07:23,027][INFO ][o.e.n.Node               ] initialized
[2017-01-13T16:07:23,028][INFO ][o.e.n.Node               ] [Hu2MefO] starting ...
[2017-01-13T16:07:23,200][INFO ][o.e.t.TransportService   ] [Hu2MefO] publish_address {172.17.0.2:9300}, bound_addresses {[::]:9300}
[2017-01-13T16:07:23,205][INFO ][o.e.b.BootstrapCheck     ] [Hu2MefO] bound or publishing to a non-loopback or non-link-local address, enforcing bootstrap checks
[2017-01-13T16:07:26,274][INFO ][o.e.c.s.ClusterService   ] [Hu2MefO] new_master {Hu2MefO}{Hu2MefOkSA-WBwzchEIVRg}{97oiJMqlSCeG_qHrscV2Nw}{172.17.0.2}{172.17.0.2:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)
[2017-01-13T16:07:26,290][INFO ][o.e.h.HttpServer         ] [Hu2MefO] publish_address {172.17.0.2:9200}, bound_addresses {[::]:9200}
[2017-01-13T16:07:26,290][INFO ][o.e.n.Node               ] [Hu2MefO] started
[2017-01-13T16:07:26,446][INFO ][o.e.g.GatewayService     ] [Hu2MefO] recovered [2] indices into cluster_state
[2017-01-13T16:07:26,756][INFO ][o.e.c.r.a.AllocationService] [Hu2MefO] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[filebeat-2017.01.13][0]] ...]).

(David Pilato) #17

OK, nothing special there.

I assume all of this is on the same machine...

Try again.

If you're not doing any transformation in Logstash, you can send directly from Filebeat to Elasticsearch.
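
Sending directly from Filebeat to Elasticsearch only requires swapping the output section; a sketch based on the commented-out block already present in the filebeat.yml posted above (localhost:9200 is an assumption about where Elasticsearch listens):

```yaml
# filebeat.yml (sketch) — bypass Logstash entirely
#output.logstash:
#  hosts: ["srisvm-pocoracle:5044"]

output.elasticsearch:
  # Elasticsearch HTTP port, not the Beats port.
  hosts: ["localhost:9200"]
```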


(de Bellabre Yves) #18

OK. I'll try that next week if I have time. I'll keep you posted either way.
Have a good weekend, and thanks.


(de Bellabre Yves) #19

Hello,
No problems going from Filebeat to Elasticsearch without going through Logstash...
Thanks


(system) #20

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.