Could you share the full error you get, or is that it? Please include the lines before and after it. Could you also share your full config file and the version of Packetbeat you use?
Root cause, as I found it:
The error occurs when the following is added to the elasticsearch output in packetbeat.yml:
index: "packetbeat-%{[beat.version]}-%{+yyyy.MM.dd.HH}"
Environment:
elasticsearch version: 6.2.4
packetbeat version: 6.2.4
Full Log:
2018-06-04T00:37:40.893+0530 ERROR pipeline/output.go:92 Failed to publish events: temporary bulk send failure
2018-06-04T00:37:40.893+0530 DEBUG [elasticsearch] elasticsearch/client.go:666 ES Ping(url=http://localhost:9200)
2018-06-04T00:37:40.894+0530 DEBUG [elasticsearch] elasticsearch/client.go:689 Ping status code: 200
2018-06-04T00:37:40.894+0530 INFO elasticsearch/client.go:690 Connected to Elasticsearch version 6.2.2
2018-06-04T00:37:40.894+0530 DEBUG [elasticsearch] elasticsearch/client.go:708 HEAD http://localhost:9200/_template/packetbeat-6.2.4 <nil>
2018-06-04T00:37:40.895+0530 INFO template/load.go:73 Template already exists and will not be overwritten.
2018-06-04T00:37:40.896+0530 DEBUG [elasticsearch] elasticsearch/client.go:303 PublishEvents: 1 events have been published to elasticsearch in 1.245631ms.
2018-06-04T00:37:40.896+0530 DEBUG [elasticsearch] elasticsearch/client.go:507 Bulk item insert failed (i=0, status=500): {"type":"string_index_out_of_bounds_exception","reason":"String index out of range: 0"}
Config:
#============================== Network device ================================
# Select the network interface to sniff the data. On Linux, you can use the
# "any" keyword to sniff on all connected interfaces.
packetbeat.interfaces.device: any
#========================== Transaction protocols =============================
packetbeat.protocols:
- type: http
  # Configure the ports where to listen for HTTP traffic. You can disable
  # the HTTP protocol by commenting out the list of ports.
  ports: [80, 8080, 8000, 5000, 8002]
  send_all_headers: true
  send_response: true
  send_request: true
  include_body_for: ["application/json"]
- type: tls
  # Configure the ports where to listen for TLS traffic. You can disable
  # the TLS protocol by commenting out the list of ports.
  ports: [443]
#==================== Elasticsearch template setting ==========================
# Set to false to disable template loading.
setup.template.enabled: true
# Overwrite existing template
setup.template.overwrite: true
setup.template.name: "packetbeat-%{[beat.version]}"
setup.template.pattern: "packetbeat-%{[beat.version]}-*"
setup.template.fields: "fields.yml"
setup.template.settings:
  index.number_of_shards: 3
  #index.codec: best_compression
  #_source.enabled: false
#================================ Processors ===================================
processors:
#- include_fields.fields: ["http.request.body"]
- include_fields.fields: ["http.request.body", "http.response.body", "direction", "http.request.headers.api_id", "http.request.headers.api_name",
    "http.request.headers.api_publisher", "http.request.headers.application_id", "http.request.headers.application_name", "http.request.headers.request_id",
    "http.request.headers.resource", "http.request.headers.user_id", "http.request.headers.version", "http.response.code", "http.response.phrase", "ip", "path",
    "responsetime", "status", "bytes_out", "bytes_in", "client_ip"]
#============================== Kibana =====================================
# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:
  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"
#================================ Outputs =====================================
# Configure what output to use when sending the data collected by the beat.
#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]
  index: "packetbeat-%{[beat.version]}-%{+yyyy.MM.dd.HH}"
I think the problem is that your include_fields processor removes beat.version from the event, so the index name becomes invalid. Add beat.version to your include list and try again.
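A sketch of the fix, mirroring the processor from the config above (field list abbreviated for readability; the rest of the original fields would follow):

```yaml
processors:
- include_fields:
    # beat.version must survive the processor so the
    # "packetbeat-%{[beat.version]}-..." index pattern can resolve
    fields: ["beat.version", "http.request.body", "http.response.body"]
```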
Hello,
I have a similar problem, but with Filebeat.
2018-06-12T11:04:48.747Z ERROR pipeline/output.go:92 Failed to publish events: temporary bulk send failure
2018-06-12T11:04:48.749Z INFO elasticsearch/client.go:690 Connected to Elasticsearch version 6.2.4
2018-06-12T11:04:48.751Z INFO template/load.go:73 Template already exists and will not be overwritten.
No change when the index name is set back to the default.
But I think I've found it.
I changed
DOCKER_OPTS="--log-opt max-size=100m --log-opt max-file=5"
to
DOCKER_OPTS="--log-opt max-size=500m"
and for now the error is gone.
It's a setting to protect the server from disk-space problems caused by Docker logging (the default is an unlimited log size; with max-file set, Docker rolls the log files).
Maybe there is a bug / unexpected behavior in Filebeat when the logging options are set to roll over several files, and maybe only with autodiscovery, which is experimental.
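For reference, the same rotation limits can also be set in Docker's /etc/docker/daemon.json instead of DOCKER_OPTS; a sketch, assuming the default json-file logging driver (values here match the DOCKER_OPTS above, not a recommendation):

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "500m"
  }
}
```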