Hello Team,
We have an issue where Filebeat is not shipping some logs to Elasticsearch, so we cannot see them in Kibana.
The error we are receiving is shown below.
Entity Too Large\u003c/title\u003e\u003c/head\u003e\r\n","stream":"stderr","time":"2020-09-03T17:50:29.343311185Z"}
{"log":"\u003cbody\u003e\r\n","stream":"stderr","time":"2020-09-03T17:50:29.343317172Z"}
{"log":"\u003ccenter\u003e\u003ch1\u003e413 Request Entity Too Large\u003c/h1\u003e\u003c/center\u003e\r\n","stream":"stderr","time":"2020-09-03T17:50:29.343319241Z"}
{"log":"\u003chr\u003e\u003ccenter\u003enginx\u003c/center\u003e\r\n","stream":"stderr","time":"2020-09-03T17:50:29.343321505Z"}
{"log":"\u003c/body\u003e\r\n","stream":"stderr","time":"2020-09-03T17:50:29.343323615Z"}
{"log":"\u003c/html\u003e\r\n","stream":"stderr","time":"2020-09-03T17:50:29.343325355Z"}
{"log":"\n","stream":"stderr","time":"2020-09-03T17:50:29.343327201Z"}
{"log":"2020-09-03T17:50:29.340Z\u0009INFO\u0009pipeline/output.go:95\u0009Connecting to backoff(elasticsearch(https://es-eks.euw1.rpe-internal.com:443))\n","stream":"stderr"
,"time":"2020-09-03T17:50:29.343328987Z"}
{"log":"2020-09-03T17:50:29.340Z\u0009INFO\u0009[publisher]\u0009pipeline/retry.go:196\u0009retryer: send unwait-signal to consumer\n","stream":"stderr","time":"2020-09-03T17:50:
29.343331418Z"}
{"log":"2020-09-03T17:50:29.340Z\u0009INFO\u0009[publisher]\u0009pipeline/retry.go:198\u0009 done\n","stream":"stderr","time":"2020-09-03T17:50:29.343333578Z"}
{"log":"2020-09-03T17:50:29.340Z\u0009INFO\u0009[publisher]\u0009pipeline/retry.go:173\u0009retryer: send wait signal to consumer\n","stream":"stderr","time":"2020-09-03T17:50:29
.343335698Z"}
{"log":"2020-09-03T17:50:29.340Z\u0009INFO\u0009[publisher]\u0009pipeline/retry.go:175\u0009 done\n","stream":"stderr","time":"2020-09-03T17:50:29.343340914Z"}
{"log":"2020-09-03T17:50:29.352Z\u0009INFO\u0009elasticsearch/client.go:757\u0009Attempting to connect to Elasticsearch version 7.6.1\n","stream":"stderr","time":"2020-09-03T17:5
0:29.35290055Z"}
{"log":"2020-09-03T17:50:29.376Z\u0009INFO\u0009template/load.go:89\u0009Template filebeat already exists and will not be overwritten.\n","stream":"stderr","time":"2020-09-03T17:
50:29.377012369Z"}
{"log":"2020-09-03T17:50:29.376Z\u0009INFO\u0009[index-management]\u0009idxmgmt/std.go:295\u0009Loaded index template.\n","stream":"stderr","time":"2020-09-03T17:50:29.377041608Z
"}
{"log":"2020-09-03T17:50:29.379Z\u0009INFO\u0009pipeline/output.go:105\u0009Connection to backoff(elasticsearch(https://es-eks.euw1.rpe-internal.com:443)) established\n","str
eam":"stderr","time":"2020-09-03T17:50:29.379411308Z"}
{"log":"2020-09-03T17:50:29.379Z\u0009INFO\u0009[publisher]\u0009pipeline/retry.go:196\u0009retryer: send unwait-signal to consumer\n","stream":"stderr","time":"2020-09-03T17:50:
29.3794334Z"}
{"log":"2020-09-03T17:50:29.379Z\u0009INFO\u0009[publisher]\u0009pipeline/retry.go:198\u0009 done\n","stream":"stderr","time":"2020-09-03T17:50:29.379459664Z"}
{"log":"2020-09-03T17:50:29.388Z\u0009ERROR\u0009elasticsearch/client.go:350\u0009Failed to perform any bulk index operations: 413 Request Entity Too Large: \u003chtml\u003e\r\n"
,"stream":"stderr","time":"2020-09-03T17:50:29.388429314Z"}
{"log":"\u003chead\u003e\u003ctitle\u003e413 Request Entity Too Large\u003c/title\u003e\u003c/head\u003e\r\n","stream":"stderr","time":"2020-09-03T17:50:29.388459044Z"}
{"log":"\u003cbody\u003e\r\n","stream":"stderr","time":"2020-09-03T17:50:29.388467691Z"}
{"log":"\u003ccenter\u003e\u003ch1\u003e413 Request Entity Too Large\u003c/h1\u003e\u003c/center\u003e\r\n","stream":"stderr","time":"2020-09-03T17:50:29.38847321Z"}
{"log":"\u003chr\u003e\u003ccenter\u003enginx\u003c/center\u003e\r\n","stream":"stderr","time":"2020-09-03T17:50:29.388479637Z"}
{"log":"\u003c/body\u003e\r\n","stream":"stderr","time":"2020-09-03T17:50:29.388485363Z"}
{"log":"\u003c/html\u003e\r\n","stream":"stderr","time":"2020-09-03T17:50:29.388490247Z"}
{"log":"\n","stream":"stderr","time":"2020-09-03T17:50:29.388495211Z"}
{"log":"2020-09-03T17:50:29.388Z\u0009INFO\u0009[publisher]\u0009pipeline/retry.go:173\u0009retryer: send wait signal to consumer\n","stream":"stderr","time":"2020-09-03T17:50:29
.388513587Z"}
{"log":"2020-09-03T17:50:29.388Z\u0009INFO\u0009[publisher]\u0009pipeline/retry.go:175\u0009 done\n","stream":"stderr","time":"2020-09-03T17:50:29.388520807Z"}
{"log":"2020-09-03T17:50:31.285Z\u0009ERROR\u0009pipeline/output.go:121\u0009Failed to publish events: 413 Request Entity Too Large: \u003chtml\u003e\r\n","stream":"stderr","time
":"2020-09-03T17:50:31.285786413Z"}
{"log":"\u003chead\u003e\u003ctitle\u003e413 Request Entity Too Large\u003c/title\u003e\u003c/head\u003e\r\n","stream":"stderr","time":"2020-09-03T17:50:31.285822628Z"}
{"log":"\u003cbody\u003e\r\n","stream":"stderr","time":"2020-09-03T17:50:31.285829076Z"}
{"log":"\u003ccenter\u003e\u003ch1\u003e413 Request Entity Too Large\u003c/h1\u003e\u003c/center\u003e\r\n","stream":"stderr","time":"2020-09-03T17:50:31.285832294Z"}
{"log":"\u003chr\u003e\u003ccenter\u003enginx\u003c/center\u003e\r\n","stream":"stderr","time":"2020-09-03T17:50:31.285837107Z"}
{"log":"\u003c/body\u003e\r\n","stream":"stderr","time":"2020-09-03T17:50:31.285840497Z"}
{"log":"\u003c/html\u003e\r\n","stream":"stderr","time":"2020-09-03T17:50:31.285842537Z"
I have searched for this issue online and found some blog posts that advise reducing the bulk_max_size value in the Filebeat configuration (filebeat.yml).
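If I understand that advice correctly, the change would look something like the sketch below in filebeat.yml; the output host is taken from the logs above, and the value 25 is only an illustrative guess, not a recommendation from the blogs:

    output.elasticsearch:
      hosts: ["https://es-eks.euw1.rpe-internal.com:443"]
      # bulk_max_size caps how many events Filebeat packs into a single
      # bulk request; a smaller batch means a smaller HTTP request body,
      # which is why lowering it is suggested for 413 errors.
      bulk_max_size: 25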
I tried to replicate the issue with a test pod that sends over 200 MB of data to Elasticsearch (see the sketch below), but I am still receiving logs for that pod, so I could not reproduce the failure.
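For reference, a generator for such a test pod could be as simple as the following Python sketch (hypothetical; the line size and interval here are arbitrary and not the exact values from our test):

    import sys
    import time

    # Print one large log line per second to stdout; Filebeat tails the
    # container log and ships each line to Elasticsearch.
    LINE = "x" * (1024 * 1024)  # 1 MB per line (arbitrary size)

    while True:
        print(LINE)
        sys.stdout.flush()
        time.sleep(1)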
Please suggest how we can resolve this.