Error in Kibana 7.12

Hi all! I have a problem with Kibana running on my local machine. I would like to create a simple dashboard, and to obtain the individual data values over time I use Altair together with Jupyter to generate Vega spec files. When I insert them into the dashboard and go over 10 visualizations, I get an error. A week ago I managed to solve the same problem by cleaning all the data inside the .kibana docs, but now that no longer works. Can I get a hand?
I'll post my docker-compose file and the error log below. At localhost:5601 I get "Kibana server is not ready yet", and every GET or DELETE API call I make returns the same "Kibana server is not ready yet" response.
I think I need to run the stack without Kibana, clean the .kibana docs, and then restart.

   kibana           | {"type":"log","@timestamp":"2021-06-01T09:17:14+00:00","tags":["fatal","root"],"pid":7,"message":"Error: Unable to complete saved object migrations for the [.kibana] index. RequestAbortedError: The content length (552247624) is bigger than the maximum allowed string (536870888)\n    at migrationStateActionMachine (/usr/share/kibana/src/core/server/saved_objects/migrationsv2/migrations_state_action_machine.js:148:13)\n    at processTicksAndRejections (internal/process/task_queues.js:93:5)\n    at async Promise.all (index 0)\n    at SavedObjectsService.start (/usr/share/kibana/src/core/server/saved_objects/saved_objects_service.js:163:7)\n    at Server.start (/usr/share/kibana/src/core/server/server.js:283:31)\n    at Root.start (/usr/share/kibana/src/core/server/root/index.js:58:14)\n    at bootstrap (/usr/share/kibana/src/core/server/bootstrap.js:100:5)\n    at Command.<anonymous> (/usr/share/kibana/src/cli/serve/serve.js:169:5)"}
   kibana           | {"type":"log","@timestamp":"2021-06-01T09:17:14+00:00","tags":["info","plugins-system"],"pid":7,"message":"Stopping all plugins."}
   kibana           | {"type":"log","@timestamp":"2021-06-01T09:17:14+00:00","tags":["info","plugins","monitoring","monitoring","kibana-monitoring"],"pid":7,"message":"Monitoring stats collection is stopped"}
   kibana           | {"type":"log","@timestamp":"2021-06-01T09:17:14+00:00","tags":["info","savedobjects-service"],"pid":7,"message":"[.kibana_task_manager] OUTDATED_DOCUMENTS_SEARCH -> UPDATE_TARGET_MAPPINGS"}
   kibana           | {"type":"log","@timestamp":"2021-06-01T09:17:14+00:00","tags":["info","savedobjects-service"],"pid":7,"message":"[.kibana_task_manager] UPDATE_TARGET_MAPPINGS -> UPDATE_TARGET_MAPPINGS_WAIT_FOR_TASK"}
   elasticsearch    | {"type": "server", "timestamp": "2021-06-01T09:17:15,004Z", "level": "INFO", "component": "o.e.t.LoggingTaskListener", "cluster.name": "docker-cluster", "node.name": "9b3255beb722", "message": "2978 finished with response BulkByScrollResponse[took=52.4ms,timed_out=false,sliceId=null,updated=9,created=0,deleted=0,batches=1,versionConflicts=0,noops=0,retries=0,throttledUntil=0s,bulk_failures=[],search_failures=[]]", "cluster.uuid": "5FrxkY3GRbGzR2nSuEaxow", "node.id": "kHGHvefFTViG8CdPdF5pxw"  }
   kibana           | {"type":"log","@timestamp":"2021-06-01T09:17:15+00:00","tags":["info","savedobjects-service"],"pid":7,"message":"[.kibana_task_manager] UPDATE_TARGET_MAPPINGS_WAIT_FOR_TASK -> DONE"}
   kibana           | {"type":"log","@timestamp":"2021-06-01T09:17:15+00:00","tags":["info","savedobjects-service"],"pid":7,"message":"[.kibana_task_manager] Migration completed after 4821ms"}
   kibana           | {"type":"log","@timestamp":"2021-06-01T09:17:40+00:00","tags":["warning","plugins","licensing"],"pid":7,"message":"License information could not be obtained from Elasticsearch due to Error: Cluster client cannot be used after it has been closed. error"}
   kibana           | {"type":"log","@timestamp":"2021-06-01T09:17:44+00:00","tags":["warning","plugins-system"],"pid":7,"message":"\"eventLog\" plugin didn't stop in 30sec., move on to the next."}
   kibana           | 
   kibana           |  FATAL  Error: Unable to complete saved object migrations for the [.kibana] index. RequestAbortedError: The content length (552247624) is bigger than the maximum allowed string (536870888)
   kibana           |         kibana exited with code 1

Here is my docker-compose.yml file:

version: '2'

services: 
  zookeeper:
    image: wurstmeister/zookeeper:3.4.6
    ports:
     - "2181:2181"
  kafka:
    build: .
    ports:
     - "9092:9092"
    expose:
     - "9093"
    environment:
      KAFKA_ADVERTISED_LISTENERS: INSIDE://kafka:9093,OUTSIDE://localhost:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
      KAFKA_LISTENERS: INSIDE://0.0.0.0:9093,OUTSIDE://0.0.0.0:9092
      KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_DELETE_TOPIC_ENABLE: "true"
      KAFKA_CREATE_TOPICS: "honeycomb_core:1:1,facesheet:1:1,adhesives:1:1,hot_bonded_insert:1:1,sandwich_assembly:1:1,panel_inspection_testing:1:1,insert_potting:1:1,cold_insert_potting:1:1,thermal_hardware:1:1,esmat:1:1,cleaning:1:1,kip:1:1,storage:1:1"
    volumes:
     - /var/run/docker.sock:/var/run/docker.sock
     
  jobmanager:
    image: pyflink/playgrounds:1.10.0
    volumes:
      - ./examples:/opt/examples
    hostname: "jobmanager"
    expose:
      - "6123"
    ports:
      - "8088:8088"
    command: jobmanager
    environment:
     - |
        FLINK_PROPERTIES=
        jobmanager.rpc.address: jobmanager
  taskmanager:
    image: pyflink/playgrounds:1.10.0
    volumes:
      - ./examples:/opt/examples
    expose:
      - "6121"
      - "6122"
    depends_on:
      - jobmanager
    command: taskmanager
    links:
      - jobmanager:jobmanager
    environment:
    - |
        FLINK_PROPERTIES=
        jobmanager.rpc.address: jobmanager
        taskmanager.numberOfTaskSlots: 2
  elasticsearch:
    restart: always
    image: docker.elastic.co/elasticsearch/elasticsearch:7.12.1
    container_name: elasticsearch
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - vibhuviesdata:/usr/share/elasticsearch/data    
    ports:
      - 9200:9200
    networks:
      - es-net
    environment:
     - discovery.type=single-node
     #- pack.security.enabled:"true"
     - bootstrap.memory_lock=true
     - "ES_JAVA_OPTS=-Xms1g -Xmx1g"
     - refresh_rate=300s
     - MALLOC_ARENA_MAX=4
  kibana:
    image: docker.elastic.co/kibana/kibana:7.12.0
    mem_limit: 8096m
    mem_reservation: 7096m
    container_name: kibana
    restart: always
    networks:
    - es-net
    environment:
      ELASTICSEARCH_URL: "http://localhost:9200"
      ELASTICSEARCH_HOSTS: "http://elasticsearch:9200"  
      elasticsearch.ssl.verificationMode: "null"
      xpack.monitoring.ui.container.elasticsearch.enabled: "true"
      LOGGING_VERBOSE: null
      elasticsearch.username: "kibana"
      elasticsearch.password: "kibana"
      migrations.enableV2: "false"
      #xpack.security.encryptionKey: "kibana_kibana_kibana_kibana_kibana"
      #xpack.security.session.idleTimeout: "1h"
      #xpack.security.session.lifespan: "1h"


    ports:
      - 5601:5601
volumes:
  vibhuviesdata:
    driver: local
networks:
  es-net:
    driver: bridge 

If I call the following GET on http://localhost:9200/.kibana/_search?q=type:dashboard&size=100, I obtain:

{"took":56,"timed_out":false,"_shards":{"total":1,"successful":1,"skipped":0,"failed":0},"hits":{"total":{"value":1,"relation":"eq"},"max_score":5.9884596,"hits":[{"_index":".kibana_7.12.0_001","_type":"_doc","_id":"dashboard:11e4c160-c0da-11eb-823b-b1346039576d","_score":5.9884596,"_source":{"dashboard":{"title":"D1 DATA","hits":0,"description":"","panelsJSON":"[{\"version\":\"7.12.0\",\"gridData\":{\"x\":0,\"y\":0,\"w\":24,\"h\":10,\"i\":\"2e37ceea-ddbf-4941-bc1c-94339227b6b7\"},\"panelIndex\":\"2e37ceea-ddbf-4941-bc1c-94339227b6b7\",\"embeddableConfig\":{\"enhancements\":{}},\"panelRefName\":\"panel_0\"},{\"version\":\"7.12.0\",\"gridData\":{\"x\":24,\"y\":0,\"w\":24,\"h\":10,\"i\":\"5d6220f4-04f4-4e5a-bf7f-eab7f6fd9e7e\"},\"panelIndex\":\"5d6220f4-04f4-4e5a-bf7f-eab7f6fd9e7e\",\"embeddableConfig\":{\"enhancements\":{}},\"panelRefName\":\"panel_1\"},{\"version\":\"7.12.0\",\"gridData\":{\"x\":0,\"y\":10,\"w\":17,\"h\":11,\"i\":\"3f722705-9261-472d-8395-6fa15841f719\"},\"panelIndex\":\"3f722705-9261-472d-8395-6fa15841f719\",\"embeddableConfig\":{\"enhancements\":{}},\"panelRefName\":\"panel_2\"},{\"version\":\"7.12.0\",\"gridData\":{\"x\":17,\"y\":10,\"w\":14,\"h\":11,\"i\":\"3a41184a-062b-4cd7-9ca7-66519ca44b2e\"},\"panelIndex\":\"3a41184a-062b-4cd7-9ca7-66519ca44b2e\",\"embeddableConfig\":{\"enhancements\":{}},\"panelRefName\":\"panel_3\"},{\"version\":\"7.12.0\",\"gridData\":{\"x\":31,\"y\":10,\"w\":17,\"h\":11,\"i\":\"16c411ce-f66a-4f08-b30b-f0639bd6ea05\"},\"panelIndex\":\"16c411ce-f66a-4f08-b30b-f0639bd6ea05\",\"embeddableConfig\":{\"enhancements\":{}},\"panelRefName\":\"panel_4\"},{\"version\":\"7.12.0\",\"gridData\":{\"x\":0,\"y\":21,\"w\":19,\"h\":12,\"i\":\"8f89b575-7432-4bf4-8107-4ac6c5030370\"},\"panelIndex\":\"8f89b575-7432-4bf4-8107-4ac6c5030370\",\"embeddableConfig\":{\"enhancements\":{}},\"panelRefName\":\"panel_5\"},{\"version\":\"7.12.0\",\"gridData\":{\"x\":19,\"y\":21,\"w\":18,\"h\":12,\"i\":\"86035206-b3db-45aa-b741-68b0f4a1068a\"},\"panelInde
x\":\"86035206-b3db-45aa-b741-68b0f4a1068a\",\"embeddableConfig\":{\"enhancements\":{}},\"panelRefName\":\"panel_6\"},{\"version\":\"7.12.0\",\"gridData\":{\"x\":0,\"y\":33,\"w\":48,\"h\":14,\"i\":\"f0d39443-13cf-4711-bd1d-16d6f95e9700\"},\"panelIndex\":\"f0d39443-13cf-4711-bd1d-16d6f95e9700\",\"embeddableConfig\":{\"enhancements\":{}},\"panelRefName\":\"panel_7\"},{\"version\":\"7.12.0\",\"gridData\":{\"x\":0,\"y\":47,\"w\":48,\"h\":11,\"i\":\"b3e5c482-c06d-4752-9fe3-9b17a7ee97b8\"},\"panelIndex\":\"b3e5c482-c06d-4752-9fe3-9b17a7ee97b8\",\"embeddableConfig\":{\"enhancements\":{}},\"panelRefName\":\"panel_8\"}]","optionsJSON":"{\"hidePanelTitles\":false,\"useMargins\":true}","version":1,"timeRestore":false,"kibanaSavedObjectMeta":{"searchSourceJSON":"{\"query\":{\"query\":\"\",\"language\":\"kuery\"},\"filter\":[]}"}},"type":"dashboard","references":[{"name":"panel_0","type":"visualization","id":"X_C_Point"},{"name":"panel_1","type":"visualization","id":"Y_C_Point"},{"name":"panel_2","type":"visualization","id":"ANGLE_C_Point"},{"name":"panel_3","type":"visualization","id":"DEPTH_CRUSH_Point"},{"name":"panel_4","type":"visualization","id":"DEGREE_Crush_Point"},{"name":"panel_5","type":"visualization","id":"BatchSerialN_TIMESLIPS_Point"},{"name":"panel_6","type":"visualization","id":"ELAPSED_TIMESLIPS_Point"},{"name":"panel_7","type":"visualization","id":"PanelS_Line"},{"name":"panel_8","type":"visualization","id":"Production_bar"}],"migrationVersion":{"dashboard":"7.11.0"},"coreMigrationVersion":"7.12.0","updated_at":"2021-05-31T17:58:52.649Z"}}]}}

Can you help me clean the Kibana docs so I can restore the system? And is there a way to solve the problem caused by inserting panels into Kibana?

Hi, can you try running the following to remove the saved objects from Kibana:

curl -XDELETE 'http://localhost:9200/.kibana*'

Also, from the error: by default Kibana's payload limit is 1048576 bytes. You can increase it in kibana.yml by setting server.maxPayloadBytes: <max_number>

More information here: https://www.elastic.co/guide/en/kibana/current/settings.html
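Since Kibana here runs in Docker rather than from a local kibana.yml, note that the official Kibana image maps environment variables to settings by uppercasing the setting name and replacing dots with underscores; lowercase dotted keys in the compose `environment:` block are generally not picked up. A minimal sketch of the kibana service, assuming the compose file above (the limit value is an illustrative assumption, in bytes):

```yaml
  kibana:
    image: docker.elastic.co/kibana/kibana:7.12.0
    environment:
      ELASTICSEARCH_HOSTS: "http://elasticsearch:9200"
      # maps to server.maxPayloadBytes in kibana.yml; the Docker image
      # converts SERVER_MAXPAYLOADBYTES to that dotted setting name
      SERVER_MAXPAYLOADBYTES: "1073741824"
```

Alternatively, a custom kibana.yml can be bind-mounted into the container at /usr/share/kibana/config/kibana.yml.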

This helped me solve the problem!

I've tried this, but it's as if the system is not able to modify the maxPayloadBytes value.

I think maxPayloadBytes has been deprecated and it is now server.maxPayload. Can you check with this config and see if it works without having to delete .kibana?

I've tried with server.maxPayload and with savedObjects.maxImportPayloadBytes, but it doesn't work; it's as if the value is not taken into account. In any case, I need a way to solve this problem. Do you know whether the problem refers to the sum of all saved objects, or to a single saved object (i.e., a size problem)?

May I know how big the payload you are trying to import is? This is purely based on the payload size.

The default payload limit is 536870888 bytes and my payload size is 552247624. Can you please tell me where you have set the payload limit?
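Worth noting: 536870888 is exactly 2^29 − 24, which matches the maximum string length of the Node.js runtime bundled with Kibana (buffer.constants.MAX_STRING_LENGTH on many Node builds). If that is the case, the RequestAbortedError comes from the Elasticsearch client refusing to read a response body larger than Node can hold in a single string, which would explain why raising server.maxPayload has no visible effect. A quick arithmetic check (my own observation, not confirmed in the thread):

```shell
# 536870888 from the error message is exactly 2^29 - 24
limit=$(( (1 << 29) - 24 ))
payload=552247624
echo "limit=$limit payload=$payload"
# the migration reads the whole .kibana index as one response, so it fails
# whenever payload > limit, regardless of any Kibana payload setting
[ "$payload" -gt "$limit" ] && echo "payload exceeds the Node string limit"
```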

Yes, I can post the docker-compose.yml ("every suggestion is accepted"). I put this information into the environment section:

version: '2'

services:
  # zookeeper, kafka, jobmanager, taskmanager, and elasticsearch services
  # are unchanged from the compose file posted above
  kibana:
    image: docker.elastic.co/kibana/kibana:7.12.0
    mem_limit: 8096m
    mem_reservation: 7096m
    container_name: kibana
    restart: always
    networks:
    - es-net
    environment:
      ELASTICSEARCH_URL: "http://localhost:9200"
      ELASTICSEARCH_HOSTS: "http://elasticsearch:9200"  
      elasticsearch.ssl.verificationMode: "null"
      xpack.monitoring.ui.container.elasticsearch.enabled: "true"
      LOGGING_VERBOSE: null
      elasticsearch.username: "kibana"
      elasticsearch.password: "kibana"
      migrations.enableV2: "false"
      server.maxPayload: 553247624
      savedObjects.maxImportPayloadBytes: 553247624
      #xpack.security.encryptionKey: "kibana_kibana_kibana_kibana_kibana"
      #xpack.security.session.idleTimeout: "1h"
      #xpack.security.session.lifespan: "1h"


    ports:
      - 5601:5601
volumes:
  vibhuviesdata:
    driver: local
networks:
  es-net:
    driver: bridge

Notice that I set:
server.maxPayload: 553247624
savedObjects.maxImportPayloadBytes: 553247624

The payload you are receiving has a content length of 552247624, which is greater than the default limit of 536870888. The value you have set (553247624) is larger than 552247624, so can you check that the setting is actually being applied, and try increasing it further to see if it works?

Also, a payload larger than 500 MB is huge, considering it is a single-node cluster.
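For scale, the failing content length from the error works out to just over 526 MiB:

```shell
payload=552247624
# integer MiB (1 MiB = 1048576 bytes); truncates the fractional part
echo "$(( payload / 1048576 )) MiB"   # prints "526 MiB"
```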

How can I modify the Docker setup to get a correctly working system, perhaps with two Elasticsearch nodes (to distribute and share the load and resources)? In the meantime I'll try the modification you suggested. I've deleted everything again, but after inserting 7 visualizations it continues to give me this error. Let me know.

After modifying the settings it reports a different error, and now, while waiting for a response, it also logs information about the element that I think generates this issue:

kibana           |  FATAL  Error: Unable to complete saved object migrations for the [.kibana] index. Please check the health of your Elasticsearch cluster and try again. Error: [undefined]: Response Error
kibana           |

Now can you run the following command: curl localhost:9200/_cluster/health. Also, can we connect on Slack? It's easier.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.