Kibana | FATAL Error: HELP

Hi all, I've run into this error:
I created some plots with Vega-Lite, and when I ran the script I use for Kibana again I got the error below, although everything worked fine until a while ago. I have one Kibana node and one Elasticsearch node.


ed_out=false,sliceId=null,updated=9,created=0,deleted=0,batches=1,versionConflicts=0,noops=0,retries=0,throttledUntil=0s,bulk_failures=[],search_failures=[]]", "cluster.uuid": "5FrxkY3GRbGzR2nSuEaxow", "node.id": "kHGHvefFTViG8CdPdF5pxw"  }
kibana           | {"type":"log","@timestamp":"2021-05-28T00:18:25+00:00","tags":["info","savedobjects-service"],"pid":7,"message":"[.kibana_task_manager] UPDATE_TARGET_MAPPINGS_WAIT_FOR_TASK -> DONE"}
kibana           | {"type":"log","@timestamp":"2021-05-28T00:18:25+00:00","tags":["info","savedobjects-service"],"pid":7,"message":"[.kibana_task_manager] Migration completed after 11178ms"}
kibana           | {"type":"log","@timestamp":"2021-05-28T00:18:44+00:00","tags":["warning","plugins","licensing"],"pid":7,"message":"License information could not be obtained from Elasticsearch due to Error: Cluster client cannot be used after it has been closed. error"}
kibana           | {"type":"log","@timestamp":"2021-05-28T00:18:55+00:00","tags":["warning","plugins-system"],"pid":7,"message":"\"eventLog\" plugin didn't stop in 30sec., move on to the next."}
kibana           | 
kibana           |  FATAL  Error: Unable to complete saved object migrations for the [.kibana] index. RequestAbortedError: The content length (823769731) is bigger than the maximum allowed string (536870888)

I hope someone can help me solve this as soon as possible.
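For context, a quick way to see how large the .kibana index has grown (a minimal sketch, assuming Elasticsearch answers on localhost:9200 as in the compose file below):

```sh
# Show size and doc count of the .kibana indices; the FATAL above fired because
# a single migration read exceeded Node's maximum string length (~512 MB).
curl "localhost:9200/_cat/indices/.kibana*?v&h=index,docs.count,store.size"
```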

What version of Kibana is this? Are there additional details in the logs? How big was the Vega spec?


version: '2'

services: 
  zookeeper:
    image: wurstmeister/zookeeper:3.4.6
    ports:
     - "2181:2181"
  kafka:
    build: .
    ports:
     - "9092:9092"
    expose:
     - "9093"
    environment:
      KAFKA_ADVERTISED_LISTENERS: INSIDE://kafka:9093,OUTSIDE://localhost:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
      KAFKA_LISTENERS: INSIDE://0.0.0.0:9093,OUTSIDE://0.0.0.0:9092
      KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_DELETE_TOPIC_ENABLE: "true"
      KAFKA_CREATE_TOPICS: "honeycomb_core:1:1,facesheet:1:1,adhesives:1:1,hot_bonded_insert:1:1,sandwich_assembly:1:1,panel_inspection_testing:1:1,insert_potting:1:1,cold_insert_potting:1:1,thermal_hardware:1:1,esmat:1:1,cleaning:1:1,kip:1:1,storage:1:1"
    volumes:
     - /var/run/docker.sock:/var/run/docker.sock
     
  jobmanager:
    image: pyflink/playgrounds:1.10.0
    volumes:
      - ./examples:/opt/examples
    hostname: "jobmanager"
    expose:
      - "6123"
    ports:
      - "8088:8088"
    command: jobmanager
    environment:
     - |
        FLINK_PROPERTIES=
        jobmanager.rpc.address: jobmanager
  taskmanager:
    image: pyflink/playgrounds:1.10.0
    volumes:
      - ./examples:/opt/examples
    expose:
      - "6121"
      - "6122"
    depends_on:
      - jobmanager
    command: taskmanager
    links:
      - jobmanager:jobmanager
    environment:
    - |
        FLINK_PROPERTIES=
        jobmanager.rpc.address: jobmanager
        taskmanager.numberOfTaskSlots: 2
  elasticsearch:
    restart: always
    image: docker.elastic.co/elasticsearch/elasticsearch:7.12.1
    container_name: elasticsearch
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - vibhuviesdata:/usr/share/elasticsearch/data    
    ports:
      - 9200:9200
    networks:
      - es-net
    environment:
     - discovery.type=single-node
     #- xpack.security.enabled=true
     - bootstrap.memory_lock=true
     - "ES_JAVA_OPTS=-Xms1g -Xmx1g"
  
  kibana:
    image: docker.elastic.co/kibana/kibana:7.12.0
    mem_limit: 8096m
    mem_reservation: 7096m
    container_name: kibana
    restart: always
    networks:
    - es-net
    environment:
      ELASTICSEARCH_URL: "http://elasticsearch:9200"  # localhost inside the container would point at Kibana itself
      ELASTICSEARCH_HOSTS: "http://elasticsearch:9200"
      ELASTICSEARCH_SSL_VERIFICATIONMODE: "none"  # env-var form of elasticsearch.ssl.verificationMode; "null" is not a valid value
      XPACK_MONITORING_UI_CONTAINER_ELASTICSEARCH_ENABLED: "true"
      LOGGING_VERBOSE: "true"
      #elasticsearch.username: "kibana"
      #elasticsearch.password: "kibana"
      #xpack.security.encryptionKey: "kibana_kibana_kibana_kibana_kibana"
      #xpack.security.session.idleTimeout: "1h"
      #xpack.security.session.lifespan: "1h"


    ports:
      - 5601:5601
volumes:
  vibhuviesdata:
    driver: local
networks:
  es-net:
    driver: bridge 

This is my docker-compose file, but I think the problem is that I inserted a lot of Vega specs: I created many plots through Altair in order to report them.
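If it helps, a quick way to check whether those plots are what is inflating the index (a sketch, assuming the default .kibana index; each Altair/Vega plot is stored as one, potentially very large, saved-object document):

```sh
# Count visualization saved objects in the .kibana index
curl "localhost:9200/.kibana/_count?q=type:visualization"
```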

I'm pasting the log output below, @tylersmalley:


kibana           | {"type":"log","@timestamp":"2021-05-28T07:47:15+00:00","tags":["info","plugins","taskManager"],"pid":6,"message":"TaskManager is identified by the Kibana UUID: 70b8c42e-5405-437c-998f-4aa640b5173b"}
kibana           | {"type":"log","@timestamp":"2021-05-28T07:47:17+00:00","tags":["warning","plugins","security","config"],"pid":6,"message":"Generating a random key for xpack.security.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.security.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."}
kibana           | {"type":"log","@timestamp":"2021-05-28T07:47:17+00:00","tags":["warning","plugins","security","config"],"pid":6,"message":"Session cookies will be transmitted over insecure connections. This is not recommended."}
kibana           | {"type":"log","@timestamp":"2021-05-28T07:47:18+00:00","tags":["warning","plugins","reporting","config"],"pid":6,"message":"Generating a random key for xpack.reporting.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.reporting.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."}
kibana           | {"type":"log","@timestamp":"2021-05-28T07:47:18+00:00","tags":["warning","plugins","reporting","config"],"pid":6,"message":"Chromium sandbox provides an additional layer of protection, but is not supported for Linux CentOS 8.3.2011\n OS. Automatically setting 'xpack.reporting.capture.browser.chromium.disableSandbox: true'."}
kibana           | {"type":"log","@timestamp":"2021-05-28T07:47:18+00:00","tags":["warning","plugins","encryptedSavedObjects"],"pid":6,"message":"Saved objects encryption key is not set. This will severely limit Kibana functionality. Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."}
kibana           | {"type":"log","@timestamp":"2021-05-28T07:47:18+00:00","tags":["warning","plugins","fleet"],"pid":6,"message":"Fleet APIs are disabled because the Encrypted Saved Objects plugin is missing encryption key. Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."}
kibana           | {"type":"log","@timestamp":"2021-05-28T07:47:19+00:00","tags":["warning","plugins","actions","actions"],"pid":6,"message":"APIs are disabled because the Encrypted Saved Objects plugin is missing encryption key. Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."}
kibana           | {"type":"log","@timestamp":"2021-05-28T07:47:20+00:00","tags":["warning","plugins","alerts","plugins","alerting"],"pid":6,"message":"APIs are disabled because the Encrypted Saved Objects plugin is missing encryption key. Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."}
kibana           | {"type":"log","@timestamp":"2021-05-28T07:47:21+00:00","tags":["info","plugins","monitoring","monitoring"],"pid":6,"message":"config sourced from: production cluster"}
kibana           | {"type":"log","@timestamp":"2021-05-28T07:47:25+00:00","tags":["info","savedobjects-service"],"pid":6,"message":"Waiting until all Elasticsearch nodes are compatible with Kibana before starting saved objects migrations..."}
kibana           | {"type":"log","@timestamp":"2021-05-28T07:47:25+00:00","tags":["error","elasticsearch"],"pid":6,"message":"Request error, retrying\nGET http://elasticsearch:9200/_xpack?accept_enterprise=true => connect ECONNREFUSED 172.20.0.3:9200"}
kibana           | {"type":"log","@timestamp":"2021-05-28T07:47:25+00:00","tags":["warning","elasticsearch"],"pid":6,"message":"Unable to revive connection: http://elasticsearch:9200/"}
kibana           | {"type":"log","@timestamp":"2021-05-28T07:47:25+00:00","tags":["warning","elasticsearch"],"pid":6,"message":"No living connections"}
kibana           | {"type":"log","@timestamp":"2021-05-28T07:47:25+00:00","tags":["warning","plugins","licensing"],"pid":6,"message":"License information could not be obtained from Elasticsearch due to Error: No Living connections error"}
kibana           | {"type":"log","@timestamp":"2021-05-28T07:47:25+00:00","tags":["warning","plugins","monitoring","monitoring"],"pid":6,"message":"X-Pack Monitoring Cluster Alerts will not be available: No Living connections"}
kibana           | {"type":"log","@timestamp":"2021-05-28T07:47:26+00:00","tags":["error","savedobjects-service"],"pid":6,"message":"Unable to retrieve version information from Elasticsearch nodes."}
elasticsearch    | {"type": "server", "timestamp": "2021-05-28T07:47:48,798Z", "level": "INFO", "component": "o.e.p.PluginsService", "cluster.name": "docker-cluster", "node.name": "d2e496359d00", "message": "no plugins loaded" }
elasticsearch    | {"type": "server", "timestamp": "2021-05-28T07:47:49,604Z", "level": "INFO", "component": "o.e.e.NodeEnvironment", "cluster.name": "docker-cluster", "node.name": "d2e496359d00", "message": "using [1] data paths, mounts [[/usr/share/elasticsearch/data (/dev/sda5)]], net usable_space [13.8gb], net total_space [43.5gb], types [ext4]" }
elasticsearch    | {"type": "server", "timestamp": "2021-05-28T07:47:49,619Z", "level": "INFO", "component": "o.e.e.NodeEnvironment", "cluster.name": "docker-cluster", "node.name": "d2e496359d00", "message": "heap size [4gb], compressed ordinary object pointers [true]" }
elasticsearch    | {"type": "server", "timestamp": "2021-05-28T07:47:51,238Z", "level": "INFO", "component": "o.e.n.Node", "cluster.name": "docker-cluster", "node.name": "d2e496359d00", "message": "node name [d2e496359d00], node ID [kHGHvefFTViG8CdPdF5pxw], cluster name [docker-cluster], roles [transform, data_frozen, master, remote_cluster_client, data, ml, data_content, data_hot, data_warm, data_cold, ingest]" }
kibana           | {"type":"log","@timestamp":"2021-05-28T07:47:55+00:00","tags":["warning","elasticsearch"],"pid":6,"message":"Unable to revive connection: http://elasticsearch:9200/"}
kibana           | {"type":"log","@timestamp":"2021-05-28T07:47:55+00:00","tags":["warning","elasticsearch"],"pid":6,"message":"No living connections"}
kibana           | {"type":"log","@timestamp":"2021-05-28T07:47:55+00:00","tags":["warning","plugins","licensing"],"pid":6,"message":"License information could not be obtained from Elasticsearch due to Error: No Living connections error"}




As you can see, the version is 7.12.0.
To be more precise, I followed this link to create the dashboard elements I wanted to add. After creating them, I got the error reported above:
How to bring Jupyter Notebook visualizations to Kibana dashboards for data science | Elastic Blog

I think I have too many saved objects, but I cannot access the Kibana service on localhost because it responds with "Kibana server is not ready yet"! @tylersmalley
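Even while the UI only answers "Kibana server is not ready yet", the status API and the container logs usually still respond (a sketch, assuming the container name kibana from the compose file):

```sh
# See what Kibana is doing while it is "not ready" (here: stuck saved-object migrations)
curl -s localhost:5601/api/status
docker logs --tail 50 kibana
```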

I managed to solve it :slightly_smiling_face:
After going through various answers on the web, I managed to solve it:

Step 1: list the dashboard saved objects stored in the .kibana index:

 `http://localhost:9200/.kibana/_search?q=type:dashboard&size=100`
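A variant of step 1 that helped me see what was taking up the room (a hedged sketch: it aggregates the .kibana index by saved-object type):

```sh
# Breakdown of saved objects by type; "type" is a keyword field in .kibana
curl -s "localhost:9200/.kibana/_search?size=0" \
  -H 'Content-Type: application/json' \
  -d '{"aggs":{"by_type":{"terms":{"field":"type"}}}}'
```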

Step 2: delete the offending saved objects through the Kibana saved objects API:

`curl -X DELETE "localhost:5601/api/saved_objects/index-pattern/e68f45b0-ab73-11eb-a01c-8590ef1580f4" -H 'kbn-xsrf: true'`
(the path is respectively `<type>/<id>`)
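If there are many objects to remove, the two steps can be combined (a sketch, assuming jq is installed; documents in .kibana have IDs of the form dashboard:<uuid>):

```sh
# Delete every dashboard found by the step-1 query through the Kibana API
for id in $(curl -s "localhost:9200/.kibana/_search?q=type:dashboard&size=100" \
    | jq -r '.hits.hits[]._id' | sed 's/^dashboard://'); do
  curl -X DELETE "localhost:5601/api/saved_objects/dashboard/$id" -H 'kbn-xsrf: true'
done
```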

Before doing this, I followed the instructions in this link to be sure that everything works correctly.
In practice, the problem is that we had too many saved objects for the space available. There are different ways to solve this: in my case I deleted them all, but you can also increase the space. The other suggestion is to avoid partitions.
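Instead of deleting everything, another option I came across is to make the migration read smaller batches, so a single batch stays under Node's string limit (a sketch, assuming the kibana.yml inside the container is writable; migrations.batchSize defaults to 1000 in 7.12):

```sh
# Lower the migration batch size, then restart Kibana so the migration reruns
docker exec kibana bash -c \
  'echo "migrations.batchSize: 100" >> /usr/share/kibana/config/kibana.yml'
docker restart kibana
```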
