Kibana error during saved-object migration

How can I add memory to Kibana to solve a problem with saved objects? I have created more than one topic about this but haven't received any response.

Hi @LombardoAndrea195

Can you expand a little more on what the problem is? Is your Kibana instance running out of memory during the upgrade? How are you running your stack? (Docker? On the Cloud? On a local machine?)

Yes, of course. I push data into Elasticsearch using the routines in elasticsearch-py. In detail, I use the index routine to create the first document (and, with it, the first index) and then I push the rest of the data using create. The data arrives through Kafka, so I receive one document at a time. The total amount of data is no more than 250 MB. After inserting all of this data I create some saved objects using Altair and Eland, as suggested in the link, to build panels, that is, saved objects for dashboards where I can see the evolution of a value over time.
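
A minimal sketch of that ingestion pattern, assuming elasticsearch-py 7.x (the index name, document ids, and field names are invented for illustration):

from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# index() writes the first document and implicitly creates the index
es.index(index="my-metrics", id="1",
         body={"value": 42.0, "@timestamp": "2021-09-11T11:00:00"})

# create() inserts each subsequent document (one per Kafka message);
# it fails if the id already exists, so every message needs a fresh id
es.create(index="my-metrics", id="2",
          body={"value": 43.5, "@timestamp": "2021-09-11T11:05:00"})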


version: "2"

services:
  zookeeper:
    image: wurstmeister/zookeeper:3.4.6
    ports:
      - "2181:2181"
  kafka:
    image: wurstmeister/kafka
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: localhost
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_CREATE_TOPICS: "a:1:1,b:1:1,c:1:1"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
         
  jobmanager:
    image: pyflink/playgrounds:1.10.0
    volumes:
      - ./examples:/opt/examples
    hostname: "jobmanager"
    expose:
      - "6123"
    ports:
      - "8088:8088"
    command: jobmanager
    environment:
      - |
        FLINK_PROPERTIES=
        jobmanager.rpc.address: jobmanager
  taskmanager:
    image: pyflink/playgrounds:1.10.0
    volumes:
      - ./examples:/opt/examples
    expose:
      - "6121"
      - "6122"
    depends_on:
      - jobmanager
    command: taskmanager
    links:
      - jobmanager:jobmanager
    environment:
      - |
        FLINK_PROPERTIES=
        jobmanager.rpc.address: jobmanager
        taskmanager.numberOfTaskSlots: 2
  
  elasticsearch0:
    restart: always
    image: docker.elastic.co/elasticsearch/elasticsearch:7.14.1
    container_name: elasticsearch
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - data01:/usr/share/elasticsearch/data    
    ports:
      - 9200:9200
    networks:
      - es-net
    environment:
      - discovery.type=single-node
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms1g -Xmx1g"
      - MALLOC_ARENA_MAX=4
      # note: the refresh interval is a per-index setting
      # (index.refresh_interval), not an environment variable,
      # so it cannot be set here
  
  kibana:
    image: docker.elastic.co/kibana/kibana:7.14.1
    mem_reservation: 9096m
    mem_limit: 9096m
    container_name: kibana
    networks:
      - es-net
    environment:
      ELASTICSEARCH_HOSTS: "http://elasticsearch:9200"
      # the Kibana image maps UPPER_SNAKE_CASE variables onto kibana.yml
      # settings, so dotted keys such as server.maxPayload must be written
      # as SERVER_MAXPAYLOAD to be picked up
      SERVER_MAXPAYLOAD: "1653247624"
      NODE_OPTIONS: "--max-old-space-size=4048"
      MIGRATIONS_BATCHSIZE: "5000"
      SAVEDOBJECTS_MAXIMPORTEXPORTSIZE: "2000"
      MIGRATIONS_RETRYATTEMPTS: "20"
      SAVEDOBJECTS_MAXIMPORTPAYLOADBYTES: "26214400"
      # mem_limit and mem_reservation are service-level options (set above),
      # not environment variables, so they do not belong in this block
    ports:
      - 5601:5601
    links:
      - elasticsearch0

volumes:
  data01:
    driver: local
networks:
  es-net:
    driver: bridge

Above is the docker-compose file.
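
An aside on the file above (not something raised in the thread): migrations.batchSize controls how many saved objects Kibana reads, transforms, and holds in memory per batch during a migration, and this file raises it to 5000, five times the 7.14 default of 1000. If the process is dying with an out-of-memory error around TRANSFORMED_DOCUMENTS_BULK_INDEX, a smaller batch is the usual lever, for example:

  kibana:
    environment:
      # hypothetical tuning, not confirmed in this thread: smaller batches
      # mean fewer transformed documents buffered per bulk request
      MIGRATIONS_BATCHSIZE: "250"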

Error generated in Kibana log is the following:

{"type":"log","@timestamp":"2021-09-11T11:08:43+00:00","tags":["info","savedobjects-service"],"pid":1214,"message":"[.kibana_task_manager] OUTDATED_DOCUMENTS_SEARCH_CLOSE_PIT -> UPDATE_TARGET_MAPPINGS. took: 43ms."}

{"type":"log","@timestamp":"2021-09-11T11:08:43+00:00","tags":["info","savedobjects-service"],"pid":1214,"message":"[.kibana_task_manager] UPDATE_TARGET_MAPPINGS -> UPDATE_TARGET_MAPPINGS_WAIT_FOR_TASK. took: 387ms."}

{"type":"log","@timestamp":"2021-09-11T11:08:43+00:00","tags":["info","savedobjects-service"],"pid":1214,"message":"[.kibana_task_manager] UPDATE_TARGET_MAPPINGS_WAIT_FOR_TASK -> DONE. took: 450ms."}

{"type":"log","@timestamp":"2021-09-11T11:08:43+00:00","tags":["info","savedobjects-service"],"pid":1214,"message":"[.kibana_task_manager] Migration completed after 1800ms"}

{"type":"log","@timestamp":"2021-09-11T11:08:47+00:00","tags":["info","savedobjects-service"],"pid":1214,"message":"[.kibana] Starting to process 19 documents."}

{"type":"log","@timestamp":"2021-09-11T11:08:47+00:00","tags":["info","savedobjects-service"],"pid":1214,"message":"[.kibana] OUTDATED_DOCUMENTS_SEARCH_READ -> OUTDATED_DOCUMENTS_TRANSFORM. took: 5340ms."}

{"type":"log","@timestamp":"2021-09-11T11:09:14+00:00","tags":["info","savedobjects-service"],"pid":1214,"message":"[.kibana] OUTDATED_DOCUMENTS_TRANSFORM -> TRANSFORMED_DOCUMENTS_BULK_INDEX. took: 26286ms."}

{"type":"log","@timestamp":"2021-09-11T11:09:17+00:00","tags":["info","savedobjects-service"],"pid":1214,"message":"[.kibana] TRANSFORMED_DOCUMENTS_BULK_INDEX -> FATAL. took: 3252ms."}

{"type":"log","@timestamp":"2021-09-11T11:09:17+00:00","tags":["error","savedobjects-service"],"pid":1214,"message":"[.kibana] migration failed, dumping execution log:"}

I don't know what happens. I changed the version from 7.12.1 to 7.14.1 and everything goes well when I start inserting data pushed through Kafka. As I said before, I have some saved objects, but I don't know what happens. The only thing I know is that the browser shows an out-of-memory error and Kibana crashes with exit code 1. From that point the system is unable to respond: Kibana gets as far as showing "Kibana server is not ready yet" and then, after a while, everything crashes. I think I am missing some configuration, because the amount of data is too small for this.

So after upgrading everything is good, and it's only attempting to view a dashboard that causes a crash? Am I understanding that correctly?

Mmm, not exactly. The problem occurs when I insert data into Elasticsearch while I also have some saved objects in Kibana. I think the saved objects put some weight on the server. If I insert all the data into Elasticsearch everything goes well, but only as long as I don't create any saved objects for the dashboards.

I don't know if it could be of help for understanding the error:

{"type":"log","@timestamp":"2021-09-13T16:57:19+00:00","tags":["info","savedobjects-service"],"pid":1213,"message":"[.kibana_task_manager] Migration completed after 1409ms"}

{"type":"log","@timestamp":"2021-09-13T16:57:22+00:00","tags":["info","savedobjects-service"],"pid":1213,"message":"[.kibana] Starting to process 19 documents."}

{"type":"log","@timestamp":"2021-09-13T16:57:22+00:00","tags":["info","savedobjects-service"],"pid":1213,"message":"[.kibana] OUTDATED_DOCUMENTS_SEARCH_READ -> OUTDATED_DOCUMENTS_TRANSFORM. took: 4365ms."}

{"type":"log","@timestamp":"2021-09-13T16:58:13+00:00","tags":["info","savedobjects-service"],"pid":1213,"message":"[.kibana] OUTDATED_DOCUMENTS_TRANSFORM -> TRANSFORMED_DOCUMENTS_BULK_INDEX. took: 51088ms."}

{"type":"log","@timestamp":"2021-09-13T16:58:16+00:00","tags":["info","savedobjects-service"],"pid":1213,"message":"[.kibana] TRANSFORMED_DOCUMENTS_BULK_INDEX -> FATAL. took: 3132ms."}

{"type":"log","@timestamp":"2021-09-13T16:58:16+00:00","tags":["error","savedobjects-service"],"pid":1213,"message":"[.kibana] migration failed, dumping execution log:"}

After this the system starts crashing.

Please share all the Kibana logs during the upgrade.

There is too much information and it doesn't fit into the reply. How can I share it?

Can you help me with that?

You can drag and drop attachment files into your reply

With drag and drop, the system replies that it doesn't support files other than jpeg, png or gif, so I'll try to do it in a different manner.

I've put the file into a GitHub repository to give you the possibility to see the full log: link

Are you sure these are all the logs? I would expect to see a log entry with tags: ["fatal"]

Yes, these are all coming from Kibana. Do you also need what is coming from the Elasticsearch log?

What is present with FATAL in the Kibana log is this one:

{"type":"log","@timestamp":"2021-09-14T22:30:54+00:00","tags":["info","savedobjects-service"],"pid":1214,"message":"[.kibana] TRANSFORMED_DOCUMENTS_BULK_INDEX -> FATAL. took: 9596ms."}

When migrations fail the last log line should be something like

{... "tags":["fatal"], "message": "Unable to complete saved object migrations for the [.kibana] index. Error: [a stack trace]"}

Without this it's hard to know why it's failing other than that it fails during the TRANSFORMED_DOCUMENTS_BULK_INDEX step.

Could you try to run Kibana with debug logging enabled? I think adding LOGGING_ROOT_LEVEL='all' to the Docker environment should do that.

I don't know if I did everything correctly, because I'm a beginner in Docker too. I used the form LOGGING_ROOT_LEVEL: 'all'. I'm sharing the resulting log file with you. I think the problem is no longer related to the version migration but to the migration of the saved objects. link.

Sorry, I should have double-checked that... Using LOGGING_VERBOSE=true should give you verbose logging, and you'll see log records with ..."tags": ["debug", ...
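
In the compose file above that would look something like this (a sketch; the Kibana image translates the variable into logging.verbose in kibana.yml):

  kibana:
    image: docker.elastic.co/kibana/kibana:7.14.1
    environment:
      # becomes logging.verbose: true inside the container
      LOGGING_VERBOSE: "true"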

Sorry for the late reply: link