Strigo lab error in Elasticsearch Engineer 1 on-demand class, lab 1.3

I am currently taking the Elasticsearch Engineer 1 on-demand course and working in Strigo to run the labs.
In Lab 1.3 I am trying to start Kibana in a second server tab, with Elasticsearch running in the first. Kibana reports a series of errors and then exits.

Since the Kibana failure blocks this lab and later ones, I would appreciate any help fixing the environment.
The Kibana logs show problems communicating with the Elasticsearch cluster:

log [10:24:31.490] [info][status][plugin:encrypted_saved_objects@7.3.1] Status changed from uninitialized to green - Ready
log [10:24:31.500] [info][status][plugin:snapshot_restore@7.3.1] Status changed from uninitialized to yellow - Waiting for Elasticsearch
log [10:24:31.508] [info][status][plugin:actions@7.3.1] Status changed from uninitialized to green - Ready
log [10:24:31.515] [info][status][plugin:alerting@7.3.1] Status changed from uninitialized to green - Ready
log [10:24:31.518] [info][status][plugin:data@7.3.1] Status changed from uninitialized to green - Ready
log [10:24:31.658] [info][status][plugin:timelion@7.3.1] Status changed from uninitialized to green - Ready
log [10:24:31.662] [info][status][plugin:ui_metric@7.3.1] Status changed from uninitialized to green - Ready
log [10:24:31.665] [info][status][plugin:visualizations@7.3.1] Status changed from uninitialized to green - Ready
log [10:24:32.128] [info][status][plugin:elasticsearch@7.3.1] Status changed from yellow to green - Ready
log [10:24:32.138] [error][status][plugin:xpack_main@7.3.1] Status changed from yellow to red - [data] Elasticsearch cluster did not respond with license information.
log [10:24:32.139] [error][status][plugin:graph@7.3.1] Status changed from yellow to red - [data] Elasticsearch cluster did not respond with license information.
log [10:24:32.140] [error][status][plugin:spaces@7.3.1] Status changed from yellow to red - [data] Elasticsearch cluster did not respond with license information.
log [10:24:32.141] [error][status][plugin:searchprofiler@7.3.1] Status changed from yellow to red - [data] Elasticsearch cluster did not respond with license information.
log [10:24:32.142] [error][status][plugin:ml@7.3.1] Status changed from yellow to red - [data] Elasticsearch cluster did not respond with license information.
log [10:24:32.142] [error][status][plugin:tilemap@7.3.1] Status changed from yellow to red - [data] Elasticsearch cluster did not respond with license information.
log [10:24:32.143] [error][status][plugin:watcher@7.3.1] Status changed from yellow to red - [data] Elasticsearch cluster did not respond with license information.
log [10:24:32.144] [error][status][plugin:grokdebugger@7.3.1] Status changed from yellow to red - [data] Elasticsearch cluster did not respond with license information.
log [10:24:32.144] [error][status][plugin:logstash@7.3.1] Status changed from yellow to red - [data] Elasticsearch cluster did not respond with license information.
log [10:24:32.145] [error][status][plugin:beats_management@7.3.1] Status changed from yellow to red - [data] Elasticsearch cluster did not respond with license information.
log [10:24:32.145] [error][status][plugin:maps@7.3.1] Status changed from yellow to red - [data] Elasticsearch cluster did not respond with license information.
log [10:24:32.145] [error][status][plugin:index_management@7.3.1] Status changed from yellow to red - [data] Elasticsearch cluster did not respond with license information.
log [10:24:32.146] [error][status][plugin:index_lifecycle_management@7.3.1] Status changed from yellow to red - [data] Elasticsearch cluster did not respond with license information.
log [10:24:32.146] [error][status][plugin:rollup@7.3.1] Status changed from yellow to red - [data] Elasticsearch cluster did not respond with license information.
log [10:24:32.147] [error][status][plugin:remote_clusters@7.3.1] Status changed from yellow to red - [data] Elasticsearch cluster did not respond with license information.
log [10:24:32.147] [error][status][plugin:cross_cluster_replication@7.3.1] Status changed from yellow to red - [data] Elasticsearch cluster did not respond with license information.
log [10:24:32.147] [error][status][plugin:file_upload@7.3.1] Status changed from yellow to red - [data] Elasticsearch cluster did not respond with license information.
log [10:24:32.148] [error][status][plugin:snapshot_restore@7.3.1] Status changed from yellow to red - [data] Elasticsearch cluster did not respond with license information.
log [10:24:32.738] [warning][browser-driver][reporting] Enabling the Chromium sandbox provides an additional layer of protection.
log [10:24:32.755] [warning][reporting] Generating a random key for xpack.reporting.encryptionKey. To prevent pending reports from failing on restart, please set xpack.reporting.encryptionKey in kibana.yml
log [10:24:32.760] [error][status][plugin:reporting@7.3.1] Status changed from uninitialized to red - [data] Elasticsearch cluster did not respond with license information.
log [10:24:32.788] [error][status][plugin:security@7.3.1] Status changed from green to red - [data] Elasticsearch cluster did not respond with license information.
^C
[elastic@server1 ~]$

Thank you for raising this issue.

My guess is that you did not start Elasticsearch with the parameters specified in the last question of lab 1.2. To fix this:

  1. Shut down Kibana
  2. Shut down your Elasticsearch node
  3. Start Elasticsearch using the following command: ./elasticsearch-7.3.1/bin/elasticsearch -E node.name=node1 -E http.host="localhost","server1"
  4. Start Kibana using the following command line: ./kibana-7.3.1-linux-x86_64/bin/kibana --host=0.0.0.0

This should fix your issue. If it doesn't, could you share the relevant logs from the Elasticsearch side?
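If Kibana keeps racing ahead of the cluster, a small helper that waits for Elasticsearch to answer over HTTP before launching Kibana can help. This is only a sketch: the `wait_for` function and the 60-second timeout are my own, not part of the lab.

```shell
#!/bin/sh
# Hypothetical helper: poll a URL until it answers, give up after a timeout.
# Usage: wait_for URL TIMEOUT_SECONDS
wait_for() {
  url=$1; timeout=$2; elapsed=0
  while [ "$elapsed" -lt "$timeout" ]; do
    if curl -s -o /dev/null "$url"; then
      return 0            # Elasticsearch answered
    fi
    sleep 1
    elapsed=$((elapsed + 1))
  done
  return 1                # still no answer after the timeout
}

# On the lab server this would gate step 4 on step 3 having worked:
#   wait_for http://localhost:9200 60 && ./kibana-7.3.1-linux-x86_64/bin/kibana --host=0.0.0.0
```

If `wait_for` times out, the problem is on the Elasticsearch side and restarting Kibana will not help.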

Thanks, that worked OK.

Looking at the next step, loading the SQL data, I do not see the dataset folder on the server. Is it possible to restore it?

Good to hear that it helped.

Regarding the datasets folder, it should be on server1. I can restore it, but doing so will reinitialize the whole lab environment. Send me your email address in a private message if you want me to go ahead with that option.

Hello, I'm running into the same Strigo lab issue for Engineer 1.
First issue:

Any time I start Elasticsearch with ./elasticsearch-7.3.1/bin/elasticsearch -E node.name=node1 -E http.host="localhost","server1"
I first have to unlock (i.e. delete) the node.lock file in /home/elastic/elasticsearch-7.3.1/data/nodes/0.
If I don't, Elasticsearch will not start and I end up with an error: [node1] uncaught exception in thread [main]
Once Elasticsearch is started, I SSH to server1 in a new terminal and try to launch Kibana:
./kibana-7.3.1-linux-x86_64/bin/kibana --host=0.0.0.0
and it errors out.
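Before deleting node.lock, it is worth checking whether an old Elasticsearch process is still alive, since a live process (not a stale lock) is the usual reason the lock exists. A quick check (sketch only; the pgrep pattern matches Elasticsearch's JVM main class):

```shell
#!/bin/sh
# node.lock normally means an Elasticsearch process still holds the data dir.
# Only delete the lock if no such process exists (i.e. the lock is stale).
# The [E] bracket keeps pgrep -f from matching this script's own command line.
if pgrep -f 'org.elasticsearch.bootstrap.[E]lasticsearch' >/dev/null; then
  state=running   # stop this process instead of deleting node.lock
else
  state=stale     # safe to remove data/nodes/0/node.lock
fi
echo "lock state: $state"
```

If the state is "running", stopping that process cleanly should make the lock (and the uncaught-exception error) go away without any manual deletion.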

Even after killing the existing process, I still get an error:
log [14:23:55.106] [fatal][root] { Error: Request Timeout after 30000ms
at /home/elastic/kibana-7.3.1-linux-x86_64/node_modules/elasticsearch/src/lib/transport.js:362:15
displayName: 'RequestTimeout',
message: 'Request Timeout after 30000ms',
{ statusCode: 503,
payload:
{ statusCode: 503,
error: 'Service Unavailable',
message: 'Request Timeout after 30000ms' },

At this point I restarted everything (Elasticsearch, plus a new terminal). Doing nothing else, I can reach Kibana, which appears to be already started at .com/app/kibana#. Here I had hope it would work ...
But in the Kibana console, apart from the GET / request and the initial GET _search, whenever I run the lab requests to populate data as per the lab 1.3 questions I get an error:
PUT my_blogs/_doc/1
{
  "id": "1",
  "title": "Better query execution",
  "category": "Engineering",
  "date": "July 15, 2015",
  "author": {
    "first_name": "Clinton",
    "last_name": "Gormley",
    "company": "Elastic"
  }
}
returns this response (not a good one :)):
{
"statusCode": 504,
"error": "Gateway Time-out",
"message": "Client request timeout"
}
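One way to tell whether the 504 comes from Kibana's proxy or from the cluster itself is to send the same document straight to Elasticsearch with curl, bypassing Kibana. This is a sketch: it assumes Elasticsearch answers on localhost:9200 as in this lab.

```shell
#!/bin/sh
# The same document as in the Kibana console request above.
doc='{
  "id": "1",
  "title": "Better query execution",
  "category": "Engineering",
  "date": "July 15, 2015",
  "author": {
    "first_name": "Clinton",
    "last_name": "Gormley",
    "company": "Elastic"
  }
}'

# Sanity-check that the body is valid JSON before sending it anywhere:
if echo "$doc" | python3 -m json.tool >/dev/null 2>&1; then valid=yes; else valid=no; fi
echo "json valid: $valid"

# On the lab server, index it directly (uncomment to run):
# curl -s -X PUT 'http://localhost:9200/my_blogs/_doc/1' \
#      -H 'Content-Type: application/json' -d "$doc"
```

If the direct curl succeeds while the Kibana console still times out, the problem is the Kibana-to-cluster link rather than the request itself.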

In conclusion: I removed everything and redid the unzipping and reconfiguration at least three times. Of the hours remaining in Strigo, I have spent at least three on troubleshooting, and I am now stuck at the second question of module 1, lab 1.3 ...
Can you help me, or reset my Strigo environment, or whatever it takes to make it work, please?
Thanks

Hello, the errors suggest that there is no elected master node in your cluster; you should fix that first.

Can you send me a private message with your Strigo DNS so I can connect to your instance and take a look at this issue?

Hello, I sent a PM with the details just now.
Thanks

Based on your hint, I made this change in my Elasticsearch config yml file :slight_smile:

```
# Use a descriptive name for the node:
node.name: node-1

[...]

cluster.name: "myCluster"
```

Uncommenting the node.name line seems to fix Elasticsearch, and Kibana then comes up with ./kibana-7.3.1-linux-x86_64/bin/kibana --host=0.0.0.0. Up and running.
I hope this will keep working properly [too much time spent debugging the lab environment ...]
I'll let you know if any future issues come up.

Hi,
I'm still running into trouble every time I restart the lab.
What I do now is remove all the Elasticsearch and Kibana folders and data every time,
untar the archives again, and in the Elasticsearch file "config/elasticsearch.yml" uncomment node.name and set my cluster.name: "mycluster".
OK =>

```
./elasticsearch-7.3.1/bin/elasticsearch -E node.name=node1 -E http.host="localhost","server1"
```

This works, and then in a new terminal I start Kibana:

```
./kibana-7.3.1-linux-x86_64/bin/kibana --host=0.0.0.0
```

but port 5601 is always already in use:

[elastic@server1 ~]$ ./kibana-7.3.1-linux-x86_64/bin/kibana --host=0.0.0.0
log [16:32:56.178] [fatal][root] Error: Port 5601 is already in use. Another instance of Kibana may be running!
at Root.shutdown (/home/elastic/kibana-7.3.1-linux-x86_64/src/core/server/root/index.js:67:18)
at Root.setup (/home/elastic/kibana-7.3.1-linux-x86_64/src/core/server/root/index.js:46:18)
at process._tickCallback (internal/process/next_tick.js:68:7)
at Function.Module.runMain (internal/modules/cjs/loader.js:745:11)
at startup (internal/bootstrap/node.js:283:19)
at bootstrapNodeJSCore (internal/bootstrap/node.js:743:3)

FATAL Error: Port 5601 is already in use. Another instance of Kibana may be running!
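A quick way to free the port before relaunching (a sketch; assumes lsof is installed on the lab server):

```shell
#!/bin/sh
# Find whatever is listening on Kibana's default port (5601) and stop it.
# lsof -t prints only the PIDs bound to the port.
pid=$(lsof -t -i :5601 2>/dev/null || true)
if [ -n "$pid" ]; then
  kill $pid            # graceful stop; escalate to kill -9 only if this fails
  echo "stopped: $pid"
else
  echo "port 5601 is already free"
fi
```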

After killing the running Kibana process, I now get the following errors:

log [16:34:38.378] [info][status][plugin:reporting@7.3.1] Status changed from uninitialized to green - Ready
error [16:35:06.236] [warning][process] UnhandledPromiseRejectionWarning: Error: Request Timeout after 30000ms
at /home/elastic/kibana-7.3.1-linux-x86_64/node_modules/elasticsearch/src/lib/transport.js:362:15
at Timeout. (/home/elastic/kibana-7.3.1-linux-x86_64/node_modules/elasticsearch/src/lib/transport.js:391:7)
at ontimeout (timers.js:436:11)
at tryOnTimeout (timers.js:300:5)
at listOnTimeout (timers.js:263:5)
at Timer.processTimers (timers.js:223:10)
at emitWarning (internal/process/promises.js:81:15)
at emitPromiseRejectionWarnings (internal/process/promises.js:120:9)
at process._tickCallback (internal/process/next_tick.js:69:34)
error [16:35:06.237] [warning][process] Error: Request Timeout after 30000ms
at /home/elastic/kibana-7.3.1-linux-x86_64/node_modules/elasticsearch/src/lib/transport.js:362:15
at Timeout. (/home/elastic/kibana-7.3.1-linux-x86_64/node_modules/elasticsearch/src/lib/transport.js:391:7)
at ontimeout (timers.js:436:11)
at tryOnTimeout (timers.js:300:5)
at listOnTimeout (timers.js:263:5)
at Timer.processTimers (timers.js:223:10)
log [16:35:08.379] [warning][reporting] Reporting plugin self-check failed. Please check the Kibana Reporting settings. Error: Request Timeout after 30000ms
log [16:35:08.420] [warning][task_manager] PollError Request Timeout after 30000ms
log [16:35:08.420] [warning][maps] Error scheduling telemetry task, received NotInitialized: Tasks cannot be scheduled until after task manager is initialized!
log [16:35:08.422] [warning][telemetry] Error scheduling task, received NotInitialized: Tasks cannot be scheduled until after task manager is initialized!

This is really frustrating! I follow the procedure, and the same errors keep coming back every time; I have now spent hours on the Strigo lab trying to get a stable environment before even doing the lab exercises.
What is wrong here, please? It has only worked twice!
Here is the DNS: ec2-52-59-198-155.eu-central-1.compute.amazonaws.com
Can an admin extend my remaining Strigo hours and the due date, please?
Thanks

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.