Not reinstalling Elasticsearch when running "npm run elasticsearch"


(Jesus Gonzalez-Barahona) #1

I'm trying to run Kibana from source (master HEAD), following the instructions in the repo's CONTRIBUTING.md file. But I have a problem with the Elasticsearch configuration: when I run

npm run elasticsearch

I get an error, because on the machine I have available for testing the disk is close to full, and the default Elasticsearch disk watermarks are too low. Usually I solve this by adding the appropriate settings to config/elasticsearch.yml, but in this case I cannot: every time I run the above command, Elasticsearch seems to be reinstalled, which means the config files are overwritten.
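
For reference, these are the kind of settings I usually add to elasticsearch.yml (a sketch; the exact thresholds are my guess and depend on how full the disk is):

    # raise the disk allocation watermarks so a nearly full disk is still usable
    cluster.routing.allocation.disk.watermark.low: 97%
    cluster.routing.allocation.disk.watermark.high: 99%
    # or simply disable the disk threshold checks on a dev box
    # cluster.routing.allocation.disk.threshold_enabled: false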

Is there any chance of running that npm command in a way that Elasticsearch is not reinstalled from scratch? Alternatively, is there any way of passing a config file to Elasticsearch when running it that way?


(Tyler Smalley) #2

Under the hood, npm run elasticsearch uses esvm and runs from esvm/dev/branch-master. You are correct: each time you run that command, it re-downloads the master branch and overwrites any changes to that directory. After you have run the command once, you could modify esvm/dev/branch-master/config/elasticsearch.yml and run ES directly with ./esvm/dev/branch-master/bin/elasticsearch.
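
In shell terms, the workflow would be something like this (a sketch, assuming the default esvm layout):

    # run once so esvm downloads the master branch of ES
    npm run elasticsearch
    # stop it, then adjust the config (e.g. the disk watermarks)
    $EDITOR esvm/dev/branch-master/config/elasticsearch.yml
    # from then on, start ES directly, bypassing esvm
    ./esvm/dev/branch-master/bin/elasticsearch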

Alternatively, you could download ES 5.0.0-alpha2 and run it directly, so long as there are no compatibility issues between the version of ES you are running and Kibana.
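
Roughly (the download URL here is illustrative; the tarball is on elastic.co/downloads/elasticsearch):

    curl -LO https://download.elastic.co/elasticsearch/release/org/elasticsearch/distribution/tar/elasticsearch/5.0.0-alpha2/elasticsearch-5.0.0-alpha2.tar.gz
    tar -xzf elasticsearch-5.0.0-alpha2.tar.gz
    ./elasticsearch-5.0.0-alpha2/bin/elasticsearch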


(Jesus Gonzalez-Barahona) #3

Thanks a lot! I'm testing with ES 5.0.0-alpha2. But something weird is still happening: I still cannot see Kibana.

I launch ES, apparently with no problems. I see its answer at http://localhost:9200, which seems correct:

{
  "name" : "Solarr",
  "cluster_name" : "jgbarah",
  "version" : {
    "number" : "5.0.0-alpha2",
    "build_hash" : "e3126df",
    "build_date" : "2016-04-26T12:08:58.960Z",
    "build_snapshot" : false,
    "lucene_version" : "6.0.0"
  },
  "tagline" : "You Know, for Search"
}

The ES log seems ok too:

[2016-05-26 09:56:35,514][INFO ][node                     ] [Solarr] version[5.0.0-alpha2], pid[13139], build[e3126df/2016-04-26T12:08:58.960Z]
[2016-05-26 09:56:35,515][INFO ][node                     ] [Solarr] initializing ...
[2016-05-26 09:56:36,517][INFO ][plugins                  ] [Solarr] modules [lang-mustache, lang-painless, ingest-grok, reindex, lang-expression, lang-groovy], plugins []
[2016-05-26 09:56:36,605][INFO ][env                      ] [Solarr] using [1] data paths, mounts [[/ (/dev/mapper/expisito--vg-root)]], net usable_space [24.7gb], net total_space [460.8gb], spins? [no], types [ext4]
[2016-05-26 09:56:36,605][INFO ][env                      ] [Solarr] heap size [990.7mb], compressed ordinary object pointers [true]
[2016-05-26 09:56:41,029][INFO ][node                     ] [Solarr] initialized
[2016-05-26 09:56:41,030][INFO ][node                     ] [Solarr] starting ...
[2016-05-26 09:56:41,201][INFO ][transport                ] [Solarr] publish_address {127.0.0.1:9300}, bound_addresses {[::1]:9300}, {127.0.0.1:9300}
[2016-05-26 09:56:41,215][WARN ][bootstrap                ] [Solarr] bootstrap checks failed
[2016-05-26 09:56:41,215][WARN ][bootstrap                ] [Solarr] initial heap size [268435456] not equal to maximum heap size [1073741824]; this can cause resize pauses and prevents mlockall from locking the entire heap
[2016-05-26 09:56:41,215][WARN ][bootstrap                ] [Solarr] please set [discovery.zen.minimum_master_nodes] to a majority of the number of master eligible nodes in your cluster.
[2016-05-26 09:56:41,216][WARN ][bootstrap                ] [Solarr] max virtual memory areas vm.max_map_count [65530] likely too low, increase to at least [262144]
[2016-05-26 09:56:44,379][INFO ][cluster.service          ] [Solarr] new_master {Solarr}{tud1eKejTae7r9NzH4EE3w}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)
[2016-05-26 09:56:44,402][INFO ][http                     ] [Solarr] publish_address {127.0.0.1:9200}, bound_addresses {[::1]:9200}, {127.0.0.1:9200}
[2016-05-26 09:56:44,402][INFO ][node                     ] [Solarr] started
[2016-05-26 09:56:44,435][INFO ][gateway                  ] [Solarr] recovered [0] indices into cluster_state
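
(There are a few bootstrap-check warnings in there, but in alpha2 they are only warnings when ES is bound to localhost. If I wanted to silence the first two, something like this should work; the 1g heap is an arbitrary choice:)

    # raise the mmap limit flagged in the log (Linux)
    sudo sysctl -w vm.max_map_count=262144
    # start ES with equal initial and maximum heap
    ES_JAVA_OPTS="-Xms1g -Xmx1g" ./bin/elasticsearch
    # and for a single-node dev cluster, in config/elasticsearch.yml:
    # discovery.zen.minimum_master_nodes: 1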

Then I launch Kibana from the git repo (master HEAD) with npm start. I load http://localhost:5603 in the browser, am redirected to https://localhost:5603/xed/app/kibana, and then it shows:

{"statusCode":404,"error":"Not Found"}

Just to check, I also tried https://localhost:5603/app/kibana, and there I get a message stating "Kibana did not load properly. Check the server output for more information." (white text on a red background).

The Kibana log seems ok too. [I'm including it in another message, since it seems I'm over the allowed number of lines for a reply.]

Any idea? Or should I open a new thread (since this no longer has anything to do with the subject), or open an issue on GitHub?


(Jesus Gonzalez-Barahona) #4

[Adding the Kibana log mentioned in my last reply, and two new lines in the ES log]

    $ npm start

    > kibana@5.0.0-snapshot start /home/jgb/src/elastic/kibana
    > sh ./bin/kibana --dev

     watching for changes  (189 files)
    managr    log   [09:58:44.558] [info][listening] basePath Proxy running at https://localhost:5601/xed
    optmzr    log   [09:58:50.385] [info][optimize] Lazy optimization of bundles for console, kibana, sense-tests and status_page ready
    optmzr    log   [09:58:50.447] [info][optimize] Lazy optimization started
    server    log   [09:58:50.475] [info][optimize] Waiting for optimizer completion
    optmzr    log   [09:58:50.497] [info] Plugin initialization disabled.
    server    log   [09:58:51.233] [info][status][plugin:kibana] Status changed from uninitialized to green - Ready
    server    log   [09:58:51.336] [info][status][plugin:elasticsearch] Status changed from uninitialized to yellow - Waiting for Elasticsearch
    server    log   [09:58:51.412] [info][status][plugin:console] Status changed from uninitialized to green - Ready
    server    log   [09:58:51.419] [info][status][plugin:dev_mode] Status changed from uninitialized to green - Ready
    server    log   [09:58:51.428] [info][status][plugin:kbn_doc_views] Status changed from uninitialized to green - Ready
    server    log   [09:58:51.437] [info][status][plugin:kbn_vislib_vis_types] Status changed from uninitialized to green - Ready
    server    log   [09:58:51.459] [info][status][plugin:markdown_vis] Status changed from uninitialized to green - Ready
    server    log   [09:58:51.480] [info][status][plugin:metric_vis] Status changed from uninitialized to green - Ready
    server    log   [09:58:51.494] [info][status][plugin:spy_modes] Status changed from uninitialized to green - Ready
    server    log   [09:58:51.508] [info][status][plugin:status_page] Status changed from uninitialized to green - Ready
    server    log   [09:58:51.524] [info][status][plugin:table_vis] Status changed from uninitialized to green - Ready
    server    log   [09:58:51.535] [info][status][plugin:tests_bundle] Status changed from uninitialized to green - Ready
    server    log   [09:58:51.555] [info][listening] Server running at https://localhost:5603
    server    log   [09:58:56.619] [info][status][plugin:elasticsearch] Status changed from yellow to yellow - No existing Kibana index found
    server    log   [09:58:57.548] [info][status][plugin:elasticsearch] Status changed from yellow to green - Kibana index ready
    optmzr    log   [10:05:36.841] [info][optimize] Lazy optimization success in 406.39 seconds

And I see two extra lines in the ES log, which seem ok too (the creation of the .kibana index):

    [2016-05-26 09:58:56,965][INFO ][cluster.metadata         ] [Solarr] [.kibana] creating index, cause [api], templates [], shards [1]/[1], mappings [server, config]
    [2016-05-26 09:58:57,345][INFO ][cluster.routing.allocation] [Solarr] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[.kibana][0]] ...]).
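
Just to be able to double-check from the ES side, the standard _cat API should list the new .kibana index:

    curl 'http://localhost:9200/_cat/indices?v'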

(Jesus Gonzalez-Barahona) #5

OK, I found the problem. I was loading https://localhost:5603 in my browser, because that's what I saw at the end of the Kibana log. But it is https://localhost:5601 that I should be loading. Now it works!

I'm going to file an issue about this on GitHub, since I don't understand why the log says port 5603 when it is 5601 that works...
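
(Looking back at the log, the line "basePath Proxy running at https://localhost:5601/xed" seems to explain it: in dev mode, 5601 is the proxy the browser should talk to, and 5603 is the backend server it forwards to. That can be checked with curl; -k skips the self-signed dev certificate:)

    # the dev-mode basePath proxy (meant for the browser) answers on 5601
    curl -k https://localhost:5601/xed/app/kibana
    # the backend server it forwards to is the one listening on 5603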

Sorry for the noise.

