XPack Not Installing on Logstash

I stop Logstash, run the "bin/logstash-plugin install x-pack" command, get the 'Install successful' reply, and then start Logstash again. When I check the logstash.yml file, I can't see the following settings:
xpack.monitoring.elasticsearch.username: logstash_system
xpack.monitoring.elasticsearch.password: logstashpassword

Do I need to add the above manually, or has the install gone wrong somewhere?

I have looked in the Logstash directory and can't see a plugin folder.

root@SIEM-MLS-VM-elk:/usr/share/logstash# ll
total 104
drwxrwxr-x  10 logstash logstash  4096 Dec 13 16:54 ./
drwxr-xr-x 122 root     root      4096 Dec 13 16:54 ../
drwxrwxr-x   2 logstash logstash  4096 Dec 13 16:54 bin/
-rw-r--r--   1 logstash logstash  2276 Dec 12 13:56 CONTRIBUTORS
drwxrwxr-x   2 logstash logstash  4096 Dec 12 14:00 data/
-rw-r--r--   1 logstash logstash  3807 Jan  2 10:47 Gemfile
-rw-r--r--   1 logstash logstash 21133 Jan  2 10:47 Gemfile.lock
drwxrwxr-x   5 logstash logstash  4096 Dec 13 16:54 lib/
-rw-r--r--   1 logstash logstash   589 Dec 12 13:56 LICENSE
drwxrwxr-x   4 logstash logstash  4096 Dec 13 16:54 logstash-core/
drwxrwxr-x   3 logstash logstash  4096 Dec 13 16:54 logstash-core-plugin-api/
drwxrwxr-x   4 logstash logstash  4096 Dec 13 16:54 modules/
-rw-rw-r--   1 logstash logstash 26953 Dec 12 14:00 NOTICE.TXT
drwxrwxr-x   3 logstash logstash  4096 Dec 13 16:54 tools/
drwxrwxr-x   4 logstash logstash  4096 Dec 13 16:54 vendor/
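
For what it's worth, I assume the way to double-check that the plugin actually installed (rather than hunting for a plugins folder) is something like:

    bin/logstash-plugin list --verbose | grep x-pack

but please correct me if that's not the right check.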

Cheers,

G

Logstash now constantly restarts, and I see the following in the log file:

 [2018-01-02T10:56:39,851][INFO ][logstash.pipeline        ] Pipeline started {"pipeline.id"=>".monitoring-logstash"}
 [2018-01-02T10:56:39,865][DEBUG][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>".monitoring-logstash", :thread=>"#<Thread:0x7f8d6a78@/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:245 sleep>"}
 [2018-01-02T10:56:39,875][DEBUG][logstash.inputs.metrics  ] Metric: input started
 [2018-01-02T10:56:39,876][DEBUG][logstash.agent           ] Executing action {:action=>LogStash::PipelineAction::Create/pipeline_id:main}
 [2018-01-02T10:56:39,985][DEBUG][logstash.plugins.registry] On demand adding plugin to the registry {:name=>"gelf", :type=>"input", :class=>LogStash::Inputs::Gelf}
 [2018-01-02T10:56:39,996][DEBUG][logstash.codecs.plain    ] config LogStash::Codecs::Plain/@id = "plain_a9a9778b-28bd-4bbf-b6fb-8adf0d87403c"
 [2018-01-02T10:56:39,996][DEBUG][logstash.codecs.plain    ] config LogStash::Codecs::Plain/@enable_metric = true
 [2018-01-02T10:56:39,996][DEBUG][logstash.codecs.plain    ] config LogStash::Codecs::Plain/@charset = "UTF-8"
 [2018-01-02T10:56:39,997][DEBUG][logstash.inputs.gelf     ] config LogStash::Inputs::Gelf/@host = "10.16.0.5"
 [2018-01-02T10:56:40,003][DEBUG][logstash.inputs.gelf     ] config LogStash::Inputs::Gelf/@port = 20001
 [2018-01-02T10:56:40,003][DEBUG][logstash.inputs.gelf     ] config LogStash::Inputs::Gelf/@id = "5513518de419a8d7b427ec0c405c91bdfdde4e4dffad0f8405fbdcb43ad59dfa"
 [2018-01-02T10:56:40,004][DEBUG][logstash.inputs.gelf     ] config LogStash::Inputs::Gelf/@enable_metric = true
 [2018-01-02T10:56:40,004][DEBUG][logstash.inputs.gelf     ] config LogStash::Inputs::Gelf/@codec = <LogStash::Codecs::Plain id=>"plain_a9a9778b-28bd-4bbf-b6fb-8adf0d87403c", enable_metric=>true, charset=>"UTF-8">
 [2018-01-02T10:56:40,005][DEBUG][logstash.inputs.gelf     ] config LogStash::Inputs::Gelf/@add_field = {}
 [2018-01-02T10:56:40,005][DEBUG][logstash.inputs.gelf     ] config LogStash::Inputs::Gelf/@remap = true
 [2018-01-02T10:56:40,005][DEBUG][logstash.inputs.gelf     ] config LogStash::Inputs::Gelf/@strip_leading_underscore = true
 [2018-01-02T10:56:40,027][DEBUG][logstash.codecs.plain    ] config LogStash::Codecs::Plain/@id = "plain_6f903927-fa53-4b40-abdd-887858cd9087"
 [2018-01-02T10:56:40,031][DEBUG][logstash.codecs.plain    ] config LogStash::Codecs::Plain/@enable_metric = true
 [2018-01-02T10:56:40,031][DEBUG][logstash.codecs.plain    ] config LogStash::Codecs::Plain/@charset = "UTF-8"
 [2018-01-02T10:56:40,033][DEBUG][logstash.outputs.elasticsearch] config LogStash::Outputs::ElasticSearch/@hosts = [//localhost:9200]
 [2018-01-02T10:56:40,034][DEBUG][logstash.outputs.elasticsearch] config LogStash::Outputs::ElasticSearch/@manage_template = false
 [2018-01-02T10:56:40,034][DEBUG][logstash.outputs.elasticsearch] config LogStash::Outputs::ElasticSearch/@index = "{[@metadata][beat]}-%{+YYYY.MM.dd}"
 [2018-01-02T10:56:40,034][DEBUG][logstash.outputs.elasticsearch] config LogStash::Outputs::ElasticSearch/@id = "a4707d34ff738ec7e975755a13e91ab8d8fe807e3ffb685279a8c0015a0008a6"
 [2018-01-02T10:56:40,034][DEBUG][logstash.outputs.elasticsearch] config LogStash::Outputs::ElasticSearch/@enable_metric = true
 [2018-01-02T10:56:40,034][DEBUG][logstash.outputs.elasticsearch] config LogStash::Outputs::ElasticSearch/@codec = <LogStash::Codecs::Plain id=>"plain_6f903927-fa53-4b40-abdd-887858cd9087", enable_metric=>true, charset=>"UTF-8">
 [2018-01-02T10:56:40,034][DEBUG][logstash.outputs.elasticsearch] config LogStash::Outputs::ElasticSearch/@workers = 1
 [2018-01-02T10:56:40,034][DEBUG][logstash.outputs.elasticsearch] config LogStash::Outputs::ElasticSearch/@template_name = "logstash"
 [2018-01-02T10:56:40,034][DEBUG][logstash.outputs.elasticsearch] config LogStash::Outputs::ElasticSearch/@template_overwrite = false
 [2018-01-02T10:56:40,034][DEBUG][logstash.outputs.elasticsearch] config LogStash::Outputs::ElasticSearch/@parent = nil
 [2018-01-02T10:56:40,034][DEBUG][logstash.outputs.elasticsearch] config LogStash::Outputs::ElasticSearch/@join_field = nil
 [2018-01-02T10:56:40,034][DEBUG][logstash.outputs.elasticsearch] config LogStash::Outputs::ElasticSearch/@upsert = ""
 [2018-01-02T10:56:40,034][DEBUG][logstash.outputs.elasticsearch] config LogStash::Outputs::ElasticSearch/@doc_as_upsert = false
 [2018-01-02T10:56:40,040][DEBUG][logstash.outputs.elasticsearch] config LogStash::Outputs::ElasticSearch/@script = ""
 [2018-01-02T10:56:40,040][DEBUG][logstash.outputs.elasticsearch] config LogStash::Outputs::ElasticSearch/@script_type = "inline"
 [2018-01-02T10:56:40,040][DEBUG][logstash.outputs.elasticsearch] config LogStash::Outputs::ElasticSearch/@script_lang = "painless"
 [2018-01-02T10:56:40,040][DEBUG][logstash.outputs.elasticsearch] config LogStash::Outputs::ElasticSearch/@script_var_name = "event"
 [2018-01-02T10:56:40,040][DEBUG][logstash.outputs.elasticsearch] config LogStash::Outputs::ElasticSearch/@scripted_upsert = false
 [2018-01-02T10:56:40,040][DEBUG][logstash.outputs.elasticsearch] config LogStash::Outputs::ElasticSearch/@retry_initial_interval = 2
 [2018-01-02T10:56:40,040][DEBUG][logstash.outputs.elasticsearch] config LogStash::Outputs::ElasticSearch/@retry_max_interval = 64
 [2018-01-02T10:56:40,041][DEBUG][logstash.outputs.elasticsearch] config LogStash::Outputs::ElasticSearch/@retry_on_conflict = 1
 [2018-01-02T10:56:40,041][DEBUG][logstash.outputs.elasticsearch] config LogStash::Outputs::ElasticSearch/@pipeline = nil
 [2018-01-02T10:56:40,041][DEBUG][logstash.outputs.elasticsearch] config LogStash::Outputs::ElasticSearch/@action = "index"
 [2018-01-02T10:56:40,041][DEBUG][logstash.outputs.elasticsearch] config LogStash::Outputs::ElasticSearch/@ssl_certificate_verification = true
 [2018-01-02T10:56:40,041][DEBUG][logstash.outputs.elasticsearch] config LogStash::Outputs::ElasticSearch/@sniffing = false
 [2018-01-02T10:56:40,041][DEBUG][logstash.outputs.elasticsearch] config LogStash::Outputs::ElasticSearch/@sniffing_delay = 5
 [2018-01-02T10:56:40,041][DEBUG][logstash.outputs.elasticsearch] config LogStash::Outputs::ElasticSearch/@timeout = 60
 [2018-01-02T10:56:40,041][DEBUG][logstash.outputs.elasticsearch] config LogStash::Outputs::ElasticSearch/@failure_type_logging_whitelist = []
 [2018-01-02T10:56:40,041][DEBUG][logstash.outputs.elasticsearch] config LogStash::Outputs::ElasticSearch/@pool_max = 1000
 [2018-01-02T10:56:40,041][DEBUG][logstash.outputs.elasticsearch] config LogStash::Outputs::ElasticSearch/@pool_max_per_route = 100
 [2018-01-02T10:56:40,041][DEBUG][logstash.outputs.elasticsearch] config LogStash::Outputs::ElasticSearch/@resurrect_delay = 5

Here is some more of the log file, as I reached the character limit in the previous post.

     [2018-01-02T10:56:40,041][DEBUG][logstash.outputs.elasticsearch] config LogStash::Outputs::ElasticSearch/@validate_after_inactivity = 10000
     [2018-01-02T10:56:40,041][DEBUG][logstash.outputs.elasticsearch] config LogStash::Outputs::ElasticSearch/@http_compression = false
     [2018-01-02T10:56:40,046][DEBUG][logstash.pipeline        ] Starting pipeline {:pipeline_id=>"main"}
     [2018-01-02T10:56:40,052][DEBUG][logstash.outputs.elasticsearch] Normalizing http path {:path=>nil, :normalized=>nil}
     [2018-01-02T10:56:40,079][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
     [2018-01-02T10:56:40,080][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
     [2018-01-02T10:56:40,096][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :error=>"Got response code '401' contacting Elasticsearch at URL 'http://localhost:9200/'"}
     [2018-01-02T10:56:40,106][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost:9200"]}
     [2018-01-02T10:56:40,111][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>250, :thread=>"#<Thread:0x753f6196@/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:245 run>"}
     [2018-01-02T10:56:40,135][INFO ][logstash.pipeline        ] Pipeline started {"pipeline.id"=>"main"}
     [2018-01-02T10:56:40,140][INFO ][logstash.inputs.gelf     ] Starting gelf listener {:address=>"10.16.0.5:20001"}
     [2018-01-02T10:56:40,145][DEBUG][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x753f6196@/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:245 sleep>"}
     [2018-01-02T10:56:40,150][INFO ][logstash.agent           ] Pipelines running {:count=>2, :pipelines=>[".monitoring-logstash", "main"]}
     [2018-01-02T10:56:40,156][ERROR][logstash.inputs.metrics  ] Monitoring is not available: License information is currently unavailable. Please make sure you have added your production elasticsearch connection info in the xpack.monitoring.elasticsearch settings.
     [2018-01-02T10:56:44,560][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
     [2018-01-02T10:56:44,565][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :error=>"Got response code '401' contacting Elasticsearch at URL 'http://localhost:9200/'"}
     [2018-01-02T10:56:44,765][INFO ][logstash.licensechecker.licensereader] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
     [2018-01-02T10:56:44,775][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :error=>"Got response code '401' contacting Elasticsearch at URL 'http://localhost:9200/'"}
     [2018-01-02T10:56:44,868][DEBUG][logstash.pipeline        ] Pushing flush onto pipeline {:pipeline_id=>".monitoring-logstash", :thread=>"#<Thread:0x7f8d6a78@/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:245 sleep>"}
     [2018-01-02T10:56:45,115][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
     [2018-01-02T10:56:45,120][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::BadResponseCodeError, :error=>"Got response code '401' contacting Elasticsearch at URL 'http://localhost:9200/'"}
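
The repeated 401s look like an authentication problem rather than Elasticsearch being down, so I assume the quickest sanity check is to hit Elasticsearch by hand with the logstash_system credentials from above, something like:

    curl -u logstash_system:logstashpassword http://localhost:9200/

which I'd expect to return the usual cluster info JSON rather than a security error if the password is correct.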

Yes, you need to provide these settings.
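
At the end of logstash.yml it would look something like this for a default local 6.x setup (the values below are placeholders, so swap in your own Elasticsearch URL and the real logstash_system password; the first two lines are roughly the defaults spelled out):

    xpack.monitoring.enabled: true
    xpack.monitoring.elasticsearch.url: "http://localhost:9200"
    xpack.monitoring.elasticsearch.username: logstash_system
    xpack.monitoring.elasticsearch.password: logstashpassword

Then restart Logstash so it picks up the change.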

Once I added the settings to the end of my logstash.yml file, everything worked perfectly.

Cheers for the help, @guyboertje.

Regards,

G
