Plugin Install fails: `gem install logstash-core -v '6.1.3'`

I am having problems installing plugins:
I am running ELK 6.1.3, with the aim of getting Netflow data showing in Kibana, on Ubuntu 16.04 LTS. When trying to install various plugins I get an error:

 /usr/share/logstash# bin/logstash-plugin install logstash-filter-cidr
 Validating logstash-filter-cidr
 Installing logstash-filter-cidr
 Error Bundler::InstallError, retrying 1/10
 An error occurred while installing logstash-core (6.1.3), and Bundler cannot continue.
 Make sure that `gem install logstash-core -v '6.1.3'` succeeds before bundling.

I have tried running the gem install command directly (though my Ruby knowledge is less than basic) and it also fails:

gem install logstash-core -v '6.1.3'
ERROR:  Could not find a valid gem 'logstash-core' (= 6.1.3) in any repository
ERROR:  Possible alternatives: logstash-cli, logstash-file, logstash-lite, logstasher, logstash-fakes

I am not sure which (if any) of the suggested alternatives I should take. Can anyone assist?

Thanks in advance.

How did you install Logstash?

Hi Warkolm,

I downloaded the .deb file and copied it to the server. I then ran:

dpkg -i /tmp/logstash-6.1.3.deb

It installs without error. This was after a few dramas installing a previous version, so I decided to remove it and install the latest. To remove it I did a dpkg remove and purge of logstash-5.6.1, then manually removed the leftovers because dpkg gave some warnings.
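The remove and purge were along these lines (a sketch; the exact flags I used are from memory):

# package name "logstash" as shown in the dpkg warnings below
dpkg --remove logstash
dpkg --purge logstash

The warnings, and my manual clean-up, were: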

dpkg: warning: while removing logstash, directory '/var/log/logstash' not empty so not removed
dpkg: warning: while removing logstash, directory '/var/lib/logstash' not empty so not removed
dpkg: warning: while removing logstash, directory '/etc/logstash' not empty so not removed
dpkg: warning: while removing logstash, directory '/usr/share/logstash/data' not empty so not removed
dpkg: warning: while removing logstash, directory '/usr/share/logstash/logstash-core' not empty so not removed
root@elk:/etc# rm -rf /var/log/logstash
root@elk:/etc# rm -rf /var/lib/logstash
root@elk:/etc# rm -rf /etc/logstash
root@elk:/etc# rm -rf /usr/share/logstash/data
root@elk:/etc# rm -rf /usr/share/logstash/logstash-core

I suspect this might have had something to do with it - though the installation of 6.1.3 recreated the directory and the files:

/usr/share/logstash/logstash-core# head versions-gem-copy.yml 
---
logstash: 6.1.3
logstash-core: 6.1.3
logstash-core-plugin-api: 2.1.16

# jruby must reference a *released* version of jruby which can be downloaded from the official download url
# *and* for which jars artifacts are published for compile-time
jruby:
  version: 9.1.13.0

Perhaps it was not the best move. I'd prefer not to have to rebuild the server and start from scratch with the most recent versions (though it may be in my best interest to do so). I accept this is not a bug and is probably entirely local to my box.

Thanks

A fresh rebuild:

root@nf01:/usr/share/logstash# ./bin/logstash-plugin install logstash-codec-sflow
ERROR: Something went wrong when installing logstash-codec-sflow, message: execution expired

So, I tried directly from rubygems.org...

root@nf01:/usr/share/logstash# gem install logstash-core
ERROR:  Could not find a valid gem 'logstash-core' (>= 0) in any repository
ERROR:  Possible alternatives: logstash-cli, logstash-file, logstash-lite, logstasher, logstash-fakes
root@nf01:/usr/share/logstash# gem install logstash
ERROR:  Could not find a valid gem 'logstash' (>= 0) in any repository
ERROR:  Possible alternatives: big_stash, log_stats, logstasher, logstats, lstash
root@nf01:/usr/share/logstash# 

There are no issues between my server and rubygems.org. I have allowed the following IPs on my firewall, and I can telnet to each on port 443 from the server:

root@nf01:/usr/share/logstash# host rubygems.org
rubygems.org has address 151.101.130.2
rubygems.org has address 151.101.66.2
rubygems.org has address 151.101.2.2
rubygems.org has address 151.101.194.2
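
For example (any of the four addresses connects):

telnet 151.101.130.2 443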

I can also see from a tcpdump two-way traffic between my server and 151.101.194.2 on port 443.
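The capture was along these lines (a sketch; the interface name is an assumption, substitute your own):

# eth0 is assumed - use your actual interface
tcpdump -i eth0 -n host 151.101.194.2 and tcp port 443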
I can also pull down other dependencies of the Logstash packages, so I am pretty sure it is not a communications issue:

root@nf01:/usr/share/logstash# gem install snmp
Fetching: snmp-1.2.0.gem (100%)
Successfully installed snmp-1.2.0
Parsing documentation for snmp-1.2.0
Installing ri documentation for snmp-1.2.0
Done installing documentation for snmp after 0 seconds
1 gem installed

Can anyone help, please? I have no idea what to do from here. I'm stumped.

Thanks

Could it be due to a similar limitation?

The CIDR plugin (logstash-filter-cidr) is already included in Logstash 6.1.3 (has been since 5.6.something). You should not have to install it manually.

A few days ago I released ElastiFlow v2.0.0 (https://github.com/robcowart/elastiflow), which may be interesting for you. The instructions for the v1.x releases included installing both the cidr and translate filters, but those have both since been included in the shipping Logstash build. You will notice that in the updated setup steps you no longer need to install them.
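You can confirm it is already bundled with something like:

/usr/share/logstash/bin/logstash-plugin list | grep cidr

If that prints logstash-filter-cidr, you are good to go.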


Ah, that is what I was following - I must have missed that.

Having omitted that instruction, I have cloned the latest, but I am now getting the following message, which is causing Logstash to continually error and restart:

[ERROR] 2018-02-07 12:28:18.892 [Ruby-0-Thread-1: /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/task.rb:22] sourceloader - No configurat

In Kibana I can see the template, but the Netflow ports are not opening.
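The listening sockets below came from something like this (a sketch; exact flags from memory):

netstat -plnt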

Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 127.0.0.1:9200          0.0.0.0:*               LISTEN      11259/java      
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      4441/nginx -g daemo
tcp        0      0 127.0.0.1:9300          0.0.0.0:*               LISTEN      11259/java      
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      15107/sshd      
tcp        0      0 127.0.0.1:5601          0.0.0.0:*               LISTEN      666/node        
tcp6       0      0 :::22                   :::*                    LISTEN      15107/sshd   

and Elasticsearch is not finding the index.

curl -X GET 'http://localhost:9200/_cat/indices?v'
health status index   uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   .kibana QQGgiGh1QeGrcwPkU31Czg   1   1        217            0    141.2kb        141.2kb

Thanks


How did you tell Logstash to start the pipeline - in logstash.yml or pipelines.yml? Please post a copy of logstash.yml, or both files if you used pipelines.yml.
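For reference, pipelines.yml is just a YAML list of pipeline settings; a minimal sketch looks like this (the id and path here are illustrative, not taken from your setup):

# illustrative entry - adjust pipeline.id and path.config to suit
- pipeline.id: elastiflow
  path.config: "/etc/logstash/elastiflow/conf.d/*.conf"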

Thanks for your response. The logstash.yml config is in /etc/logstash/logstash.yml, which is where I loaded the ElastiFlow™ pipeline:

cat logstash.yml | sed '/^#/ d'
path.data: /var/lib/logstash
path.config: /etc/logstash/elastiflow/conf.d
path.logs: /var/log/logstash

I have not used the pipelines.yml option. I have tried running from the command line, but it gives me the same error.
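The command-line run was roughly this (a sketch; paths are per my install):

/usr/share/logstash/bin/logstash --path.settings /etc/logstash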

I have realised that I failed to read the install instructions correctly the first, second, and third time, and that the ElastiFlow files were not in the correct place, which is why Logstash couldn't find the configuration files. Having got past that hurdle, I am faced with an old friend:

[ERROR][logstash.agent ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::PluginLoadingError", :message=>"Couldn't find any codec plugin named 'sflow'. Are you sure this is correct? Trying to load the sflow codec plugin resulted in this error: no such file to load -- logstash/codecs/sflow", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/plugins/registry.rb:192:in `lookup_pipeline_plugin'", "/usr/share/logstash/logstash-core/lib/logstash/plugin.rb:140:in `lookup'", "/usr/share/logstash/logstash-core/lib/logstash/plugins/plugin_factory.rb:82:in `plugin'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:114:in `plugin'", "(eval):12:in `<eval>'", "org/jruby/RubyKernel.java:994:in `eval'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:86:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:171:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:40:in `execute'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:335:in `block in converge_state'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:141:in `with_pipelines'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:332:in `block in converge_state'", "org/jruby/RubyArray.java:1734:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:319:in `converge_state'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:166:in `block in converge_state_and_update'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:141:in `with_pipelines'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:164:in `converge_state_and_update'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:90:in `execute'", "/usr/share/logstash/logstash-core/lib/logstash/runner.rb:343:in `block in execute'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/task.rb:24:in `block in initialize'"]}

I then tried to install the plugin, but that failed too, ultimately giving me the first error I posted when trying to install the missing codec:

/usr/share/logstash/bin/logstash-plugin install logstash-codec-sflow
Validating logstash-codec-sflow
Installing logstash-codec-sflow
Error Bundler::InstallError, retrying 1/10
An error occurred while installing logstash-core (6.1.3), and Bundler cannot continue.
Make sure that `gem install logstash-core -v '6.1.3'` succeeds before bundling.

Then:

gem install logstash-core -v '6.1.3'
ERROR:  Could not find a valid gem 'logstash-core' (= 6.1.3) in any repository
ERROR:  Possible alternatives: logstash-cli, logstash-file, logstash-lite, logstasher, logstash-fakes

Where do I go from here?

Thanks

I keep replicating the same scenario: every time, the Logstash agent fails to find the sFlow codec.

Is there any way to skip sFlow as I don't need it?

I disabled it in /etc/logstash/elastiflow/10_input.logstash.conf.
I then had to move the dictionaries, geoip, and template directories from /etc/logstash/elastiflow/ up a level to /etc/logstash/.
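Disabling it amounted to commenting out the sFlow input block, which looks something like this (an illustrative sketch only, not the actual ElastiFlow file; 6343 is the conventional sFlow port):

# illustrative sketch - the real ElastiFlow input config differs in detail
# input {
#   udp {
#     port  => 6343
#     codec => sflow
#   }
# }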

Now the ports have opened; however, the index doesn't seem to have been created:

curl -X GET 'http://localhost:9200/_cat/indices?v'
health status index   uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   .kibana djPtbuRlQJa6iS6kYuNi7g   1   1          2            0     13.6kb         13.6kb

I managed to run this successfully, as per the ElastiFlow instructions:

curl -X POST -u USERNAME:PASSWORD http://KIBANASERVER:5601/api/saved_objects/index-pattern/elastiflow-* -H "Content-Type: application/json" -H "kbn-xsrf: true" -d @/PATH/TO/elastiflow.index_pattern.json

Kibana now kicks out the following error:

Visualize: Error: in cell #1: Elasticsearch index not found: elastiflow-*

So close now - why is it not creating an index?
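A quick way to double-check is to ask Elasticsearch for just that pattern (in my case it lists nothing, confirming the index is missing):

curl -X GET 'http://localhost:9200/_cat/indices/elastiflow-*?v'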

Thanks

After running the following command, I get the JSON output and it looks as if it has been successful, though Kibana cannot find it.


curl -X POST -u kibanaadmin:password http://localhost:5601/api/saved_objects/index-pattern/elastiflow-* -H "Content-Type: application/json" -H "kbn-xsrf: true" -d @./elastiflow/kibana/elastiflow.index_pattern.json
{"id":"elastiflow-*","type":"index-pattern","updated_at":"2018-02-13T21:57:57.492Z","version":1,"attributes":{"title":"elastiflow-*","timeFieldName":"@timestamp","notExpandable":true,"fields": (Output truncated for ease of reading)

If I browse to http://localhost:5601/api/saved_objects/index-pattern I can see the JSON raw data and headers, and I can pull in the saved JSON objects.


curl -X GET 'http://localhost:9200/_cat/indices?v'
health status index   uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   .kibana djPtbuRlQJa6iS6kYuNi7g   1   1        217            0    282.1kb        282.1kb

If I try recreating it, it won't, because it complains of a duplicate index.
I can see the Netflow packets hitting my server, so it looks like the data is just not reaching Elasticsearch.
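The capture confirming the packets was along these lines (a sketch; the interface and port are assumptions from my setup, substitute your own):

# eth0 and port 2055 are assumptions for this sketch
tcpdump -i eth0 -n udp port 2055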

What am I doing wrong?

Thanks

This has been fixed. The final piece of the puzzle was Logstash not recognising the Cisco IPv4 TTL.

@sarlacpit and @bobspunkhouse, the maintainer of the Netflow codec (@jorritfolmer) does a really good job of updating the codec for things like this when they are encountered. It is truly a well-maintained plugin. I encourage you to open an issue on that repository (https://github.com/logstash-plugins/logstash-codec-netflow). He will need a PCAP of the flow packets. If any changes are then needed to ElastiFlow afterwards, I can make them at that time.


Great, thank you - this has been raised as suggested: https://github.com/logstash-plugins/logstash-codec-netflow/issues/124

This issue was fixed by the logstash-codec-netflow folks 21 days ago, which should be reflected in more recent builds.

I have referenced what I did as a quick fix.
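For anyone landing here later, the fixed codec should be picked up with a plugin update (assuming plugin installs work on your box, unlike my earlier attempts):

/usr/share/logstash/bin/logstash-plugin update logstash-codec-netflow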
