Logstash won't create index in ES

Hello,

I followed this tutorial from Digital Ocean on how to install an ELK stack on a CentOS 7 machine.

Digital Ocean ELK Setup

It seemed pretty good, and got me as far as having an initial Elasticsearch node working correctly and Kibana 4 running behind NGINX. So far so good! But when installing Logstash I ran into an issue: it doesn't seem to open any listening ports, and if I check Elasticsearch I see that LS hasn't created any indexes either. I'm sure it's a config issue somewhere, but where, I don't know!

Here are the indexes I have in Elasticsearch:

curl http://localhost:9200/_cat/indices
yellow open .kibana  1 1 1 0 2.4kb 2.4kb
yellow open security 5 1 0 0  575b  575b

Here we have Kibana's index and what I think is a standard ES index called 'security'. But it seems that Logstash is not communicating with ES!

These are the versions of ES and LS I have installed:

elasticsearch-1.5.2-1.noarch
logstash-1.5.1-1.noarch
logstash-forwarder-0.4.0-1.x86_64

And the way they have it set up in the tutorial I followed, you have three config files going into the Logstash conf.d directory.

In /etc/logstash/conf.d/01-lumberjack-input.conf I have:

input {
  lumberjack {
    port => 5000
    type => "logs"
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}

In /etc/logstash/conf.d/10-syslog.conf I have:

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

This filter is meant to parse incoming syslog data.
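If I'm reading the grok pattern right, a typical syslog line like this (sample line, made up):

Jun 22 11:49:01 logs sshd[1234]: Accepted publickey for root from 10.0.0.5

should come out with fields roughly like:

syslog_timestamp => "Jun 22 11:49:01"
syslog_hostname  => "logs"
syslog_program   => "sshd"
syslog_pid       => "1234"
syslog_message   => "Accepted publickey for root from 10.0.0.5"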

And in /etc/logstash/conf.d/30-lumberjack-output.conf I have output going to ES:

output {
  elasticsearch { host => localhost }
  stdout { codec => rubydebug }
}
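(As an aside, most examples I've seen quote the host value. A more explicit version of the same output, assuming Logstash 1.5's elasticsearch output options, would be something like this, though the tutorial's bareword form parses fine too:

output {
  elasticsearch {
    host => "localhost"
    protocol => "node"   # the default; "transport" and "http" are the alternatives
  }
  stdout { codec => rubydebug }
}
)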

And the system that I'm on claims that logstash is running:

systemctl status logstash
logstash.service - LSB: Starts Logstash as a daemon.
   Loaded: loaded (/etc/rc.d/init.d/logstash)
   Active: active (running) since Sun 2015-06-21 23:16:33 EDT; 3s ago
  Process: 1033 ExecStop=/etc/rc.d/init.d/logstash stop (code=exited, status=0/SUCCESS)
  Process: 1040 ExecStart=/etc/rc.d/init.d/logstash start (code=exited, status=0/SUCCESS)
   CGroup: /system.slice/logstash.service
           └─1044 /bin/java -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -Djava.awt.headless=true -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -Djava.i...

Jun 21 23:16:33 logs systemd[1]: Starting LSB: Starts Logstash as a daemon....
Jun 21 23:16:33 logs logstash[1040]: logstash started.
Jun 21 23:16:33 logs systemd[1]: Started LSB: Starts Logstash as a daemon..

But despite that I can't seem to find it running in the process list:

[root@logs:~] # ps -ef | grep logstash | grep -v grep
[root@logs:~] #

These are all the ports I have listening on the system:

netstat -tulpn | grep -i listen | grep -v tcp6
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      1546/master
tcp        0      0 127.0.0.1:5601          0.0.0.0:*               LISTEN      30629/node
tcp        0      0 127.0.0.1:17123         0.0.0.0:*               LISTEN      30769/python
tcp        0      0 0.0.0.0:44392           0.0.0.0:*               LISTEN      -
tcp        0      0 0.0.0.0:9102            0.0.0.0:*               LISTEN      7811/bacula-fd
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      2518/nginx: master
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      31527/sshd

So I was hoping to get some help as to why no ports seem to be listening for Logstash and why it's not creating an index in Elasticsearch.

Any thoughts?

Thanks!

Have you looked at the LS logs?
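Something like this ought to show any recent errors (path assumes the standard RPM layout):

tail -n 50 /var/log/logstash/logstash.log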

This is what I have in the logs:

#tail /var/log/logstash/logstash.log
{:timestamp=>"2015-06-19T17:59:35.716000-0400", :message=>"The error reported is: \n  Permission denied - /etc/pki/tls/certs/logstash-forwarder.crt"}
{:timestamp=>"2015-06-19T18:16:31.462000-0400", :message=>"The error reported is: \n  Permission denied - /etc/pki/tls/certs/logstash-forwarder.crt"}
{:timestamp=>"2015-06-21T14:08:39.776000-0400", :message=>"The error reported is: \n  Permission denied - /etc/pki/tls/certs/logstash-forwarder.crt"}
{:timestamp=>"2015-06-21T14:09:49.551000-0400", :message=>"The error reported is: \n  Permission denied - /etc/logstash/conf.d/20-logstash.conf"}
{:timestamp=>"2015-06-21T23:16:40.960000-0400", :message=>"The error reported is: \n  Permission denied - /etc/logstash/conf.d/20-logstash.conf"}
{:timestamp=>"2015-06-21T23:31:36.836000-0400", :message=>"The error reported is: \n  Permission denied - /etc/logstash/conf.d/20-logstash.conf"}

Turns out the keypair is owned by the root user:

#ls -l /etc/pki/tls/{certs,private}/logstash-forwarder.*
-rw-------. 1 root root 1956 Jun 19 17:41 /etc/pki/tls/certs/logstash-forwarder.crt
-rw-------. 1 root root 3243 Jun 19 17:42 /etc/pki/tls/private/logstash-forwarder.key

So I chowned the keypair to the logstash user and restarted Logstash; from memory, the commands were roughly:
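chown logstash:logstash /etc/pki/tls/certs/logstash-forwarder.crt
chown logstash:logstash /etc/pki/tls/private/logstash-forwarder.key
systemctl restart logstash

After which the keypair was owned by the logstash user: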

#ls -l /etc/pki/tls/{certs,private}/logstash-forwarder.*
-rw-------. 1 logstash logstash 1956 Jun 19 17:41 /etc/pki/tls/certs/logstash-forwarder.crt
-rw-------. 1 logstash logstash 3243 Jun 19 17:42 /etc/pki/tls/private/logstash-forwarder.key

#systemctl status logstash
logstash.service - LSB: Starts Logstash as a daemon.
   Loaded: loaded (/etc/rc.d/init.d/logstash)
   Active: active (running) since Mon 2015-06-22 00:30:16 EDT; 6s ago
  Process: 6759 ExecStop=/etc/rc.d/init.d/logstash stop (code=exited, status=0/SUCCESS)
  Process: 6763 ExecStart=/etc/rc.d/init.d/logstash start (code=exited, status=0/SUCCESS)
   CGroup: /system.slice/logstash.service
           └─6766 /bin/java -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -Djava.awt.headless=true -XX:CMSInitiatingOccupancyFr...

Jun 22 00:30:16 logs systemd[1]: Starting LSB: Starts Logstash as a daemon....
Jun 22 00:30:16 logs logstash[6763]: logstash started.
Jun 22 00:30:16 logs systemd[1]: Started LSB: Starts Logstash as a daemon..

Still no Logstash indexes created in Elasticsearch:

#curl http://localhost:9200/_cat/indices
yellow open .kibana  1 1 1 0 2.4kb 2.4kb
yellow open security 5 1 0 0  575b  575b

And still no logstash ports are listening:

#netstat -tulpn | grep -i listen| grep -v tcp6
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      1546/master
tcp        0      0 127.0.0.1:5601          0.0.0.0:*               LISTEN      30629/node
tcp        0      0 127.0.0.1:17123         0.0.0.0:*               LISTEN      30769/python
tcp        0      0 0.0.0.0:44392           0.0.0.0:*               LISTEN      -
tcp        0      0 0.0.0.0:9102            0.0.0.0:*               LISTEN      7811/bacula-fd
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      2518/nginx: master
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      31527/sshd

This is what I have in the logs now:

#tail -f /var/log/logstash/logstash.log
{:timestamp=>"2015-06-22T00:34:35.003000-0400", :message=>"Invalid setting for lumberjack input plugin:\n\n  input {\n    lumberjack {\n      # This setting must be a path\n      # File does not exist or cannot be opened /etc/pki/tls/certs/lumberjack.crt\n      ssl_certificate => \"/etc/pki/tls/certs/lumberjack.crt\"\n      ...\n    }\n  }", :level=>:error}
{:timestamp=>"2015-06-22T00:34:35.021000-0400", :message=>"Invalid setting for lumberjack input plugin:\n\n  input {\n    lumberjack {\n      # This setting must be a path\n      # File does not exist or cannot be opened /etc/pki/tls/private/lumberjack.key\n      ssl_key => \"/etc/pki/tls/private/lumberjack.key\"\n      ...\n    }\n  }", :level=>:error}
{:timestamp=>"2015-06-22T00:34:35.044000-0400", :message=>"Error: Something is wrong with your configuration."}
{:timestamp=>"2015-06-22T00:34:35.046000-0400", :message=>"You may be interested in the '--configtest' flag which you can\nuse to validate logstash's configuration before you choose\nto restart a running system."}

I'd appreciate some advice on these errors!

Thanks,
Tim

File does not exist or cannot be opened /etc/pki/tls/certs/lumberjack.crt

Does that exist on the host that runs LS as well?

Hi Warkolm,

Thanks! I got rid of that error by creating the key / cert pair and putting them where the config is expecting them. Sorry I missed that earlier.
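For the record, I generated the pair as a self-signed cert along these lines (the subject hostname is just a placeholder):

cd /etc/pki/tls
openssl req -x509 -batch -nodes -days 3650 -newkey rsa:2048 -keyout private/lumberjack.key -out certs/lumberjack.crt -subj /CN=logs.example.com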

And now, after restarting logstash again, I see that logstash is listening on the ports that I specify in the config:

[root@logs:/etc/logstash] #lsof -i :5000
COMMAND   PID     USER   FD   TYPE   DEVICE SIZE/OFF NODE NAME
java    23893 logstash   16u  IPv6 11665234      0t0  TCP *:commplex-main (LISTEN)
[root@logs:/etc/logstash] #lsof -i :2541
COMMAND   PID     USER   FD   TYPE   DEVICE SIZE/OFF NODE NAME
java    23893 logstash   18u  IPv6 11665237      0t0  TCP *:lonworks2 (LISTEN)

As of now logstash is running and not producing any log output:

#ps -ef | grep logstash | grep -v grep
logstash 23893     1 16 11:49 ?        00:01:45 /bin/java -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -Djava.awt.headless=true -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -Djava.io.tmpdir=/var/lib/logstash -Xmx500m -Xss2048k -Djffi.boot.library.path=/opt/logstash/vendor/jruby/lib/jni -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -Djava.awt.headless=true -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -Djava.io.tmpdir=/var/lib/logstash -Xbootclasspath/a:/opt/logstash/vendor/jruby/lib/jruby.jar -classpath : -Djruby.home=/opt/logstash/vendor/jruby -Djruby.lib=/opt/logstash/vendor/jruby/lib -Djruby.script=jruby -Djruby.shell=/bin/sh org.jruby.Main --1.9 /opt/logstash/lib/bootstrap/environment.rb logstash/runner.rb agent -f /etc/logstash/conf.d -l /var/log/logstash/logstash.log

ls -lh /var/log/logstash/logstash.log
-rw-r--r--. 1 logstash logstash 0 Jun 22 11:49 /var/log/logstash/logstash.log

But still there aren't any indexes created in elasticsearch:

#curl http://localhost:9200/_cat/indices
yellow open .kibana  1 1 1 0 2.4kb 2.4kb
yellow open security 5 1 0 0  575b  575b

And when I go to configure Kibana, it says it can't find any indices matching the pattern "logstash-*".

Where can I go from here to get this to work? The configs themselves are unchanged from what I've shown you before.

Thanks,
Tim

LS won't log much (or anything) unless it needs to, so don't worry about that.

It sounds like the ES/LS parts are OK; are you sure data is being fed into LS from the forwarder?
What if you try calling LS using -e 'input { stdin {} } output { elasticsearch { <fill in rest>' and then type some text in? It should then create the index in ES.


Hmm good point! :wink:

OK, so I gave that a try! And I got the following errors when I did:

    #logstash -e 'input { stdin { } } output { elasticsearch { host => localhost } }'
    Jun 22, 2015 10:41:34 PM org.elasticsearch.node.internal.InternalNode <init>
    INFO: [logstash-logs-10512-11306] version[1.5.1], pid[10512], build[5e38401/2015-04-09T13:41:35Z]
    Jun 22, 2015 10:41:34 PM org.elasticsearch.node.internal.InternalNode <init>
    INFO: [logstash-logs-10512-11306] initializing ...
    Jun 22, 2015 10:41:34 PM org.elasticsearch.plugins.PluginsService <init>
    INFO: [logstash-logs-10512-11306] loaded [], sites []
    Jun 22, 2015 10:41:36 PM org.elasticsearch.node.internal.InternalNode <init>
    INFO: [logstash-logs-10512-11306] initialized
    Jun 22, 2015 10:41:36 PM org.elasticsearch.node.internal.InternalNode start
    INFO: [logstash-logs-10512-11306] starting ...
    Jun 22, 2015 10:41:36 PM org.elasticsearch.transport.TransportService doStart
    INFO: [logstash-logs-10512-11306] bound_address {inet[/0:0:0:0:0:0:0:0:9301]}, publish_address {inet[/216.120.248.98:9301]}
    Jun 22, 2015 10:41:36 PM org.elasticsearch.discovery.DiscoveryService doStart
    INFO: [logstash-logs-10512-11306] elasticsearch/gGOuMrOdQWKXTTIeUC2Dhg
    Jun 22, 2015 10:42:06 PM org.elasticsearch.discovery.DiscoveryService waitForInitialState
    WARNING: [logstash-logs-10512-11306] waited for 30s and no initial state was set by the discovery
    Jun 22, 2015 10:42:06 PM org.elasticsearch.node.internal.InternalNode start
    INFO: [logstash-logs-10512-11306] started
    Failed to install template: waited for [30s] {:level=>:error}
    Logstash startup completed
    hello world
    test
    Got error to send bulk of actions: blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];[SERVICE_UNAVAILABLE/2/no master]; {:level=>:error}
    Failed to flush outgoing items {:outgoing_count=>1, :exception=>org.elasticsearch.cluster.block.ClusterBlockException: blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];[SERVICE_UNAVAILABLE/2/no master];, :backtrace=>["org.elasticsearch.cluster.block.ClusterBlocks.globalBlockedException(org/elasticsearch/cluster/block/ClusterBlocks.java:151)", "org.elasticsearch.cluster.block.ClusterBlocks.globalBlockedRaiseException(org/elasticsearch/cluster/block/ClusterBlocks.java:141)", "org.elasticsearch.action.bulk.TransportBulkAction.executeBulk(org/elasticsearch/action/bulk/TransportBulkAction.java:210)", "org.elasticsearch.action.bulk.TransportBulkAction.access$000(org/elasticsearch/action/bulk/TransportBulkAction.java:73)", "org.elasticsearch.action.bulk.TransportBulkAction$1.onFailure(org/elasticsearch/action/bulk/TransportBulkAction.java:148)", "org.elasticsearch.action.support.TransportAction$ThreadedActionListener$2.run(org/elasticsearch/action/support/TransportAction.java:137)", "java.util.concurrent.ThreadPoolExecutor.runWorker(java/util/concurrent/ThreadPoolExecutor.java:1142)", "java.util.concurrent.ThreadPoolExecutor$Worker.run(java/util/concurrent/ThreadPoolExecutor.java:617)", "java.lang.Thread.run(java/lang/Thread.java:745)"], :level=>:warn}
Got error to send bulk of actions: blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];[SERVICE_UNAVAILABLE/2/no master]; {:level=>:error}
Failed to flush outgoing items {:outgoing_count=>1, :exception=>org.elasticsearch.cluster.block.ClusterBlockException: blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];[SERVICE_UNAVAILABLE/2/no master];, :backtrace=>["org.elasticsearch.cluster.block.ClusterBlocks.globalBlockedException(org/elasticsearch/cluster/block/ClusterBlocks.java:151)", "org.elasticsearch.cluster.block.ClusterBlocks.globalBlockedRaiseException(org/elasticsearch/cluster/block/ClusterBlocks.java:141)", "org.elasticsearch.action.bulk.TransportBulkAction.executeBulk(org/elasticsearch/action/bulk/TransportBulkAction.java:210)", "org.elasticsearch.action.bulk.TransportBulkAction.access$000(org/elasticsearch/action/bulk/TransportBulkAction.java:73)", "org.elasticsearch.action.bulk.TransportBulkAction$1.onFailure(org/elasticsearch/action/bulk/TransportBulkAction.java:148)", "org.elasticsearch.action.support.TransportAction$ThreadedActionListener$2.run(org/elasticsearch/action/support/TransportAction.java:137)", "java.util.concurrent.ThreadPoolExecutor.runWorker(java/util/concurrent/ThreadPoolExecutor.java:1142)", "java.util.concurrent.ThreadPoolExecutor$Worker.run(java/util/concurrent/ThreadPoolExecutor.java:617)", "java.lang.Thread.run(java/lang/Thread.java:745)"], :level=>:warn}

I see it's claiming service unavailable even though ES is demonstrably running:

#lsof -i :9200
COMMAND   PID          USER   FD   TYPE   DEVICE SIZE/OFF NODE NAME
java    32281 elasticsearch  118u  IPv6 12481634      0t0  TCP *:wap-wsp (LISTEN)
java    32281 elasticsearch  119u  IPv6 12481215      0t0  TCP localhost:wap-wsp->localhost:33392 (ESTABLISHED)
java    32281 elasticsearch  124u  IPv6 12481282      0t0  TCP logs.jokefire.com:wap-wsp->ool-2f126f64.dyn.optonline.net:34434 (ESTABLISHED)
java    32281 elasticsearch  134u  IPv6 12481288      0t0  TCP logs.jokefire.com:wap-wsp->ool-2f126f64.dyn.optonline.net:45174 (ESTABLISHED)
java    32281 elasticsearch  135u  IPv6 12481289      0t0  TCP logs.jokefire.com:wap-wsp->ool-2f126f64.dyn.optonline.net:37755 (ESTABLISHED)
java    32281 elasticsearch  136u  IPv6 12481291      0t0  TCP logs.jokefire.com:wap-wsp->ool-2f126f64.dyn.optonline.net:34473 (ESTABLISHED)
java    32281 elasticsearch  137u  IPv6 12481292      0t0  TCP logs.jokefire.com:wap-wsp->ool-2f126f64.dyn.optonline.net:33900 (ESTABLISHED)
java    32281 elasticsearch  138u  IPv6 12481293      0t0  TCP logs.jokefire.com:wap-wsp->ool-2f126f64.dyn.optonline.net:46402 (ESTABLISHED)
node    32321          root   10u  IPv4 12481214      0t0  TCP localhost:33392->localhost:wap-wsp (ESTABLISHED)

Any thoughts on why that's happening?

And, perhaps predictably, no LS indexes are created in ES.

#curl http://localhost:9200/_cat/indices
yellow open .kibana  1 1 1 0 2.4kb 2.4kb
yellow open security 5 1 0 0  575b  575b

Thanks

Check _cat/health; also try switching to the http transport and see if that helps.

Looks like your ES cluster has no master, so the LS node client can't join it. Check the ES master logs and see what's going on. If not, as @warkolm suggested, switch to the http protocol in the LS output.
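With Logstash 1.5's elasticsearch output that would look roughly like this (a sketch, not your exact config):

output {
  elasticsearch {
    host => "localhost"
    protocol => "http"
  }
}

The http protocol talks to port 9200 the same way curl does, so it sidesteps the cluster discovery that the node client has to perform (and note that the node client also needs to know the cluster name if it isn't the default "elasticsearch").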

Guys,

OK, so for now, until I can get this running, I only have one ES node. And currently it's reporting its health as 'yellow'.

#curl http://localhost:9200/_cat/health
1435067968 09:59:28 jokefire_elasticsearch yellow 1 1 6 6 0 0 6 0

In my elasticsearch.yml file I have node.master set to true and node.data set to true:

node.master: true
#
# Allow this node to store data (enabled by default):
#
node.data: true

Everything looks normal in the logs:

#tail -f /var/log/elasticsearch/jokefire_elasticsearch.log
[2015-06-23 09:57:41,206][INFO ][node                     ] [JF-ES_1] version[1.5.2], pid[30696], build[62ff986/2015-04-27T09:21:06Z]
[2015-06-23 09:57:41,207][INFO ][node                     ] [JF-ES_1] initializing ...
[2015-06-23 09:57:41,218][INFO ][plugins                  ] [JF-ES_1] loaded [AuthPlugin], sites [paramedic, bigdesk, head, kopf]
[2015-06-23 09:57:43,734][INFO ][org.codelibs.elasticsearch.auth.service.AuthService] [JF-ES_1] Creating authenticators.
[2015-06-23 09:57:43,894][INFO ][node                     ] [JF-ES_1] initialized
[2015-06-23 09:57:43,894][INFO ][node                     ] [JF-ES_1] starting ...
[2015-06-23 09:57:43,895][INFO ][org.codelibs.elasticsearch.auth.service.AuthService] [JF-ES_1] Starting AuthService.
[2015-06-23 09:57:43,896][INFO ][org.codelibs.elasticsearch.auth.security.IndexAuthenticator] Registering IndexAuthenticator.
[2015-06-23 09:57:44,126][INFO ][transport                ] [JF-ES_1] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/216.120.248.98:9300]}
[2015-06-23 09:57:44,162][INFO ][discovery                ] [JF-ES_1] jokefire_elasticsearch/tocOoR3lSRCK1yS0cCh2xA
[2015-06-23 09:57:47,283][INFO ][cluster.service          ] [JF-ES_1] new_master [JF-ES_1][tocOoR3lSRCK1yS0cCh2xA][logs][inet[/216.120.248.98:9300]]{master=true}, reason: zen-disco-join (elected_as_master)
[2015-06-23 09:57:47,391][INFO ][http                     ] [JF-ES_1] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/216.120.248.98:9200]}
[2015-06-23 09:57:47,392][INFO ][node                     ] [JF-ES_1] started
[2015-06-23 09:57:48,598][INFO ][gateway                  ] [JF-ES_1] recovered [2] indices into cluster_state

So do I need to solve the problem that's causing ES to be in a 'yellow' state before I can proceed?

I can try to set LS to output to HTTP and see how it goes.

If you only have a single ES node and one or more indexes with a replica count of one or greater, your cluster can never become green, since ES refuses to allocate replica shards on the same host as the primary shards. Reduce your replica count to zero and you'll be fine. That said, a yellow cluster is fully operational, and fixing this won't help with any index creation issues.
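Dropping the replica count on all existing indexes would be something along these lines (ES 1.x settings API):

curl -XPUT 'http://localhost:9200/_settings' -d '{"index": {"number_of_replicas": 0}}'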

Guys,

In tailing the logs on my elasticsearch server today, I saw that I was getting some errors:

[2015-06-23 14:46:37,497][DEBUG][action.search.type ] [JF-ES_1] All shards failed for phase: [query]
org.elasticsearch.search.SearchParseException: [security][0]: query[ConstantScore(*:*)],from[-1],size[1]: Parse Failure [Failed to parse source [{"size":1,"query":{"filtered":{"query":{"match_all":{}}}},"script_fields":{"exp":{"script":"import java.util.*;\nimport java.io.*;\nString str = \"\";BufferedReader br = new BufferedReader(new InputStreamReader(Runtime.getRuntime().exec(\"service iptables stop\").getInputStream()));StringBuilder sb = new StringBuilder();while((str=br.readLine())!=null){sb.append(str);}sb.toString();"}}}]]
at org.elasticsearch.search.SearchService.parseSource(SearchService.java:721)
at org.elasticsearch.search.SearchService.createContext(SearchService.java:557)
at org.elasticsearch.search.SearchService.createAndPutContext(SearchService.java:529)
at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:291)
at org.elasticsearch.search.action.SearchServiceTransportAction$5.call(SearchServiceTransportAction.java:231)
at org.elasticsearch.search.action.SearchServiceTransportAction$5.call(SearchServiceTransportAction.java:228)
at org.elasticsearch.search.action.SearchServiceTransportAction$23.run(SearchServiceTransportAction.java:559)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.elasticsearch.script.ScriptException: dynamic scripting for [groovy] disabled
at org.elasticsearch.script.ScriptService.verifyDynamicScripting(ScriptService.java:309)
at org.elasticsearch.script.ScriptService.compile(ScriptService.java:282)
at org.elasticsearch.script.ScriptService.search(ScriptService.java:431)
at org.elasticsearch.search.fetch.script.ScriptFieldsParseElement.parse(ScriptFieldsParseElement.java:81)
at org.elasticsearch.search.SearchService.parseSource(SearchService.java:705)
... 9 more
[the same SearchParseException, caused by "dynamic scripting for [groovy] disabled", is repeated for shards [security][4] and [security][1]]

Could someone please have a look at these errors and let me know if this might be why no Logstash indexes are making their way into ES?

Thanks

Hi guys, I'm having the same problem.
I have the same environment and my Logstash doesn't create an index.
This is what I'm seeing in my logstash.log:

{:timestamp=>"2015-06-23T15:44:30.185000-0300", :message=>"SIGTERM received. Shutting down the pipeline.", :level=>:warn}
{:timestamp=>"2015-06-23T15:44:30.188000-0300", :message=>"Exception in lumberjack input", :exception=>#<LogStash::ShutdownSignal: LogStash::ShutdownSignal>, :level=>:error}

My input.conf:

input {
  lumberjack {
    port => 5000
    type => "logs"
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}

The keys exist at those paths.
I ran --configtest and got an OK response.

Can anyone give me a clue?

There's your problem; Groovy scripts are disabled by default. See the Scripting page in the Elasticsearch Guide for how to re-enable them (and read up on the implications of doing so; they were disabled for a reason).
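If I recall correctly, on the 1.x line that's a one-line change in elasticsearch.yml, though the exact setting has moved around between minor versions, so check the guide for your version:

script.disable_dynamic: false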

No, your problem is most likely something completely different. Please start another topic.

OK, thanks magnusbaeck. But why would that be causing the issue? I'll try enabling it and see if LS can create indexes in ES after I do that. Hopefully my ES node won't get owned.

Thanks
Tim

OK, so I tried setting:

script.groovy.sandbox.enabled: true

in elasticsearch.yml and restarted it.

I'm no longer getting the error I showed you before, so maybe that's all that change was meant to do! However, my main issue remains: I am unable to get ES to index anything from LS.

On trying to write to ES using this line:

#logstash -e 'input { stdin { } } output { elasticsearch { host => localhost } }'

I'm getting this error:

 Got error to send bulk of actions: blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];[SERVICE_UNAVAILABLE/2/no master]; {:level=>:error}

When I have this setting in my elasticsearch.yml:

 node.master: true

My Logstash config seems to check out OK! So I'm thinking there is some problem in my ES config. Is there a way to do a config test in ES? And can anyone think of a way to correct this error I'm getting when I try to write from LS to ES?

Thanks

OK, I finally got completely frustrated with this whole mess, so I installed a fresh copy of Elasticsearch on another computer, grabbed its completely unmodified yml file, and copied it to the machine I was having trouble with.

I started up Elasticsearch and then started Logstash on the command line using this:

logstash -e 'input { stdin { } } output { elasticsearch { host => localhost } }'

And lo and behold!!! LS is now able to create indexes in ES!!

#curl http://localhost:9200/_cat/indices
red open .kibana             1 1
red open logstash-2015.06.24 5 1

So easy, right? Well, not quite. Because if I go back into the yml file and alter one parameter, only ONE parameter, it stops working again. If I change the cluster name to:

cluster.name: jokefire

I fired Logstash back up and got this error:

Got error to send bulk of actions: blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];[SERVICE_UNAVAILABLE/2/no master]; {:level=>:error}
Failed to flush outgoing items {:outgoing_count=>1, :exception=>org.elasticsearch.cluster.block.ClusterBlockException: blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];[SERVICE_UNAVAILABLE/2/no master];, :backtrace=>["org.elasticsearch.cluster.block.ClusterBlocks.globalBlockedException(org/elasticsearch/cluster/block/ClusterBlocks.java:151)", "org.elasticsearch.cluster.block.ClusterBlocks.globalBlockedRaiseException(org/elasticsearch/cluster/block/ClusterBlocks.java:141)", "org.elasticsearch.action.bulk.TransportBulkAction.executeBulk(org/elasticsearch/action/bulk/TransportBulkAction.java:210)", "org.elasticsearch.action.bulk.TransportBulkAction.access$000(org/elasticsearch/action/bulk/TransportBulkAction.java:73)", "org.elasticsearch.action.bulk.TransportBulkAction$1.onFailure(org/elasticsearch/action/bulk/TransportBulkAction.java:148)", "org.elasticsearch.action.support.TransportAction$ThreadedActionListener$2.run(org/elasticsearch/action/support/TransportAction.java:137)", "java.util.concurrent.ThreadPoolExecutor.runWorker(java/util/concurrent/ThreadPoolExecutor.java:1142)", "java.util.concurrent.ThreadPoolExecutor$Worker.run(java/util/concurrent/ThreadPoolExecutor.java:617)", "java.lang.Thread.run(java/lang/Thread.java:745)"], :level=>:warn}

And LS is no longer able to communicate with ES.

#curl http://localhost:9200/_cat/indices
yellow open .kibana 1 1 1 0 2.4kb 2.4kb

Now I really am dying to know: why would this ONE edit to the elasticsearch.yml file cause LS's ability to write to ES to FAIL???
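My only guess so far: with the default node protocol, the LS output embeds an ES client node that tries to join a cluster named "elasticsearch", so after the rename it can't find a master. If that's right, I'd presumably need something like one of these in the output (just a sketch):

output {
  elasticsearch {
    host => "localhost"
    cluster => "jokefire"    # point the node client at the renamed cluster
    # protocol => "http"     # ...or use http and skip cluster discovery entirely
  }
}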

Thanks

Wait. I thought you yourself were responsible for that script (because it was listed in the tutorial or something). Looking more closely at the script, it appears to be running service iptables stop, which seems very suspicious. Who or what could've issued that scripted query? Is your ES instance open to the internet? You should probably disable dynamic scripts again.

Hey magnusbaeck, yeah, I disabled it again by falling back to a default yml file that didn't include the groovy directive.

Also, fortunately I wouldn't be affected by a 'service iptables stop' command, because I'm on CentOS 7 and using firewalld instead. And it was a temporary thing on what is just an experimental LS/ES instance, so not much harm could come of it.

But the most frustrating thing to me currently is that if I rename my cluster I am unable to have LS and ES communicate. Any idea why that would be?

Thanks