Logstash won't create index in ES

Check _cat/health; also try switching to the http transport and see if that helps.

Looks like your ES cluster has no master, so the LS node client can't join it. Check the ES master logs and see what's going on. If that doesn't reveal anything, do as @warkolm suggested and switch to the http protocol in the LS output.

Guys,

Ok so for now, until I can get this running, I only have one ES node. And currently it's reporting its health as 'yellow'.

#curl http://localhost:9200/_cat/health
1435067968 09:59:28 jokefire_elasticsearch yellow 1 1 6 6 0 0 6 0

In my elasticsearch.yml file I have node.master set to true and node.data set to true:

node.master: true
#
# Allow this node to store data (enabled by default):
#
node.data: true

Everything looks like it's normal in the logs:

#tail -f /var/log/elasticsearch/jokefire_elasticsearch.log
[2015-06-23 09:57:41,206][INFO ][node                     ] [JF-ES_1] version[1.5.2], pid[30696], build[62ff986/2015-04-27T09:21:06Z]
[2015-06-23 09:57:41,207][INFO ][node                     ] [JF-ES_1] initializing ...
[2015-06-23 09:57:41,218][INFO ][plugins                  ] [JF-ES_1] loaded [AuthPlugin], sites [paramedic, bigdesk, head, kopf]
[2015-06-23 09:57:43,734][INFO ][org.codelibs.elasticsearch.auth.service.AuthService] [JF-ES_1] Creating authenticators.
[2015-06-23 09:57:43,894][INFO ][node                     ] [JF-ES_1] initialized
[2015-06-23 09:57:43,894][INFO ][node                     ] [JF-ES_1] starting ...
[2015-06-23 09:57:43,895][INFO ][org.codelibs.elasticsearch.auth.service.AuthService] [JF-ES_1] Starting AuthService.
[2015-06-23 09:57:43,896][INFO ][org.codelibs.elasticsearch.auth.security.IndexAuthenticator] Registering IndexAuthenticator.
[2015-06-23 09:57:44,126][INFO ][transport                ] [JF-ES_1] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/216.120.248.98:9300]}
[2015-06-23 09:57:44,162][INFO ][discovery                ] [JF-ES_1] jokefire_elasticsearch/tocOoR3lSRCK1yS0cCh2xA
[2015-06-23 09:57:47,283][INFO ][cluster.service          ] [JF-ES_1] new_master [JF-ES_1][tocOoR3lSRCK1yS0cCh2xA][logs][inet[/216.120.248.98:9300]]{master=true}, reason: zen-disco-join (elected_as_master)
[2015-06-23 09:57:47,391][INFO ][http                     ] [JF-ES_1] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/216.120.248.98:9200]}
[2015-06-23 09:57:47,392][INFO ][node                     ] [JF-ES_1] started
[2015-06-23 09:57:48,598][INFO ][gateway                  ] [JF-ES_1] recovered [2] indices into cluster_state

So do I need to solve the problem that's causing ES to be in a 'yellow' state before I can proceed?

I can try to set LS to output to HTTP and see how it goes.

If you only have a single ES node and one or more indexes with a replica count of one or greater, your cluster can never become green, since ES refuses to allocate a replica shard on the same node as its primary shard. Reduce your replica count to zero and you'll be fine. That said, a yellow cluster is fully operational, and fixing this won't help with any index creation issues.
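For what it's worth, dropping the replica count on a single-node setup is a one-liner against the index settings API; a sketch, assuming ES is listening on localhost:9200:

```shell
# Set the replica count to 0 for every existing index on this node
curl -XPUT 'http://localhost:9200/_settings' -d '{
  "index": { "number_of_replicas": 0 }
}'

# With no unassigned replica shards left, _cat/health should report green
curl 'http://localhost:9200/_cat/health'
```

Note that newly created indices (e.g. the daily logstash-* ones) will still default to one replica unless you also change that default in a template or in elasticsearch.yml.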

Guys,

In tailing the logs on my elasticsearch server today, I saw that I was getting some errors:

[2015-06-23 14:46:37,497][DEBUG][action.search.type ] [JF-ES_1] All shards failed for phase: [query]
org.elasticsearch.search.SearchParseException: [security][0]: query[ConstantScore(:)],from[-1],size[1]: Parse Failure [Failed to parse source [{"size":1,"query":{"filtered":{"query":{"match_all":{}}}},"script_fields":{"exp":{"script":"import java.util.;\nimport java.io.;\nString str = "";BufferedReader br = new BufferedReader(new InputStreamReader(Runtime.getRuntime().exec("service iptables stop").getInputStream()));StringBuilder sb = new StringBuilder();while((str=br.readLine())!=null){sb.append(str);}sb.toString();"}}}]]
at org.elasticsearch.search.SearchService.parseSource(SearchService.java:721)
at org.elasticsearch.search.SearchService.createContext(SearchService.java:557)
at org.elasticsearch.search.SearchService.createAndPutContext(SearchService.java:529)
at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:291)
at org.elasticsearch.search.action.SearchServiceTransportAction$5.call(SearchServiceTransportAction.java:231)
at org.elasticsearch.search.action.SearchServiceTransportAction$5.call(SearchServiceTransportAction.java:228)
at org.elasticsearch.search.action.SearchServiceTransportAction$23.run(SearchServiceTransportAction.java:559)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.elasticsearch.script.ScriptException: dynamic scripting for [groovy] disabled
at org.elasticsearch.script.ScriptService.verifyDynamicScripting(ScriptService.java:309)
at org.elasticsearch.script.ScriptService.compile(ScriptService.java:282)
at org.elasticsearch.script.ScriptService.search(ScriptService.java:431)
at org.elasticsearch.search.fetch.script.ScriptFieldsParseElement.parse(ScriptFieldsParseElement.java:81)
at org.elasticsearch.search.SearchService.parseSource(SearchService.java:705)
... 9 more
[2015-06-23 14:46:37,488][DEBUG][action.search.type ] [JF-ES_1] [security][4], node[KRUN4Q2aTR-b8fGJsyDnxQ], [P], s[STARTED]: Failed to execute [org.elasticsearch.action.search.SearchRequest@4b690f0c] lastShard [true]
org.elasticsearch.search.SearchParseException: [security][4]: query[ConstantScore(:)],from[-1],size[1]: Parse Failure [Failed to parse source [{"size":1,"query":{"filtered":{"query":{"match_all":{}}}},"script_fields":{"exp":{"script":"import java.util.;\nimport java.io.;\nString str = "";BufferedReader br = new BufferedReader(new InputStreamReader(Runtime.getRuntime().exec("service iptables stop").getInputStream()));StringBuilder sb = new StringBuilder();while((str=br.readLine())!=null){sb.append(str);}sb.toString();"}}}]]
at org.elasticsearch.search.SearchService.parseSource(SearchService.java:721)
at org.elasticsearch.search.SearchService.createContext(SearchService.java:557)
at org.elasticsearch.search.SearchService.createAndPutContext(SearchService.java:529)
at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:291)
at org.elasticsearch.search.action.SearchServiceTransportAction$5.call(SearchServiceTransportAction.java:231)
at org.elasticsearch.search.action.SearchServiceTransportAction$5.call(SearchServiceTransportAction.java:228)
at org.elasticsearch.search.action.SearchServiceTransportAction$23.run(SearchServiceTransportAction.java:559)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.elasticsearch.script.ScriptException: dynamic scripting for [groovy] disabled
at org.elasticsearch.script.ScriptService.verifyDynamicScripting(ScriptService.java:309)
at org.elasticsearch.script.ScriptService.compile(ScriptService.java:282)
at org.elasticsearch.script.ScriptService.search(ScriptService.java:431)
at org.elasticsearch.search.fetch.script.ScriptFieldsParseElement.parse(ScriptFieldsParseElement.java:81)
at org.elasticsearch.search.SearchService.parseSource(SearchService.java:705)
... 9 more
[2015-06-23 14:46:37,488][DEBUG][action.search.type ] [JF-ES_1] [security][1], node[KRUN4Q2aTR-b8fGJsyDnxQ], [P], s[STARTED]: Failed to execute [org.elasticsearch.action.search.SearchRequest@4b690f0c] lastShard [true]
org.elasticsearch.search.SearchParseException: [security][1]: query[ConstantScore(:)],from[-1],size[1]: Parse Failure [Failed to parse source [{"size":1,"query":{"filtered":{"query":{"match_all":{}}}},"script_fields":{"exp":{"script":"import java.util.;\nimport java.io.;\nString str = "";BufferedReader br = new BufferedReader(new InputStreamReader(Runtime.getRuntime().exec("service iptables stop").getInputStream()));StringBuilder sb = new StringBuilder();while((str=br.readLine())!=null){sb.append(str);}sb.toString();"}}}]]
at org.elasticsearch.search.SearchService.parseSource(SearchService.java:721)
at org.elasticsearch.search.SearchService.createContext(SearchService.java:557)
at org.elasticsearch.search.SearchService.createAndPutContext(SearchService.java:529)
at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:291)
at org.elasticsearch.search.action.SearchServiceTransportAction$5.call(SearchServiceTransportAction.java:231)
at org.elasticsearch.search.action.SearchServiceTransportAction$5.call(SearchServiceTransportAction.java:228)
at org.elasticsearch.search.action.SearchServiceTransportAction$23.run(SearchServiceTransportAction.java:559)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.elasticsearch.script.ScriptException: dynamic scripting for [groovy] disabled
at org.elasticsearch.script.ScriptService.verifyDynamicScripting(ScriptService.java:309)
at org.elasticsearch.script.ScriptService.compile(ScriptService.java:282)
at org.elasticsearch.script.ScriptService.search(ScriptService.java:431)
at org.elasticsearch.search.fetch.script.ScriptFieldsParseElement.parse(ScriptFieldsParseElement.java:81)
at org.elasticsearch.search.SearchService.parseSource(SearchService.java:705)
... 9 more

Could someone please have a look at these errors and let me know if this might be why no Logstash indexes are making their way into ES?

Thanks

Hi guys, I'm having the same problem.
I have the same environment and my Logstash doesn't create an index.
I'm seeing this in my logstash.log:

{:timestamp=>"2015-06-23T15:44:30.185000-0300", :message=>"SIGTERM received. Shutting down the pipeline.", :level=>:warn}
{:timestamp=>"2015-06-23T15:44:30.188000-0300", :message=>"Exception in lumberjack input", :exception=>#<LogStash::ShutdownSignal: LogStash::ShutdownSignal>, :level=>:error}

My input.conf:

input {
  lumberjack {
    port => 5000
    type => "logs"
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}

The keys exist at that path.
I ran configtest and got an okay response.

Can anyone give a clue?

There's your problem; Groovy scripts are disabled by default. See the Scripting chapter of the Elasticsearch Guide for how to re-enable them (and read up on the implications of doing so; they were disabled for a reason).
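For the ES 1.5.x node in this thread, re-enabling sandboxed Groovy amounts to a setting like the following in elasticsearch.yml, followed by a restart (the exact setting name varies across 1.x minor versions; this is the one used later in this thread):

```
# elasticsearch.yml -- re-enable the Groovy sandbox for dynamic scripts.
# Only consider this on a node that is not reachable from untrusted networks.
script.groovy.sandbox.enabled: true
```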

No, your problem is most likely something completely different. Please start another topic.

OK thanks magnusbaeck. But why would that be causing the issue? I'll try enabling it and see if LS can create indexes in ES after I do that. Hopefully my ES node won't get owned.

Thanks
Tim

Ok, so I tried setting:

script.groovy.sandbox.enabled: true

in elasticsearch.yml and restarted it.

I'm no longer getting that error I showed you before. So maybe that's all that change was meant to do! However, my main issue is that I am unable to get ES to index anything from LS.

On trying to write to ES using this line:

#logstash -e 'input { stdin { } } output { elasticsearch { host => localhost } }'

I'm getting this error:

 Got error to send bulk of actions: blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];[SERVICE_UNAVAILABLE/2/no master]; {:level=>:error}

When I have this setting in my elasticsearch.yml:

 node.master: true

My Logstash config seems to check out ok, so I'm thinking there is some problem in my ES config. How do I do a config test in ES? And can anyone think of another reason for this error I'm getting when I try to write from LS to ES, or how to correct it?

Thanks

Ok, I finally got completely frustrated with this whole mess, so I installed a fresh copy of Elasticsearch on another computer, grabbed the completely unmodified yml file, and copied it to the machine I was having trouble with.

Started up elasticsearch and started up logstash on the command line using this line:

logstash -e 'input { stdin { } } output { elasticsearch { host => localhost } }'

And lo and behold!!! LS is now able to create indexes in ES!!

#curl http://localhost:9200/_cat/indices
red open .kibana             1 1
red open logstash-2015.06.24 5 1

So easy, right? Well, not quite. Because if I go back into the yml file and alter one parameter, only ONE parameter, it stops working again. If I change the cluster name to:

cluster.name: jokefire

Fired Logstash back up and got this error:

Got error to send bulk of actions: blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];[SERVICE_UNAVAILABLE/2/no master]; {:level=>:error}
Failed to flush outgoing items {:outgoing_count=>1, :exception=>org.elasticsearch.cluster.block.ClusterBlockException: blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];[SERVICE_UNAVAILABLE/2/no master];, :backtrace=>["org.elasticsearch.cluster.block.ClusterBlocks.globalBlockedException(org/elasticsearch/cluster/block/ClusterBlocks.java:151)", "org.elasticsearch.cluster.block.ClusterBlocks.globalBlockedRaiseException(org/elasticsearch/cluster/block/ClusterBlocks.java:141)", "org.elasticsearch.action.bulk.TransportBulkAction.executeBulk(org/elasticsearch/action/bulk/TransportBulkAction.java:210)", "org.elasticsearch.action.bulk.TransportBulkAction.access$000(org/elasticsearch/action/bulk/TransportBulkAction.java:73)", "org.elasticsearch.action.bulk.TransportBulkAction$1.onFailure(org/elasticsearch/action/bulk/TransportBulkAction.java:148)", "org.elasticsearch.action.support.TransportAction$ThreadedActionListener$2.run(org/elasticsearch/action/support/TransportAction.java:137)", "java.util.concurrent.ThreadPoolExecutor.runWorker(java/util/concurrent/ThreadPoolExecutor.java:1142)", "java.util.concurrent.ThreadPoolExecutor$Worker.run(java/util/concurrent/ThreadPoolExecutor.java:617)", "java.lang.Thread.run(java/lang/Thread.java:745)"], :level=>:warn}

And LS is no longer able to communicate with ES.

#curl http://localhost:9200/_cat/indices
yellow open .kibana 1 1 1 0 2.4kb 2.4kb

Now I really am dying to know: why would this ONE edit to the elasticsearch.yml file cause LS's ability to write to ES to FAIL???

Thanks

Wait. I thought you yourself were responsible for that script (because it was listed in the tutorial or something). Looking more closely at the script it appears to be running service iptables stop which seems very suspicious. Who or what could've issued that scripted query? Is your ES instance open to the internet? You should probably disable dynamic scripts again.

Hey magnusbaeck, yeah, I disabled it again by falling back to a default yml file that didn't include the groovy directive.

Also, fortunately I wouldn't be affected by a 'service iptables stop' command, because I'm on CentOS 7 and use firewalld instead. But it was a temporary thing, and this is just an experimental LS/ES instance, so not much harm could come of it.

But the most frustrating thing to me currently is that if I rename my cluster I am unable to have LS and ES communicate. Any idea why that would be?

Thanks

Yes, but why is the cluster red? Did you issue the _cat/indices RPC immediately after the index was created? After creating an index the cluster should go yellow within a second or two.

Yes, because Logstash by default attempts to join a cluster named "elasticsearch" (see the documentation of the cluster parameter) and when you rename the cluster Logstash won't find any cluster with that name to connect to. Make sure ES and LS agree on the cluster name, or make Logstash use HTTP instead with protocol => http.
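In Logstash config terms the two fixes look roughly like this (the cluster name shown is the one from this thread; pick one option, not both):

```
output {
  elasticsearch {
    host => "localhost"
    # Option 1: stay on the node protocol but tell LS which cluster to join
    cluster => "jokefire"
    # Option 2 (instead): switch to HTTP, which sidesteps cluster naming entirely
    # protocol => "http"
  }
}
```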

Yes, but why is the cluster red? Did you issue the _cat/indices RPC immediately after the index was created? After creating an index the cluster should go yellow within a second or two.

Yep, you're right. It was red because I curled the indexes a little too quickly. I started from scratch with the default yml file, plus I added one more node.

green open .kibana             1 1 2 0  9.5kb  4.8kb
green open logstash-2015.06.24 5 1 7 0 38.2kb 19.2kb

So, I'm all good there now.

Yes, because Logstash by default attempts to join a cluster named "elasticsearch" (see the documentation of the cluster parameter) and when you rename the cluster Logstash won't find any cluster with that name to connect to. Make sure ES and LS agree on the cluster name, or make Logstash use HTTP instead with protocol => http.

Ok that makes sense. Is there an easy way to get LS to look for a different cluster name? One of my choosing? Or maybe just using http mode would be easier.

Thanks

Is there an easy way to get LS to look for a different cluster name?

Yes, see the cluster parameter whose documentation I linked to.

Ok, thanks! I'll check it out.

I seem to be having a similar issue. Fresh install of all the latest from that tutorial, and logstash isn't creating indexes. help

Please start your own thread for this :slight_smile: