Query not executing in Elasticsearch input plugin for Logstash

I'm new to the stack and am trying to execute simple queries in Logstash via the elasticsearch input plugin. I have worked through some initial errors and now have only a couple of notable warnings, but am not getting any output from the query. I've verified that the query works properly in Elasticsearch itself, and that the output section functions properly by using a generator input. Here is the config, with some private info swapped out.

input {
	elasticsearch {
		id => "es_input_plugin"
		user => "myuser"
		password => "mypassword"
		hosts => ["myhost"]
		ca_file => "C:\devsetup\logstash-8.6.0\rootca3.crt"
		ssl => true
		index => "log-com-ms-ldap--*"
		query => '{ 
					"query": {
						"range": {
							"@timestamp": {
								"gte": "now-1d/d"
							}
						}
					},
					"size": 1
				}'
		docinfo => true
		docinfo_target => "[@metadata][doc]"
	}
}

output {
	stdout { 
		codec => rubydebug
	}
	
	email { 
		to => "myemail"
		from => "mysystem"
		subject => "Success"
		body => "Success"
		address => "myaddress"
		via => "smtp"
	}
}

And here is the output I get when running the file.

C:\devsetup\logstash-8.6.0\bin>logstash.bat -f C:\Users\ElamR\Documents\elasticinput.conf
"Using bundled JDK: C:\devsetup\logstash-8.6.0\jdk\bin\java.exe"
Sending Logstash logs to C:/devsetup/logstash-8.6.0/logs which is now configured via log4j2.properties
[2023-02-23T13:01:56,964][INFO ][logstash.runner          ] Log4j configuration path used is: C:\devsetup\logstash-8.6.0\config\log4j2.properties
[2023-02-23T13:01:56,973][WARN ][logstash.runner          ] The use of JAVA_HOME has been deprecated. Logstash 8.0 and later ignores JAVA_HOME and uses the bundled JDK. Running Logstash with the bundled JDK is recommended. The bundled JDK has been verified to work with each specific version of Logstash, and generally provides best performance and reliability. If you have compelling reasons for using your own JDK (organizational-specific compliance requirements, for example), you can configure LS_JAVA_HOME to use that version instead.
[2023-02-23T13:01:56,975][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"8.6.0", "jruby.version"=>"jruby 9.3.8.0 (2.6.8) 2022-09-13 98d69c9461 OpenJDK 64-Bit Server VM 17.0.5+8 on 17.0.5+8 +indy +jit [x86_64-mswin32]"}
[2023-02-23T13:01:56,980][INFO ][logstash.runner          ] JVM bootstrap flags: [-Xms1g, -Xmx1g, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djruby.compile.invokedynamic=true, -XX:+HeapDumpOnOutOfMemoryError, -Djava.security.egd=file:/dev/urandom, -Dlog4j2.isThreadContextMapInheritable=true, -Djruby.regexp.interruptible=true, -Djdk.io.File.enableADS=true, --add-exports=jdk.compiler/com.sun.tools.javac.api=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.file=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.parser=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.tree=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.util=ALL-UNNAMED, --add-opens=java.base/java.security=ALL-UNNAMED, --add-opens=java.base/java.io=ALL-UNNAMED, --add-opens=java.base/java.nio.channels=ALL-UNNAMED, --add-opens=java.base/sun.nio.ch=ALL-UNNAMED, --add-opens=java.management/sun.management=ALL-UNNAMED]
[2023-02-23T13:01:57,019][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2023-02-23T13:01:58,272][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
[2023-02-23T13:01:58,700][INFO ][org.reflections.Reflections] Reflections took 180 ms to scan 1 urls, producing 127 keys and 444 values
[2023-02-23T13:02:01,422][INFO ][logstash.javapipeline    ] Pipeline `main` is configured with `pipeline.ecs_compatibility: v8` setting. All plugins in this pipeline will default to `ecs_compatibility => v8` unless explicitly configured otherwise.
[2023-02-23T13:02:02,095][INFO ][logstash.javapipeline    ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>1000, "pipeline.sources"=>["C:/Users/ElamR/Documents/elasticinput.conf"], :thread=>"#<Thread:0x6f11e638@C:/devsetup/logstash-8.6.0/logstash-core/lib/logstash/java_pipeline.rb:131 run>"}
[2023-02-23T13:02:02,797][INFO ][logstash.javapipeline    ][main] Pipeline Java execution initialization time {"seconds"=>0.7}
[2023-02-23T13:02:04,827][INFO ][logstash.inputs.elasticsearch][main] ECS compatibility is enabled but `target` option was not specified. This may cause fields to be set at the top-level of the event where they are likely to clash with the Elastic Common Schema. It is recommended to set the `target` option to avoid potential schema conflicts (if your data is ECS compliant or non-conflicting, feel free to ignore this message)
[2023-02-23T13:02:04,831][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}
[2023-02-23T13:02:04,849][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2023-02-23T13:02:05,108][WARN ][logstash.inputs.elasticsearch][main][es_input_plugin] Ignoring clear_scroll exception {:message=>"[404] {\"succeeded\":true,\"num_freed\":0}", :exception=>Elasticsearch::Transport::Transport::Errors::NotFound}
[2023-02-23T13:02:05,215][INFO ][logstash.javapipeline    ][main] Pipeline terminated {"pipeline.id"=>"main"}
[2023-02-23T13:02:05,356][INFO ][logstash.pipelinesregistry] Removed pipeline from registry successfully {:pipeline_id=>:main}
[2023-02-23T13:02:05,362][INFO ][logstash.runner          ] Logstash shut down.

My thoughts have been centered on the final warning regarding the clear_scroll exception, but I have yet to find any related references.
Any help would be greatly appreciated. Thanks!

Hi @aelam and welcome to the community!

I believe the clear_scroll warning appears because, after the query runs, Logstash attempts to clear any open scrolls it created, and in this case there are none.

From these logs, it would appear that the pipeline fires and finishes without error.
Have you attempted to run Logstash with debug logging to see if it produces anything additional?

I tested a version of the query locally and it worked for me as well. Have you tried changing the "gte": "now-1d/d" to something like "gte": "now-10000d/d" to rule out any strange date behavior?
Or maybe attempt a different index?

You could also try to hit the Logstash API on :9600 to see if it is seeing any documents flow through.
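As a concrete example (assuming Logstash is running locally with the default API port of 9600), you could ask the monitoring API for per-pipeline event counters:

```shell
# Query the Logstash monitoring API for per-pipeline stats.
# Assumes a local Logstash instance on the default API port 9600.
curl -s "http://localhost:9600/_node/stats/pipelines?pretty"
```

The `events` section of the response shows how many events the pipeline has taken `in` and pushed `out`; if both stay at zero, the query is returning no documents.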

I think it was an issue with the index. Thanks for suggesting that. An additional question, if you don't mind me asking: I only want to send a single email if the query doesn't return anything, and none if it does. Is there a straightforward way to go about that? Sorry to trouble you. Thanks again for the help.

Glad it worked!

If you only want to email on a "failure", then you can examine the query's results in the filter section of Logstash and, based on what you find, put a conditional around the email output.
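As a rough sketch of what that could look like (the field test and tag name below are placeholders, not anything from your pipeline):

```
filter {
	# Hypothetical condition: tag events whose message contains "ERROR".
	if [message] =~ /ERROR/ {
		mutate { add_tag => ["notify"] }
	}
}

output {
	stdout { codec => rubydebug }

	# Only send the email for events carrying the tag set above.
	if "notify" in [tags] {
		email {
			to => "myemail"
			from => "mysystem"
			subject => "Match found"
			body => "%{message}"
			address => "myaddress"
			via => "smtp"
		}
	}
}
```

One caveat worth noting: if the goal is to email only when the query returns *nothing*, no events flow through the pipeline at all in that case, so a per-event conditional can never fire; that scenario generally needs an external scheduler or an alerting feature rather than a pipeline conditional.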

What are you trying to do here with this process? It seems like you're using Logstash as a notification engine. Would Kibana Alerting or Watcher be a more suitable option for you?

Unfortunately, the Observability functions aren't yet enabled for my company's nodes, or that would be a much better route! I guess my question is: how do I actually go about examining the query contents? I can't find documentation for how to do so on the plugin's pages. Sorry to keep pestering you, and once again I appreciate your help.

Nevermind! I wasn't putting together that I should just examine the log and use a grok pattern. Now that that clicked I'm good! Thanks for all of your help.
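In case it helps anyone else, a minimal sketch of that approach looks like the following (the pattern here is a placeholder; the real one depends on the log line format):

```
filter {
	grok {
		# Placeholder pattern; adjust to match the actual log layout.
		match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} %{GREEDYDATA:detail}" }
	}
}
```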

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.