Detect when a Logstash pipeline is ready for input?

Until recently, on Elastic 7.11.2, I used the following curl command in a Linux shell script to detect whether Logstash was ready to ingest data:

curl localhost:9600 2> /dev/null

(In a previous edit of this question, I mistakenly wrote that "an empty response means success". That was bogus, wrongheaded.)

Now that I've upgraded to 7.13.2, that technique no longer works. I gather from the log that there's now a longer delay between that successful response and the pipeline actually being ready. (Perhaps I've just been lucky so far. Perhaps it was never a good technique.)

The socat TCP forwarder that I call (this particular pipeline uses a tcp input), following a successful response from curl, now displays the error:

socat[525] E connect(6, AF=2, 16): Connection refused

However, if I insert a /bin/sleep 10 before the call to socat, it works again.

I hate inserting arbitrary "sleep for n seconds" steps in scripts!

I'm considering a successful response (code 200) to the following request as an option:

curl -LI 'localhost:9600/_node/pipelines/main' -o /dev/null -w '%{http_code}\n' -s

but I haven't fully tested whether this is a reliable indicator.
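If that endpoint does turn out to be reliable, I could poll it in a loop instead of sleeping for a fixed time. A sketch (the pipeline name main, port 9600, and the 30-attempt cap are from my setup and are otherwise arbitrary):

```shell
# Poll the Logstash node API until the named pipeline is reported (HTTP 200),
# or give up after a number of attempts. Sketch only: pipeline name, host,
# and attempt cap are assumptions from my environment.
wait_for_pipeline() {
  pipeline="${1:-main}"
  attempts="${2:-30}"
  i=0
  while [ "$i" -lt "$attempts" ]; do
    code=$(curl -s -o /dev/null -w '%{http_code}' "localhost:9600/_node/pipelines/$pipeline")
    [ "$code" = "200" ] && return 0
    i=$((i + 1))
    sleep 1
  done
  return 1
}
```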

Another option: keep trying socat until it works. Perhaps:

socat /dev/null TCP4:localhost:5046

(5046 is the port on which the Logstash tcp input is configured to listen)
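Rather than a bare retry loop around socat, a small generic helper could retry any probe command until it succeeds (a sketch; the 30-attempt, 1-second cadence is an arbitrary choice):

```shell
# Run the given probe command repeatedly until it succeeds, or give up
# after max attempts. Sketch; the interval and cap are arbitrary.
retry_until() {
  max="$1"; shift
  i=0
  until "$@" 2>/dev/null; do
    i=$((i + 1))
    [ "$i" -ge "$max" ] && return 1
    sleep 1
  done
}

# For this pipeline's tcp input:
# retry_until 30 socat /dev/null TCP4:localhost:5046
```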

I welcome alternative recommendations for a telltale API request/response.

Note: I need to test not just that the Logstash service is running, but that a particular pipeline (e.g. main) is ready for input.


Would this do the trick?

echo > /dev/tcp/localhost/5046 && echo "Pipeline started"
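If you want to wait for the port rather than test it once, the same check can go in a small loop (bash-only, since /dev/tcp is a bash feature, not a real device; the port and attempt cap are just examples):

```shell
# Wait for a local TCP port to accept connections, using bash's /dev/tcp
# redirection. Bash-only sketch; the attempt cap is an arbitrary example.
wait_for_tcp() {
  port="$1"
  attempts="${2:-30}"
  i=0
  while [ "$i" -lt "$attempts" ]; do
    (echo > "/dev/tcp/localhost/$port") 2>/dev/null && return 0
    i=$((i + 1))
    sleep 1
  done
  return 1
}

# wait_for_tcp 5046 && echo "Pipeline started"
```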

I also wonder how you do your tests with that command: how do you check for an open pipeline and verify that data is coming in at the same time?

Don't worry about some pipeline errors at startup; those logs are generally not lost, thanks to Logstash's caching.

That works nicely, thank you!

In the last few hours, before seeing your reply, I implemented a solution based on socat /dev/null. However, I like yours better, and I'm going to mark it as the solution. If I get the time to revisit my code, I'll use this.

Port is being listened on:

# echo > /dev/tcp/localhost/5046 && echo "Pipeline started"
Pipeline started

Unused port (not being listened on):

# echo > /dev/tcp/localhost/5047 && echo "Pipeline started"
bash: connect: Cannot assign requested address
bash: /dev/tcp/localhost/5047: Cannot assign requested address

The only gripe I have about your solution, and I have the same gripe about mine, is that it's TCP-specific. Ideally, I'd like a general-purpose telltale for any Logstash pipeline. But that's verging on a solution looking for a problem; right now, I have a tcp input, and your solution works well. Thank you!

Here's my context: I'm using a post-hooks shell script with the sebp/elk Docker image to automate Elastic Stack configuration, including loading a file of JSON Lines data into Elasticsearch via Logstash. I'm starting up a brand-new Docker container and immediately loading sample data into it. To load the sample data, I use the same TCP input that users can use later to load their own data.

Looking ahead, I'm considering moving away from the all-in-one sebp/elk-based container, and using the individual Elastic-built Docker images for each component (Elasticsearch, Logstash, Kibana) with docker compose. I just need to wean myself off the nice "post-hooks" functionality in sebp/elk, and figure out how to migrate that shell script to a multi-container environment.


I believe you can also use /dev/udp/ for another use case, if that's what you meant by "TCP-specific".

Nice use case! I automate some things myself with GitLab CI and SSH runners, because I don't like Docker :wink:

If you want something smoother and cleaner, you can always play with command return codes and more.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.