Knowledge of successful Logstash upload


#1

How does one know whether a scheduled Logstash upload actually made it into the Elasticsearch index?

For example, let's say I have a conf file scheduled to upload data from a MySQL database to an Elasticsearch index daily. If one day the Logstash upload doesn't go through, is there some kind of notification setting built into Logstash, or do I have to code something manually to warn me about this situation?


(Magnus Bäck) #2

There's no built-in notification mechanism for this.
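One common manual approach is a small watchdog script that asks Elasticsearch for the newest `@timestamp` in the index and alerts if it is too old. A minimal sketch of the staleness check itself, assuming a daily-or-faster upload cadence; the host, index name, and query shown in the comment are illustrative, not from this thread:

```python
from datetime import datetime, timedelta, timezone

def is_stale(last_event_time, now, max_age):
    """Return True if the newest indexed event is older than max_age."""
    return (now - last_event_time) > max_age

# In practice, last_event_time would come from Elasticsearch, e.g. a
# max aggregation on @timestamp via the search API (host/index assumed):
#   GET http://localhost:9200/myindex/_search
#   {"size": 0, "aggs": {"latest": {"max": {"field": "@timestamp"}}}}
now = datetime.now(timezone.utc)
print(is_stale(now - timedelta(hours=2), now, timedelta(hours=1)))    # True
print(is_stale(now - timedelta(minutes=5), now, timedelta(hours=1)))  # False
```

Run on a cron schedule, the script would send a mail or webhook whenever `is_stale` returns True.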


#3

OK. Is there a way to have the timestamp for newly added rows in the MySQL tables update dynamically every day, so we can keep the Logstash conf file running without manually changing the timestamp in the SQL statement?


(Magnus Bäck) #4

It sounds like you should be using the sql_last_value parameter to keep track of which rows Logstash has already processed and select only new rows, but you're not providing many details about your setup.


#5

OK. Right now I've set up a simple test that uses CURRENT_TIMESTAMP to import newly added SQL rows into an Elasticsearch index with Logstash every 10 minutes. I'm adding new rows with a timestamp, and Logstash successfully uploads the new data into the same existing Elasticsearch index.

I'm curious: if Logstash fails during a scheduled run, is there a way to have the conf file load both the old data (from the missed 10-minute window) and the new data (from the current window) into the Elasticsearch index?


(Magnus Bäck) #6

I don't think you understand the point of the sql_last_value parameter. The idea is that it records e.g. the timestamp that was processed last, so that the query can select everything that has happened since then. That way you can have Logstash run the query at whatever frequency you like.
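As a hedged sketch of what that looks like, here is a minimal jdbc input using sql_last_value; the connection string, table, and column names are made up for illustration:

```conf
input {
  jdbc {
    jdbc_connection_string => "jdbc:mysql://localhost:3306/mydb"
    jdbc_user => "myuser"
    jdbc_password => "mypassword"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    schedule => "*/10 * * * *"            # every 10 minutes
    use_column_value => true
    tracking_column => "updated_at"
    tracking_column_type => "timestamp"
    # sql_last_value is persisted between runs (by default in
    # ~/.logstash_jdbc_last_run), so rows from a missed run are
    # picked up by the next successful one.
    statement => "SELECT * FROM mytable WHERE updated_at > :sql_last_value"
  }
}
```

Because the tracked value only advances when a run succeeds, a failed run doesn't lose data; the next run selects everything since the last successful one.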


#7

Thanks. I was able to figure that out.

Is there a way to pass arguments to the Logstash conf file on the command line when I run Logstash? Say I want to pass the username and password for my MySQL account on the command line rather than putting them in the conf file. It seems you can't combine the -f and -e flags in the newer versions of Logstash. Is there another way to pass arguments to Logstash?


(Magnus Bäck) #8

You can use environment variables: https://www.elastic.co/guide/en/logstash/current/environment-variables.html
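For example, the conf file can reference `${VAR}` placeholders that Logstash resolves from the environment at startup; the variable names here are just illustrative:

```conf
input {
  jdbc {
    jdbc_user => "${JDBC_USER}"
    jdbc_password => "${JDBC_PASSWORD}"
  }
}
```

The credentials then never appear in the conf file itself.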


#9

The documentation on this isn't clear to me. Am I supposed to run export JDBC_USER=username on the command line and then start Logstash with my conf file?


(Magnus Bäck) #10

If you run Logstash from your shell, yes. If you start Logstash as a system service you'll have to do something else.


(system) #11

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.