I'm running Logstash via a command inside a Docker container from a Python script. The normal behavior (with Logstash installed directly on the server) is that the pipeline shuts down once it has fetched the data, but here the process never ends.
import subprocess

logstash = subprocess.call([
    "docker", "exec", "-it", "logstash-docker_logstash_1",
    "/usr/share/logstash/bin/logstash",
    "-f", "/usr/share/logstash/pipeline/site-canvas.conf",
    "--path.data", "/usr/share/logstash/config/min-data/",
])
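For reference, the same call written with subprocess.run so I can at least see the exit code, plus a timeout as a safety net (the 30-minute value is arbitrary, and dropping -it is just an experiment on my side, not a confirmed fix):

import subprocess

# Same docker exec invocation via subprocess.run: the return code is
# inspectable, and the timeout keeps the Python script itself from
# hanging forever (it does not stop logstash inside the container).
result = subprocess.run(
    [
        "docker", "exec", "logstash-docker_logstash_1",
        "/usr/share/logstash/bin/logstash",
        "-f", "/usr/share/logstash/pipeline/site-canvas.conf",
        "--path.data", "/usr/share/logstash/config/min-data/",
    ],
    timeout=1800,  # arbitrary 30-minute safety net
)
print("logstash exited with", result.returncode)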
I'm using docker top to see the running processes inside the container, and the Logstash java process is still listed there long after the query has run. What can I do to ensure the process ends when it finishes fetching the data?
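This is how I check from Python, the same thing I do by hand with docker top:

import subprocess

# List the processes running inside the container; the logstash java
# process keeps showing up here even after the pipeline is done.
top = subprocess.run(
    ["docker", "top", "logstash-docker_logstash_1"],
    capture_output=True, text=True,
)
print(top.stdout)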
This is my pipeline (it has no schedule option, so the jdbc input should run the statement exactly once):
input {
  jdbc {
    jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
    jdbc_connection_string => "jdbc:sqlserver://db-ip:1433;databaseName=omi"
    jdbc_user => "my-user"
    jdbc_password => "my-pass"
    statement => "SELECT
      TIME_CREATED, DESCRIPTION AS problem, SEVERITY AS severity_name, NODEHINTS_DNSNAME AS source, CATEGORY
      FROM [omi1062event].[dbo].[ALL_EVENTS]
      WHERE STATE = 'OPEN'
        AND NODEHINTS_DNSNAME LIKE 'mju%'
        AND TIME_CREATED >= DATEADD(day, -1, GETDATE())
      ORDER BY TIME_CREATED ASC
    "
    jdbc_default_timezone => "UTC"
  }
}
filter {
  date {
    match => [ "time_created", "ISO8601", "yyyy-MM-dd'T'HH:mm:ss.SSS'Z'", "yyyy-MM-dd HH:mm:ss", "yyyy-MM-dd HH:mm:ss.SSSSSS" ]
    timezone => "Chile/Continental"
  }
}

output {
  elasticsearch {
    hosts => "my-ip:9200"
    index => "canvas"
    user => "my-user"
    password => "my-pass"
  }
}
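The only workaround I can think of is to kill the leftover Logstash process myself after the call returns. A rough sketch of that idea (it assumes pkill exists inside the image, which I haven't verified):

import subprocess

# Workaround sketch, not a real solution: run the pipeline, then
# force-kill whatever logstash process is still alive in the container.
# The "pkill -f logstash" pattern is my guess at what matches the
# java process, and pkill may not even be installed in the image.
subprocess.call([
    "docker", "exec", "logstash-docker_logstash_1",
    "/usr/share/logstash/bin/logstash",
    "-f", "/usr/share/logstash/pipeline/site-canvas.conf",
    "--path.data", "/usr/share/logstash/config/min-data/",
])
subprocess.call([
    "docker", "exec", "logstash-docker_logstash_1",
    "pkill", "-f", "logstash",
])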