Process never ends when I run a single pipeline inside a Docker container

With a Python script I'm running Logstash via a command inside a Docker container. The normal behavior (with Logstash installed on the server) is that after the pipeline gets the data, the pipeline shuts down, but here the process never ends.

logstash = subprocess.call([
    "docker", "exec", "-it", "logstash-docker_logstash_1",
    "/usr/share/logstash/bin/logstash",
    "-f", "/usr/share/logstash/pipeline/site-canvas.conf",
    "--path.data", "/usr/share/logstash/config/min-data/",
])
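For context, this is roughly the same call written without -it (a minimal sketch, not my actual script): no TTY is attached when the script runs unattended, and a timeout on subprocess.run at least makes the hang visible on the Python side instead of blocking forever. The 600-second timeout is just an example value.

import subprocess

# Same docker exec as above, but without -t (which asks for a TTY that is not
# available when the script runs non-interactively) and with a timeout so a
# never-ending Logstash process does not block the script forever.
cmd = [
    "docker", "exec", "logstash-docker_logstash_1",
    "/usr/share/logstash/bin/logstash",
    "-f", "/usr/share/logstash/pipeline/site-canvas.conf",
    "--path.data", "/usr/share/logstash/config/min-data/",
]

try:
    result = subprocess.run(cmd, capture_output=True, text=True, timeout=600)
    print("logstash exited with code", result.returncode)
except subprocess.TimeoutExpired:
    print("logstash was still running after 10 minutes")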

I'm using docker top to see the running processes inside the container.
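Just to show how I check it (a minimal sketch; same container name as above):

import subprocess

# List the processes still running inside the container after the pipeline
# should have finished.
subprocess.run(["docker", "top", "logstash-docker_logstash_1"])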

What can I do to ensure that the process ends when it finishes getting the data?

This is my pipeline:

input {
    jdbc {
        jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
        jdbc_connection_string => "jdbc:sqlserver://db-ip:1433;databaseName=omi"
        jdbc_user => "my-user"
        jdbc_password => "my-pass"

        statement => "SELECT
                    TIME_CREATED, DESCRIPTION as problem, SEVERITY as severity_name, NODEHINTS_DNSNAME as source, CATEGORY
                    FROM [omi1062event].[dbo].[ALL_EVENTS]
                    WHERE  STATE = 'OPEN'
                    AND NODEHINTS_DNSNAME LIKE 'mju%'
                    AND TIME_CREATED >= DATEADD(day, -1, GETDATE())
                    ORDER BY TIME_CREATED ASC
                    "
        jdbc_default_timezone => "UTC"
    }
}

filter {
        date {
            match => [ "time_created", "ISO8601", "yyyy-MM-dd'T'HH:mm:ss.SSS'Z'","yyyy-MM-dd HH:mm:ss", "yyyy-MM-dd HH:mm:ss.SSSSSS" ]
            timezone => "Chile/Continental"
        }
}
output {
    elasticsearch {
        hosts => "my-ip:9200"
        index => "canvas"
        user => "my-user"
        password => "my-pass"
    }
}

So I just enter the container and run a single pipeline with this line:

/usr/share/logstash/bin/logstash -f /usr/share/logstash/pipeline/sitescope-minju-canvas.conf --path.data /usr/share/logstash/config/minju-data/ --debug

and I get some debug messages. After Logstash is done collecting data, it keeps looping over these messages:

[2021-05-06T01:36:04,177][DEBUG][logstash.javapipeline    ][main] Input plugins stopped! Will shutdown filter/output workers. {:pipeline_id=>"main", :thread=>"#<Thread:0x9f13dbf run>"}
[2021-05-06T01:36:04,302][DEBUG][logstash.javapipeline    ][main] Shutdown waiting for worker thread {:pipeline_id=>"main", :thread=>"#<Thread:0x44d21352 run>"}
[2021-05-06T01:36:04,347][DEBUG][logstash.instrument.periodicpoller.cgroup.cpuresource] File /sys/fs/cgroup/cpu/docker/1890c4c980b980095fddf8ee117a15a0f23122b5af50ddec1561e81a6e494d41/cpu.cfs_period_us cannot be found, try providing an override 'ls.cgroup.cpu.path.override' in the Logstash JAVA_OPTS environment variable
