Pipeline terminated {"pipeline.id"=>".monitoring-logstash"} Logstash shutdown

I finally got my ELK stack running and working with the MySQL connector, and then I decided to resize my VM. Now Logstash will not stay up. I'm pretty sure this was working before.

logstash.yml
---

# Default Logstash configuration from Logstash base image.
# https://github.com/elastic/logstash/blob/master/docker/data/logstash/config/logstash-full.yml

http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.hosts: [ "http://elasticsearch:9200" ]
path.config: /usr/share/logstash/pipeline

# X-Pack security credentials

xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.username: elastic
xpack.monitoring.elasticsearch.password:

I tried commenting out path.config, as was suggested in another post, but I get the same issue.

logstash.conf

input {
  jdbc {
    jdbc_connection_string => "jdbc:mysql://10.50.0.11:3306/ncryptedcloud"
    jdbc_user => "ncryptedcloud"
    jdbc_password => "db3Looking4Charlie@NCC"
    jdbc_driver_class => ""
    jdbc_driver_library => ""
    jdbc_default_timezone => "UTC"
    statement => "select * from table"
  }
}

# Add your filters / logstash plugins configuration here

output {
  elasticsearch {
    hosts => "elasticsearch:9200"
    user => "elastic"
    password => "changeme"
    index => "metadata-sql"
  }
}

docker-compose.yml

logstash:
  build:
    context: logstash/
    args:
      ELK_VERSION: $ELK_VERSION
  volumes:
    - type: bind
      source: ./logstash/config/logstash.yml
      target: /usr/share/logstash/config/logstash.yml
      read_only: true
    - type: bind
      source: ./logstash/pipeline
      target: /usr/share/logstash/pipeline
      read_only: true
    - type: bind
      source: ./mysql-connector-java-8.0.17
      target: /usr/share/mysql-connector-java-8.0.17
  ports:
    - "5000:5000"
    - "9600:9600"
  environment:
    LS_JAVA_OPTS: "-Xmx256m -Xms256m"
  networks:
    - elk
  depends_on:
    - elasticsearch

logstash/Dockerfile

ARG ELK_VERSION

# https://github.com/elastic/logstash-docker

FROM docker.elastic.co/logstash/logstash:${ELK_VERSION}
ADD mysql-connector-java-8.0.17.jar /usr/share/logstash/logstash-core/lib/jars

# Add your logstash plugins setup here
# Example: RUN logstash-plugin install logstash-filter-json

Here is the log:

logstash_1 | [2019-10-08T17:35:49,355][INFO ][logstash.monitoring.internalpipelinesource] Validated license for monitoring. Enabling monitoring pipeline.
logstash_1 | [2019-10-08T17:35:50,930][INFO ][org.reflections.Reflections] Reflections took 69 ms to scan 1 urls, producing 19 keys and 39 values
logstash_1 | [2019-10-08T17:35:51,397][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elastic:xxxxxx@elasticsearch:9200/]}}
logstash_1 | [2019-10-08T17:35:51,455][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://elastic:xxxxxx@elasticsearch:9200/"}
logstash_1 | [2019-10-08T17:35:51,471][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>7}
logstash_1 | [2019-10-08T17:35:51,472][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the type event field won't be used to determine the document _type {:es_version=>7}
logstash_1 | [2019-10-08T17:35:51,493][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//elasticsearch:9200"]}
logstash_1 | [2019-10-08T17:35:51,717][INFO ][logstash.outputs.elasticsearch] Using default mapping template
logstash_1 | [2019-10-08T17:35:51,747][WARN ][org.logstash.instrument.metrics.gauge.LazyDelegatingGauge] A gauge metric of an unknown type (org.jruby.specialized.RubyArrayOneObject) has been create for key: cluster_uuids. This may result in invalid serialization. It is recommended to log an issue to the responsible developer/development team.
logstash_1 | [2019-10-08T17:35:51,753][INFO ][logstash.javapipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>1000, :thread=>"#<Thread:0x1d913655 run>"}
logstash_1 | [2019-10-08T17:35:51,790][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
logstash_1 | [2019-10-08T17:35:51,856][INFO ][logstash.outputs.elasticsearch] Installing elasticsearch template to _template/logstash
logstash_1 | [2019-10-08T17:35:52,142][INFO ][logstash.javapipeline ] Pipeline started {"pipeline.id"=>"main"}
logstash_1 | [2019-10-08T17:35:52,315][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
logstash_1 | [2019-10-08T17:35:53,473][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elastic:xxxxxx@elasticsearch:9200/]}}
logstash_1 | [2019-10-08T17:35:53,491][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://elastic:xxxxxx@elasticsearch:9200/"}
logstash_1 | [2019-10-08T17:35:53,520][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>7}
logstash_1 | [2019-10-08T17:35:53,520][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the type event field won't be used to determine the document _type {:es_version=>7}
logstash_1 | [2019-10-08T17:35:53,537][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://elasticsearch:9200"]}
logstash_1 | [2019-10-08T17:35:53,550][INFO ][logstash.javapipeline ] Starting pipeline {:pipeline_id=>".monitoring-logstash", "pipeline.workers"=>1, "pipeline.batch.size"=>2, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>2, :thread=>"#<Thread:0x6b04f508 run>"}
logstash_1 | [2019-10-08T17:35:53,614][INFO ][logstash.javapipeline ] Pipeline started {"pipeline.id"=>".monitoring-logstash"}
logstash_1 | [2019-10-08T17:35:53,651][INFO ][logstash.agent ] Pipelines running {:count=>2, :running_pipelines=>[:main, :".monitoring-logstash"], :non_running_pipelines=>[]}
logstash_1 | Loading class `com.mysql.jdbc.Driver'. This is deprecated. The new driver class is `com.mysql.cj.jdbc.Driver'. The driver is automatically registered via the SPI and manual loading of the driver class is generally unnecessary.
logstash_1 | [2019-10-08T17:35:54,104][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
logstash_1 | [2019-10-08T17:35:55,074][INFO ][logstash.inputs.jdbc ] (0.053201s) select * from mytable
logstash_1 | [2019-10-08T17:35:57,814][INFO ][logstash.javapipeline ] Pipeline terminated {"pipeline.id"=>".monitoring-logstash"}
logstash_1 | [2019-10-08T17:35:58,586][INFO ][logstash.runner ] Logstash shut down.

Anyone?

This seems to happen consistently. If I start from scratch, everything works. Then I shut all the containers down, modify pipeline/logstash.conf, and it breaks. I'm not sure how to stop the .monitoring-logstash pipeline from killing the container...

You do not have a schedule on your jdbc input, so after it has executed the query once there is nothing left for Logstash to do, and it exits.
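Adding a cron-style schedule turns the one-shot query into a periodic poll, so the pipeline always has work pending and Logstash stays up. A minimal sketch based on your input above (once a minute is just an example value):

input {
  jdbc {
    # ... same connection and driver settings as in your config ...
    # rufus-scheduler cron syntax: run the statement every minute
    schedule => "* * * * *"
    statement => "select * from table"
  }
}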

Thanks. I tried that too. It turns out I needed to add the elasticsearch output section, I guess because monitoring is set up by default. Now I'm struggling with reading files...
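For anyone else landing here: if you don't want the internal .monitoring-logstash pipeline at all, it should be enough to flip the monitoring flag that the logstash.yml above sets to true (a sketch, assuming the same 7.x legacy internal monitoring setup):

# logstash.yml
# Turns off internal monitoring, so no .monitoring-logstash pipeline is started
xpack.monitoring.enabled: false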