I think you are right about the .conf file not being used. I ran the Logstash start command from inside the container and I'm getting more information now:
root@589454b082d2:/usr/share/logstash# ./bin/logstash -f config/logstash.conf
Using bundled JDK: /usr/share/logstash/jdk
Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
[2024-03-03T02:01:24,260][WARN ][deprecation.logstash.runner] NOTICE: Running Logstash as superuser is not recommended and won't be allowed in the future. Set 'allow_superuser' to 'false' to avoid startup errors in future releases.
[2024-03-03T02:01:24,271][INFO ][logstash.runner ] Log4j configuration path used is: /usr/share/logstash/config/log4j2.properties
[2024-03-03T02:01:24,272][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"8.8.0", "jruby.version"=>"jruby 9.3.10.0 (2.6.8) 2023-02-01 107b2e6697 OpenJDK 64-Bit Server VM 17.0.7+7 on 17.0.7+7 +indy +jit [x86_64-linux]"}
[2024-03-03T02:01:24,275][INFO ][logstash.runner ] JVM bootstrap flags: [-Xms1g, -Xmx1g, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djruby.compile.invokedynamic=true, -XX:+HeapDumpOnOutOfMemoryError, -Djava.security.egd=file:/dev/urandom, -Dlog4j2.isThreadContextMapInheritable=true, -Djruby.regexp.interruptible=true, -Djdk.io.File.enableADS=true, --add-exports=jdk.compiler/com.sun.tools.javac.api=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.file=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.parser=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.tree=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.util=ALL-UNNAMED, --add-opens=java.base/java.security=ALL-UNNAMED, --add-opens=java.base/java.io=ALL-UNNAMED, --add-opens=java.base/java.nio.channels=ALL-UNNAMED, --add-opens=java.base/sun.nio.ch=ALL-UNNAMED, --add-opens=java.management/sun.management=ALL-UNNAMED]
[2024-03-03T02:01:24,284][INFO ][logstash.settings ] Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
[2024-03-03T02:01:24,286][INFO ][logstash.settings ] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
[2024-03-03T02:01:24,484][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2024-03-03T02:01:24,494][INFO ][logstash.agent ] No persistent UUID file found. Generating new UUID {:uuid=>"88ef06dc-0715-4356-97b6-9b91dd24877f", :path=>"/usr/share/logstash/data/uuid"}
[2024-03-03T02:01:25,129][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
[2024-03-03T02:01:25,762][INFO ][org.reflections.Reflections] Reflections took 142 ms to scan 1 urls, producing 132 keys and 464 values
[2024-03-03T02:01:26,654][WARN ][logstash.outputs.elasticsearch] You are using a deprecated config setting "cacert" set in elasticsearch. Deprecated settings will continue to work, but are scheduled for removal from logstash in the future. Set 'ssl_certificate_authorities' instead. If you have any questions about this, please visit the #logstash channel on freenode irc. {:name=>"cacert", :plugin=><LogStash::Outputs::ElasticSearch ssl_certificate=>"/usr/share/logstash/certs/logstash/logstash.crt", password=><password>, ssl_key=>"/usr/share/logstash/certs/logstash/logstash.pkcs8.key", hosts=>[https://es01:9200, https://es02:9200, https://es03:9200], ssl_enabled=>true, cacert=>"/usr/share/logstash/certs/ca/ca.crt", ssl_verification_mode=>"none", id=>"6bc181a4d6d0ef448181c2f2c3c45949e00b34e50853423e462f532084b4a403", user=>"logstash_internal", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_3e8b053b-e1c6-442d-a0a8-93d1f89c741c", enable_metric=>true, charset=>"UTF-8">, workers=>1, ssl_certificate_verification=>true, sniffing=>false, sniffing_delay=>5, timeout=>60, pool_max=>1000, pool_max_per_route=>100, resurrect_delay=>5, validate_after_inactivity=>10000, http_compression=>false, retry_initial_interval=>2, retry_max_interval=>64, data_stream_type=>"logs", data_stream_dataset=>"generic", data_stream_namespace=>"default", data_stream_sync_fields=>true, data_stream_auto_routing=>true, manage_template=>true, template_overwrite=>false, template_api=>"auto", doc_as_upsert=>false, script_type=>"inline", script_lang=>"painless", script_var_name=>"event", scripted_upsert=>false, retry_on_conflict=>1, ilm_enabled=>"auto", ilm_pattern=>"{now/d}-000001", ilm_policy=>"logstash-policy", dlq_on_failed_indexname_interpolation=>true>}
[2024-03-03T02:01:26,675][INFO ][logstash.javapipeline ] Pipeline `main` is configured with `pipeline.ecs_compatibility: v8` setting. All plugins in this pipeline will default to `ecs_compatibility => v8` unless explicitly configured otherwise.
[2024-03-03T02:01:26,706][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["https://es01:9200", "https://es02:9200", "https://es03:9200"]}
[2024-03-03T02:01:26,710][WARN ][logstash.outputs.elasticsearch][main] You have enabled encryption but DISABLED certificate verification, to make sure your data is secure set `ssl_verification_mode => full`
[2024-03-03T02:01:26,894][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[https://logstash_internal:xxxxxx@es01:9200/, https://logstash_internal:xxxxxx@es02:9200/, https://logstash_internal:xxxxxx@es03:9200/]}}
[2024-03-03T02:01:27,231][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"https://logstash_internal:xxxxxx@es01:9200/"}
[2024-03-03T02:01:27,241][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch version determined (8.8.0) {:es_version=>8}
[2024-03-03T02:01:27,241][WARN ][logstash.outputs.elasticsearch][main] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>8}
[2024-03-03T02:01:27,593][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"https://logstash_internal:xxxxxx@es02:9200/"}
[2024-03-03T02:01:27,829][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"https://logstash_internal:xxxxxx@es03:9200/"}
[2024-03-03T02:01:27,918][INFO ][logstash.outputs.elasticsearch][main] Data streams auto configuration (`data_stream => auto` or unset) resolved to `true`
[2024-03-03T02:01:27,919][WARN ][logstash.outputs.elasticsearch][main] Elasticsearch Output configured with `ecs_compatibility => v8`, which resolved to an UNRELEASED preview of version 8.0.0 of the Elastic Common Schema. Once ECS v8 and an updated release of this plugin are publicly available, you will need to update this plugin to resolve this warning.
[2024-03-03T02:01:27,946][WARN ][logstash.filters.grok ][main] ECS v8 support is a preview of the unreleased ECS v8, and uses the v1 patterns. When Version 8 of the Elastic Common Schema becomes available, this plugin will need to be updated
[2024-03-03T02:01:28,168][INFO ][logstash.javapipeline ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, "pipeline.sources"=>["/usr/share/logstash/config/logstash.conf"], :thread=>"#<Thread:0x57cbd1ff@/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:134 run>"}
[2024-03-03T02:01:29,059][INFO ][logstash.javapipeline ][main] Pipeline Java execution initialization time {"seconds"=>0.89}
[2024-03-03T02:01:29,182][INFO ][logstash.inputs.file ][main] No sincedb_path set, generating one based on the "path" setting {:sincedb_path=>"/usr/share/logstash/data/plugins/inputs/file/.sincedb_1e3cf6fba627b1a5d046b00da35fa7cb", :path=>["/var/log/cron"]}
[2024-03-03T02:01:29,187][INFO ][logstash.inputs.beats ][main] Starting input listener {:address=>"0.0.0.0:5044"}
[2024-03-03T02:01:29,192][INFO ][logstash.inputs.tcp ][main][b9712cca12503e9d0b900a489e88a6bb7b7969f2b3ed9326bc96927f84a77e78] Starting tcp input listener {:address=>"0.0.0.0:514", :ssl_enable=>false}
[2024-03-03T02:01:29,223][INFO ][filewatch.observingtail ][main][294a4a88d1dd162d7ab104c9d9c0b5045272361d9d09910bde58e221ff6b6661] START, creating Discoverer, Watch with file and sincedb collections
[2024-03-03T02:01:29,234][INFO ][logstash.inputs.tcp ][main][3781061f0d60f76406bb601dc94256b0917cc197036cfdf57209514741803dbf] Starting tcp input listener {:address=>"0.0.0.0:5000", :ssl_enable=>false}
[2024-03-03T02:01:29,239][INFO ][logstash.javapipeline ][main] Pipeline started {"pipeline.id"=>"main"}
[2024-03-03T02:01:29,242][INFO ][org.logstash.beats.Server][main][b8952b8e7e68061fb9730fd1300e6e7dd04ab8dbc535488fec921bc175666f3a] Starting server on port: 5044
[2024-03-03T02:01:29,246][INFO ][logstash.inputs.udp ][main][94ed349e0390fdfec9474182c601d66ebfe52ea44fc77aafb2a4aff6a59adc40] Starting UDP listener {:address=>"0.0.0.0:514"}
[2024-03-03T02:01:29,260][INFO ][logstash.inputs.udp ][main][94ed349e0390fdfec9474182c601d66ebfe52ea44fc77aafb2a4aff6a59adc40] UDP listener started {:address=>"0.0.0.0:514", :receive_buffer_bytes=>"106496", :queue_size=>"2000"}
[2024-03-03T02:01:29,270][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2024-03-03T02:01:31,071][INFO ][logstash.outputs.elasticsearch][main][6bc181a4d6d0ef448181c2f2c3c45949e00b34e50853423e462f532084b4a403] Retrying failed action {:status=>403, :action=>["create", {:_id=>nil, :_index=>"logs-generic-default", :routing=>nil}, {"@timestamp"=>2024-03-03T02:01:30.800777918Z, "tags"=>["_grokparsefailure"], "message"=>"<166>2024-03-03T02:01:30.776Z esxi67.local Vpxa: info vpxa[2100035] [Originator@6876 sub=vpxaInvtHost] Increment master gen. no to (33720): Event:VpxaEventHostd::CheckQueuedEvents\n", "event"=>{"original"=>"<166>2024-03-03T02:01:30.776Z esxi67.local Vpxa: info vpxa[2100035] [Originator@6876 sub=vpxaInvtHost] Increment master gen. no to (33720): Event:VpxaEventHostd::CheckQueuedEvents\n"}, "type"=>"syslog", "@version"=>"1", "host"=>{"ip"=>"192.168.1.40"}, "data_stream"=>{"type"=>"logs", "dataset"=>"generic", "namespace"=>"default"}}], :error=>{"type"=>"security_exception", "reason"=>"action [indices:admin/auto_create] is unauthorized for user [logstash_internal] with effective roles [logstash_writer] on indices [logs-generic-default], this action is granted by the index privileges [auto_configure,create_index,manage,all]"}}
[2024-03-03T02:01:31,072][INFO ][logstash.outputs.elasticsearch][main][6bc181a4d6d0ef448181c2f2c3c45949e00b34e50853423e462f532084b4a403] Retrying failed action {:status=>403, :action=>["create", {:_id=>nil, :_index=>"logs-generic-default", :routing=>nil}, {"@timestamp"=>2024-03-03T02:01:30.904085884Z, "tags"=>["_grokparsefailure"], "message"=>"<164>2024-03-03T02:01:30.882Z esxi67.local Vpxa: warning vpxa[2099609] [Originator@6876 sub=hostdstats] Host to vpxd translation is empty, dropping results\n", "event"=>{"original"=>"<164>2024-03-03T02:01:30.882Z esxi67.local Vpxa: warning vpxa[2099609] [Originator@6876 sub=hostdstats] Host to vpxd translation is empty, dropping results\n"}, "type"=>"syslog", "@version"=>"1", "host"=>{"ip"=>"192.168.1.40"}, "data_stream"=>{"type"=>"logs", "dataset"=>"generic", "namespace"=>"default"}}], :error=>{"type"=>"security_exception", "reason"=>"action [indices:admin/auto_create] is unauthorized for user [logstash_internal] with effective roles [logstash_writer] on indices [logs-generic-default], this action is granted by the index privileges [auto_configure,create_index,manage,all]"}}
[2024-03-03T02:01:31,073][INFO ][logstash.outputs.elasticsearch][main][6bc181a4d6d0ef448181c2f2c3c45949e00b34e50853423e462f532084b4a403] Retrying individual bulk actions that failed or were rejected by the previous bulk request {:count=>2}
[2024-03-03T02:01:32,848][INFO ][logstash.outputs.elasticsearch][main][6bc181a4d6d0ef448181c2f2c3c45949e00b34e50853423e462f532084b4a403] Retrying failed action {:status=>403, :action=>["create", {:_id=>nil, :_index=>"logs-generic-default", :routing=>nil}, {"@timestamp"=>2024-03-03T02:01:32.687416552Z, "tags"=>["_grokparsefailure"], "message"=>"<163>2024-03-03T02:01:32.662Z esxi67.local Hostd: -->\n", "event"=>{"original"=>"<163>2024-03-03T02:01:32.662Z esxi67.local Hostd: -->\n"}, "type"=>"syslog", "@version"=>"1", "host"=>{"ip"=>"192.168.1.40"}, "data_stream"=>{"type"=>"logs", "dataset"=>"generic", "namespace"=>"default"}}], :error=>{"type"=>"security_exception", "reason"=>"action [indices:admin/auto_create] is unauthorized for user [logstash_internal] with effective roles [logstash_writer] on indices [logs-generic-default], this action is granted by the index privileges [auto_configure,create_index,manage,all]"}}
[2024-03-03T02:01:32,850][INFO ][logstash.outputs.elasticsearch][main][6bc181a4d6d0ef448181c2f2c3c45949e00b34e50853423e462f532084b4a403] Retrying individual bulk actions that failed or were rejected by the previous bulk request {:count=>1}
[2024-03-03T02:01:32,863][INFO ][logstash.outputs.elasticsearch][main][6bc181a4d6d0ef448181c2f2c3c45949e00b34e50853423e462f532084b4a403] Retrying failed action {:status=>403, :action=>["create", {:_id=>nil, :_index=>"logs-generic-default", :routing=>nil}, {"@timestamp"=>2024-03-03T02:01:32.686886359Z, "tags"=>["_grokparsefailure"], "message"=>"<163>2024-03-03T02:01:32.662Z esxi67.local Hostd: error hostd[2114082] [Originator@6876 sub=Default] [LikewiseGetDomainJoinInfo:354] QueryInformation(): ERROR_FILE_NOT_FOUND (2/0):\n", "event"=>{"original"=>"<163>2024-03-03T02:01:32.662Z esxi67.local Hostd: error hostd[2114082] [Originator@6876 sub=Default] [LikewiseGetDomainJoinInfo:354] QueryInformation(): ERROR_FILE_NOT_FOUND (2/0):\n"}, "type"=>"syslog", "@version"=>"1", "host"=>{"ip"=>"192.168.1.40"}, "data_stream"=>{"type"=>"logs", "dataset"=>"generic", "namespace"=>"default"}}], :error=>{"type"=>"security_exception", "reason"=>"action [indices:admin/auto_create] is unauthorized for user [logstash_internal] with effective roles [logstash_writer] on indices [logs-generic-default], this action is granted by the index privileges [auto_configure,create_index,manage,all]"}}
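
Two side notes from the startup output, before the errors: the elasticsearch output still uses the deprecated cacert option (the WARN says to set ssl_certificate_authorities instead), and certificate verification is disabled. A sketch of the relevant part of the output block with both cleaned up, using the paths from the log and leaving everything else as it is:

output {
  elasticsearch {
    hosts    => ["https://es01:9200", "https://es02:9200", "https://es03:9200"]
    user     => "logstash_internal"
    password => "${LOGSTASH_INTERNAL_PASSWORD}"  # assumption: the real config reads this from the keystore or an env var
    ssl_enabled     => true
    ssl_certificate => "/usr/share/logstash/certs/logstash/logstash.crt"
    ssl_key         => "/usr/share/logstash/certs/logstash/logstash.pkcs8.key"
    # was: cacert => "/usr/share/logstash/certs/ca/ca.crt"
    ssl_certificate_authorities => ["/usr/share/logstash/certs/ca/ca.crt"]
    # was: "none"; the WARN recommends "full" once the CA is trusted
    ssl_verification_mode => "full"
  }
}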
The actual failure, though, now just looks like a permissions/role issue: every event is rejected with a 403 security_exception because the logstash_internal user (effective role logstash_writer) isn't authorized to auto-create the logs-generic-default data stream. (The _grokparsefailure tag on the same events means the grok pattern isn't matching the ESXi syslog lines either, but that isn't what's blocking indexing.)
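
The error message itself lists the index privileges that would grant the action (auto_configure, create_index, manage, all), so I'd expect adding one of them to the logstash_writer role to fix it. A sketch of the role update from Kibana Dev Tools, assuming a logs-* pattern derived from the data stream name, and keeping whatever cluster privileges the role already has:

PUT _security/role/logstash_writer
{
  // assumption: placeholders; keep the cluster privileges the role already has
  "cluster": ["monitor", "manage_index_templates"],
  "indices": [
    {
      // the data stream from the 403, plus a wildcard for future datasets
      "names": ["logs-generic-default", "logs-*"],
      // any one of the privileges listed in the error would work; these avoid "manage"/"all"
      "privileges": ["auto_configure", "create", "create_index", "write"]
    }
  ]
}

Of the privileges listed in the error, auto_configure is the narrowest: it lets the first write create the data stream from the matching index template without granting broader management rights.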