Logstash Elasticsearch output problem

Hi all,
I have a strange problem.

I tested the Apache grok pattern from this git repository with a stdout { codec => rubydebug } output and it parses successfully, but when I try to push the events into Elasticsearch, nothing shows up in the indices (a sketch of the stdin test is included after the log sample below).
Could you kindly test this out too?

I have also found that if I put the input, filter, and output in the same file and run it with logstash -f, everything works correctly, but when it runs from systemctl the events won't show up in the indices.
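For what it's worth, this is roughly how I compared the two (a sketch, not the exact commands; paths assume the default package install):

sudo -u logstash /usr/share/logstash/bin/logstash --path.settings /etc/logstash -t   # syntax-check the same config the service loads
sudo -u logstash /usr/share/logstash/bin/logstash --path.settings /etc/logstash      # run the service's pipelines.yml config in the foreground
sudo systemctl restart logstash                                                       # versus running it as a service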

NGINX

access_log syslog:server=unix:/dev/log,nohostname,facility=local7,tag=nginx_access, combined;

RSYSLOG

if ( $syslogtag == "nginx_access:" ) then {
    action(type="omfile" dynaFile="accessFile")
    stop
}

LOGSTASH

input {
    file {
        path => [ "${LL_LOG_IMPORT_NGINX:/var/log/remote/ingress-nginx/nginx-access.log}" ]
        start_position => beginning
        type => "access"
        sincedb_path => "${LL_SINCEDB_IMPORT_NGINX:/var/lib/logstash/plugins/inputs/file/nginx-import.sincedb}"
    }
}

filter {
    if [type] == "access" {
        grok {
            match => {
                "message" => "%{TIMESTAMP_ISO8601:loggedtime} %{IPORHOST:host } %{PROG:program}(?:\[%{POSINT:pid}\])?: %{COMBINEDAPACHELOG}"
                }
        }
        grok {
            match => {
                "path" => "%{GREEDYDATA}/%{DATA:vhost}-access\.log"
                }
        }
        date {
            locale => "en"
            match => [ "timestamp", "MMM dd yyyy HH:mm:ss", "MMM  d yyyy HH:mm:ss", "dd/MMM/yyyy:HH:mm:ss Z", "ISO8601" ]
            timezone => "Asia/Tehran"
        }
        geoip {
            source => "clientip"
        }
        useragent {
            source => "agent"
            prefix => "useragent_"
        }
        mutate {
            convert => { "bytes" => "integer" }
        }
    }
}

LOGSTASH OUTPUT

output {
    elasticsearch {
        hosts => ["http://localhost:9200"]
        index => "logstash-%{type}-%{+YYYY.MM}"
    }
}

LOG SAMPLE

 2020-05-23T19:26:34+04:30 nginx-host nginx_access: 83.120.93.230 - - 
 [23/May/2020:19:26:34 +0430] "GET /_timesync HTTP/1.1" 200 13 
 "https://chat.xx.xx/direct/robert" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) 
 AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36"
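
For reference, this is a sketch of the stdin test I mean (my reconstruction rather than the exact command; the binary path assumes the default package install, and the pattern is written without the stray space after :host):

echo '2020-05-23T19:26:34+04:30 nginx-host nginx_access: 83.120.93.230 - - [23/May/2020:19:26:34 +0430] "GET /_timesync HTTP/1.1" 200 13 "https://chat.xx.xx/direct/robert" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36"' | \
  /usr/share/logstash/bin/logstash -e '
    input { stdin { } }
    filter {
      grok {
        match => { "message" => "%{TIMESTAMP_ISO8601:loggedtime} %{IPORHOST:host} %{PROG:program}(?:\[%{POSINT:pid}\])?: %{COMBINEDAPACHELOG}" }
      }
    }
    output { stdout { codec => rubydebug } }'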

If it works on the command line but not when run as a service, that could be any number of things. What do the logstash logs look like when you start it as a service?
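
For example (assuming the default systemd unit name and the package's default log path):

journalctl -u logstash --since "10 minutes ago"      # whatever the service writes to stdout/stderr
tail -n 100 /var/log/logstash/logstash-plain.log     # Logstash's own log file under path.logs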

Thanks for the fast response.
What is funny to me is that all the other filters and outputs are running fine; only this filter produces no output when run as a service.
For reference, I have used this git source.

    [2020-05-27T20:16:49,252][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.7.0"}
    [2020-05-27T20:16:57,173][INFO ][org.reflections.Reflections] Reflections took 61 ms to scan 1 urls, producing 21 keys and 41 values
    [2020-05-27T20:17:36,236][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://127.0.0.1:9200/]}}
    [2020-05-27T20:17:36,596][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"http://127.0.0.1:9200/"}
    [2020-05-27T20:17:36,676][INFO ][logstash.outputs.elasticsearch][main] ES Output version determined {:es_version=>7}
    [2020-05-27T20:17:36,682][WARN ][logstash.outputs.elasticsearch][main] Detected a 6.x and above cluster: the type event field won't be used to determine the document _type {:es_version=>7}
    [2020-05-27T20:17:36,786][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//127.0.0.1"]}
    [2020-05-27T20:17:36,820][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
    [2020-05-27T20:17:36,829][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"http://localhost:9200/"}
    [2020-05-27T20:17:36,838][INFO ][logstash.outputs.elasticsearch][main] ES Output version determined {:es_version=>7}
    [2020-05-27T20:17:36,844][WARN ][logstash.outputs.elasticsearch][main] Detected a 6.x and above cluster: the type event field won't be used to determine the document _type {:es_version=>7}
    [2020-05-27T20:17:36,859][INFO ][logstash.outputs.elasticsearch][main] Using default mapping template
    [2020-05-27T20:17:36,886][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://localhost:9200"]}
    [2020-05-27T20:17:36,902][INFO ][logstash.outputs.elasticsearch][main] Using default mapping template
    [2020-05-27T20:17:36,981][INFO ][logstash.outputs.elasticsearch][main] Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
    [2020-05-27T20:17:36,986][INFO ][logstash.outputs.elasticsearch][main] Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
    [2020-05-27T20:17:37,231][INFO ][logstash.filters.geoip ][main] Using geoip database {:path=>"/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-geoip-6.0.3-java/vendor/GeoLite2-City.mmdb"}
    [2020-05-27T20:17:37,490][INFO ][logstash.filters.geoip ][main] Using geoip database {:path=>"/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-geoip-6.0.3-java/vendor/GeoLite2-City.mmdb"}
    [2020-05-27T20:17:37,938][INFO ][logstash.filters.geoip ][main] Using geoip database {:path=>"/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-geoip-6.0.3-java/vendor/GeoLite2-City.mmdb"}
    [2020-05-27T20:17:38,859][WARN ][org.logstash.instrument.metrics.gauge.LazyDelegatingGauge][main] A gauge metric of an unknown type (org.jruby.specialized.RubyArrayOneObject) has been created for key: cluster_uuids. This may result in invalid serialization. It is recommended to log an issue to the responsible developer/development team.
    [2020-05-27T20:17:38,868][INFO ][logstash.javapipeline ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>48, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>6000, "pipeline.sources"=>["/etc/logstash/conf.d/10-import.conf", "/etc/logstash/conf.d/30-filter-mail.conf", "/etc/logstash/conf.d/30-filter-nginx.conf", "/etc/logstash/conf.d/31-filter-auth.conf", "/etc/logstash/conf.d/50-filter-dovecot.conf", "/etc/logstash/conf.d/50-filter-postfix.conf", "/etc/logstash/conf.d/50-filter-postgrey.conf", "/etc/logstash/conf.d/51-filter-postfix-postproc.conf", "/etc/logstash/conf.d/65-filter-spamd.conf", "/etc/logstash/conf.d/90-output.conf"], :thread=>"#<Thread:0x5441fe run>"}
    [2020-05-27T20:17:47,497][INFO ][logstash.javapipeline ][main] Pipeline started {"pipeline.id"=>"main"}
    [2020-05-27T20:17:47,660][INFO ][filewatch.observingtail ][main][da7aa9bb1b33dce2079843c8dfd2ee334309b530fc4189372c6a0d146bb59c70] START, creating Discoverer, Watch with file and sincedb collections
    [2020-05-27T20:17:47,659][INFO ][filewatch.observingtail ][main][9ce5eb4961358279a3ac473ad32cbc26a8a3e31cec0e82c6bc6b3d810bbd3712] START, creating Discoverer, Watch with file and sincedb collections
    [2020-05-27T20:17:47,661][INFO ][filewatch.observingtail ][main][df2ca38f8542ce31331753b1b083ce4299bcf6b81c2ccd1e97634cf7edcc813b] START, creating Discoverer, Watch with file and sincedb collections
    [2020-05-27T20:17:47,675][INFO ][filewatch.observingtail ][main][e3bc4c168dfc480910a5e5d6e71c2afcf8733f8a5ca5fc1cadab35c3b75b6f86] START, creating Discoverer, Watch with file and sincedb collections
    [2020-05-27T20:17:47,828][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
    [2020-05-27T20:17:48,596][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}

That message occurs four times, which means your configuration has four different file inputs, which may mean you are not running the configuration you think you are. Start by setting log.level to debug in logstash.yml, then restart your service. Check for a line like

[DEBUG][logstash.config.source.local.configpathloader] Reading config file {:config_file=>"/home/user/test.conf"}

Also check the line immediately before it, which starts with

[DEBUG][logstash.config.source.local.configpathloader] Skipping the following files while reading config ...

Between the two you should be able to figure out what path.config is set to. Once you do that you will need to determine why it has that value.
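
A minimal sketch of enabling the debug logging described above, assuming the default paths of the package install:

echo 'log.level: debug' | sudo tee -a /etc/logstash/logstash.yml   # or edit the existing log.level line
sudo systemctl restart logstash
grep configpathloader /var/log/logstash/logstash-plain.log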

Thanks again,
but it seems to me that Logstash is actually reading my config file correctly:

[2020-05-28T00:54:28,670][DEBUG][logstash.config.source.local.configpathloader] Reading config file {:config_file=>"/etc/logstash/conf.d/30-filter-nginx.conf"}
[2020-05-28T00:54:28,728][DEBUG][logstash.agent           ] Converging pipelines state {:actions_count=>1}
[2020-05-28T00:54:28,746][DEBUG][logstash.agent           ] Executing action {:action=>LogStash::PipelineAction::Create/pipeline_id:main}
[2020-05-28T00:54:30,260][DEBUG][org.logstash.secret.store.SecretStoreFactory] Attempting to exists or secret store with implementation: org.logstash.secret.store.backend.JavaKeyStore
[2020-05-28T00:54:31,539][DEBUG][org.reflections.Reflections] going to scan these urls:
jar:file:/usr/share/logstash/logstash-core/lib/jars/logstash-core.jar!/
[2020-05-28T00:54:31,588][INFO ][org.reflections.Reflections] Reflections took 45 ms to scan 1 urls, producing 21 keys and 41 values

and this part just loops:

[2020-05-28T01:05:21,091][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2020-05-28T01:05:25,687][DEBUG][org.logstash.execution.PeriodicFlush][main] Pushing flush onto pipeline.
[2020-05-28T01:05:25,807][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2020-05-28T01:05:25,809][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[2020-05-28T01:05:30,687][DEBUG][org.logstash.execution.PeriodicFlush][main] Pushing flush onto pipeline.
[2020-05-28T01:05:30,821][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2020-05-28T01:05:30,830][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}

and this didn't work either; I have tried both /dev/null and nul.

What could be wrong with this particular config? I have multiple configs just like this, and everything is working except this one.

input {
    file {
        path => [ "${LL_LOG_IMPORT_NGINX:/var/log/remote/ingress-nginx/nginx-access.log}" ]
        type => "nginx"
        sincedb_path => "${LL_SINCEDB_IMPORT_NGINX:/var/lib/logstash/plugins/inputs/file/nginx-import.sincedb}"
    }
}
filter {
    if [type] == "nginx" {
        grok {
          match => {
             "message" => "%{TIMESTAMP_ISO8601:loggedtime} %{IPORHOST:host } %{PROG:program}(?:\[%{POSINT:pid}\])?: %{HTTPD_COMBINEDLOG}"
          }
        }
       date {
           locale => "en"
           match => [ "timestamp", "MMM dd yyyy HH:mm:ss", "MMM  d yyyy HH:mm:ss", "dd/MMM/yyyy:HH:mm:ss Z", "ISO8601" ]
       }
       geoip {
           source => "clientip"
       }
       useragent {
           source => "agent"
           prefix => "useragent_"
       }
       mutate {
           convert => { "bytes" => "integer" }
       }
    }
}
output {
    elasticsearch {
        hosts => ["http://localhost:9200"]
        index => "logstash-%{type}-%{+YYYY.MM}"
    }
}

Is that the only message from logstash.config.source.local.configpathloader in the logs?

Yes.
I have removed all my config files except this one and restarted Logstash:

[2020-05-28T10:13:34,780][DEBUG][logstash.agent           ] Starting agent
[2020-05-28T10:13:34,827][DEBUG][logstash.config.source.multilocal] Reading pipeline configurations from YAML {:location=>"/etc/logstash/pipelines.yml"}
[2020-05-28T10:13:34,912][DEBUG][logstash.config.source.local.configpathloader] Skipping the following files while reading config since they don't match the specified glob pattern {:files=>[]}
[2020-05-28T10:13:34,917][DEBUG][logstash.config.source.local.configpathloader] Reading config file {:config_file=>"/etc/logstash/conf.d/30-filter-nginx.conf"}
[2020-05-28T10:13:34,977][DEBUG][logstash.agent           ] Converging pipelines state {:actions_count=>1}
[2020-05-28T10:13:34,991][DEBUG][logstash.agent           ] Executing action {:action=>LogStash::PipelineAction::Create/pipeline_id:main}
[2020-05-28T10:13:36,227][DEBUG][org.logstash.secret.store.SecretStoreFactory] Attempting to exists or secret store with implementation: org.logstash.secret.store.backend.JavaKeyStore
[2020-05-28T10:13:37,482][DEBUG][org.reflections.Reflections] going to scan these urls:
jar:file:/usr/share/logstash/logstash-core/lib/jars/logstash-core.jar!/
[2020-05-28T10:13:37,532][INFO ][org.reflections.Reflections] Reflections took 46 ms to scan 1 urls, producing 21 keys and 41 values

Is anyone there who can help? Any suggestion will be welcome.

In case someone can find anything, here is the debug log:

[logstash.modules.scaffold] Found module {:module_name=>"fb_apache", :directory=>"/usr/share/logstash/modules/fb_apache/configuration"}
[logstash.plugins.registry] Adding plugin to the registry {:name=>"fb_apache", :type=>:modules, :class=>#<LogStash::Modules::Scaffold:0x37930f48 @directory="/usr/share/logstash/modules/fb_apache/configuration", @module_name="fb_apache", @kibana_version_parts=["6", "0", "0"]>}
[logstash.modules.scaffold] Found module {:module_name=>"netflow", :directory=>"/usr/share/logstash/modules/netflow/configuration"}
[logstash.plugins.registry] Adding plugin to the registry {:name=>"netflow", :type=>:modules, :class=>#<LogStash::Modules::Scaffold:0x160c47b4 @directory="/usr/share/logstash/modules/netflow/configuration", @module_name="netflow", @kibana_version_parts=["6", "0", "0"]>}
[logstash.runner          ] -------- Logstash Settings (* means modified) ---------
[logstash.runner          ] node.name: "rsyslog"
[logstash.runner          ] path.data: "/usr/share/logstash/data"
[logstash.runner          ] modules.cli: []
[logstash.runner          ] modules: []
[logstash.runner          ] modules_list: []
[logstash.runner          ] modules_variable_list: []
[logstash.runner          ] modules_setup: false
[logstash.runner          ] config.test_and_exit: false
[logstash.runner          ] config.reload.automatic: false
[logstash.runner          ] config.reload.interval: 3000000000
[logstash.runner          ] config.support_escapes: false
[logstash.runner          ] config.field_reference.parser: "STRICT"
[logstash.runner          ] metric.collect: true
[logstash.runner          ] pipeline.id: "main"
[logstash.runner          ] pipeline.system: false
[logstash.runner          ] pipeline.workers: 48
[logstash.runner          ] pipeline.batch.size: 125
[logstash.runner          ] pipeline.batch.delay: 50
[logstash.runner          ] pipeline.unsafe_shutdown: false
[logstash.runner          ] pipeline.java_execution: true
[logstash.runner          ] pipeline.reloadable: true
[logstash.runner          ] pipeline.plugin_classloaders: false
[logstash.runner          ] pipeline.separate_logs: false
[logstash.runner          ] pipeline.ordered: "auto"
[logstash.runner          ] path.plugins: []
[logstash.runner          ] config.debug: false
[logstash.runner          ] *log.level: "debug" (default: "info")
[logstash.runner          ] version: false
[logstash.runner          ] help: false
[logstash.runner          ] log.format: "plain"
[logstash.runner          ] http.host: "127.0.0.1"
[logstash.runner          ] http.port: 9600..9700
[logstash.runner          ] http.environment: "production"
[logstash.runner          ] queue.type: "memory"
[logstash.runner          ] queue.drain: false
[logstash.runner          ] queue.page_capacity: 67108864
[logstash.runner          ] queue.max_bytes: 1073741824
[logstash.runner          ] queue.max_events: 0
[logstash.runner          ] queue.checkpoint.acks: 1024
[logstash.runner          ] queue.checkpoint.writes: 1024
[logstash.runner          ] queue.checkpoint.interval: 1000
[logstash.runner          ] queue.checkpoint.retry: false
[logstash.runner          ] dead_letter_queue.enable: false
[logstash.runner          ] dead_letter_queue.max_bytes: 1073741824
[logstash.runner          ] slowlog.threshold.warn: -1
[logstash.runner          ] slowlog.threshold.info: -1
[logstash.runner          ] slowlog.threshold.debug: -1
[logstash.runner          ] slowlog.threshold.trace: -1
[logstash.runner          ] keystore.classname: "org.logstash.secret.store.backend.JavaKeyStore"
[logstash.runner          ] *keystore.file: "/etc/logstash/logstash.keystore" (default: "/usr/share/logstash/config/logstash.keystore")
[logstash.runner          ] path.queue: "/usr/share/logstash/data/queue"
[logstash.runner          ] path.dead_letter_queue: "/usr/share/logstash/data/dead_letter_queue"
[logstash.runner          ] *path.settings: "/etc/logstash" (default: "/usr/share/logstash/config")
[logstash.runner          ] *path.logs: "/var/log/logstash" (default: "/usr/share/logstash/logs")
[logstash.runner          ] xpack.management.enabled: false
[logstash.runner          ] xpack.management.logstash.poll_interval: 5000000000
[logstash.runner          ] xpack.management.pipeline.id: ["main"]
[logstash.runner          ] xpack.management.elasticsearch.username: "logstash_system"
[logstash.runner          ] xpack.management.elasticsearch.hosts: ["https://localhost:9200"]
[logstash.runner          ] xpack.management.elasticsearch.ssl.verification_mode: "certificate"
[logstash.runner          ] xpack.management.elasticsearch.sniffing: false
[logstash.runner          ] xpack.monitoring.enabled: false
[logstash.runner          ] xpack.monitoring.elasticsearch.hosts: ["http://localhost:9200"]
[logstash.runner          ] xpack.monitoring.collection.interval: 10000000000
[logstash.runner          ] xpack.monitoring.collection.timeout_interval: 600000000000
[logstash.runner          ] xpack.monitoring.elasticsearch.username: "logstash_system"
[logstash.runner          ] xpack.monitoring.elasticsearch.ssl.verification_mode: "certificate"
[logstash.runner          ] xpack.monitoring.elasticsearch.sniffing: false
[logstash.runner          ] xpack.monitoring.collection.pipeline.details.enabled: true
[logstash.runner          ] xpack.monitoring.collection.config.enabled: true
[logstash.runner          ] monitoring.enabled: false
[logstash.runner          ] monitoring.elasticsearch.hosts: ["http://localhost:9200"]
[logstash.runner          ] monitoring.collection.interval: 10000000000
[logstash.runner          ] monitoring.collection.timeout_interval: 600000000000
[logstash.runner          ] monitoring.elasticsearch.username: "logstash_system"
[logstash.runner          ] monitoring.elasticsearch.ssl.verification_mode: "certificate"
[logstash.runner          ] monitoring.elasticsearch.sniffing: false
[logstash.runner          ] monitoring.collection.pipeline.details.enabled: true
[logstash.runner          ] monitoring.collection.config.enabled: true
[logstash.runner          ] node.uuid: ""
[logstash.runner          ] --------------- Logstash Settings -------------------
[logstash.config.source.multilocal] Reading pipeline configurations from YAML {:location=>"/etc/logstash/pipelines.yml"}
[logstash.runner          ] Starting Logstash {"logstash.version"=>"7.7.0"}
[logstash.agent           ] Setting up metric collection
[logstash.instrument.periodicpoller.os] Starting {:polling_interval=>5, :polling_timeout=>120}
[logstash.instrument.periodicpoller.jvm] Starting {:polling_interval=>5, :polling_timeout=>120}
[logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[logstash.instrument.periodicpoller.persistentqueue] Starting {:polling_interval=>5, :polling_timeout=>120}
[logstash.instrument.periodicpoller.deadletterqueue] Starting {:polling_interval=>5, :polling_timeout=>120}
[logstash.agent           ] Starting agent
[logstash.config.source.multilocal] Reading pipeline configurations from YAML {:location=>"/etc/logstash/pipelines.yml"}
[logstash.config.source.local.configpathloader] Skipping the following files while reading config since they don't match the specified glob pattern {:files=>[]}
[logstash.config.source.local.configpathloader] Reading config file {:config_file=>"/etc/logstash/conf.d/30-filter-nginx.conf"}
[logstash.agent           ] Converging pipelines state {:actions_count=>1}
[logstash.agent           ] Executing action {:action=>LogStash::PipelineAction::Create/pipeline_id:main}
[org.logstash.secret.store.SecretStoreFactory] Attempting to exists or secret store with implementation: org.logstash.secret.store.backend.JavaKeyStore
[org.reflections.Reflections] going to scan these urls:
stash-core/lib/jars/logstash-core.jar!/
[org.reflections.Reflections] Reflections took 46 ms to scan 1 urls, producing 21 keys and 41 values
[org.reflections.Reflections] expanded subtype co.elastic.logstash.api.Plugin -> co.elastic.logstash.api.Input
[org.reflections.Reflections] expanded subtype co.elastic.logstash.api.Plugin -> co.elastic.logstash.api.Codec
[org.reflections.Reflections] expanded subtype org.jruby.RubyBasicObject -> org.jruby.RubyObject
[org.reflections.Reflections] expanded subtype java.lang.Cloneable -> org.jruby.RubyBasicObject
[org.reflections.Reflections] expanded subtype org.jruby.runtime.builtin.IRubyObject -> org.jruby.RubyBasicObject
[org.reflections.Reflections] expanded subtype java.io.Serializable -> org.jruby.RubyBasicObject
[org.reflections.Reflections] expanded subtype java.lang.Comparable -> org.jruby.RubyBasicObject
[org.reflections.Reflections] expanded subtype org.jruby.runtime.marshal.CoreObjectType -> org.jruby.RubyBasicObject
[org.reflections.Reflections] expanded subtype org.jruby.runtime.builtin.InstanceVariables -> org.jruby.RubyBasicObject
[org.reflections.Reflections] expanded subtype org.jruby.runtime.builtin.InternalVariables -> org.jruby.RubyBasicObject
[org.reflections.Reflections] expanded subtype co.elastic.logstash.api.Plugin -> co.elastic.logstash.api.Output
[org.reflections.Reflections] expanded subtype co.elastic.logstash.api.Metric -> co.elastic.logstash.api.NamespacedMetric
[org.reflections.Reflections] expanded subtype java.security.SecureClassLoader -> java.net.URLClassLoader
[org.reflections.Reflections] expanded subtype java.lang.ClassLoader -> java.security.SecureClassLoader
[org.reflections.Reflections] expanded subtype java.io.Closeable -> java.net.URLClassLoader
[org.reflections.Reflections] expanded subtype java.lang.AutoCloseable -> java.io.Closeable
[org.reflections.Reflections] expanded subtype java.lang.Comparable -> java.lang.Enum
[org.reflections.Reflections] expanded subtype java.io.Serializable -> java.lang.Enum
[org.reflections.Reflections] expanded subtype co.elastic.logstash.api.Plugin -> co.elastic.logstash.api.Filter
[logstash.plugins.registry] On demand adding plugin to the registry {:name=>"file", :type=>"input", :class=>LogStash::Inputs::File}
[logstash.plugins.registry] On demand adding plugin to the registry {:name=>"plain", :type=>"codec", :class=>LogStash::Codecs::Plain}

(some grok compilation noise here, repeated once for every CPU the server has, e.g.)

 org.logstash.config.ir.compiler.ComputeStepSyntaxElement@1be48183
[logstash.inputs.file     ][main] No sincedb_path set, generating one based on the "path" setting {:sincedb_path=>"/usr/share/logstash/data/plugins/inputs/file/.sincedb_537f43b8f369f9bfc182fd0f73253586", :path=>["/var/log/remote/ingress-nginx/nginx-access.log"]}
[logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}
[logstash.javapipeline    ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x27d18c2a run>"}
[org.logstash.execution.PeriodicFlush][main] Pushing flush onto pipeline.
[logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[filewatch.observingtail  ][main][43350504aed21aed6dd5f83eb7776697861c7b33578414ce864d13c2cdded3b2] START, creating Discoverer, Watch with file and sincedb collections
[logstash.agent           ] Starting puma
[logstash.agent           ] Trying to start WebServer {:port=>9600}
[logstash.api.service     ] [api-service] start
[logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[org.logstash.execution.PeriodicFlush][main] Pushing flush onto pipeline.
[logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[org.logstash.execution.PeriodicFlush][main] Pushing flush onto pipeline.
[logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[org.logstash.execution.PeriodicFlush][main] Pushing flush onto pipeline.

I have found the solution: the log file's permissions were the problem.
Even though I had added the logstash user to the adm group, for some reason Logstash could not read the file. The file in question had exactly the same permissions as the other logs that Logstash was able to read, so I ignored it at first. Later, for troubleshooting purposes, I granted a login shell to the logstash user and logged in as logstash, and saw that it was indeed unable to read the log file in question.
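
In hindsight, something like this would have shown it immediately (a sketch; the path is the one from my input config, and namei is from util-linux):

sudo -u logstash test -r /var/log/remote/ingress-nginx/nginx-access.log && echo readable || echo not readable
namei -l /var/log/remote/ingress-nginx/nginx-access.log   # permissions of every directory component along the path
id logstash                                                # which groups the logstash account is in (e.g. adm)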

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.