Need help: Logstash stopped indexing to Elasticsearch

Hi,
We have been using Elasticsearch for Call Manager CDR records for a few years now, and it has been working great.

It stopped working on 12 Aug 2019 for an unknown reason. I see the log files staying in the FTP directory and not clearing up, and nothing new shows up in Kibana after that date.

Attached are the logs from Logstash and Elasticsearch.

I have restarted the server and closed all the indices; still the same thing.

### Update: after closing all the indices it now seems to work. I might have to merge more indices again.
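For reference, this is roughly how I closed them, using the close index API (the lcch-cisco-* pattern is taken from my Logstash output config below; adjust if yours differs):

# close every daily CDR index in one go (wildcards work here as long as
# action.destructive_requires_name is left at its 6.x default of false)
curl -XPOST 'localhost:9200/lcch-cisco-*/_close?pretty'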

root@elk:/home/ftp# df -h
Filesystem      Size  Used Avail Use% Mounted on
udev             12G     0   12G   0% /dev
tmpfs           2.4G  1.1M  2.4G   1% /run
/dev/sda2       196G   57G  130G  31% /
tmpfs            12G     0   12G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs            12G     0   12G   0% /sys/fs/cgroup
/dev/loop0       87M   87M     0 100% /snap/core/4486
tmpfs           2.4G     0  2.4G   0% /run/user/0
root@elk:/home/ftp#
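Disk looks fine from the OS side. Just to rule out the disk-based shard allocation watermarks from the cluster's side as well, _cat/allocation shows per-node disk usage as Elasticsearch sees it:

# per-node shard counts and disk usage from the cluster's point of view
curl -XGET 'localhost:9200/_cat/allocation?v'

Here is the Logstash log: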
[2019-09-04T01:21:03,232][WARN ][org.logstash.execution.ShutdownWatcherExt] {"inflight_count"=>0, "stalling_threads_info"=>{"other"=>[{"thread_id"=>34, "name"=>"[main]<file", "current_call"=>"[...]/vendor/bundle/jruby/2.3.0/gems/logstash-input-file-4.1.5/lib/filewatch/watched_files_collection.rb:80:in `[]='"}], ["LogStash::Filters::Mutate", {"remove_field"=>["message"], "id"=>"894418ee323961c3a2605309bcb5a8b97fdf98c9a3d7b233f8d3ef9d02e018d3"}]=>[{"thread_id"=>26, "name"=>nil, "current_call"=>"[...]/logstash-core/lib/logstash/pipeline.rb:315:in `read_batch'"}, {"thread_id"=>27, "name"=>nil, "current_call"=>"[...]/logstash-core/lib/logstash/pipeline.rb:315:in `read_batch'"}, {"thread_id"=>28, "name"=>nil, "current_call"=>"[...]/logstash-core/lib/logstash/pipeline.rb:315:in `read_batch'"}, {"thread_id"=>29, "name"=>nil, "current_call"=>"[...]/logstash-core/lib/logstash/pipeline.rb:315:in `read_batch'"}, {"thread_id"=>30, "name"=>nil, "current_call"=>"[...]/logstash-core/lib/logstash/pipeline.rb:315:in `read_batch'"}, {"thread_id"=>31, "name"=>nil, "current_call"=>"[...]/logstash-core/lib/logstash/pipeline.rb:315:in `read_batch'"}, {"thread_id"=>32, "name"=>nil, "current_call"=>"[...]/logstash-core/lib/logstash/pipeline.rb:315:in `read_batch'"}, {"thread_id"=>33, "name"=>nil, "current_call"=>"[...]/logstash-core/lib/logstash/pipeline.rb:315:in `read_batch'"}]}}
[2019-09-04T01:21:31,236][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"6.4.0"}
[2019-09-04T01:21:39,212][WARN ][logstash.outputs.elasticsearch] You are using a deprecated config setting "document_type" set in elasticsearch. Deprecated settings will continue to work, but are scheduled for removal from logstash in the future. Document types are being deprecated in Elasticsearch 6.0, and removed entirely in 7.0. You should avoid this feature If you have any questions about this, please visit the #logstash channel on freenode irc. {:name=>"document_type", :plugin=><LogStash::Outputs::ElasticSearch index=>"lcch-cisco-%{+YYYY.MM.dd}", manage_template=>false, id=>"d5222d947f76b01bc62c5d5f284541324145e7e29df66987f5f8d756dcf5ad65", document_id=>"%{pkid}", hosts=>[//localhost:9200], document_type=>"%{type}", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_9f87a70c-3780-472a-afcb-5bb43f04f845", enable_metric=>true, charset=>"UTF-8">, workers=>1, template_name=>"logstash", template_overwrite=>false, doc_as_upsert=>false, script_type=>"inline", script_lang=>"painless", script_var_name=>"event", scripted_upsert=>false, retry_initial_interval=>2, retry_max_interval=>64, retry_on_conflict=>1, action=>"index", ssl_certificate_verification=>true, sniffing=>false, sniffing_delay=>5, timeout=>60, pool_max=>1000, pool_max_per_route=>100, resurrect_delay=>5, validate_after_inactivity=>10000, http_compression=>false>}
[2019-09-04T01:21:39,280][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2019-09-04T01:21:39,833][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2019-09-04T01:21:39,847][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
[2019-09-04T01:21:40,092][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2019-09-04T01:21:40,176][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2019-09-04T01:21:40,180][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2019-09-04T01:21:40,213][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost:9200"]}
[2019-09-04T01:21:40,617][INFO ][logstash.inputs.file     ] No sincedb_path set, generating one based on the "path" setting {:sincedb_path=>"/var/lib/logstash/plugins/inputs/file/.sincedb_697fd033fdfc9bb6ccfac8a56026cf37", :path=>["/home/ftp/cdr*"]}
[2019-09-04T01:21:40,656][INFO ][logstash.inputs.file     ] No sincedb_path set, generating one based on the "path" setting {:sincedb_path=>"/var/lib/logstash/plugins/inputs/file/.sincedb_d0f4e6f86e69bd93fda203a6645e41bd", :path=>["/home/ftp/cmr*"]}
[2019-09-04T01:21:40,684][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0xcdbbfe9 run>"}
[2019-09-04T01:21:40,805][INFO ][filewatch.observingread  ] START, creating Discoverer, Watch with file and sincedb collections
[2019-09-04T01:21:40,807][INFO ][filewatch.observingread  ] START, creating Discoverer, Watch with file and sincedb collections
[2019-09-04T01:21:40,817][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2019-09-04T01:21:41,258][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
root@elk:/var/log/logstash#
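One thing I noticed in the Logstash log above: the two "No sincedb_path set" lines mean the file input tracks its read position in an auto-generated sincedb under /var/lib/logstash. A minimal sketch of what I believe the input looks like, with the sincedb pinned explicitly so restarts are predictable (the paths come from the log; sincedb_path and the read-mode settings are my assumptions, since something is supposed to be deleting the processed files from /home/ftp):

input {
  file {
    path                  => "/home/ftp/cdr*"
    # pin the sincedb instead of letting Logstash generate one (assumption)
    sincedb_path          => "/var/lib/logstash/sincedb_cdr"
    # read whole files once, then remove them from the FTP directory
    # (assumption: this is what used to keep /home/ftp clean)
    mode                  => "read"
    file_completed_action => "delete"
  }
}

And here is the Elasticsearch log from the same restart: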
[2019-09-04T01:19:44,950][INFO ][o.e.n.Node               ] [ICJPPKt] JVM arguments [-Xms8g, -Xmx8g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.io.tmpdir=/tmp/elasticsearch.lX47VvD4, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=/var/lib/elasticsearch, -XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log, -XX:+PrintGCDetails, -XX:+PrintGCDateStamps, -XX:+PrintTenuringDistribution, -XX:+PrintGCApplicationStoppedTime, -Xloggc:/var/log/elasticsearch/gc.log, -XX:+UseGCLogFileRotation, -XX:NumberOfGCLogFiles=32, -XX:GCLogFileSize=64m, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/etc/elasticsearch, -Des.distribution.flavor=default, -Des.distribution.type=deb]
[2019-09-04T01:19:46,542][INFO ][o.e.p.PluginsService     ] [ICJPPKt] loaded module [aggs-matrix-stats]
[2019-09-04T01:19:46,542][INFO ][o.e.p.PluginsService     ] [ICJPPKt] loaded module [analysis-common]
[2019-09-04T01:19:46,542][INFO ][o.e.p.PluginsService     ] [ICJPPKt] loaded module [ingest-common]
[2019-09-04T01:19:46,542][INFO ][o.e.p.PluginsService     ] [ICJPPKt] loaded module [lang-expression]
[2019-09-04T01:19:46,542][INFO ][o.e.p.PluginsService     ] [ICJPPKt] loaded module [lang-mustache]
[2019-09-04T01:19:46,542][INFO ][o.e.p.PluginsService     ] [ICJPPKt] loaded module [lang-painless]
[2019-09-04T01:19:46,542][INFO ][o.e.p.PluginsService     ] [ICJPPKt] loaded module [mapper-extras]
[2019-09-04T01:19:46,542][INFO ][o.e.p.PluginsService     ] [ICJPPKt] loaded module [parent-join]
[2019-09-04T01:19:46,542][INFO ][o.e.p.PluginsService     ] [ICJPPKt] loaded module [percolator]
[2019-09-04T01:19:46,543][INFO ][o.e.p.PluginsService     ] [ICJPPKt] loaded module [rank-eval]
[2019-09-04T01:19:46,543][INFO ][o.e.p.PluginsService     ] [ICJPPKt] loaded module [reindex]
[2019-09-04T01:19:46,543][INFO ][o.e.p.PluginsService     ] [ICJPPKt] loaded module [repository-url]
[2019-09-04T01:19:46,543][INFO ][o.e.p.PluginsService     ] [ICJPPKt] loaded module [transport-netty4]
[2019-09-04T01:19:46,543][INFO ][o.e.p.PluginsService     ] [ICJPPKt] loaded module [tribe]
[2019-09-04T01:19:46,543][INFO ][o.e.p.PluginsService     ] [ICJPPKt] loaded module [x-pack-core]
[2019-09-04T01:19:46,543][INFO ][o.e.p.PluginsService     ] [ICJPPKt] loaded module [x-pack-deprecation]
[2019-09-04T01:19:46,543][INFO ][o.e.p.PluginsService     ] [ICJPPKt] loaded module [x-pack-graph]
[2019-09-04T01:19:46,543][INFO ][o.e.p.PluginsService     ] [ICJPPKt] loaded module [x-pack-logstash]
[2019-09-04T01:19:46,543][INFO ][o.e.p.PluginsService     ] [ICJPPKt] loaded module [x-pack-ml]
[2019-09-04T01:19:46,543][INFO ][o.e.p.PluginsService     ] [ICJPPKt] loaded module [x-pack-monitoring]
[2019-09-04T01:19:46,543][INFO ][o.e.p.PluginsService     ] [ICJPPKt] loaded module [x-pack-rollup]
[2019-09-04T01:19:46,543][INFO ][o.e.p.PluginsService     ] [ICJPPKt] loaded module [x-pack-security]
[2019-09-04T01:19:46,543][INFO ][o.e.p.PluginsService     ] [ICJPPKt] loaded module [x-pack-sql]
[2019-09-04T01:19:46,544][INFO ][o.e.p.PluginsService     ] [ICJPPKt] loaded module [x-pack-upgrade]
[2019-09-04T01:19:46,544][INFO ][o.e.p.PluginsService     ] [ICJPPKt] loaded module [x-pack-watcher]
[2019-09-04T01:19:46,544][INFO ][o.e.p.PluginsService     ] [ICJPPKt] no plugins loaded
[2019-09-04T01:19:50,986][INFO ][o.e.x.s.a.s.FileRolesStore] [ICJPPKt] parsed [0] roles from file [/etc/elasticsearch/roles.yml]
[2019-09-04T01:19:51,445][INFO ][o.e.x.m.j.p.l.CppLogMessageHandler] [controller/4056] [Main.cc@109] controller (64 bit): Version 6.4.0 (Build cf8246175efff5) Copyright (c) 2018 Elasticsearch BV
[2019-09-04T01:19:51,838][DEBUG][o.e.a.ActionModule       ] Using REST wrapper from plugin org.elasticsearch.xpack.security.Security
[2019-09-04T01:19:51,909][WARN ][o.e.c.u.IndexFolderUpgrader] [/var/lib/elasticsearch/nodes/0/indices/gEHCi2kVSMqf9QLZ3hsmwA] no index state found - ignoring
[2019-09-04T01:19:52,106][WARN ][o.e.c.u.IndexFolderUpgrader] [/var/lib/elasticsearch/nodes/0/indices/0GMTztcvRKCTiJWiviOChg] no index state found - ignoring
[2019-09-04T01:19:52,476][INFO ][o.e.d.DiscoveryModule    ] [ICJPPKt] using discovery type [zen]
[2019-09-04T01:19:53,326][INFO ][o.e.n.Node               ] [ICJPPKt] initialized
[2019-09-04T01:19:53,326][INFO ][o.e.n.Node               ] [ICJPPKt] starting ...
[2019-09-04T01:19:53,540][INFO ][o.e.t.TransportService   ] [ICJPPKt] publish_address {127.0.0.1:9300}, bound_addresses {[::1]:9300}, {127.0.0.1:9300}
[2019-09-04T01:19:56,699][INFO ][o.e.c.s.MasterService    ] [ICJPPKt] zen-disco-elected-as-master ([0] nodes joined)[, ], reason: new_master {ICJPPKt}{ICJPPKtKRd6yTTdBS00w6Q}{HF0l_m3sREuiU53a_W9myA}{127.0.0.1}{127.0.0.1:9300}{ml.machine_memory=25275117568, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true}
[2019-09-04T01:19:56,705][INFO ][o.e.c.s.ClusterApplierService] [ICJPPKt] new_master {ICJPPKt}{ICJPPKtKRd6yTTdBS00w6Q}{HF0l_m3sREuiU53a_W9myA}{127.0.0.1}{127.0.0.1:9300}{ml.machine_memory=25275117568, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true}, reason: apply cluster state (from master [master {ICJPPKt}{ICJPPKtKRd6yTTdBS00w6Q}{HF0l_m3sREuiU53a_W9myA}{127.0.0.1}{127.0.0.1:9300}{ml.machine_memory=25275117568, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true} committed version [1] source [zen-disco-elected-as-master ([0] nodes joined)[, ]]])
[2019-09-04T01:19:56,728][INFO ][o.e.x.s.t.n.SecurityNetty4HttpServerTransport] [ICJPPKt] publish_address {127.0.0.1:9200}, bound_addresses {[::1]:9200}, {127.0.0.1:9200}
[2019-09-04T01:19:56,729][INFO ][o.e.n.Node               ] [ICJPPKt] started
[2019-09-04T01:19:57,166][INFO ][o.e.c.s.ClusterSettings  ] [ICJPPKt] updating [xpack.monitoring.collection.enabled] from [false] to [true]
[2019-09-04T01:19:57,976][WARN ][o.e.x.s.a.s.m.NativeRoleMappingStore] [ICJPPKt] Failed to clear cache for realms [[]]
[2019-09-04T01:19:58,025][INFO ][o.e.l.LicenseService     ] [ICJPPKt] license [9411fa6d-e4c2-4a7a-a12d-763d6733b0b8] mode [basic] - valid
[2019-09-04T01:19:58,037][INFO ][o.e.g.GatewayService     ] [ICJPPKt] recovered [56] indices into cluster_state
[2019-09-04T01:19:59,287][INFO ][o.e.c.r.a.AllocationService] [ICJPPKt] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[lcch-cisco-2019.08.12][0], [.kibana][0]] ...]).
root@elk:/var/log/elasticsearch#
root@elk:/var/log/elasticsearch# curl -XGET 'localhost:9200/_cluster/health?pretty'
{
  "cluster_name" : "elasticsearch",
  "status" : "yellow",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 20,
  "active_shards" : 20,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 5,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 80.0
}
root@elk:/var/log/elasticsearch#
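The 5 unassigned shards are what keeps the cluster yellow (on a single-node cluster, unassigned replica shards are normal, since a replica can never be placed on the same node as its primary). To see exactly why a shard is not being assigned, the allocation explain API reports the reason for the first unassigned shard it finds:

# explain why the first unassigned shard has not been allocated
curl -XGET 'localhost:9200/_cluster/allocation/explain?pretty'

Meanwhile, this is what's sitting in /home/ftp: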
cdr_QCH_02_201908200440_265815   cdr_QCH_02_201908271248_275157  cdr_QCH_03_201908121313_242818   cdr_QCH_03_201908201151_251989  cdr_QCH_03_201908280649_261219  cdr_QCH_05_201908160109_83727   cdr_QCH_06_201908132144_169149   cdr_QCH_06_201908240659_177908
cdr_QCH_02_201908200441_265816   cdr_QCH_02_201908271251_275158  cdr_QCH_03_201908121314_242819   cdr_QCH_03_201908201152_251990  cdr_QCH_03_201908280650_261220  cdr_QCH_05_201908160111_83728   cdr_QCH_06_201908132147_169150   cdr_QCH_06_201908240702_177909
root@elk:/home/ftp# ls | wc
  78778   78778 2432987
root@elk:/home/ftp# ls | wc -1
wc: invalid option -- '1'
Try 'wc --help' for more information.
root@elk:/home/ftp# ls | wc -l
78779
root@elk:/home/ftp#
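78k files in one directory is a lot for the file input to rediscover on every scan, and I suspect that's related to the stall. My plan is to archive everything from before the stall date out of the watched directory so only fresh CDRs remain (the archive path is just an example; only move files that have already been indexed):

mkdir -p /home/ftp/archive
# move CDR/CMR files last modified before 12 Aug 2019 out of the watched dir
# (GNU find/mv, as on this Ubuntu box)
find /home/ftp -maxdepth 1 -type f \( -name 'cdr_*' -o -name 'cmr_*' \) \
  -not -newermt '2019-08-12' -exec mv -t /home/ftp/archive {} +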

I lied: after closing all the indices, it seems to be working again.

root@elk:/home/ftp# ls | grep 201908 | wc
  27532   27532  850315
root@elk:/home/ftp#
