Elasticsearch stopping on its own, failed to delete river on stop

In the last few days I changed the owner of the Elasticsearch directory from "user2" to "user1" (it has been located in /home/user1 the whole time). After killing the old ES process and running the start script as user1, the node shuts down after a few minutes without (in my opinion) any logical message. Cluster health is yellow; here is _cat/health:

1513003812 15:50:12 asport yellow 1 1 31 31 0 0 31 0

At the moment the stats say ES stopped working after about 4 minutes. I had to set up a cron task that checks every 5 minutes whether ES is up and restarts it if not (something like the sketch below), but this is not a good solution...
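To be concrete, the workaround is just a crontab entry along these lines (the port and install path are examples, not my exact setup):

# restart Elasticsearch if it does not respond over HTTP
*/5 * * * * curl -sf http://localhost:9200/ > /dev/null || /home/user1/elasticsearch/bin/elasticsearch -d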

I use the JDBC river to pull data from a MySQL database.

The full log file is available here: Log file. The relevant part:

[2017-12-20 22:48:24,944][INFO ][index.shard              ] [Asport.pl] [searcher1][1] updating refresh_interval from [-1000] to [1s]
[2017-12-20 22:48:24,944][INFO ][index.shard              ] [Asport.pl] [searcher1][0] updating refresh_interval from [-1000] to [1s]
[2017-12-20 22:48:25,089][INFO ][river.jdbc.RiverMetrics  ] pipeline org.xbib.elasticsearch.plugin.jdbc.RiverPipeline@42523a6e complete: river jdbc/ajax_products metrics: 46673 rows, 94.75080635476978 mean, (544.3957929976019 137.63446132255365 47.84630872419299), ingest metrics: elapsed 45 seconds, 124.56 MB bytes, 2.73 KB avg, 2.763 MB/s
[2017-12-20 22:49:22,575][INFO ][node                     ] [Asport.pl] stopping ...
[2017-12-20 22:49:22,577][INFO ][river.jdbc.JDBCRiver     ] river closed [jdbc/ajax_products]
[2017-12-20 22:49:22,580][INFO ][river.jdbc.JDBCRiver     ] river state deleted [jdbc/ajax_products]
[2017-12-20 22:49:22,581][INFO ][river.jdbc.JDBCRiver     ] river closed [jdbc/producers]
[2017-12-20 22:49:22,582][INFO ][river.jdbc.JDBCRiver     ] river state deleted [jdbc/producers]
[2017-12-20 22:49:22,583][INFO ][river.jdbc.JDBCRiver     ] river closed [jdbc/count_unfiltered]
[2017-12-20 22:49:22,583][INFO ][river.jdbc.JDBCRiver     ] river closed [jdbc/categories]
[2017-12-20 22:49:22,584][WARN ][river                    ] [Asport.pl] failed to delete river on stop [jdbc]/[count_unfiltered]
org.elasticsearch.ElasticsearchException: unable to delete, river state missing: count_unfiltered
    at org.xbib.elasticsearch.plugin.jdbc.state.RiverStateService$3.execute(RiverStateService.java:314)
    at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:374)
    at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:196)
    at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:162)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
[2017-12-20 22:49:22,587][INFO ][river.jdbc.JDBCRiver     ] river state deleted [jdbc/categories]
[2017-12-20 22:49:22,587][INFO ][river.jdbc.JDBCRiver     ] river closed [jdbc/most_search]
[2017-12-20 22:49:22,587][WARN ][river                    ] [Asport.pl] failed to delete river on stop [jdbc]/[most_search]
org.elasticsearch.ElasticsearchException: unable to delete, river state missing: most_search
    at org.xbib.elasticsearch.plugin.jdbc.state.RiverStateService$3.execute(RiverStateService.java:314)
    at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:374)
    at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:196)
    at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:162)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
[2017-12-20 22:49:22,588][INFO ][river.jdbc.JDBCRiver     ] river closed [jdbc/auctions]
[2017-12-20 22:49:22,588][WARN ][river                    ] [Asport.pl] failed to delete river on stop [jdbc]/[auctions]
org.elasticsearch.ElasticsearchException: unable to delete, river state missing: auctions
    at org.xbib.elasticsearch.plugin.jdbc.state.RiverStateService$3.execute(RiverStateService.java:314)
    at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:374)
    at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:196)
    at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:162)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
[2017-12-20 22:49:22,702][INFO ][node                     ] [Asport.pl] stopped
[2017-12-20 22:49:22,703][INFO ][node                     ] [Asport.pl] closing ...
[2017-12-20 22:49:22,710][INFO ][node                     ] [Asport.pl] closed

Rivers are gone. Use logstash instead. See its jdbc input plugin.
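Something along these lines would replace the river (a rough sketch only; the driver path, connection string, credentials, query and index name are placeholders to adapt, not values from this thread):

input {
  jdbc {
    jdbc_driver_library => "/path/to/mysql-connector-java.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://localhost:3306/shopdb"
    jdbc_user => "dbuser"
    jdbc_password => "dbpassword"
    schedule => "*/5 * * * *"        # poll MySQL every 5 minutes
    statement => "SELECT id, name, price FROM products"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "products"
    document_id => "%{id}"           # reuse the primary key so re-runs update documents instead of duplicating them
  }
}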

Hello @dadoonet,

Is there any way to install Logstash on Java 7 (java version "1.7.0_80")? Which version do I need? I have never used Logstash before.

Maybe I need to ask the admin to upgrade Java to 8 and Elasticsearch to the newest version?
The current ES version is 1.7.1:
{
  "status" : 200,
  "name" : "Asport.pl",
  "cluster_name" : "asport",
  "version" : {
    "number" : "1.7.1",
    "build_hash" : "b88f43fc40b0bcd7f173a1f9ee2e97816de80b19",
    "build_timestamp" : "2015-07-29T09:54:16Z",
    "build_snapshot" : false,
    "lucene_version" : "4.10.4"
  },
  "tagline" : "You Know, for Search"
}

EDIT: Or do I have to re-implement the import with (for example) this: https://github.com/jprante/elasticsearch-jdbc ?
