I'm a newbie to Elasticsearch, and I would like to know about writing custom scripts for Elasticsearch. Basically, I want a script that takes a table name and its columns as parameters, then starts a JDBC river plugin and indexes the corresponding data into my Elasticsearch cluster. In other words, I'm looking for a mechanism through which I can automatically index data into Elasticsearch just by specifying the required table and columns. I would very much like to know whether this is viable, and I'd also welcome any other ideas for implementing it.
On Sunday, January 11, 2015 at 12:16:05 AM UTC+5:30, Jörg Prante wrote:
If you can set up shell scripting, it should be viable to define a curl command in a script that copies the table and columns into an SQL statement and performs something similar to step 7 in Quickstart · jprante/elasticsearch-jdbc Wiki · GitHub.
Jörg
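For illustration, a minimal sketch of such a script, assuming a shell with curl available; the river name, JDBC URL, credentials, and example table are placeholders rather than details from this thread, and the request body follows the pattern shown in the jprante/elasticsearch-jdbc quickstart:

#!/bin/sh
# Usage:   ./index_table.sh <table> <comma-separated columns>
# Example: ./index_table.sh orders "id,customer,amount"
TABLE="$1"
COLUMNS="$2"

# Build the SQL statement from the requested table and columns
SQL="select ${COLUMNS} from ${TABLE}"

# Create a JDBC river that runs this statement and indexes the rows
# (connection settings below are placeholders)
curl -XPUT "localhost:9200/_river/${TABLE}_river/_meta" -d "{
  \"type\" : \"jdbc\",
  \"jdbc\" : {
    \"url\" : \"jdbc:mysql://localhost:3306/mydb\",
    \"user\" : \"dbuser\",
    \"password\" : \"dbpass\",
    \"sql\" : \"${SQL}\"
  }
}"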
On Sunday, January 11, 2015 at 9:24:28 PM UTC-8, Amtul Nazneen wrote:
Thank you. I have a doubt, though: once I run the script, the river plugin starts and the data gets indexed into Elasticsearch. Will the plugin still be running after that, or does it stop once the script execution comes to an end?
On Monday, January 12, 2015 at 1:23:08 PM UTC+5:30, Ed Kim wrote:
It executes once. You could consider running that script on a schedule and doing incremental updates using timestamps.
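As a rough sketch of that suggestion, assuming the hypothetical index_table.sh script above, a cron entry could re-run it periodically and the SQL inside the script could be limited to recently changed rows; the crontab line, path, one-hour window, and last_modified column are illustrative assumptions, not details from the thread:

# crontab entry: re-run the indexing script every hour
0 * * * * /opt/scripts/index_table.sh orders "id,customer,amount"

# inside the script, restrict each run to rows changed since the last one,
# e.g. with a timestamp column (MySQL syntax shown)
SQL="select ${COLUMNS} from ${TABLE} where last_modified > now() - interval 1 hour"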
On 14 January 2015 at 06:01, Amtul Nazneen wrote:
Okay. So the river runs only once, when the script starts? And after that, won't it keep running in the background to fetch updates according to a schedule?
On Wednesday, January 14, 2015 at 2:31:07 PM UTC+5:30, David Pilato wrote:
I guess you need to set interval. See the plugin doc on the home page of the JDBC river:
interval - a time value for the delay between two river runs (default: not set)
--
David
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs
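In terms of the river definition sketched earlier, that option would sit alongside the other jdbc settings, for instance (the one-hour value, river name, and connection details are placeholders):

curl -XPUT "localhost:9200/_river/orders_river/_meta" -d '{
  "type" : "jdbc",
  "jdbc" : {
    "url" : "jdbc:mysql://localhost:3306/mydb",
    "user" : "dbuser",
    "password" : "dbpass",
    "sql" : "select id, customer, amount from orders",
    "interval" : "1h"
  }
}'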
On Fri, Jan 16, 2015 at 11:12 AM, Amtul Nazneen wrote:
Thank you. Is it the "interval" parameter or the "schedule" parameter? If I set the schedule parameter, Elasticsearch will poll the tables accordingly, right?
On Friday, January 16, 2015 at 6:27:48 PM UTC+5:30, Jörg Prante wrote:
"schedule" is triggering the JDBC plugin by wall clock time of the
machine, where "interval" simply waits the given time period between two
runs.
Jörg
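So a wall-clock schedule would replace the interval setting with something along these lines; the cron-style expression and its exact format are an assumption here, so the supported syntax should be checked against the plugin documentation:

curl -XPUT "localhost:9200/_river/orders_river/_meta" -d '{
  "type" : "jdbc",
  "jdbc" : {
    "url" : "jdbc:mysql://localhost:3306/mydb",
    "user" : "dbuser",
    "password" : "dbpass",
    "sql" : "select id, customer, amount from orders",
    "schedule" : "0 0 1 * * ?"
  }
}'
# intended effect: run every day at 01:00 machine time, rather than waiting
# a fixed delay between runs as "interval" does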