Elasticsearch 0.20.2 / jdbc river / Gram

Hello,

I have a problem creating an index with n-gram parameters using the Elasticsearch JDBC river (
https://github.com/jprante/elasticsearch-river-jdbc).

I'm using the following request:

curl -XPUT 'localhost:9200/_river/my_jdbc_test/_meta' -d '{
  "type": "jdbc",
  "jdbc": {
    XXXX
  },
  "index": {
    "index": "jdbc",
    "type": "jdbc",
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "type": "custom",
          "tokenizer": "my_ngram_tokenizer",
          "filter": ["my_ngram_filter"]
        }
      },
      "filter": {
        "my_ngram_filter": {
          "type": "nGram",
          "min_gram": 3,
          "max_gram": 10
        }
      },
      "tokenizer": {
        "my_ngram_tokenizer": {
          "type": "nGram",
          "min_gram": 3,
          "max_gram": 10
        }
      }
    }
  },
  "mappings": {
    "jdbc": {
      "properties": {
        "field1": {
          "type": "string",
          "analyzer": "my_analyzer"
        }
      }
    }
  }
}'

But with these settings, the jdbc index does pick up field1 from the database, but my n-gram analysis doesn't work.
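For reference, a quick way to sanity-check the analyzer is the `_analyze` API. Below is a rough Python simulation of what an nGram tokenizer with `min_gram` 3 and `max_gram` 10 should emit for a short input (an illustration only, not the river's code; the exact token ordering in Elasticsearch may differ):

```python
def ngrams(text, min_gram=3, max_gram=10):
    """Emit every substring of `text` whose length is between min_gram and
    max_gram -- roughly what an nGram tokenizer produces for a single token."""
    return [text[i:i + n]
            for n in range(min_gram, max_gram + 1)
            for i in range(len(text) - n + 1)]

# "foobar" yields 10 grams: foo, oob, oba, bar, foob, ooba, obar, fooba, oobar, foobar
print(ngrams("foobar"))
```

If a search against field1 doesn't match partial strings like these, the analyzer most likely never got attached to the index.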

Has anyone already run into this problem and could give some help?

Thanks for your help,

Regards,

--

I'll just latch onto this question. I've been setting up my index first
and then running the JDBC river. It doesn't always seem to populate. I've
found that if I delete everything, set up the index, and then set up the
river, population is immediate and it works.

So it's possible to set up the river and the index at the same time? Cool.
Is this a best practice?

On Thursday, January 17, 2013 10:30:07 AM UTC-5, Paul Bertonio wrote:


--

IMHO, from a maintainability point of view, I would split the two operations:

  1. create the index, mappings, aliases, ...
  2. when everything is ready, start the river (i.e. create the river)

My 2 cents
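A rough sketch of that two-step flow, reusing the analysis settings and mapping from the original post (the host and the elided `XXXX` jdbc connection block are placeholders):

```shell
# Step 1: create the index with its analysis settings and mapping first
curl -XPUT 'localhost:9200/jdbc' -d '{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "type": "custom",
          "tokenizer": "my_ngram_tokenizer",
          "filter": ["my_ngram_filter"]
        }
      },
      "filter": {
        "my_ngram_filter": { "type": "nGram", "min_gram": 3, "max_gram": 10 }
      },
      "tokenizer": {
        "my_ngram_tokenizer": { "type": "nGram", "min_gram": 3, "max_gram": 10 }
      }
    }
  },
  "mappings": {
    "jdbc": {
      "properties": {
        "field1": { "type": "string", "analyzer": "my_analyzer" }
      }
    }
  }
}'

# Step 2: once the index exists, create the river pointing at it
curl -XPUT 'localhost:9200/_river/my_jdbc_test/_meta' -d '{
  "type": "jdbc",
  "jdbc": {
    XXXX
  },
  "index": { "index": "jdbc", "type": "jdbc" }
}'
```

This way the index settings are applied by the create-index API itself, and the river only has to feed documents into an index that is already configured.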

On 17 January 2013 at 16:34, jtreher@gmail.com wrote:


--
David Pilato
http://www.scrutmydocs.org/
http://dev.david.pilato.fr/
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs

--

Hello,

I have split the two parts, thanks for your advice.

But I don't understand why the analyzer still doesn't work :confused:

On Thursday, January 17, 2013 4:44:08 PM UTC+1, David Pilato wrote:


--

I also tried creating the index and mapping first and then creating the river,
but I still end up in the same situation.
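One way to narrow this down (a sketch; it assumes the index is named jdbc as in the thread) is to check whether the analysis settings actually landed on the index, and then to run the analyzer directly:

```shell
# Inspect the live index settings -- my_analyzer should appear under index.analysis
curl -XGET 'localhost:9200/jdbc/_settings?pretty'

# Confirm the mapping really binds field1 to my_analyzer
curl -XGET 'localhost:9200/jdbc/_mapping?pretty'

# Feed a sample string through the analyzer and inspect the emitted tokens
curl -XGET 'localhost:9200/jdbc/_analyze?analyzer=my_analyzer&pretty' -d 'foobar'
```

If `_analyze` returns an "analyzer not found" error, the settings were never applied; if it returns the expected grams, the problem is on the query side instead.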

On Thursday, January 17, 2013 5:27:27 PM UTC+1, Paul Bertonio wrote:


--

Could you gist a full curl recreation so we can see what's going on here?

--
David :wink:
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs

On 17 January 2013 at 17:52, Paul Bertonio (pbertonio@gmail.com) wrote:

--