[hadoop] Getting elasticsearch-hadoop working with Shark

Hi Costin

Sorry for the silence on this issue - it went a bit quiet on my end.

But the good news is I've come back to it and managed to get it all working
with the new Shark 0.9.1 release and es-hadoop 2.0.0.RC1. If I used ADD JAR I
got the same exception as before, but when I just put the JAR into the Shark
lib/ folder it worked fine (which seems to point to the classpath issue you
mentioned).
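
For reference, the statement that kept failing for me was the usual (the jar
path is a placeholder):

ADD JAR /path/to/elasticsearch-hadoop-2.0.0.RC1.jar;

whereas copying the same jar into Shark's lib/ folder before starting the CLI -
so it is loaded once, by a single classloader - avoided the EsHiveSplit
ClassCastException.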

However, I seem to have an issue with date <-> timestamp conversion.

I have a field in ES called "_ts" with type "date" and the default format
"dateOptionalTime". When I run a query that includes the timestamp, the value
comes back NULL:

select ts from table ...

(Note that I use a correct es.mapping.names to map the _ts field in ES to a ts
field of timestamp type in Hive/Shark.)
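
The table declaration is along these lines (a sketch - the index and the other
column name are placeholders):

CREATE EXTERNAL TABLE events (name STRING, ts TIMESTAMP)
STORED BY 'org.elasticsearch.hadoop.hive.EsStorageHandler'
TBLPROPERTIES('es.resource'      = 'test_index/test_type',
              'es.mapping.names' = 'ts:_ts');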

Below is some of the debug-level output:

14/05/13 19:19:47 DEBUG lazy.LazyPrimitive: Data not in the TIMESTAMP data type range so converted to null. Given data is :96997506-06-30 19:08:168:16.768
14/05/13 19:19:47 DEBUG lazy.LazyPrimitive: Data not in the TIMESTAMP data type range so converted to null. Given data is :96997605-06-28 19:08:168:16.768
14/05/13 19:19:47 DEBUG lazy.LazyPrimitive: Data not in the TIMESTAMP data type range so converted to null. Given data is :96997624-06-28 19:08:168:16.768
14/05/13 19:19:47 DEBUG lazy.LazyPrimitive: Data not in the TIMESTAMP data type range so converted to null. Given data is :96997629-06-28 19:08:168:16.768
14/05/13 19:19:47 DEBUG lazy.LazyPrimitive: Data not in the TIMESTAMP data type range so converted to null. Given data is :96997634-06-29 19:08:168:16.768
NULL
NULL
NULL
NULL
NULL

The data I index into the _ts field is a timestamp in milliseconds (a long).
It doesn't seem to be converted correctly, but the data itself is correct (in
ms at least) - I can query against it using date formats and date math in ES.

An example snippet from the debug log above:
,"_ts":1397130475607}}]}}"

Any ideas or am I doing something silly?

I do see that the Hive timestamp type expects either seconds since the epoch
or a string-based format with nanosecond granularity. Is that the issue with
timestamps stored as plain millisecond longs?
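
If that is the problem, one possible workaround - a sketch only, with
placeholder table and column names, and not something I've verified as the
recommended approach - would be to expose the millisecond value as a bigint
instead of a timestamp and convert it at query time:

CREATE EXTERNAL TABLE events_raw (name STRING, ts_ms BIGINT)
STORED BY 'org.elasticsearch.hadoop.hive.EsStorageHandler'
TBLPROPERTIES('es.resource'      = 'test_index/test_type',
              'es.mapping.names' = 'ts_ms:_ts');

-- from_unixtime() expects seconds, so drop the millisecond part first
SELECT name, from_unixtime(CAST(ts_ms / 1000 AS BIGINT)) FROM events_raw;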

Thanks
Nick

On Thu, Mar 27, 2014 at 4:50 PM, Costin Leau costin.leau@gmail.com wrote:

Using the latest Hive and Hadoop is preferred as they contain various bug
fixes.
The error suggests a classpath issue - namely, the same class is loaded twice
for some reason and hence the cast fails.

Let's connect on IRC - give me a ping when you're available (my user is
costin).

Cheers,

On 3/27/14 4:29 PM, Nick Pentreath wrote:

Thanks for the response.

I tried the latest Shark (the cdh4 build of 0.9.1 from
http://cloudera.rst.im/shark/) - this uses hadoop 1.0.4 and hive 0.11, I
believe - and built elasticsearch-hadoop from github master.

Still getting the same error:
org.elasticsearch.hadoop.hive.EsHiveInputFormat$EsHiveSplit cannot be cast to org.elasticsearch.hadoop.hive.EsHiveInputFormat$EsHiveSplit

Will using hive 0.11 / hadoop 1.0.4, versus the hive 0.12 / hadoop 1.2.1 that
es-hadoop master builds against, make a difference?

Has anyone else actually got this working?

On Thu, Mar 20, 2014 at 2:44 PM, Costin Leau <costin.leau@gmail.com> wrote:

I recommend using master - there are several improvements done in this area.
Also, using the latest Shark (0.9.0) and Hive (0.12) will help.

On 3/20/14 12:00 PM, Nick Pentreath wrote:

Hi

I am struggling to get this working too. I'm just trying locally for now,
running Shark 0.8.1, Hive 0.9.0 and ES 1.0.1 with es-hadoop 1.3.0.M2.

I managed to get a basic example working with WRITING into an index. But I'm
really after READING an index.

I believe I have set everything up correctly; I've added the jar to Shark:

ADD JAR /path/to/es-hadoop.jar;

created a table:

CREATE EXTERNAL TABLE test_read (name string, price double)
STORED BY 'org.elasticsearch.hadoop.hive.EsStorageHandler'
TBLPROPERTIES('es.resource' = 'test_index/test_type/_search?q=*');

And then trying to 'SELECT * FROM test_read' gives me:

org.apache.spark.SparkException: Job aborted: Task 3.0:0 failed more than 0 times; aborting job
java.lang.ClassCastException: org.elasticsearch.hadoop.hive.EsHiveInputFormat$ESHiveSplit cannot be cast to org.elasticsearch.hadoop.hive.EsHiveInputFormat$ESHiveSplit
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:827)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:825)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:60)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:825)
    at org.apache.spark.scheduler.DAGScheduler.processEvent(DAGScheduler.scala:440)
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$run(DAGScheduler.scala:502)
    at org.apache.spark.scheduler.DAGScheduler$$anon$1.run(DAGScheduler.scala:157)

FAILED: Execution Error, return code -101 from shark.execution.SparkTask

In fact I get the same error thrown when trying to READ from the table that I
successfully WROTE to...

On Saturday, 22 February 2014 12:31:21 UTC+2, Costin Leau wrote:

Yeah, it might have been some sort of network configuration issue where
services were running on different machines and localhost pointed to a
different location.

Either way, I'm glad to hear things are moving forward.

Cheers,

On 22/02/2014 1:06 AM, Max Lang wrote:

I managed to get it working on ec2 without issue this time. I'd say the
biggest difference was that this time I set up a dedicated ES machine. Is it
possible that, because I was using a cluster with slaves, when I used
"localhost" the slaves couldn't find the ES instance running on the master?
Or do all the requests go through the master?
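
For what it's worth, one way to take localhost resolution on the workers out
of the picture is to point es-hadoop at the ES machine explicitly via es.nodes
in the table properties. A sketch only - the host name is a placeholder, the
column list is trimmed, and es.nodes assumes a recent es-hadoop release:

CREATE EXTERNAL TABLE es_wiki (id BIGINT, title STRING)
STORED BY 'org.elasticsearch.hadoop.hive.EsStorageHandler'
TBLPROPERTIES('es.resource' = 'wikipedia/article',
              'es.nodes'    = 'es-host.example.com:9200');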
On Wednesday, February 19, 2014 2:35:40 PM UTC-8, Costin Leau wrote:

Hi,

Setting up logging in Hive/Hadoop can be tricky since the log4j configuration
needs to be picked up by the running JVM, otherwise you won't see anything.
Take a look at this link on how to tell Hive to use your logging settings [1].

For the next release, we might introduce dedicated exceptions, for the simple
fact that some libraries, like Hive, swallow the stack trace and it's unclear
what the issue is, which makes the exception (IllegalStateException)
ambiguous.

Let me know how it goes and whether you encounter any issues with Shark. Or if
you don't :)

Thanks!

[1] https://cwiki.apache.org/confluence/display/Hive/GettingStarted#GettingStarted-ErrorLogs
On 20/02/2014 12:02 AM, Max Lang wrote:

Hey Costin,

Thanks for the swift reply. I abandoned EC2 to take that out of the equation
and managed to get everything working locally using the latest version of
everything (though I realized just now I'm still on hive 0.9). I'm guessing
you're right about some port connection issue, because I definitely had ES
running on that machine.

I changed hive-log4j.properties and added:

#custom logging levels
#log4j.logger.xxx=DEBUG
log4j.logger.org.elasticsearch.hadoop.rest=TRACE
log4j.logger.org.elasticsearch.hadoop.mr=TRACE

But I didn't see any trace logging. Hopefully I can get it working on EC2
without issue, but, for future reference, is this the correct way to set TRACE
logging?

Oh and, for reference, I tried running without ES up and I got the following
exceptions:
2014-02-19 13:46:08,803 ERROR shark.SharkDriver (Logging.scala:logError(64)) - FAILED: Hive Internal Error: java.lang.IllegalStateException(Cannot discover Elasticsearch version)
java.lang.IllegalStateException: Cannot discover Elasticsearch version
    at org.elasticsearch.hadoop.hive.EsStorageHandler.init(EsStorageHandler.java:101)
    at org.elasticsearch.hadoop.hive.EsStorageHandler.configureOutputJobProperties(EsStorageHandler.java:83)
    at org.apache.hadoop.hive.ql.plan.PlanUtils.configureJobPropertiesForStorageHandler(PlanUtils.java:706)
    at org.apache.hadoop.hive.ql.plan.PlanUtils.configureOutputJobPropertiesForStorageHandler(PlanUtils.java:675)
    at org.apache.hadoop.hive.ql.exec.FileSinkOperator.augmentPlan(FileSinkOperator.java:764)
    at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.putOpInsertMap(SemanticAnalyzer.java:1518)
    at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genFileSinkPlan(SemanticAnalyzer.java:4337)
    at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPostGroupByBodyPlan(SemanticAnalyzer.java:6207)
    at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genBodyPlan(SemanticAnalyzer.java:6138)
    at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:6764)
    at shark.parse.SharkSemanticAnalyzer.analyzeInternal(SharkSemanticAnalyzer.scala:149)
    at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:244)
    at shark.SharkDriver.compile(SharkDriver.scala:215)
    at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:336)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:895)
    at shark.SharkCliDriver.processCmd(SharkCliDriver.scala:324)
    at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:406)
    at shark.SharkCliDriver$.main(SharkCliDriver.scala:232)
    at shark.SharkCliDriver.main(SharkCliDriver.scala)
Caused by: java.io.IOException: Out of nodes and retries; caught exception
    at org.elasticsearch.hadoop.rest.NetworkClient.execute(NetworkClient.java:81)
    at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:221)
    at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:205)
    at org.elasticsearch.hadoop.rest.RestClient.execute(RestClient.java:209)
    at org.elasticsearch.hadoop.rest.RestClient.get(RestClient.java:103)
    at org.elasticsearch.hadoop.rest.RestClient.esVersion(RestClient.java:274)
    at org.elasticsearch.hadoop.rest.InitializationUtils.discoverEsVersion(InitializationUtils.java:84)
    at org.elasticsearch.hadoop.hive.EsStorageHandler.init(EsStorageHandler.java:99)
    ... 18 more
Caused by: java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:391)
    at java.net.Socket.connect(Socket.java:579)
    at java.net.Socket.connect(Socket.java:528)
    at java.net.Socket.<init>(Socket.java:425)
    at java.net.Socket.<init>(Socket.java:280)
    at org.apache.commons.httpclient.protocol.DefaultProtocolSocketFactory.createSocket(DefaultProtocolSocketFactory.java:80)
    at org.apache.commons.httpclient.protocol.DefaultProtocolSocketFactory.createSocket(DefaultProtocolSocketFactory.java:122)
    at org.apache.commons.httpclient.HttpConnection.open(HttpConnection.java:707)
    at org.apache.commons.httpclient.HttpMethodDirector.executeWithRetry(HttpMethodDirector.java:387)
    at org.apache.commons.httpclient.HttpMethodDirector.executeMethod(HttpMethodDirector.java:171)
    at org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:397)
    at org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:323)
    at org.elasticsearch.hadoop.rest.commonshttp.CommonsHttpTransport.execute(CommonsHttpTransport.java:160)
    at org.elasticsearch.hadoop.rest.NetworkClient.execute(NetworkClient.java:74)
    ... 25 more

Let me know if there's anything in particular you'd like me to try on EC2.

(For posterity, the versions I used were: hadoop 2.2.0, hive 0.9.0, shark
0.8.1, spark 0.8.1, es-hadoop 1.3.0.M2, java 1.7.0_15, scala 2.9.3,
elasticsearch 1.0.0.)

Thanks again,
Max
On Tuesday, February 18, 2014 10:16:38 PM UTC-8, Costin Leau wrote:

The error indicates a network problem - namely, es-hadoop cannot connect to
Elasticsearch on the default HTTP port (localhost:9200). Can you double check
whether that's indeed the case (using curl or even telnet on that port)?
Maybe a firewall prevents any connections from being made... Also, you could
try using the latest Hive (0.12) and a more recent Hadoop such as 1.1.2 or
1.2.1.

Additionally, can you enable TRACE logging in your job on the es-hadoop
packages org.elasticsearch.hadoop.rest and org.elasticsearch.hadoop.mr and
report back?

Thanks,
On 19/02/2014 4:03 AM, Max Lang wrote:

I set everything up using this guide:
https://github.com/amplab/shark/wiki/Running-Shark-on-EC2 on an ec2 cluster.
I've copied the elasticsearch-hadoop jars into the hive lib directory and I
have elasticsearch running on localhost:9200. I'm running shark in a screen
session with --service screenserver and connecting to it at the same time
using shark -h localhost.

Unfortunately, when I attempt to write data into elasticsearch, it fails.
Here's an example:
[localhost:10000] shark> CREATE EXTERNAL TABLE wiki (id BIGINT, title STRING, last_modified STRING, xml STRING, text STRING) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t' LOCATION 's3n://spark-data/wikipedia-sample/';
Time taken (including network latency): 0.159 seconds
14/02/19 01:23:33 INFO CliDriver: Time taken (including network latency): 0.159 seconds

[localhost:10000] shark> SELECT title FROM wiki LIMIT 1;
Alpokalja
Time taken (including network latency): 2.23 seconds
14/02/19 01:23:48 INFO CliDriver: Time taken (including network latency): 2.23 seconds

[localhost:10000] shark> CREATE EXTERNAL TABLE es_wiki (id BIGINT, title STRING, last_modified STRING, xml STRING, text STRING) STORED BY 'org.elasticsearch.hadoop.hive.EsStorageHandler' TBLPROPERTIES('es.resource' = 'wikipedia/article');
Time taken (including network latency): 0.061 seconds
14/02/19 01:33:51 INFO CliDriver: Time taken (including network latency): 0.061 seconds

[localhost:10000] shark> INSERT OVERWRITE TABLE es_wiki SELECT w.id, w.title, w.last_modified, w.xml, w.text FROM wiki w;
[Hive Error]: Query returned non-zero code: 9, cause: FAILED: Execution Error, return code -101 from shark.execution.SparkTask
Time taken (including network latency): 3.575 seconds
14/02/19 01:34:42 INFO CliDriver: Time taken (including network latency): 3.575 seconds
The stack trace looks like this:

org.apache.hadoop.hive.ql.metadata.HiveException (org.apache.hadoop.hive.ql.metadata.HiveException: java.io.IOException: Out of nodes and retries; caught exception)

org.apache.hadoop.hive.ql.exec.FileSinkOperator.processOp(FileSinkOperator.java:602)
shark.execution.FileSinkOperator$$anonfun$processPartition$1.apply(FileSinkOperator.scala:84)
shark.execution.FileSinkOperator$$anonfun$processPartition$1.apply(FileSinkOperator.scala:81)
scala.collection.Iterator$class.foreach(Iterator.scala:772)
scala.collection.Iterator$$anon$19.foreach(Iterator.scala:399)
shark.execution.FileSinkOperator.processPartition(FileSinkOperator.scala:81)
shark.execution.FileSinkOperator$.writeFiles$1(FileSinkOperator.scala:207)
shark.execution.FileSinkOperator$$anonfun$executeProcessFileSinkPartition$1.apply(FileSinkOperator.scala:211)
shark.execution.FileSinkOperator$$anonfun$executeProcessFileSinkPartition$1.apply(FileSinkOperator.scala:211)
org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:107)
org.apache.spark.scheduler.Task.run(Task.scala:53)
org.apache.spark.executor.Executor$TaskRunner$$anonfun$run$1.apply$mcV$sp(Executor.scala:215)
org.apache.spark.deploy.SparkHadoopUtil.runAsUser(SparkHadoopUtil.scala:50)
org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:182)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
java.lang.Thread.run(Thread.java:744)

I should be using Hive 0.9.0, shark 0.8.1, elasticsearch 1.0.0, Hadoop 1.0.4,
and java 1.7.0_51.

Based on my cursory look at the hadoop and elasticsearch-hadoop sources, it
looks like hive is just rethrowing an IOException it's getting from Spark, and
elasticsearch-hadoop is just hitting those exceptions.

I suppose my questions are: Does this look like an issue with my
ES/elasticsearch-hadoop config? And has anyone gotten elasticsearch working
with Spark/Shark?

Any ideas/insights are appreciated.

Thanks,
Max