Error while upgrading 0.19.8 to 0.20.2: java.io.StreamCorruptedException: invalid internal transport message format

Hi,
I recently upgraded my Elasticsearch cluster from 0.19.8 to 0.20.2. It started normally, with no errors in the logs. But when I configure it with scrutmydocs-0.2.0, it gives the error java.io.StreamCorruptedException: invalid internal transport message format.
Before that, it worked well with 0.19.8 for indexing, but gave an error while searching. When I searched Google for the error, I found that the bug was fixed in 0.20, so I upgraded.

Please help!

Thanks,
Sanjay

Hi Sanjay,

We did not push an update of the scrutmydocs project for Elasticsearch 0.20.x, so both master and 0.2.0 use ES 0.19.x.
But I can work on it and try to push out a new release in the next few days.

Let me know.

David

On 30 January 2013, at 11:19, sbbagal sbbagal13@gmail.com wrote:


--
You received this message because you are subscribed to the Google Groups "elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email to elasticsearch+unsubscribe@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.


Hi dadoonet,
Thanks for the quick reply.
I have some questions about indexing a document. Can you give me an idea of how I can index a big text file using the Java TransportClient? Is there a need to define a custom mapping for that?
Is there any generic mapping that I can use for all types of files, like CSV, TXT, JSON, PDF, etc.?

Thanks,
Sanjay

Do you mean indexing using scrutmydocs, or with Elasticsearch out of the box?

What do you mean by "map"? Do you mean mapping?

You can have a look at the "Test of attachments plugin" gist on GitHub, but I don't know if it answers your question…

David.

On 30 January 2013, at 14:17, sbbagal sbbagal13@gmail.com wrote:



Hi David,
I mean indexing with Elasticsearch out of the box. I want to connect to the ES cluster using the TransportClient and index the document.
And I am asking about a generic mapping. I have already gone through https://gist.github.com/3907010 but am not sure how to do the same thing using the TransportClient.

Thanks,
Sanjay

When you use the TransportClient, you only have to encode your file in Base64 before sending it to Elasticsearch.

Something like: https://github.com/dadoonet/fsriver/blob/master/src/main/java/org/elasticsearch/river/fs/FsRiver.java#L589

Does it help?

--
David :wink:
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs
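The Base64 step David describes can be sketched with the JDK alone. This is a minimal example, assuming Java 8's java.util.Base64 rather than the org.elasticsearch.common.Base64 helper used in FsRiver; both produce standard Base64 output:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Base64;

public class Base64File {

    // Read a file fully and encode its bytes as a Base64 string,
    // ready to be set as the "content" field sent to Elasticsearch.
    public static String encodeFile(Path path) throws IOException {
        byte[] data = Files.readAllBytes(path);
        return Base64.getEncoder().encodeToString(data);
    }
}
```

The resulting string can then be passed to `.field("content", …)` in the XContentBuilder, as in the FsRiver code linked above.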

On 31 January 2013, at 06:24, sbbagal sbbagal13@gmail.com wrote:



Hi David,
Thanks for the reply and the suggestion. I am able to index documents the way you suggested, but since the contents are stored in Base64-encoded format, I am not able to search on them.
Can you give me a hint on how to search?

Thanks,
Sanjay

Check whether your mapping is correct (i.e. whether your field is typed as attachment).

--
David :wink:
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs
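For reference, the mapping David is describing looks roughly like this in JSON form — a sketch assuming the mapper-attachments plugin is installed and the field is named file:

```json
{
  "yourtype": {
    "properties": {
      "file": {
        "type": "attachment"
      }
    }
  }
}
```

With such a mapping in place, the Base64 content sent into the file field is run through the attachment extractor, so the extracted text (not the raw Base64) is what gets indexed and searched.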

On 31 January 2013, at 11:53, sbbagal sbbagal13@gmail.com wrote:



Hi David,
I am not using any specific mapping in my code. My cluster has the FS River and attachment plugins installed. I can see that the document is indexed using the Head plugin, but I am not able to see the contents; they are in encoded format. Below is the code.

public void indexFile(File file) throws Exception {
    FileInputStream fileReader = new FileInputStream(file);

    bulk = client.prepareBulk();

    // Read the whole file into a byte array
    byte[] buffer = new byte[1024];
    ByteArrayOutputStream bos = new ByteArrayOutputStream();
    int i = 0;
    while (-1 != (i = fileReader.read(buffer))) {
        bos.write(buffer, 0, i);
    }
    byte[] data = bos.toByteArray();

    fileReader.close();
    bos.close();

    esIndex(index,
            type,
            SignTool.sign(file.getAbsolutePath()),
            jsonBuilder()
                    .startObject()
                    .field(FsRiverUtil.DOC_FIELD_NAME, file.getName())
                    .field(FsRiverUtil.DOC_FIELD_DATE, file.lastModified())
                    .field(FsRiverUtil.DOC_FIELD_PATH_ENCODED, SignTool.sign(file.getParent()))
                    .startObject("file")
                    .field("_name", file.getName())
                    .field("content", Base64.encodeBytes(data))
                    .endObject()
                    .endObject());
}

private void esIndex(String index, String type, String id, XContentBuilder xb) throws Exception {
    System.out.println("\nin :: " + index + "\nty :: " + type + "\nid :: " + id + "\nxb :: " + xb);
    System.out.println(client);
    System.out.println(xb.string());
    System.out.println("BULK :: " + bulk);
    bulk.add(client.prepareIndex(index, type, id).setSource(xb));

    commitBulkIfNeeded();
}

private void commitBulkIfNeeded() throws Exception {
    System.out.println(bulk.numberOfActions());
    if (bulk != null && bulk.numberOfActions() > 0 && bulk.numberOfActions() >= bulkSize) {
        BulkResponse response = bulk.execute().actionGet();
        if (response.hasFailures()) {
            System.out.println("Failed to index");
        }
        System.out.println("Succeeded");
        // Reinit a new bulk
        bulk = client.prepareBulk();
    }
}

Please suggest where I am going wrong, and give me a hint on searching…

Thanks,
Sanjay
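The commitBulkIfNeeded() pattern in the code above — collect actions, flush once a threshold is reached, then start a fresh bulk — can be sketched in isolation by swapping the Elasticsearch client for a plain list. BulkBuffer and its counters are illustrative names for this sketch, not part of the original code:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of the flush-on-threshold bulking pattern: the real code
// calls bulk.execute().actionGet() where this sketch increments `flushes`.
public class BulkBuffer {
    private final int bulkSize;
    private final List<String> actions = new ArrayList<>();
    private int flushes = 0;

    public BulkBuffer(int bulkSize) {
        this.bulkSize = bulkSize;
    }

    public void add(String action) {
        actions.add(action);
        commitIfNeeded();
    }

    private void commitIfNeeded() {
        // Flush only once the threshold is reached, then start a fresh batch
        if (!actions.isEmpty() && actions.size() >= bulkSize) {
            flushes++;
            actions.clear();
        }
    }

    public int flushes() { return flushes; }
    public int pending() { return actions.size(); }
}
```

Note that the threshold must be positive: a bulkSize of 0 makes the buffer flush after every single add, which defeats the purpose of bulking.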

You don't need FSRiver if you don't want to create a river.

Just use the attachment plugin. Look at the doc for the elasticsearch-mapper-attachments plugin (Mapper Attachments Type plugin for Elasticsearch) on GitHub.

It will explain how to create a mapping before indexing any doc.

--
David :wink:
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs

On 31 January 2013, at 12:17, sbbagal sbbagal13@gmail.com wrote:



Is it necessary to create a mapping before indexing?
Can you show me how to create a mapping in Java?
Or can I use the mapping from
https://github.com/dadoonet/fsriver/blob/master/src/main/java/org/elasticsearch/river/fs/FsRiver.java

Thanks,
Sanjay

Yes. It's mandatory.

Try something like:

XContentBuilder xbMapping = jsonBuilder().startObject()
.startObject("yourtype").startObject("properties")
.startObject("file").field("type", "attachment")
.endObject()
.endObject().endObject().endObject();

PutMappingResponse response = client.admin().indices()
.preparePutMapping("yourindex")
.setType("yourtype")
.setSource(xbMapping)
.execute().actionGet();

On 31 January 2013, at 13:00, sbbagal sbbagal13@gmail.com wrote:



Hi David,
I defined the mapping as you suggested and it was created successfully, but while indexing, my program gets stuck; it doesn't give any exception, error, or output.

The program gets stuck at the execute().actionGet() call while indexing:

client.prepareIndex(index, type, id).setSource(xb).execute().actionGet();

What can I do about it?

Thanks!
Sanjay

Hard to say.
Anything in logs?

Without more information, I'm afraid I can't help.

On 1 February 2013, at 05:49, sbbagal sbbagal13@gmail.com wrote:



No logs. The process just looks like it is running and running…

Can you share your code somewhere so I can have a look at it?

--
David :wink:
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs

On 1 February 2013, at 13:02, sbbagal sbbagal13@gmail.com wrote:



Hi David,
Below is the code. Please review it and tell me where I am going wrong.

package org;

import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.Serializable;
import java.util.Map;

import org.elasticsearch.ElasticSearchException;
import org.elasticsearch.common.Base64;
import org.elasticsearch.client.Client;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.metadata.IndexMetaData;
import org.elasticsearch.cluster.metadata.MappingMetaData;

import static org.elasticsearch.common.xcontent.XContentFactory.*;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.index.query.QueryBuilder;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.SearchHit;
import org.elasticsearch.action.admin.indices.create.CreateIndexRequestBuilder;
import org.elasticsearch.action.admin.indices.create.CreateIndexResponse;
import org.elasticsearch.action.admin.indices.mapping.put.PutMappingResponse;
import org.elasticsearch.action.bulk.BulkRequestBuilder;
import org.elasticsearch.action.bulk.BulkResponse;
import org.elasticsearch.action.get.GetResponse;
import org.elasticsearch.action.search.SearchRequestBuilder;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.action.search.SearchType;

/*import org.elasticsearch.action.get.GetResponse;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.action.search.SearchType;
import org.elasticsearch.client.Client;
import org.elasticsearch.cluster.ClusterState;
import org.elasticsearch.cluster.block.ClusterBlockException;
import org.elasticsearch.cluster.metadata.IndexMetaData;
import org.elasticsearch.cluster.metadata.MappingMetaData;
import org.elasticsearch.common.Base64;
import org.elasticsearch.common.inject.Inject;
import org.elasticsearch.common.joda.time.format.ISODateTimeFormat;
import org.elasticsearch.common.util.concurrent.EsExecutors;

import org.elasticsearch.common.xcontent.support.XContentMapValues;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.indices.IndexAlreadyExistsException;
import org.elasticsearch.river.AbstractRiverComponent;
import org.elasticsearch.river.River;
import org.elasticsearch.river.RiverName;
import org.elasticsearch.river.RiverSettings;
import org.elasticsearch.search.SearchHit;
import org.elasticsearch.ExceptionsHelper;
import org.elasticsearch.action.admin.indices.mapping.put.PutMappingResponse;*/

public class Index implements Serializable {
private static final long serialVersionUID = 1L;

private static String index = null;
private static String type = null;
private String analyzer = null;
private BulkRequestBuilder bulk;
private Client client ;

public void setClient(Client client){
	this.client=client;
}
private final long bulkSize = 0L;

public Index() {
	this(SMDSearchProperties.INDEX_NAME, SMDSearchProperties.INDEX_TYPE_DOC, null);
}

public Index(String index, String type, String analyzer) {
	super();
	this.index = index;
	this.type = type;
	this.analyzer = analyzer;
}


/**
 * @return the index
 */
public String getIndex() {
	return index;
}
/**
 * @param index the index to set
 */
public void setIndex(String index) {
	this.index = index;
}
/**
 * @return the type
 */
public String getType() {
	return type;
}
/**
 * @param type the type to set
 */
public void setType(String type) {
	this.type = type;
}

/**
 * @return the analyzer
 */
public String getAnalyzer() {
	return analyzer;
}

/**
 * @param analyzer the analyzer to set
 */
public void setAnalyzer(String analyzer) {
	this.analyzer = analyzer;
}

@Override
public boolean equals(Object obj) {
	if (obj == null) return false;
	if (!(obj instanceof Index)) return false;
	
	Index index = (Index) obj;
	
	if (this.index != index.index && this.index != null && !this.index.equals(index.index)) return false;
	if (this.type != index.type && this.type != null && !this.type.equals(index.type)) return false;
	if (this.analyzer != index.analyzer && this.analyzer != null && !this.analyzer.equals(index.analyzer)) return false;

	return true;
}

private boolean isMappingExist(String index, String type) {
	ClusterState cs = client.admin().cluster().prepareState().setFilterIndices(index).execute().actionGet().getState();
	IndexMetaData imd = cs.getMetaData().index(index);

	if (imd == null) return false;

	MappingMetaData mdd = imd.mapping(type);

	if (mdd != null) return true;
	return false;
	}



public  void pushMapping(){
	
	boolean mappingExist = isMappingExist(index, type);
	if (!mappingExist) {		
		 System.out.println("Index  Creating First Time...");
         CreateIndexResponse createIndexResponse = new CreateIndexRequestBuilder( client.admin().indices(), index ).execute().actionGet();
	XContentBuilder xbMapping = null;
	try {
		xbMapping = jsonBuilder().startObject()
				.startObject(type).startObject("properties")
				.startObject("file").field("type", "attachment")
				.endObject()
				.endObject().endObject().endObject();
	} catch (IOException e) {
		// TODO Auto-generated catch block
		e.printStackTrace();
	}



			PutMappingResponse response = client.admin().indices()
			.preparePutMapping(index)
			.setIndices(index)
			.setType(type)
			.setSource(xbMapping)
			.execute().actionGet();
	}
	else 
		System.out.println("Mapping Exist!!!");
}

public void indexFile(File file) throws Exception {

	FileInputStream fileReader = new FileInputStream(file);

	

	bulk = client.prepareBulk();
	byte[] buffer = new byte[1024];
	ByteArrayOutputStream bos = new ByteArrayOutputStream();
	int i = 0;
	while (-1 != (i = fileReader.read(buffer))) {
	bos.write(buffer, 0, i);
	}
	byte[] data = bos.toByteArray();

	fileReader.close();
	bos.close();

    esIndex(index,
            type,
            SignTool.sign(file.getAbsolutePath()),
            jsonBuilder()
                    .startObject()
                    .field(FsRiverUtil.DOC_FIELD_NAME, file.getName())
                    .field(FsRiverUtil.DOC_FIELD_DATE, file.lastModified())
                    .field(FsRiverUtil.DOC_FIELD_PATH_ENCODED, SignTool.sign(file.getParent()))
                    .startObject("file")
                    .field("_name", file.getName())
                    .field("content", new String(Base64.encodeBytes(data)))
                    .endObject()
                    .endObject());
}

private void esIndex(String index, String type, String id, XContentBuilder xb) throws Exception {
    System.out.println("\nin :: " + index + "\nty :: " + type + "\nid :: " + id + "\nxb :: " + xb);
    System.out.println(client);
    System.out.println(xb.string());
    System.out.println("BULK :: " + bulk);
    bulk.add(client.prepareIndex(index, type, id).setSource(xb));

    commitBulkIfNeeded();
}

private void commitBulkIfNeeded() {
    System.out.println(bulk.numberOfActions());
    try {
        if (bulk != null && bulk.numberOfActions() > 0 && bulk.numberOfActions() >= bulkSize) {
            System.out.println("bulk start execute");
            BulkResponse response = bulk.execute().actionGet();
            System.out.println("bulk executed");
            if (response.hasFailures()) {
                System.out.println("Failed to index");
            }
            System.out.println("Succeeded");
            // Reinit a new bulk
            bulk = client.prepareBulk();
        }
    } catch (ElasticSearchException e) {
        e.printStackTrace();
    }
}

    	public static void destroy(Client client) throws Exception {
    	    client.close();
    	}

}

By sharing your code, I meant "upload a full running test case showing your issue" to GitHub, or create a Gist that can easily be run by anybody.
You only copied and pasted source code that came from FSRiver.

I don't know how you are launching it.

So, could you please create a full running project (pom.xml, src, tests…)? I will be happy to try to reproduce your concern.

Don't copy and paste your code in the mail body. See the best practices described on the Elasticsearch website.

David.
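As an illustration of one part of such a project, the pom.xml would declare the Elasticsearch dependency, and its version must match the cluster: a 0.19.x client jar talking to a 0.20.x cluster is exactly what produces the StreamCorruptedException this thread started with. A sketch of that dependency block:

```xml
<dependency>
  <groupId>org.elasticsearch</groupId>
  <artifactId>elasticsearch</artifactId>
  <!-- keep in sync with the cluster version -->
  <version>0.20.2</version>
</dependency>
```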

On 4 February 2013, at 13:28, sbbagal sbbagal13@gmail.com wrote:



OK, sure. Sorry, I really don't know about these things; I am new to this.