Hi all...

The following code, when I attempt to connect to a host that is down,
throws a NoNodeAvailableException.
No problem there - however, in getting to that point Elasticsearch
apparently recurses quite a lot, so that by the time it throws the
exception, the stack is so deep that log4j itself blows up trying to
print it, and I can't find the actual point of the problem...
try {
    TransportClient client = new TransportClient()
            .addTransportAddress(new InetSocketTransportAddress(myHost, myPort));
} catch (NoNodeAvailableException e) {
    // logging 'e' here through log4j is what produces the error below
    log.error("No node available", e);
}
If I just System.out.println() the caught exception, I see:
-- org.elasticsearch.client.transport.NoNodeAvailableException: No node available
or if I call e.printStackTrace(), I see a reasonable-sized stacktrace.
However, if I try to print the stacktrace in log4j, I get:
java.lang.StackOverflowError
    at java.lang.Throwable.getLocalizedMessage(Throwable.java:267)
    at java.lang.Throwable.toString(Throwable.java:343)
    at java.lang.String.valueOf(String.java:2826)
    at org.apache.log4j.spi.VectorWriter.println(ThrowableInformation.java:181)
    at java.lang.Throwable.printStackTrace(Throwable.java:509)
    at org.apache.log4j.spi.ThrowableInformation.extractStringRep(ThrowableInformation.java:67)
    at org.apache.log4j.spi.ThrowableInformation.extractStringRep(ThrowableInformation.java:99)
    at org.apache.log4j.spi.ThrowableInformation.extractStringRep(ThrowableInformation.java:99)
... ad infinitum.
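For now I can work around it by rendering the stack trace to a String
myself (e.printStackTrace(PrintWriter) works fine for me) and handing
log4j the string instead of the Throwable. A minimal sketch, assuming
the recursion only happens when log4j stringifies the exception itself
('log' and 'e' are from my catch block above):

import java.io.PrintWriter;
import java.io.StringWriter;

// Render the trace ourselves, then log a plain String so log4j's
// ThrowableInformation.extractStringRep never sees the Throwable.
StringWriter sw = new StringWriter();
e.printStackTrace(new PrintWriter(sw));
log.error("Could not connect to Elasticsearch:\n" + sw);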
I'm wondering what the stacktrace contains that might be causing
this... possibly many branching "caused-by" entries? Anyone else seen
this issue with log4j?
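In case anyone wants to check their own exception, here's a rough
sketch of how I'd probe for that: walk the cause chain with an
identity set and flag a repeat ('e' is the caught exception; a cyclic
chain would send any naive recursive printer into an infinite loop):

import java.util.Collections;
import java.util.IdentityHashMap;
import java.util.Set;

// If the same Throwable appears twice while following getCause(),
// the chain is cyclic - which would explain unbounded recursion in
// log4j's extractStringRep.
Set<Throwable> seen =
        Collections.newSetFromMap(new IdentityHashMap<Throwable, Boolean>());
for (Throwable t = e; t != null; t = t.getCause()) {
    if (!seen.add(t)) {
        System.out.println("cycle in cause chain at: " + t);
        break;
    }
    System.out.println(t);
}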
Do you have more of the stack trace from when it happens in log4j? Can you see where the recursion happens?
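Also, if the StackOverflowError trace you see is truncated, you could
try raising the number of stack frames HotSpot records per throwable.
This flag is HotSpot-specific, and 'myapp.jar' here is just a
placeholder for your own application:

# HotSpot records at most -XX:MaxJavaStackTraceDepth frames per
# throwable (default 1024); raising it shows more of the trace.
java -XX:MaxJavaStackTraceDepth=10000 -jar myapp.jar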