TypeMissingException


(Charlie Ott) #1

Thanks in advance to anyone taking the time to read this post and provide
feedback.

We have been using elasticsearch since version 0.17 for our enterprise
web application. As more documents are being indexed, I have
increased the cluster to 3 nodes with 6 shards and 2 replicas.

I shut down the nodes last week as I have done in the past. But now, when I
brought them back up, I am getting an exception in my JSON-structured
query response:

{
  "took": 16,
  "timed_out": false,
  "_shards": {
    "total": 6,
    "successful": 5,
    "failed": 1,
    "failures": [
      {
        "status": 404,
        "reason": "TypeMissingException[[ssp] type[record] missing: failed to find type loaded for doc [99001]]"
      }
    ]
  },
  "hits": {
    "total": 150006,
    "max_score": 1,
    "hits": []
  }
}

I am guessing that one of the 6 shards is corrupted/failed? Could
someone please explain why this has happened so that I can avoid
having it happen again, or explain what I need to do in order to
remedy the problem.
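In case it helps with diagnosis, this is how I have been checking shard state. These are just the standard 0.19 health and status endpoints, not anything specific to this error, and 'ssp' is the index name taken from the exception above:

curl -XGET 'http://localhost:9200/_cluster/health?level=shards&pretty=true'

curl -XGET 'http://localhost:9200/ssp/_status?pretty=true'

The first call reports health per shard (so an unassigned or initializing shard would show up there); the second returns the per-index stats like the ones pasted below.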

Here is some additional index information. I have heard from a
co-worker that 'deleted_docs' can cause issues? Is this true?
{
  "index": {
    "primary_size": "1.9gb",
    "primary_size_in_bytes": 2139289724,
    "size": "5.9gb",
    "size_in_bytes": 6417846270
  },
  "translog": {
    "operations": 0
  },
  "docs": {
    "num_docs": 150006,
    "max_doc": 155212,
    "deleted_docs": 5206
  },
  "merges": {
    "current": 0,
    "current_docs": 0,
    "current_size": "0b",
    "current_size_in_bytes": 0,
    "total": 0,
    "total_time": "0s",
    "total_time_in_millis": 0,
    "total_docs": 0,
    "total_size": "0b",
    "total_size_in_bytes": 0
  },
  "refresh": {
    "total": 6,
    "total_time": "0s",
    "total_time_in_millis": 0
  },
  "flush": {
    "total": 0,
    "total_time": "0s",
    "total_time_in_millis": 0
  }
}
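As far as I understand it, 'deleted_docs' just counts documents flagged as deleted but not yet merged away (max_doc - num_docs = 5206 here), which is normal and gets cleaned up by background merges. If they do become a concern, the optimize API can purge them explicitly. A sketch, assuming the index is named 'ssp' as in the exception above:

curl -XPOST 'http://localhost:9200/ssp/_optimize?only_expunge_deletes=true'

This only expunges segments dominated by deleted documents rather than forcing a full merge, so it is the lighter-weight option.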


(Charlie Ott) #2

I also see this in my log:

[2012-06-19 10:17:01,571][DEBUG][action.search.type ] [Havok] [26]
Failed to execute fetch phase
org.elasticsearch.indices.TypeMissingException: [ssp] type[record] missing:
failed to find type loaded for doc [99001]

followed by a stack trace running from Thread.run() down to FetchPhase.execute()


(Charlie Ott) #3

I am also using mobz's elasticsearch-head plugin. (see screenshot)


(Charlie Ott) #4

I probably should have included this information at the beginning:

Windows 7 64-bit
jdk 1.6 u32 (32-bit)
elasticsearch 0.19

Using Tanuki service wrapper:

set.ELASTIC_HOME=C:\ES\Node1
set.JAVA_HOME=C:\Program Files (x86)\Java\jdk1.6.0_32
wrapper.java.command=%JAVA_HOME%/bin/java
wrapper.java.mainclass=org.tanukisoftware.wrapper.WrapperSimpleApp

# Java Additional Parameters
wrapper.java.additional.1=-server
wrapper.java.additional.2=-Xss128k
wrapper.java.additional.3=-XX:+UseParNewGC
wrapper.java.additional.4=-XX:+UseConcMarkSweepGC
wrapper.java.additional.5=-XX:+CMSParallelRemarkEnabled
wrapper.java.additional.6=-XX:SurvivorRatio=8
wrapper.java.additional.7=-XX:MaxTenuringThreshold=1
wrapper.java.additional.8=-XX:CMSInitiatingOccupancyFraction=75
wrapper.java.additional.9=-XX:+UseCMSInitiatingOccupancyOnly
wrapper.java.additional.10=-XX:+HeapDumpOnOutOfMemoryError
wrapper.java.additional.11=-Djline.enabled=false
wrapper.java.additional.12=-Delasticsearch
wrapper.java.additional.13=-Des-foreground=yes
wrapper.java.additional.14=-Des.path.home=%ELASTIC_HOME%

# Initial Java Heap Size (in MB)
wrapper.java.initmemory=256

# Maximum Java Heap Size (in MB)
wrapper.java.maxmemory=1024

# Java Classpath (include wrapper.jar). Add class path elements as
# needed starting from 1
wrapper.java.classpath.1=%ELASTIC_HOME%/lib/elasticsearch-0.19.0.jar
wrapper.java.classpath.2=%ELASTIC_HOME%/lib/*
wrapper.java.classpath.3=%ELASTIC_HOME%/lib/sigar/*

# Java Bits. On applicable platforms, tells the JVM to run in 32 or 64-bit
# mode.
wrapper.java.additional.auto_bits=TRUE

# For 32-bit architectures
wrapper.java.library.path.1=%ELASTIC_HOME%\bin\native\lib

# Application parameters. Add parameters as needed starting from 1
wrapper.app.parameter.1=org.elasticsearch.bootstrap.ElasticSearch


(Charlie Ott) #5

I was able to resolve the issue by removing the single record reporting no
'type':

curl -XDELETE 'http://localhost:9200/index/type/99001'

Now, I just need to find out how that record was indexed without a TYPE while all 155k others were.
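For anyone hitting this later: before deleting, it may be worth fetching the raw stored document to see what type it actually ended up under. A sketch using the get API with the '_all' type wildcard, with 'index' as the same placeholder name used in the delete above:

curl -XGET 'http://localhost:9200/index/_all/99001?pretty=true'

This returns the first document matching that id across all types in the index, so it can reveal whether the document was indexed under an unexpected or mangled type name.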


(Shay Banon) #6

Was there any upgrade happening when you shutdown the cluster?

On Tue, Jun 19, 2012 at 10:19 PM, Charlie Ott charlieott@gmail.com wrote:

I was able to resolve the issue by removing the single record reporting no
'type'

curl -XDELETE 'http://localhost:9200/index/type/99001'

Now, i just need to find out how the record was indexed w/o TYPE while all 155k others were.

