Regarding NGramTokeniser

Dear elastic users,

I am indexing text data that contains special characters, spaces, and alphanumeric values. I am pasting the indexed data of one sample document below.

{"timestamp":"Fri Apr 17 16:16:47 IST 2015",

"NODE_TYPE_NAME":"CAEPART",

"CONTENT":["MATRIX=1.000000 0.000000 0.000000 0.000000 0.000000 1.000000 0.000000 0.000000 0.000000 0.000000 1.000000
0.000000 0.000000 0.000000 0.000000 1.000000</C>
VERSION=V1</C>
MATERIAL_NAME=ALUMINIUM</C>
ELEMENT_SIZE=</C>
PART_MATERIAL=</C>
PART_VERSION=</C>
PART_THICKNESS=0.000000</C>
ELEMENT_TYPE=</C>
PART_NUMBER=G000119492</C>
PART_NAME=LONGERON AR DROIT</C>
ACTIVE MESH FILE=3.p","</C>
LOCALMATRIX=1.000000 0.000000 0.000000 0.000000 0.000000 1.000000 0.000000 0.000000 0.000000 0.000000
1.000000 0.000000 0.000000 0.000000 0.000000 1.000000</C>;
NODE_TYPE=CAEPART;
NODE_NAME=LONGERON AR DROIT;
PROJECT_NAME=Test12"],

"ASSEMBLY_ID":"SD001PR571",

"LAST_MODIFIED":"2015-04-15 12:10:00.0",

"ID":"SD001PR565",

"NODE_ID":"SD001PR565",

"NODE_NAME":"LONGERON AR DROIT",

"NODEFROM":"PROJECT",

"ASSEMBLY_NAME":"asdfsf",

"PROJECT_NAME":"Test12"}

I am using an NGram tokenizer configured in schema.xml.
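Since my exact configuration may not have come through, here is a minimal sketch of the kind of NGram field type I mean (the field type name `text_ngram` and the gram sizes are illustrative, not my exact values):

```xml
<!-- Sketch of an NGram field type for schema.xml; names and sizes are examples -->
<fieldType name="text_ngram" class="solr.TextField" positionIncrementGap="100">
  <!-- Index-time analyzer: split the text into 3- to 5-character grams -->
  <analyzer type="index">
    <tokenizer class="solr.NGramTokenizerFactory" minGramSize="3" maxGramSize="5"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
  <!-- Query-time analyzer: do NOT ngram the query, only lowercase it,
       so a short query term matches the grams produced at index time -->
  <analyzer type="query">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
```

With a field type along these lines applied to a field such as `NODE_NAME` or `CONTENT`, a query like `q=NODE_NAME:long` should match "LONGERON AR DROIT" as a partial match. Note that using the same NGram analyzer at both index and query time is a common source of unexpected results, which is why the query-side analyzer above skips the NGram step.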

I don't know where I am going wrong. I want to be able to query with partial (substring) search. Please help.