Elasticsearch IllegalArgumentException


#1

Hello.

I am new to Elasticsearch. I am using Fluentd, Elasticsearch, and Kibana.

My Fluentd configuration:

<source>
type tail
path /var/log/access/assets.access/*
pos_file /var/log/access/assets.access/readlog.pos
read_from_head true
format multiline
format nginx
tag assets.access
time_format %d/%b/%Y:%H:%M:%S %z
keep_time_key true
</source>

<match assets.access>
type elasticsearch
host 192.168.1.32
port 9200
index_name assets.access
type_name access
flush_interval 10s
format json
</match>

and the Elasticsearch index mapping like this:

curl -XPOST 'elastic.local:9200/assets.access' -d '
{
   "mappings": {
      "assets.access": {
         "properties": {
            "time": {
               "type": "date",
               "format": "dd/MMM/yyyy:HH:mm:ss Z"
            }
         }
      }
   }
}'

The index mapping and Fluentd configuration produced no errors,
but while inserting documents into the index, I got this error:

Suppressed: MapperParsingException[failed to parse [time]]; nested: IllegalArgumentException[Invalid format: "28/Mar/2016:15:03:49 +0900" is malformed at "/Mar/2016:15:03:49 +0900"];

As an alternative,
I configured Fluentd like this:

<source>
type tail
path /var/log/access/assets.access/*
pos_file /var/log/access/assets.access/readlog.pos
read_from_head true
format multiline
format nginx
tag sample
</source>

<filter **>
type record_transformer
<record>
date ${time}
</record>
</filter>

<match sample>
type record_reformer
output_tag assets.access
date ${time.strftime('%Y-%m-%d %H:%M:%S %z')}
</match>

<match assets.access>
type elasticsearch
host 192.168.1.32
port 9200
index_name assets.access
type_name access
flush_interval 10s
format json
</match>

and the Elasticsearch index mapping:

curl -XPOST 'elastic.local:9200/assets.access' -d '
{
   "mappings": {
      "assets.access": {
         "properties": {
            "date": {
               "type": "date",
               "format": "yyyy-MM-dd HH:mm:ss Z"
            }
         }
      }
   }
}'

and I got this error:
MapperParsingException[failed to parse [date]]; nested: IllegalArgumentException[Invalid format: "2016-04-05 15:50:47 +0900" is malformed at " 15:50:47 +0900"];

Why does this error happen?
Thanks.


(Daniel Mitterdorfer) #2

Hi,

with the data you posted it is not possible to tell exactly, but you definitely have a wrong mapping. A mapping belongs to a specific index and type. The index name is "assets.access" in both your FluentD configuration and your Elasticsearch command, but the type name is "access" in FluentD and once again "assets.access" in Elasticsearch. This is definitely wrong.

The mapping that matches your FluentD configuration is:

POST /assets.access
{
   "mappings": {
      "access": {
         "properties": {
            "date": {
               "type": "date",
               "format": "yyyy-MM-dd HH:mm:ss Z"
            }
         }
      }
   }
}

Note that the element below "mappings" is called "access", which matches your FluentD configuration. I have chosen the date pattern from the second example.
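As a quick sanity check of that date pattern (Python here only for illustration; the mapping itself uses the equivalent Joda-Time pattern), the timestamp from the second example parses cleanly when every component of the pattern lines up:

```python
from datetime import datetime

# Timestamp as emitted by the record_reformer step in the second example
ts = "2016-04-05 15:50:47 +0900"

# strftime pattern "%Y-%m-%d %H:%M:%S %z" corresponds to the Joda-Time
# pattern "yyyy-MM-dd HH:mm:ss Z" in the Elasticsearch mapping.
# If any component differs, the date mapper fails with an
# IllegalArgumentException at the first mismatching character.
parsed = datetime.strptime(ts, "%Y-%m-%d %H:%M:%S %z")
print(parsed.isoformat())  # 2016-04-05T15:50:47+09:00
```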

Also note that the mapping for a field with a given name must be identical across all types within an index, so I suggest you delete the index entirely (just issue DELETE /assets.access) and start from scratch with the mapping above (assuming that you are on a test system).
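Concretely, the reset could look like this (same host placeholder as in your earlier commands; run it only against a test system, since the DELETE removes the index and all its data):

```
# Delete the index and all documents in it
curl -XDELETE 'elastic.local:9200/assets.access'

# Recreate it with the mapping under the "access" type,
# matching type_name in the FluentD match block
curl -XPOST 'elastic.local:9200/assets.access' -d '
{
   "mappings": {
      "access": {
         "properties": {
            "date": {
               "type": "date",
               "format": "yyyy-MM-dd HH:mm:ss Z"
            }
         }
      }
   }
}'
```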

Daniel


#3

Hi danielmitterdorfer, thank you for answering this topic.
I tried it: I defined index_name as assets.access and type_name as access, with the mapping you suggested.
The logs are now inserted into Elasticsearch without errors, and creating the index in Kibana works fine.

But the Discover tab shows 0 hits ("No results found"),
and I get this error:
[logstash-*] IndexNotFoundException[no such index]

I don't understand why Kibana cannot find an index that has already been created in Kibana.


(Daniel Mitterdorfer) #4

Hi,

Kibana needs to know which indices it should look at. By default it assumes it is used together with Logstash, so I think you just need to configure another index pattern as the default in the "Settings" tab, one that matches your index name (which is "assets.access").

Daniel


#5

Do you mean setting the index pattern to "assets.access"?
I had already configured the index pattern to match the index name when I created the index (without using logstash_format).

That is,
I configured the index pattern as assets.access and created it.
assets.access also lists every field (in Settings -> Indices).
Nevertheless, that error still occurs.

thank you


(Daniel Mitterdorfer) #6

Hi,

do you mind sharing screenshots of your "Settings" and "Discover" tabs?

Daniel


#7

Hi Daniel, thanks for your patience.

[screenshots of the Settings and Discover tabs]
(Daniel Mitterdorfer) #8

Hi,

That looks OK so far. It just doesn't find any results on the "Discover" screen, but the time range is rather short (the last 15 minutes). What happens if you expand the time range to something like a year (see the dropdown in the upper right corner)?

Daniel


#9

It still doesn't work...


(Daniel Mitterdorfer) #10

Hmm,

is there any data in Elasticsearch at all? What does this command (on the command line) return?

curl 'http://elastic.local:9200/assets.access/_search' -d '
{
    "query": {
        "match_all": {}
    }
}'

It shows you the first few documents and the total number of hits (which should be greater than zero).

Daniel


#11

This is what ElasticHQ shows:

and your command returns:
{"took":1,"timed_out":false,"_shards":{"total":5,"successful":5,"failed":0},"hits":{"total":117,"max_score":1.0,"hits":[{"_index":"assets.access","_type":"access","_id":"AVQTau4b5xT2ZO9Ht133","_score":1.0,"_source":{ ...

So it is working correctly, but then why does Kibana raise the error?

thank you


(Daniel Mitterdorfer) #12

Hi,

you stopped at the most interesting part, namely the query results. :wink: Can you please share a few of them? What could be happening is that the date format differs from what Kibana expects, and that's why it does not find anything.

Daniel


#14

Hi Daniel!

...I looked into the results again, and there is no date field!
But the logs do contain a timestamp ([15/Mar/2016:16:21:58 +0900]).

results:
{"took":1,"timed_out":false,"_shards":{"total":5,"successful":5,"failed":0},"hits":{"total":117,"max_score":1.0,"hits":[{"_index":"assets.access","_type":"access","_id":"AVQTau4b5xT2ZO9Ht133","_score":1.0,"_source":{"remote":"192.168.1.1","host":"-","user":"-","method":"GET","path":"/favicon.ico","code":"401","size":"605","referer":"-","agent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/48.0.2564.116 Safari/537.36"}},{"_index":"assets.access","_type":"access","_id":"AVQTau4b5xT2ZO9Ht135","_score":1.0,"_source":{"remote":"192.168.1.1","host":"-","user":"-","method":"GET","path":"/favicon.ico","code":"401","size":"605","referer":"-","agent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/48.0.2564.116 Safari/537.36"}},{"_index":"assets.access","_type":"access","_id":"AVQTau4b5xT2ZO9Ht14A","_score":1.0,"_source":{"remote":"192.168.1.1","host":"-","user":"-","method":"GET","path":"/public/

Just to be sure, quoting the Fluentd plugin manual:
"Parser removes time field from event record by default. If you want to keep time field in record, set true to keep_time_key. Default is false."

So I added keep_time_key true to the Fluentd config file, and got these results:
,{"_index":"assets.access","_type":"access","_id":"AVQnE6AH2yWbZbMVF9R9","_score":1.0,"_source":{"remote":"192.168.1.1","host":"-","user":"-","time":"16/Mar/2016:15:37:48 +0900","method":"GET","path":"/public/data_enc.zip.sha","code":"200","size":"55","referer":"-","agent":"Dalvik/2.1.0 (Linux; U; Android 5.0.1; LG-F320K Build/LRX21Y)"}},

These results don't match the index mapping! (field name and time format)

Thank you.


#15

Wow, after adding keep_time_key true and changing the mapping (field name and time format), it works in Kibana!
Thank you Daniel! :smile:
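For reference, a mapping that matches the keep_time_key output shown above could look like this (a sketch only, reusing the index/type names and the nginx timestamp format from earlier in the thread):

```
curl -XPOST 'elastic.local:9200/assets.access' -d '
{
   "mappings": {
      "access": {
         "properties": {
            "time": {
               "type": "date",
               "format": "dd/MMM/yyyy:HH:mm:ss Z"
            }
         }
      }
   }
}'
```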


(Daniel Mitterdorfer) #16

Great that it finally worked out. That was definitely tricky! :slight_smile:

Daniel


(Uemsh) #17

I have installed Kibana and Elasticsearch; while starting Kibana I see the following error:

[17:43:54.891] [error][status][plugin:elasticsearch] Status changed from green to red - [illegal_argument_exception] [field_sort] unknown field [ignore_unmapped], parser not found

Any idea how to fix this?