Thanks for helping me with all of this. Now I am facing a weird problem.
These are my logs:
2017-01-03 05:40:50.522 INFO main ---> org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker : Bean 'org.springframework.retry.annotation.RetryConfiguration' of type [class org.springframework.retry.annotation.RetryConfiguration$$EnhancerBySpringCGLIB$$88c2216e] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2017-01-03 05:40:50.543 INFO main ---> org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker : Bean 'configurationPropertiesRebinderAutoConfiguration' of type [class org.springframework.cloud.autoconfigure.ConfigurationPropertiesRebinderAutoConfiguration$$EnhancerBySpringCGLIB$$af188c46] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
And this is my Logstash server configuration file.
Now I want to sort the data by event_time, but when I try to do that, it says fielddata is not set to true on the field. Can somebody help me find the mistake?
{
  "error": {
    "root_cause": [
      {
        "type": "illegal_argument_exception",
        "reason": "Fielddata is disabled on text fields by default. Set fielddata=true on [event_time] in order to load fielddata in memory by uninverting the inverted index. Note that this can however use significant memory."
      }
    ],
    "type": "search_phase_execution_exception",
    "reason": "all shards failed",
    "phase": "query",
    "grouped": true,
    "failed_shards": [
      {
        "shard": 0,
        "index": "logstash-2017.07.03",
        "node": "FDpTDtSMQo2YpsSoSrPKGg",
        "reason": {
          "type": "illegal_argument_exception",
          "reason": "Fielddata is disabled on text fields by default. Set fielddata=true on [event_time] in order to load fielddata in memory by uninverting the inverted index. Note that this can however use significant memory."
        }
      }
    ],
    "caused_by": {
      "type": "illegal_argument_exception",
      "reason": "Fielddata is disabled on text fields by default. Set fielddata=true on [event_time] in order to load fielddata in memory by uninverting the inverted index. Note that this can however use significant memory."
    }
  },
  "status": 400
}
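For context, this error is produced by a sort on event_time while that field is mapped as text. The request is roughly of this shape (the host, index, and body here are illustrative, not the exact query that was run):

curl -XGET 'localhost:9200/logstash-2017.07.03/_search?pretty' -H 'Content-Type: application/json' -d '
{
  "sort": [ { "event_time": { "order": "desc" } } ],
  "size": 10
}'

Sorting on event_time.keyword would sidestep the fielddata error if the default Logstash template created that sub-field, but the cleaner fix, as discussed below, is to get event_time mapped as a date.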
The default field for the event's timestamp is @timestamp. Unless you really want the field to be named event_time, you can save yourself some trouble by sticking to the defaults.
It could be that ES doesn't recognize "2017-01-03 05:40:50.522" as a timestamp and therefore mapped the field as text. If you use the date filter you can transform the timestamp into something that ES will recognize as a timestamp and then your query should work just fine. (But note that you'll have to reindex to change the mapping of the event_time field.)
[quote="magnusbaeck, post:4, topic:91636"]
If you use the date filter you can transform the timestamp into something that ES will recognize as a timestamp and then your query should work just fine. (But note that you'll have to reindex to change the mapping of the event_time field.)
[/quote]
Thanks. I am struggling to visualize this part. Can you please give me a small example of how I can proceed with that?
Hi @magnusbaeck, I believe there is some confusion in our communication. I don't want to convert or rename @timestamp to event_time. Please check my Logstash filter; I want to break the log message into event_time, log level, and other parts.
Now this event_time is coming through as text, but I want it as a date.
I know.
Use the date filter to parse the event_time field. You can save yourself some trouble by saving the parsed result into the @timestamp field, but you can also use the target option to overwrite the existing event_time value with the parsed value.
The string parsed by the date filter will be recognized as a timestamp by Elasticsearch, but the mapping of existing indexes will not change. I suggest you delete your current index(es) and have Logstash recreate them.
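A minimal sketch of what that could look like, assuming your log lines follow the format shown in this thread; the grok pattern and the field names level, thread, logger, and log_message below are placeholders, not your actual configuration:

filter {
  grok {
    # e.g. 2017-01-03 05:40:50.522 INFO main --- com.example.Foo : some message
    match => {
      "message" => "%{TIMESTAMP_ISO8601:event_time}\s+%{LOGLEVEL:level}\s+%{NOTSPACE:thread}\s+-+>?\s+%{JAVACLASS:logger}\s*:\s*%{GREEDYDATA:log_message}"
    }
  }
  date {
    # Parse the extracted string; by default the result is written to @timestamp.
    match => ["event_time", "yyyy-MM-dd HH:mm:ss.SSS"]
    # Add target => "event_time" if you want the parsed date to replace event_time itself.
  }
}

With something like this in place, new events get a proper timestamp, but as noted above, existing indexes keep their old mapping until they are recreated.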
See, these are my logs, which also contain multi-line stack traces.
2017-01-03 05:40:49.681 INFO main --- org.springframework.context.annotation.AnnotationConfigApplicationContext : Refreshing org.springframework.context.annotation.AnnotationConfigApplicationContext@41d16cc3: startup date [Tue Jan 03 05:40:49 UTC 2017]; root of context hierarchy
2017-01-03 05:40:49.693 INFO main --- com.getsentry.raven.DefaultRavenFactory : Using an HTTP connection to Sentry.
2017-01-03 05:40:49.935 INFO background-preinit --- org.hibernate.validator.internal.util.Version : HV000001: Hibernate Validator 5.2.4.Final
2017-01-03 05:40:50.355 INFO main --- org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor : JSR-330 'javax.inject.Inject' annotation found and supported for autowiring
2017-01-03 05:40:50.522 INFO main --- org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker : Bean 'org.springframework.retry.annotation.RetryConfiguration' of type [class org.springframework.retry.annotation.RetryConfiguration$$EnhancerBySpringCGLIB$$88c2216e] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2017-01-03 05:40:50.543 INFO main --- org.springframework.context.support.PostProcessorRegistrationDelegate$BeanPostProcessorChecker : Bean 'configurationPropertiesRebinderAutoConfiguration' of type [class org.springframework.cloud.autoconfigure.ConfigurationPropertiesRebinderAutoConfiguration$$EnhancerBySpringCGLIB$$af188c46] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2017-01-03 05:40:51.430 INFO main --- com.getsentry.raven.connection.AsyncConnection : Gracefully shutdown sentry threads.
2017-01-03 05:40:52.430 WARN main --- com.getsentry.raven.connection.AsyncConnection : Graceful shutdown took too much time, forcing the shutdown.
2017-01-03 05:40:52.430 INFO main --- com.getsentry.raven.connection.AsyncConnection : 5 tasks failed to execute before the shutdown.
2017-01-03 05:40:52.430 INFO main --- com.getsentry.raven.connection.AsyncConnection : Shutdown finished.
2017-01-03 05:40:52.445 INFO main --- com.getsentry.raven.DefaultRavenFactory : Using an HTTP connection to Sentry.
You've configured the date filter to save the parsed result in the @timestamp field, yet it's the event_time field you're trying to sort on. That doesn't make sense.
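If sorting on event_time is the goal, one way (a sketch, not your exact config) is to point the date filter at that field explicitly:

date {
  match => ["event_time", "yyyy-MM-dd HH:mm:ss.SSS"]
  # Without target, the parsed value only goes into @timestamp and event_time stays a plain string.
  target => "event_time"
}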
Output: grok parse failure. There is no event_time in it.
That's my stupidity. Now I checked the mapping, and I can find event_time as a date field. Thanks to you.
But for my Kibana query, I am getting only this:
{
  "error": {
    "root_cause": [
      {
        "type": "illegal_argument_exception",
        "reason": "Fielddata is disabled on text fields by default. Set fielddata=true on [event_time] in order to load fielddata in memory by uninverting the inverted index. Note that this can however use significant memory. Alternatively use a keyword field instead."
      }
    ],
    "type": "search_phase_execution_exception",
    "reason": "all shards failed",
    "phase": "query",
    "grouped": true,
    "failed_shards": [
      {
        "shard": 0,
        "index": "logstash-2017.07.04",
        "node": "suo9gTyRRxWBZiqGOt3nzg",
        "reason": {
          "type": "illegal_argument_exception",
          "reason": "Fielddata is disabled on text fields by default. Set fielddata=true on [event_time] in order to load fielddata in memory by uninverting the inverted index. Note that this can however use significant memory. Alternatively use a keyword field instead."
        }
      }
    ]
  },
  "status": 400
}
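The index named in this second error, logstash-2017.07.04, was presumably created while event_time was still mapped as text, so it keeps that mapping. A rough way to confirm and clean up, following the earlier advice to delete the index and let Logstash recreate it (the host is an assumption; adjust to your cluster):

# Check how event_time is mapped in the failing index:
curl -XGET 'localhost:9200/logstash-2017.07.04/_mapping?pretty'

# If event_time is still "text" there, delete the index and let Logstash recreate it with the new mapping:
curl -XDELETE 'localhost:9200/logstash-2017.07.04'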