[2017-06-12 01:00:00,155][ERROR][marvel.agent.exporter.local] local exporter [default_local] - failed to delete indices
RemoteTransportException[[data-1][x.x.x.x:9200][indices:admin/delete]]; nested: IndexNotFoundException[no such index];
Caused by: [.marvel-es-1-2017.06.05] IndexNotFoundException[no such index]
at org.elasticsearch.cluster.metadata.MetaDataDeleteIndexService$1.execute(MetaDataDeleteIndexService.java:91)
at org.elasticsearch.cluster.ClusterStateUpdateTask.execute(ClusterStateUpdateTask.java:45)
at org.elasticsearch.cluster.service.InternalClusterService.runTasksForExecutor(InternalClusterService.java:468)
at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:772)
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:231)
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:194)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
[2017-06-12 07:33:08,197][DEBUG][index.search.slowlog.query] [data-0] [datacontent-jun-2017][1] took[98.6ms], took_millis[98], types[], stats[], search_type[QUERY_THEN_FETCH], total_shards[15], source[{"size":100,"query":{"nested":{"query":{"bool":{"must":[{"term":{"List.Id":{"value":136003}}},{"terms":{"List.Id":[353174427,353339434]}}]}},"path":"List"}}}], extra_source[],
I am sending two different logs from two different locations through Filebeat to Logstash. My Filebeat configuration is:
Now the output says that the source file is /var/log/elasticsearch/escluster.log and the type is exceptions, which is correct according to your Filebeat config.
Alright, your multiline codec may be mixing your logs together. I would recommend using multiline in Filebeat instead.
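As a sketch (Filebeat 5.x-style keys; the second path and type name are placeholders for your other log source), the prospector for the Elasticsearch log would join the stack-trace continuation lines before they ever reach Logstash, while the second prospector ships its lines unmodified:

```yaml
filebeat.prospectors:
  - input_type: log
    paths:
      - /var/log/elasticsearch/escluster.log
    document_type: exceptions
    # Lines not starting with "[" (e.g. "at ...", "Caused by: ...")
    # are appended to the preceding timestamped line.
    multiline.pattern: '^\['
    multiline.negate: true
    multiline.match: after

  - input_type: log
    paths:
      - /path/to/your/second.log   # placeholder: your second log location
    document_type: other_logs      # placeholder: your second type
```

With multiline done in Filebeat, you can drop the codec from the Logstash input entirely.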
If you want to keep multiline in Logstash, you would need a separate multiline codec for each type, but I am not sure Logstash allows that on a single beats input.
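One approach that might work is a separate beats input per log type, each on its own port, with the multiline codec only on the stream that needs it. The ports below are just examples, and this would require pointing each log source at its own port (e.g. by running two Filebeat instances, since a Filebeat instance has a single Logstash output):

```
input {
  beats {
    port => 5044
    # Only the Java stack-trace stream gets the multiline codec.
    codec => multiline {
      pattern => "^\["
      negate => true
      what => "previous"
    }
  }
  beats {
    port => 5045   # second stream, no multiline needed
  }
}
```

That said, multiline codecs on beats inputs can still interleave lines when one stream carries several files, which is why doing multiline in Filebeat is the safer option.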
I think the problem is that the Java stack-trace log (i.e. the first log I posted) needs the multiline codec while the second one does not. When I ran Filebeat with a single path in the input prospector it worked fine, but when I used two input prospectors the output did not come through properly.
Why am I getting some null beats without any message or anything in them (i.e. the ones I pasted above)? Is it possible to run two Filebeats with different input prospectors?