Using Filebeat to send logs to Logstash?

Hi,

Two types of Logs:

[2017-06-12 01:00:00,155][ERROR][marvel.agent.exporter.local] local exporter [default_local] - failed to delete indices
RemoteTransportException[[data-1][x.x.x.x:9200][indices:admin/delete]]; nested: IndexNotFoundException[no such index];
Caused by: [.marvel-es-1-2017.06.05] IndexNotFoundException[no such index]
        at org.elasticsearch.cluster.metadata.MetaDataDeleteIndexService$1.execute(MetaDataDeleteIndexService.java:91)
        at org.elasticsearch.cluster.ClusterStateUpdateTask.execute(ClusterStateUpdateTask.java:45)
        at org.elasticsearch.cluster.service.InternalClusterService.runTasksForExecutor(InternalClusterService.java:468)
        at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:772)
        at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:231)
        at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:194)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
[2017-06-12 07:33:08,197][DEBUG][index.search.slowlog.query] [data-0] [datacontent-jun-2017][1] took[98.6ms], took_millis[98], types[], stats[], search_type[QUERY_THEN_FETCH], total_shards[15], source[{"size":100,"query":{"nested":{"query":{"bool":{"must":[{"term":{"List.Id":{"value":136003}}},{"terms":{"List.Id":[353174427,353339434]}}]}},"path":"List"}}}], extra_source[],

I am sending two different logs from two different locations through Filebeat to Logstash. My Filebeat configuration is:

filebeat.prospectors:

- input_type: log
  paths:
    - "/var/log/elasticsearch/escluster.log"
  document_type: exceptions

- input_type: log
  paths:
    - "/var/log/elasticsearch/escluster_index_search_slowlog.log"
  document_type: slowlogs


output.logstash:
  hosts: ["10.0.0.5:5044"]

logging:
  to_files: true
  files:
    path: /etc/filebeat/logs
  level: debug
  selectors: ["*"]

In this context the type should be slowlogs, but it is exceptions. I don't know where the problem is.

Thanks

Your filebeat config says nothing about the path we see in your output. This is confusing.

Sorry @Oozza

It was bad copy paste. I updated my question.

Thanks

Now the output says that the source file is /var/log/elasticsearch/escluster.log and the type is exceptions, which is correct according to your Filebeat config.

But the above log is in [quote="Yaswanth, post:1, topic:89030"]
/var/log/Elasticsearch/escluster_index_search_slowlog.log
[/quote]

Filebeat is showing the log with a different source and type. Is there any problem in my Logstash config file?

Alright, your multiline codec may be mixing your logs together. I would recommend using multiline in Filebeat instead.
If you want to keep multiline in Logstash, you should use one multiline codec for each type. But I am not sure whether Logstash will allow that.

Thanks @Oozza

Based on your suggestion I used multiline in Filebeat, but it is not parsing this type of log properly.

I used the configuration below:

 multiline.pattern: '^\[[0-9]{4}-[0-9]{2}-[0-9]{2}'
 multiline.negate: true
 multiline.match: after

Thanks

Can you share your Filebeat config with multiline? Is the multiline configuration under the correct prospector, and is the indentation correct?

According to this Go playground, your pattern looks good.

Thanks @steffens

This is my Filebeat configuration:

filebeat.prospectors:

- input_type: log
  paths:
    - "/var/log/elasticsearch/escluster.log"
  document_type: exceptions

- input_type: log
  paths:
    - "/var/log/elasticsearch/escluster_index_search_slowlog.log"
  document_type: slowlogs

multiline.pattern: '^\[[0-9]{4}-[0-9]{2}-[0-9]{2}'
multiline.negate: true
multiline.match: after

output.logstash:
  hosts: ["10.0.0.5:5044"]


logging:
  to_files: true
  files:
    path: /etc/filebeat/logs
  level: debug
  selectors: ["*"]

I think the problem is that only the Java stack-trace log (i.e. the first log I posted) needs the multiline codec; the second one does not. When I ran Filebeat with one path in the input prospector it worked fine, but when I used two input prospectors the output did not come out properly.

Why am I getting some null beats without any message or anything in them (i.e. what I pasted above)? Is it possible to run two Filebeat instances with different input prospectors?

Thanks

The multiline configuration is per prospector and needs to be indented at the same level as the other prospector options. For example:

- input_type: log
  paths:
    - "/var/log/elasticsearch/escluster_index_search_slowlog.log"
  document_type: slowlogs
  multiline.pattern: '^\[[0-9]{4}-[0-9]{2}-[0-9]{2}'
  multiline.negate: true
  multiline.match: after
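For what it's worth, applied to the full config posted earlier, that would look like the sketch below. Since it was noted above that only the stack-trace log needs joining, multiline is attached to the exceptions prospector here; adjust per prospector as needed:

```yaml
filebeat.prospectors:

- input_type: log
  paths:
    - "/var/log/elasticsearch/escluster.log"
  document_type: exceptions
  # indented under this prospector, so multiline applies only to this input
  multiline.pattern: '^\[[0-9]{4}-[0-9]{2}-[0-9]{2}'
  multiline.negate: true
  multiline.match: after

- input_type: log
  paths:
    - "/var/log/elasticsearch/escluster_index_search_slowlog.log"
  document_type: slowlogs

output.logstash:
  hosts: ["10.0.0.5:5044"]
```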

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.