Filebeat Output to Solr

Hi Filebeat Community,

I have a requirement to use Filebeat to collect logs and push them directly into Solr (a SolrCloud cluster), as the client doesn't want any management/enrichment layer like Logstash in the pipeline. Checking the reference documentation, I could only find output support for Logstash, Elasticsearch, Kafka, etc.

So,

  1. What options do I have to push the output to Solr? Has anyone implemented this? If so, can you please share how?
  2. I am not so sure, but can we use the Elasticsearch output in the YAML config and still point it at a Solr URL? (See the sketch after this list for what I mean.)
  3. Is there any other route to implement the Filebeat -> Solr integration?
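
To be concrete about question 2, something like the following is what I had in mind (the host, port, and index name are placeholders); I'm not sure whether Solr would even accept the Elasticsearch-style bulk requests Filebeat sends:

```yaml
# filebeat.yml -- pointing the Elasticsearch output at a Solr node
# (placeholder host/port and index name); Solr's update API differs
# from the Elasticsearch bulk API, so this only sketches the question
output.elasticsearch:
  hosts: ["http://solr-host:8983"]
  index: "logs"
```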

Any advice or help is greatly appreciated.

Thanks,
Raja.

I don't know the details of the current Solr API, but I assume it is not going to work with our current outputs. Not surprisingly, Filebeat was designed to work best with Elasticsearch. You could probably send your data to Logstash and from there use a plugin to send it to Solr, but then you have LS in the equation for routing.
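
For reference, a minimal sketch of that route, assuming the community logstash-output-solr_http plugin and a Solr collection named "logs" (both assumptions, not something Filebeat ships with):

```yaml
# filebeat.yml -- ship log lines to Logstash instead of Elasticsearch
filebeat.inputs:
  - type: log
    paths:
      - /var/log/app/*.log   # placeholder path

output.logstash:
  hosts: ["logstash-host:5044"]
```

```ruby
# logstash.conf -- receive from Filebeat and forward events to Solr
# (requires: bin/logstash-plugin install logstash-output-solr_http)
input {
  beats {
    port => 5044
  }
}

output {
  solr_http {
    solr_url   => "http://solr-host:8983/solr/logs"  # placeholder Solr URL/collection
    flush_size => 100
  }
}
```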

I'd advise you to dig deeper into the requirements on why LS is not an option and why ES is potentially not an option :wink:

Thanks for getting back, @ruflin. Yes, Filebeat is closely integrated with the ELK stack. The client has access to a Solr-backed HadoopSearch (from a major vendor in the Hadoop space), so going with ELK is a long shot.

I will probably have to look for other collectors that work with Solr, or introduce Kafka into the pipeline so that Filebeat can talk to it.
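
If Kafka does end up in the pipeline, Filebeat's built-in Kafka output covers the Filebeat -> Kafka hop; a Solr-side consumer or connector would then index from the topic. The broker addresses and topic name below are placeholders:

```yaml
# filebeat.yml -- publish events to a Kafka topic; something on the
# Solr side (a custom consumer or a connector) indexes from there
output.kafka:
  hosts: ["kafka-broker1:9092", "kafka-broker2:9092"]
  topic: "app-logs"          # placeholder topic name
  required_acks: 1
  compression: gzip
```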
