I have a requirement to use Filebeat to collect logs and push them into Solr (a SolrCloud cluster) directly, as the client doesn't want to use any management/enrichment layer like Logstash in the pipeline. Checking the reference documentation, I could only find output support for Logstash, Elasticsearch, Kafka, etc.
So,
What options do I have to push the output to Solr? Has anyone implemented this? If so, can you please share how?
I am not sure, but can we use the Elasticsearch output in the YAML config and still point it at the Solr URL?
Is there any other route to implement the Filebeat -> Solr integration?
I don't know the details of the current Solr API, but I assume it is not going to work with our current outputs. Not surprisingly, Filebeat was designed to work best with Elasticsearch. You could probably send your data to Logstash and from there use a plugin to send it to Solr. Then you have LS in the equation for routing.
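To make the Logstash route concrete, here is a minimal pipeline sketch. It assumes the community `logstash-output-solr_http` plugin is installed (`bin/logstash-plugin install logstash-output-solr_http`); the Solr URL and collection name ("logs") are placeholders you would replace with your own:

```conf
# Hypothetical Logstash pipeline: receive events from Filebeat over the
# Beats protocol, then forward them to Solr via the solr_http plugin.
input {
  beats {
    port => 5044
  }
}

output {
  solr_http {
    # Placeholder: point this at your SolrCloud collection endpoint.
    solr_url => "http://localhost:8983/solr/logs"
  }
}
```

On the Filebeat side you would then configure `output.logstash` with `hosts: ["logstash-host:5044"]` to match the beats input port above.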
I'd advise you to dig deeper into the requirements: why LS is not an option, and why potentially ES is not an option.
Thanks for getting back, @ruflin. Yes, Filebeat is closely integrated with the ELK stack. The client has access to a Solr-backed Hadoop search product (from a major vendor in the Hadoop space), so going with ELK is a long shot.
I might have to look for other collectors that work with Solr, or introduce Kafka into the pipeline so that Filebeat can talk to it.
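For the Kafka route, a sketch of the Filebeat side might look like the following. This uses Filebeat's built-in Kafka output; the log paths, broker addresses, and topic name are placeholders, and a separate consumer (e.g. a custom indexer or a connector) would read the topic and write documents into Solr:

```yaml
# Hypothetical filebeat.yml: ship log lines to a Kafka topic instead of
# Elasticsearch/Logstash. A downstream consumer indexes them into Solr.
filebeat.inputs:
  - type: log
    paths:
      - /var/log/app/*.log   # placeholder path

output.kafka:
  hosts: ["kafka1:9092", "kafka2:9092"]   # placeholder brokers
  topic: "filebeat-logs"                  # placeholder topic
  codec.json:
    pretty: false
```

This keeps Filebeat entirely within its supported outputs and moves the Solr-specific work to the consumer side of Kafka.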