Big data: Log analysis with ELK and Spark

I'm new to the ELK stack and Big Data. I'm trying to build a one-node log analytics tool based on ELK + Kafka + Spark machine learning.
I'm trying to implement the lambda architecture with the following layers:
Batch: Spark
Speed: Logstash+ES
Serving: Kibana
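
For the speed layer, the Kafka-to-Elasticsearch hop is typically a Logstash pipeline. A minimal sketch might look like the following (the topic name `logs`, the broker address, and the index pattern are assumptions, not values from this thread):

```conf
# speed-layer.conf -- Logstash pipeline sketch (all names are placeholders)
input {
  kafka {
    bootstrap_servers => "localhost:9092"   # assumed single-node Kafka
    topics            => ["logs"]           # assumed topic name
    codec             => "json"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]             # assumed single-node ES
    index => "logs-%{+YYYY.MM.dd}"          # daily indices, Logstash-style
  }
}
```

Kibana (the serving layer) would then point at the `logs-*` index pattern.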
I want to check whether this is the right implementation, or whether you would suggest a better architecture.
If you know of a useful tutorial, I would be thankful.

This looks like a perfectly reasonable approach. Note, though, that there are a few differences between how the Hadoop connector (elasticsearch-hadoop, which Spark would use) works and how Logstash approaches things, for example around index naming and mappings. These differences aren't documented in one place, since they are separate projects, but they are worth being aware of.
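One concrete difference worth knowing: Logstash by default writes to daily rolling indices (as in `logs-%{+YYYY.MM.dd}`), while elasticsearch-hadoop writes to whatever single resource you name unless you opt into a dynamic pattern. A hedged sketch of the relevant Spark-side settings (node address and index name are assumptions):

```conf
# elasticsearch-hadoop settings sketch, e.g. passed via spark-submit --conf
# (prefix Spark configs with "spark."; values below are placeholders)
spark.es.nodes           localhost
spark.es.port            9200
# To mirror Logstash's daily indices, es-hadoop supports date-formatted
# dynamic index names based on a document field:
spark.es.resource.write  logs-{@timestamp|yyyy.MM.dd}
```

If the batch layer writes to the same `logs-*` pattern the speed layer uses, Kibana can serve both from one index pattern.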

