Hi,
I want to implement a Logstash pipeline that sends unprocessed data to S3 and processed data to Elasticsearch in parallel.
I currently have the implementation below, but data stops publishing to S3 whenever Elasticsearch is not responding.
How can I run both outputs (publish to S3 and publish to ES) independently, so that neither blocks the other?
I am planning to use this for an auto-scaling application, so the ELK pipeline cannot be stopped at any time.
- pipeline.id: beats-server
  queue.type: persisted
  config.string: |
    input {
      beats {
        port => 5044
      }
    }
    output {
      pipeline { send_to => [toes, tos3] }
    }
###
- pipeline.id: toes-processing
  queue.type: persisted
  config.string: |
    input { pipeline { address => toes } }
    output {
      elasticsearch {
      }
    }
###
- pipeline.id: tos3-processing
  config.string: |
    input { pipeline { address => tos3 } }
    output {
      s3 {
      }
    }
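For reference, the closest thing I have found is the "output isolator pattern" in the Logstash pipeline-to-pipeline docs, where every downstream pipeline gets its own persisted queue so a stalled output buffers to disk instead of immediately back-pressuring the upstream pipeline. A sketch of what I think that would look like here (the `queue.max_bytes` values are placeholders I made up, not recommendations) — I am not sure whether this fully decouples the two outputs once a queue fills:

```yaml
# Output isolator sketch: both downstream pipelines use persisted queues,
# so a blocked elasticsearch output drains into its own on-disk queue
# while the tos3 pipeline keeps consuming from beats-server.
- pipeline.id: toes-processing
  queue.type: persisted
  queue.max_bytes: 2gb   # placeholder disk budget while ES is down
  config.string: |
    input { pipeline { address => toes } }
    output { elasticsearch { } }
###
- pipeline.id: tos3-processing
  queue.type: persisted
  queue.max_bytes: 2gb   # placeholder
  config.string: |
    input { pipeline { address => tos3 } }
    output { s3 { } }
```

My understanding from the docs is that if either persisted queue fills completely, back-pressure still propagates to beats-server and both outputs stop, so the queue size only buys time rather than true independence. Is that correct, or is there a better way?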