To avoid having duplicates you would need to set your own unique id for your documents instead of letting Elasticsearch auto-generate one.
If your original documents already have a unique id, you can use it as the document _id field in Elasticsearch.
If they do not have a unique id, you can create one by combining some fields with the fingerprint filter.
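As a rough sketch, assuming your events could be uniquely identified by a combination of fields (the field names below are just placeholders), the fingerprint filter could look something like this:

```
filter {
  fingerprint {
    # placeholder field names: use whichever fields uniquely identify your events
    source => ["host", "timestamp", "message"]
    concatenate_sources => true
    method => "SHA256"
    # storing the hash under @metadata keeps it out of the indexed document
    target => "[@metadata][fingerprint]"
  }
}
```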
In both cases you would then need to set the document_id option in the elasticsearch output of your Logstash pipeline.
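A minimal sketch of that output, reusing the fingerprint from the filter above (the hosts and index values are placeholders; if your documents have their own id field, you would reference that field instead, e.g. %{my_id}):

```
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "my-index"
    # with a deterministic _id, a re-processed event overwrites the
    # existing document instead of creating a duplicate
    document_id => "%{[@metadata][fingerprint]}"
  }
}
```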
You can read more in this blog post by Elastic.
Anyway, the likelihood of getting duplicates when using Kafka and Logstash is pretty low; I've been running many pipelines with this configuration and never faced a situation that caused a duplicated message.
But as rcowart said, it can happen. In my case, in all those years I never felt the need to tackle a problem that, for me, did not exist.