Azure EventHub Plugin for Logstash Errors

We have Logstash deployed on Kubernetes, running as 2 pods. My main pipeline is configured to receive events from 2 separate EventHub instances.

Here's my Pipeline Input:

input {
    azure_event_hubs {
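        # advanced mode lets each event hub below carry its own connection string and consumer group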
        config_mode => "advanced"
        decorate_events => true
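        # Azure Blob Storage connection string; the plugin uses it to persist checkpoints so multiple Logstash instances can share the partitions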
        storage_connection => "#{EventHubStorageConnection}#"
        event_hubs => [
            {
                "#{EventHubProdName}#" => {
                    event_hub_connection => "#{EventHubProdConnection}#"
                    consumer_group => "#{EventHubProdConsumerGroup}#"
                }
            },
            {
                "#{EventHubPreProdName}#" => {
                    event_hub_connection => "#{EventHubPreProdConnection}#"
                    consumer_group => "#{EventHubPreProdConsumerGroup}#"
                }
            }
        ]
    }
}
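
For readability, here is the same input with the redacted #{...}# tokens replaced by made-up example values. The names and connection strings below are purely illustrative, and the commented-out storage_container line is an optional plugin setting we don't currently use:

input {
    azure_event_hubs {
        config_mode => "advanced"
        decorate_events => true
        # made-up Azure Storage connection string used for checkpointing
        storage_connection => "DefaultEndpointsProtocol=https;AccountName=examplestorage;AccountKey=<redacted>;EndpointSuffix=core.windows.net"
        # storage_container => "logstash-checkpoints"   # optional setting (defaults to the event hub name)
        event_hubs => [
            {
                "example-prod-hub" => {
                    event_hub_connection => "Endpoint=sb://example-ns.servicebus.windows.net/;SharedAccessKeyName=logstash;SharedAccessKey=<redacted>"
                    consumer_group => "logstash-prod"
                }
            },
            {
                "example-preprod-hub" => {
                    event_hub_connection => "Endpoint=sb://example-ns.servicebus.windows.net/;SharedAccessKeyName=logstash;SharedAccessKey=<redacted>"
                    consumer_group => "logstash-preprod"
                }
            }
        ]
    }
}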

I'm able to pull messages from EventHub; however, I see a ton of errors in my Logstash logs, and I'm trying to figure out what's going on.

Primarily, I see a lot of warnings like this one:

[2023-11-09T12:26:21,277][WARN ][com.microsoft.azure.eventhubs.impl.MessageReceiver][main][f7cd609664f9c6569d47d23f2cfa20cd07c9bd17585b47492dfde701e196a62c] clientId[PR_3a896f_1699532768250_MF_e723ae_1699532768092-InternalReceiver], receiverPath[#{EventHubPreProdName}#/ConsumerGroups/#{EventHubPreProdConsumerGroup}#/Partitions/0], linkName[LN_e74f6e_1699532768267_bc9c_G6], onError: com.microsoft.azure.eventhubs.ReceiverDisconnectedException: New receiver 'nil' with higher epoch of '249731' is created hence current receiver 'nil' with epoch '249731' is getting disconnected. If you are recreating the receiver, make sure a higher epoch is used. TrackingId:c417e8fd00016dfe0024ca29654ccfe0_G6_B40, SystemTracker:#{HOST}#:eventhub:#{EventHubPreProdName}#~1023|#{EventHubPreProdConsumerGroup}#, Timestamp:2023-11-09T12:26:21

They eventually lead to an exception:

java.util.concurrent.CompletionException: com.microsoft.azure.eventhubs.ReceiverDisconnectedException: New receiver 'nil' with higher epoch of '249731' is created hence current receiver 'nil' with epoch '249731' is getting disconnected. If you are recreating the receiver, make sure a higher epoch is used. TrackingId:c417e8fd00016dfe0024ca29654ccfe0_G6_B40, SystemTracker:#{HOST}#:eventhub:#{EventHubPreProdName}#~1023|#{EventHubPreProdConsumerGroup}#, Timestamp:2023-11-09T12:26:21, errorContext[NS: #{HOST}#.servicebus.windows.net, PATH: #{EventHubPreProdName}#/ConsumerGroups/#{EventHubPreProdConsumerGroup}#/Partitions/0, REFERENCE_ID: LN_e74f6e_1699532768267_bc9c_G6, PREFETCH_COUNT: 300, LINK_CREDIT: 264, PREFETCH_Q_LEN: 0]
	at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:332) ~[?:?]
	at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:347) ~[?:?]
	at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:636) ~[?:?]
	at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:510) ~[?:?]
	at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2162) ~[?:?]
	at com.microsoft.azure.eventhubs.impl.ExceptionUtil.completeExceptionally(ExceptionUtil.java:104) ~[azure-eventhubs-2.3.2.jar:?]
	at com.microsoft.azure.eventhubs.impl.MessageReceiver.drainPendingReceives(MessageReceiver.java:466) ~[azure-eventhubs-2.3.2.jar:?]
	at com.microsoft.azure.eventhubs.impl.MessageReceiver.onError(MessageReceiver.java:451) ~[azure-eventhubs-2.3.2.jar:?]
	at com.microsoft.azure.eventhubs.impl.MessageReceiver.onClose(MessageReceiver.java:746) ~[azure-eventhubs-2.3.2.jar:?]
	at com.microsoft.azure.eventhubs.impl.BaseLinkHandler.processOnClose(BaseLinkHandler.java:70) ~[azure-eventhubs-2.3.2.jar:?]
	at com.microsoft.azure.eventhubs.impl.BaseLinkHandler.handleRemoteLinkClosed(BaseLinkHandler.java:106) ~[azure-eventhubs-2.3.2.jar:?]
	at com.microsoft.azure.eventhubs.impl.BaseLinkHandler.onLinkRemoteClose(BaseLinkHandler.java:48) ~[azure-eventhubs-2.3.2.jar:?]
	at org.apache.qpid.proton.engine.BaseHandler.handle(BaseHandler.java:176) ~[proton-j-0.33.9.jar:?]
	at org.apache.qpid.proton.engine.impl.EventImpl.dispatch(EventImpl.java:108) ~[proton-j-0.33.9.jar:?]
	at org.apache.qpid.proton.reactor.impl.ReactorImpl.dispatch(ReactorImpl.java:324) ~[proton-j-0.33.9.jar:?]
	at org.apache.qpid.proton.reactor.impl.ReactorImpl.process(ReactorImpl.java:291) ~[proton-j-0.33.9.jar:?]
	at com.microsoft.azure.eventhubs.impl.MessagingFactory$RunReactor.run(MessagingFactory.java:512) ~[azure-eventhubs-2.3.2.jar:?]
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539) [?:?]
	at java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304) [?:?]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) [?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) [?:?]
	at java.lang.Thread.run(Thread.java:833) [?:?]
Caused by: com.microsoft.azure.eventhubs.ReceiverDisconnectedException: New receiver 'nil' with higher epoch of '249731' is created hence current receiver 'nil' with epoch '249731' is getting disconnected. If you are recreating the receiver, make sure a higher epoch is used. TrackingId:c417e8fd00016dfe0024ca29654ccfe0_G6_B40, SystemTracker:#{HOST}#:eventhub:#{EventHubPreProdName}#~1023|#{EventHubPreProdConsumerGroup}#, Timestamp:2023-11-09T12:26:21, errorContext[NS: #{HOST}#.servicebus.windows.net, PATH: #{EventHubPreProdName}#/ConsumerGroups/#{EventHubPreProdConsumerGroup}#/Partitions/0, REFERENCE_ID: LN_e74f6e_1699532768267_bc9c_G6, PREFETCH_COUNT: 300, LINK_CREDIT: 264, PREFETCH_Q_LEN: 0]
	at com.microsoft.azure.eventhubs.impl.ExceptionUtil.toException(ExceptionUtil.java:35) ~[azure-eventhubs-2.3.2.jar:?]
	at com.microsoft.azure.eventhubs.impl.MessageReceiver.onClose(MessageReceiver.java:745) ~[azure-eventhubs-2.3.2.jar:?]
	... 14 more

This happens for both EventHub connections, and it recurs constantly.

We're still able to consume events and get them into Elasticsearch, but this can hardly be optimal.
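
For completeness, the output side of the pipeline is a plain elasticsearch output along these lines (host and index values are again illustrative):

output {
    elasticsearch {
        hosts => ["https://example-es.internal:9200"]
        index => "eventhub-logs-%{+YYYY.MM.dd}"
        # authentication and TLS settings omitted here
    }
}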
