Updating Custom HTTP ingestion results in a 504 error

I have added an integration to my stack running on Azure Kubernetes using the "Custom HTTP" integration. When I first create the integration, everything works as expected and documents are ingested when I send them to the endpoint.
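For reference, this is roughly how I send documents to the endpoint. The host, port, and path here are placeholders for my own setup, not values from the integration docs:

```shell
# Send a test document to the Custom HTTP (http_endpoint) listener.
# Host, port (8080), and path (/user-activity-logger-staging) are
# examples from my configuration - substitute your own.
curl -X POST "http://my-agent-host:8080/user-activity-logger-staging" \
  -H "Content-Type: application/json" \
  -d '{"event": "login", "user": "test-user"}'
```

Before updating the integration this returns a 2xx response and the document shows up in the target data stream; after updating it, the same request returns the 504.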

However, if I update the integration configuration (for example, changing the URL path) and then save the integration, it no longer indexes documents and returns a 504 status code instead. Looking through the logs, I see the following error message:

[elastic_agent.filebeat][error] Input 'http_endpoint' failed with: unable to start server due to error: pattern already exists for /user-activity-logger-staging old=http_endpoint-http_endpoint.generic-0b97de64-092f-48b0-a086-ea9aa099c13e new=http_endpoint-http_endpoint.generic-146f5ddf-bdf0-4633-a4d3-ba681d75131a

In this example the "new" and "old" patterns are different, but I have also observed this error where the "new" and "old" patterns are identical.

Anyone know how to sort this out? The only fix I have found so far is to delete the integration and recreate it with a new port number; reusing the same port number triggers the error message again.

It looks like I have found the solution to this problem, at least on Kubernetes. After making a change to the integration, the pods running the agents need to be restarted. I do this by running the following commands:

To stop the agents:

kubectl scale deployment/{name of agent deployment} --replicas=0

Then to start them up again:

kubectl scale deployment/{name of agent deployment} --replicas=n

Where n is the number of agents you want running. Once Fleet updates itself, the logs show the HTTP endpoint being started again.
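If your agents run as a Deployment, a rolling restart should achieve the same thing without scaling to zero first. This is a sketch assuming a deployment named elastic-agent; substitute your own name, and note that Elastic Agent is often deployed as a DaemonSet instead, in which case use `daemonset/` rather than `deployment/`:

```shell
# Restart the agent pods in place so they pick up the updated
# integration policy and re-register the http_endpoint listener.
# "elastic-agent" is an assumed deployment name - substitute yours.
kubectl rollout restart deployment/elastic-agent

# Optionally wait until the restart has completed.
kubectl rollout status deployment/elastic-agent
```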

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.