Registry HTTP Endpoint?

We have some fairly involved monitoring built on top of reading the registry and then inspecting the files at the offsets it records. It's pretty great, but it means our view can be up to registry_flush behind. This isn't a huge deal except when a server is in the process of being decommissioned, and we'd like to have a more accurate view of the system.

What would you all think of exposing a registry endpoint, maybe within the existing pprof HTTP endpoint? I would be willing to submit a patch, and have already signed the Elastic CLA. I suspect there is locking around the registry, so I understand that if we were to poll it very quickly we could put ourselves in a very bad position, and I would be willing to document that clearly.

I like the idea of having metrics exposed on our http endpoint. I'm currently improving our metric gathering to support different API endpoints: https://github.com/elastic/beats/pull/6836 This should make things like the above easier / possible.

Can you share a bit more detail on which metrics you are collecting at the moment and what exactly you are monitoring?

Sure! So basically we read the registry, find the files that are still on disk, and seek to the offsets within.

Once we seek to that offset we can parse the next log line and see how old it is. Then we simply calculate the difference between that timestamp and "now", and we know how far behind Filebeat is.
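
In rough Go terms the whole check boils down to something like this (a minimal sketch: the registry path, the entry fields, and the assumption that each line starts with an RFC3339 timestamp are all specific to our setup):

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"io"
	"os"
	"strings"
	"time"
)

// Minimal shape of a registry entry; only the fields the check needs.
type registryEntry struct {
	Source string `json:"source"`
	Offset int64  `json:"offset"`
}

func main() {
	raw, err := os.ReadFile("/var/lib/filebeat/registry") // path is an assumption
	if err != nil {
		panic(err)
	}
	var entries []registryEntry
	if err := json.Unmarshal(raw, &entries); err != nil {
		panic(err)
	}
	for _, e := range entries {
		f, err := os.Open(e.Source)
		if err != nil {
			continue // file was rotated away or deleted; skip it
		}
		// Seek to the last shipped offset and read the next unshipped line.
		if _, err := f.Seek(e.Offset, io.SeekStart); err == nil {
			if line, err := bufio.NewReader(f).ReadString('\n'); err == nil {
				// Assumes the line starts with an RFC3339 timestamp and a space.
				if ts, err := time.Parse(time.RFC3339, strings.SplitN(line, " ", 2)[0]); err == nil {
					fmt.Printf("%s: %v behind\n", e.Source, time.Since(ts).Truncate(time.Second))
				}
			}
		}
		f.Close()
	}
}
```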

I would simply expose the entire registry, as it currently is, under /vars/registry.

Ok, I see. It should be fairly easy to add an API endpoint that loads the registry from the file and shows it in the HTTP output. As this would only happen on request, I would not expect much overhead.
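
As a standalone sketch (not actual Filebeat code; the registry path, the port, and the /vars/registry mount point are placeholders), the handler would be little more than:

```go
package main

import (
	"log"
	"net/http"
	"os"
)

func main() {
	// Serve the on-disk registry file verbatim as JSON.
	http.HandleFunc("/vars/registry", func(w http.ResponseWriter, r *http.Request) {
		raw, err := os.ReadFile("/var/lib/filebeat/registry")
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		w.Header().Set("Content-Type", "application/json")
		w.Write(raw)
	})
	log.Fatal(http.ListenAndServe("localhost:5066", nil)) // port is an assumption
}
```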

But TBH I'm hoping we can provide some stats on this from our side in the future. We would not necessarily process the log line (as we don't know the content, and the timestamp there is not necessarily representative), but we should be able to provide some data on the difference between the file size and the offset we are reading, and on how many events are in the queue.

In general my experience is that Filebeat normally reads faster than most systems write logs. So when Filebeat is behind, the reason is usually not its reading speed but the sending, in case the output is too slow. Now that we have a queue, we should be able to provide better stats for the cases above.

@frioux What are the use cases for you where Filebeat is behind on reading?

Yeah the best stat you could emit would be bytes behind in total, and maybe also per file.
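
That stat would also be cheap to compute, something like this (same assumed entry shape as my earlier sketch; input shown with placeholder data):

```go
package main

import (
	"fmt"
	"os"
)

type registryEntry struct {
	Source string `json:"source"`
	Offset int64  `json:"offset"`
}

// bytesBehind returns the current file size minus the shipped offset,
// per file and in total.
func bytesBehind(entries []registryEntry) (total int64, perFile map[string]int64) {
	perFile = make(map[string]int64)
	for _, e := range entries {
		info, err := os.Stat(e.Source)
		if err != nil {
			continue // rotated or deleted; nothing left to measure
		}
		if lag := info.Size() - e.Offset; lag > 0 {
			perFile[e.Source] = lag
			total += lag
		}
	}
	return total, perFile
}

func main() {
	// Placeholder input; in practice the entries come from the parsed registry.
	total, perFile := bytesBehind([]registryEntry{{Source: "/var/log/syslog", Offset: 0}})
	fmt.Println(total, perFile)
}
```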

We monitor actual time behind for the obvious reasons. Filebeat has a well-known pathology: if you end up with thousands of log files it just can't keep up. We have basically resolved that issue on our end by limiting the creation of new log files, but there are always other bugs we don't know about. It would be nice not to have to calculate the bytes we are behind ourselves, but there is no way we would stop actually parsing the log line, since that is the closest thing to measuring what actually matters.

Aside from that, most of the stats we are interested in are measured within Kafka, since that's central and cheaper than measuring at every host.

Should I draft a PR to expose the registry or do you think you would rather do that?

Oh btw, we actually don't need help reading the file off the disk; we are capable of that :slight_smile: I was more thinking that if we could read the in-memory data structures in the same format, we'd get a more up-to-date view of our backlog.

Our registry_flush is set to 30s because we found that flushing more often had a real impact on the system, so reading the file could give us information that is up to 30s old.

A potentially interesting note here is that the real-time state is kept in memory per prospector. Perhaps we could expose the real-time registry data per prospector instead of having it all in one place.

That sounds reasonable. Should I try to put together a pull request to implement this?

I'm afraid I have to push back here. Reworking the registry support is still a big outstanding topic. It's basically a complete rewrite: tighter integration with libbeat, improved shutdown handling/timing, removing complicated logic from filebeat, more efficient updates to disk (a flush with a big registry is super expensive right now), generalizing the entry format in order to support multiple source types...

We are aware that we've got many users reading the registry file in custom scripts, so we will have to see how we can properly support this use case. Some possible solutions we have in mind:

  • HTTP endpoint (via unix socket?) to query the complete state (client sketch after this list)
  • filebeat registry state command to query the current state
  • Event stream that scripts can subscribe to
  • Alternative 'backends', still outputting the old (inefficient) JSON format
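
For the first option, a client could look roughly like this (the socket path and URL are illustrative only; none of this is a shipped Filebeat API):

```go
package main

import (
	"context"
	"fmt"
	"io"
	"net"
	"net/http"
)

func main() {
	// Route all requests over the unix socket; the host in the URL is ignored.
	client := &http.Client{
		Transport: &http.Transport{
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				return (&net.Dialer{}).DialContext(ctx, "unix", "/var/run/filebeat.sock")
			},
		},
	}
	resp, err := client.Get("http://filebeat/vars/registry")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body))
}
```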

But whatever we build now to query the registry might either complicate the refactoring (more existing features we have to keep backwards compatible) or break (at worst, be removed) if required.

I would recommend not using the local prospector states. Relying on the per-prospector state is somewhat unsafe: the local prospector state is updated before the outputs have confirmed anything. The individual prospector states are combined into the global registry only after the events/documents have been ACKed by the outputs. That is, the global state is 1) always behind the local prospector states and 2) still available even after a prospector/harvester has finished or its state has been gc'ed. Upon restart, filebeat resumes from the global registry state as well. So if filebeat is restarted or just blocked by unavailable outputs, you might have used a state value that is not "true". The future of the local prospector registry/state is not clear yet; with the registry rewrite, the local state might not be required in its current form anymore.

Totally fair! I just know that we have hosts that could shut down sooner if they didn't have to wait until filebeat flushed to disk. I'll wait for the big refactor and see how it looks then. Feel free to reach out when you are making decisions and I can say whether they'd work for us; otherwise, worst case, we can keep reading the JSON format indefinitely.

Thanks.

Some questions:

How do you execute the script? Is it a cron job?

Does your script keep any state? E.g. does it check the last modification time of the registry file?

Any preference besides HTTP for getting the registry state? E.g. a unix socket with a filebeat command, or an event stream your script can subscribe to (live state updates upon ACKs or file close).

It's not a cron job, but it could be. It is run via NRPE.

It does not keep state; it always parses the registry and looks at many (but not all) of the files that filebeat is or should be tailing.

A unix domain socket is fine. I'd rather avoid needing the script to be persistent, since that is state that can get out of sync, but I can see the value. As long as we could still get the current state on demand, adding a mode where we get the current state and then receive updates would be nice and far more efficient.
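
For the update mode I'm imagining something like this (an entirely hypothetical wire format: the full snapshot as the first line, then one JSON update per line):

```go
package main

import (
	"bufio"
	"fmt"
	"net"
)

func main() {
	conn, err := net.Dial("unix", "/var/run/filebeat.sock") // assumed path
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	scanner := bufio.NewScanner(conn)
	// A full registry snapshot can be large; raise the per-line limit.
	scanner.Buffer(make([]byte, 0, 1024*1024), 16*1024*1024)
	if scanner.Scan() {
		fmt.Println("snapshot:", scanner.Text()) // complete current state
	}
	for scanner.Scan() {
		fmt.Println("update:", scanner.Text()) // incremental changes as they ACK
	}
}
```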

Thank you. That's some helpful input.
