I am trying to get the hostname for a given IP stored in a field (source.ip). I am doing this for an agent deployed with the System integration, and I have already defined some processors in the "processors" field, including some drop_event ones.
But when I add the dns processor, the agent shows as constantly updating in the log section and stops receiving events. If I remove it, everything works again.
If I type the full configuration into the processors field, everything works, but I don't see the new field with the hostname and I don't see the error tag in the document (tag_on_failure).
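To make it concrete, the processors block I have in mind looks roughly like this (the drop_event condition and the target field are placeholders, not my exact values):

```yaml
processors:
  - drop_event:
      when:
        equals:
          event.outcome: success      # placeholder condition, just an example
  - dns:
      type: reverse                   # reverse lookup: IP -> hostname
      fields:
        source.ip: source.domain      # resolved name should land in source.domain
      tag_on_failure: [_dns_reverse_lookup_failed]
```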
It is a bit tricky to help debug if we are not seeing a specific error when the pipeline is not working. Would it be possible for you to test the pipeline with example input and report back what error you are seeing?
As I understand from the documentation, there is no processor in an ingest pipeline that does this reverse DNS lookup.
The reverse DNS processor only appears in the Filebeat documentation. If there is a way to do a reverse DNS lookup with an Elasticsearch ingest pipeline, could you send me the docs?
Is there an agent or Filebeat log file where I can see how Filebeat processes the documents with the filters defined in the processors section?
I am doing this with the System integration, not the Windows integration.
There is some strange behavior: if I use the configuration shown in the screenshot, the agent stops logging. I need to specify the "nameservers" setting for it to start working again, but even then it does not do any reverse lookup.
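In other words, the processor only stops breaking the agent once I add something like this (the address is a placeholder for my internal DNS server):

```yaml
  - dns:
      type: reverse
      fields:
        source.ip: source.domain
      tag_on_failure: [_dns_reverse_lookup_failed]
      nameservers: ['10.0.0.1']       # placeholder - without this line the agent stops logging
```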
We should be able to see the agent logs in Kibana under Fleet - I was hoping you'd have seen something useful in there already.
We could try starting a standalone agent with your config (as a temporary workaround). Standalone agent logging is configured by setting something like:
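Roughly like this in elastic-agent.yml, if I have the standalone settings right (the level and file path are just examples):

```yaml
agent.logging.level: debug            # verbose enough to surface processor errors
agent.logging.to_files: true
agent.logging.files:
  path: /var/log/elastic-agent        # example path, adjust to your install
  name: elastic-agent
  keepfiles: 7
```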
I finally found the solution. It was all about which fields were available. There was nothing weird in the logging system, so I started checking the ingest pipelines, and it turns out the final document fields are "post-processors" fields, i.e. they only exist after the ingest pipeline has run. So the source.ip field does not exist yet at the processors step.
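So the lookup has to target a field that already exists before the ingest pipeline runs. As a sketch (the source field name below is only a placeholder - the real one has to come from the package manifest):

```yaml
  - dns:
      type: reverse
      fields:
        # placeholder: use a pre-pipeline field from the package manifest,
        # not the ECS field (source.ip) that the ingest pipeline adds later
        log.source.address: source.domain
      nameservers: ['10.0.0.1']       # placeholder DNS server
      tag_on_failure: [_dns_reverse_lookup_failed]
```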
If you put only the basic DNS settings, the agent stops logging as I said. I added the nameservers to the config lines and everything started working again, and then I checked the "original fields" - in my case, the fields from the System package's manifest, which I am sending.
So maybe this could be added to the troubleshooting guide: tag_on_failure does not tag the document when the lookup field itself is wrong or missing. And about the nameservers: if you don't specify them and the agent runs on a Windows server, it doesn't work. The documentation does say "On Windows, you must always provide at least one nameserver", but I would call it out somewhere more visible, for example next to the Windows event log docs (just an idea).
Another thing that would be great is to attach the manifest to the documentation, or to specify that those fields are the ones available at the "pre-ingest-pipeline" step.