Unfortunately no, you would need to write your ingest pipelines based on your Logstash filters.
Not everything that can be done in Logstash can be done in an Ingest pipeline.
Do you really need Logstash in this case? Elastic Agent uses integrations, and each integration already ships with an ingest pipeline that will parse your data.
You would only need to build an ingest pipeline if you are collecting custom logs, for example.
Too many to list, I think. While both Logstash pipelines and ingest pipelines are used to parse and transform messages, and they work in a similar way, they are not the same.
Ingest pipelines have processors that are equivalent to Logstash filters, like the parsing processors json, grok, and dissect. Other processors are equivalent to mutate actions; for example, the set processor could be considered equivalent to the add_field option of mutate.
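As a rough sketch of that mapping (the field names here are just for illustration), a Logstash filter like this:

```
filter {
  grok {
    match => { "message" => "%{IPORHOST:client_ip} %{GREEDYDATA:rest}" }
  }
  mutate {
    add_field => { "environment" => "production" }
  }
}
```

could be ported to an ingest pipeline along these lines:

```
PUT _ingest/pipeline/my-ported-pipeline
{
  "processors": [
    {
      "grok": {
        "field": "message",
        "patterns": ["%{IPORHOST:client_ip} %{GREEDYDATA:rest}"]
      }
    },
    {
      "set": {
        "field": "environment",
        "value": "production"
      }
    }
  ]
}
```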
But depending on your Logstash configuration, there are things that are not possible to do in Ingest Pipelines.
Elastic Agent can send data to Logstash without any issues, but since the integrations use ingest pipelines to parse your data, you should not change the raw message in Logstash, or you may break the ingest pipelines.
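If you do want Logstash in the middle, a minimal pass-through pipeline would look something like this (the host and credentials are placeholders); note that it only adds metadata and leaves the raw message alone:

```
input {
  elastic_agent {
    port => 5044
  }
}

filter {
  # Safe: tag the event without touching the raw message
  mutate {
    add_tag => ["via-logstash"]
  }
  # Avoid rewriting [message] here, the integration's
  # ingest pipeline still expects the original content
}

output {
  elasticsearch {
    hosts => ["https://es01:9200"]
    data_stream => "true"
    user => "elastic"
    password => "changeme"
  }
}
```

The `data_stream => "true"` setting matters here, as it keeps events flowing into the same data streams the integrations expect.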
It's just very tedious to have to rebuild our data ingestion approach. Why even introduce ingest pipelines if they can't replace Logstash?
Perhaps the goal was never replacement, but it now adds more complexity to our setup. On top of that, ingest pipelines require dedicated ingest nodes, or they will drive up resource usage across your cluster.
If you're not interested in using integrations or ingest pipelines, you can also set up Elastic Agent without the app-specific integrations, instead using the generic Custom Logs or Filestream integration, and send the data to your existing Logstash processing pipelines.
With Elastic Agent, the custom log/filestream integration is just launching Filebeat with the settings configured in the integration.
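For a standalone agent, pointing the output at Logstash instead of Elasticsearch is just an output block in elastic-agent.yml (the host and CA path below are placeholders); Fleet-managed agents get the same thing from a Logstash output configured in Fleet settings:

```
outputs:
  default:
    type: logstash
    hosts: ["logstash.example.com:5044"]
    ssl.certificate_authorities: ["/etc/certs/ca.crt"]
```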
You do not need dedicated ingest nodes unless you have a very high event rate and are facing issues with ingest.
I have somewhere around 100k e/s using Elastic Agent integrations, and I do not use dedicated ingest nodes; the ingestion is done by the hot data nodes.
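By default an Elasticsearch node holds the ingest role alongside its data roles, so pipelines simply run wherever the indexing request lands. A sketch of what that looks like in elasticsearch.yml on a hot node:

```
node.roles: [ data_hot, data_content, ingest ]
```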
The main goal of Elastic Agent and integrations is to make it easier to onboard data into the stack, as there are hundreds of prebuilt pipelines to parse multiple data sources.
So, depending on what you are collecting, you may not need to migrate your Logstash configuration at all, as an ingest pipeline may already exist for your data. For example, if you are collecting logs from a Fortinet firewall, the Fortinet integration already includes an ingest pipeline that will parse those messages.
You can also skip the ingest pipelines from integrations entirely, just replace Beats with Elastic Agent, and still use your same Logstash configurations.
Wait, that's impressive! 100k e/s and you aren't having issues? I guess it depends on your architecture. What does your setup look like?
We are ingesting way less, but we are experiencing thread pool write rejections because our hot nodes are overworked. We noticed this started when we migrated the majority of our Beats to Elastic Agent: the transforms used to take place in Logstash, and now they take place on the hot nodes.
We are attempting to repurpose our Logstash nodes into ingest nodes to alleviate the effect that ingest pipelines have had on our cluster. But it seems like I still need to keep both Logstash nodes and ingest nodes, as they serve similar but different purposes.
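If we go that route, my understanding is a repurposed node becomes a dedicated ingest node just by trimming its roles in elasticsearch.yml:

```
# elasticsearch.yml on the repurposed node: ingest only, no data
node.roles: [ ingest ]
```

For the hot nodes to actually shed that work, the ingest role also has to come off their node.roles, otherwise they keep running the pipelines themselves. We can then watch whether the write rejections on the hot nodes actually drop:

```
GET _cat/thread_pool/write?v&h=node_name,name,active,queue,rejected
```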
Yes, I understand that integrations already exist, but not for the sources I built custom parsing for. For example, we are collecting SAP logs (6 different types of logs), and there isn't a single integration built for SAP logs.
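Just to make the effort concrete: each of those six Logstash parsers would have to become its own ingest pipeline, something like the purely hypothetical example below (the pipe-delimited layout and field names are invented for illustration, not an actual SAP format):

```
PUT _ingest/pipeline/sap-audit-logs
{
  "processors": [
    {
      "dissect": {
        "field": "message",
        "pattern": "%{sap_timestamp}|%{sap_user}|%{sap_tcode}|%{sap_msg}"
      }
    },
    {
      "date": {
        "field": "sap_timestamp",
        "formats": ["yyyyMMddHHmmss"]
      }
    }
  ]
}
```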
Yeah, it seems like it's possible to just keep our Logstash configs intact, but like I said, it adds complexity to our setup, given our experience with ingest pipelines causing thread pool write rejections.
Echoing my previous comment: while there are numerous benefits to utilizing Elastic's integration content, you absolutely can just send data from Elastic Agent to your existing Logstash pipelines, similar to how you do it with Beats today. You do not need to adopt ingest pipelines if you do not want to.