We are planning our upgrade from 8.19.x to 9.2.x.
For some ingest flows we use the logstash-filter-elastic_integration plugin.
In the upgrade documentation, Elastic recommends upgrading Elasticsearch first, followed by Kibana and finally Logstash, Beats, Agents, ...
The upgrade documentation also mentions that the elastic_integration plugin requires special attention, as described in: https://www.elastic.co/docs/reference/logstash/upgrading-logstash#upgrading-when-elastic_integration-in-pipeline
Will the ingestion of data through Logstash using this plugin be interrupted during the upgrade, or is the 8.19.x plugin fully compatible with version 9.x?
You need to follow that recommendation: when upgrading the stack, upgrade Logstash before you upgrade Elasticsearch and Kibana.
You cannot have Logstash 8.19 with Elasticsearch and Kibana on 9.x if you are using the elastic_integration filter.
I had this issue recently, and it will break your ingestion: Logstash will not be able to load the ingest pipeline, and as a fallback the parsing will be done in Elasticsearch, so any enrichment you do later in the Logstash pipeline will not work.
So, as soon as you upgrade Elasticsearch, start upgrading your Logstash instances to avoid further issues.
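For context, a pipeline using this filter typically looks something like the sketch below. The hostnames and credentials are placeholders, and while `hosts` and `api_key` are options the plugin supports, check the plugin reference for your exact version:

```
filter {
  elastic_integration {
    hosts   => ["https://es01.example.org:9200"]
    api_key => "<redacted>"
  }
  # Any enrichment after this point silently stops applying if the filter
  # falls back to letting Elasticsearch run the ingest pipeline instead.
  mutate {
    add_field => { "processed_by" => "logstash" }
  }
}
```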
We use 8 nodes that have the ingest role. Could we upgrade Elasticsearch on four of those nodes, then upgrade Logstash, and finally upgrade Elasticsearch on the remaining four nodes?
Is it correct that this logstash-bridge feature makes Logstash 9.2.x backwards compatible with Elasticsearch 8.19.x?
If yes, I should upgrade Logstash before I upgrade Elasticsearch and Kibana (we are upgrading from 8.19.11 to 9.2.5). Does anyone have experience with this approach?
I have no idea what this logstash-bridge is; I had never heard of it and I'm not sure what it does. It seems to be something internal.
As I mentioned, if you use logstash-filter-elastic_integration, the recommendation is to upgrade Logstash before Elasticsearch.
In my case I didn't do that, and when Logstash tried to refresh the ingest pipeline it was using, it broke the ingestion and the events started being processed in Elasticsearch instead.
Upgrading Logstash solved the issue.
The filter checks Elasticsearch every minute for changes to the pipeline; if this check fails, it keeps using the cached definition until the cache expires after 24 hours.
That was my issue: version 8.19 was not able to check for changes because Elasticsearch was already on 9.2, so when the cache expired, Logstash stopped processing the events and Elasticsearch started processing them again.
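The failure mode described above can be sketched roughly like this. The timings match the behavior described (one-minute refresh, 24-hour cache), but the class and method names are illustrative, not the plugin's actual implementation:

```python
REFRESH_INTERVAL = 60      # check Elasticsearch for pipeline changes every minute
CACHE_TTL = 24 * 60 * 60   # cached pipeline definition expires after 24 hours

class PipelineCache:
    """Illustrative model of the cache/fallback behavior, not the real plugin code."""

    def __init__(self, fetched_at, definition):
        self.fetched_at = fetched_at
        self.definition = definition

    def resolve(self, now, fetch_ok, fetched=None):
        """Return the pipeline to run in Logstash, or None to fall back to Elasticsearch."""
        if fetch_ok:
            # Successful refresh: update the cache and keep processing in Logstash.
            self.fetched_at, self.definition = now, fetched
            return self.definition
        if now - self.fetched_at < CACHE_TTL:
            # Refresh failed (e.g. version mismatch), but the cache is still valid.
            return self.definition
        # Cache expired with no successful refresh: Logstash stops running the
        # ingest pipeline and processing falls back to Elasticsearch.
        return None

cache = PipelineCache(fetched_at=0, definition="geoip + rename fields")
assert cache.resolve(now=60, fetch_ok=False) == "geoip + rename fields"  # cached copy still used
assert cache.resolve(now=CACHE_TTL + 60, fetch_ok=False) is None         # fallback kicks in
```

This is why the breakage showed up a day after the Elasticsearch upgrade rather than immediately: ingestion kept working off the cache until it expired.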
Since this is an enterprise feature, you may get a better answer by opening a support ticket, but they will likely direct you to the same documentation I mentioned.