How to obtain mappings and ingest pipelines from Elastic integrations without using Fleet?

We’re working on ingesting logs from network devices (e.g., Cisco IOS) that send their logs via Syslog directly to Logstash, which then forwards the data to Elasticsearch. We manage all components through custom automation (e.g., Ansible), without using Elastic Agent or Fleet Server.

For operational and security reasons, we want to avoid using Fleet and Elastic Agent, mainly due to concerns about vendor lock-in.

We would like to make use of official Elastic integrations from GitHub:
https://github.com/elastic/integrations/tree/main/packages/cisco_ios

And extract from them:
:white_check_mark: index mappings and ingest pipelines (without requiring Fleet or Elastic Agent)

What we’ve tried:

We took the fields.yml from the integration: https://github.com/elastic/integrations/blob/main/packages/cisco_ios/data_stream/log/fields/fields.yml

Then attempted to generate ECS-compliant mappings using the ECS generator: https://github.com/elastic/ecs

We placed the fields.yml into usage-example/fields/custom/, created a custom subset.yml, and manually added missing attributes like level and description. We ran the generator with:

python3 scripts/generator.py \
  --ref v8.0.0 \
  --include usage-example/fields/custom/ \
  --subset usage-example/fields/subset.yml \
  --out output2/ \
  --template-settings-legacy usage-example/fields/template-settings-legacy.json \
  --template-settings usage-example/fields/template-settings.json \
  --mapping-settings usage-example/fields/mapping-settings.json \
  --semconv-version v1.23.0

However, we ran into multiple validation errors like:

ValueError: Field is missing the following mandatory attributes: description.

Even after adding level, description, and similar attributes, the generator continues to fail. It seems the fields.yml files shipped in integrations are not directly compatible with the ECS schema generator.
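As a workaround, we’ve been experimenting with pre-processing the integration’s fields.yml before feeding it to the generator, filling in the attributes the ECS tooling insists on. A minimal sketch, assuming the YAML has already been parsed into Python dicts (e.g., with PyYAML); the default values here are our own placeholders, not official ECS content:

```python
def patch_ecs_fields(fields):
    """Recursively add attributes the ECS generator treats as mandatory
    (description, level) to field definitions that lack them.

    `fields` is the parsed content of an integration fields.yml:
    a list of dicts, possibly nested under a `fields` key.
    """
    for field in fields:
        field.setdefault("description", "TODO: imported from integration fields.yml")
        field.setdefault("level", "custom")
        if "fields" in field:
            patch_ecs_fields(field["fields"])
    return fields
```

After patching, the list can be dumped back to YAML and placed under usage-example/fields/custom/ as before.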

:red_question_mark: Questions

  1. Is it officially supported or recommended to extract the index_template/default.yml and ingest_pipeline/default.yml from an integration package and apply them via the API (PUT _index_template, PUT _ingest/pipeline)?
  2. Is there an official tool or supported method to extract only the ingest pipeline and mappings from an Elastic integration without using Fleet?
  3. If we don’t want to use Elasticsearch ingest nodes, but process everything via Logstash, is there any way to get or convert the integration’s ingest pipeline into a Logstash pipeline format (e.g., filter { ... })?
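Regarding question 1, this is the direction we’re leaning towards: converting the package’s YAML files to JSON ourselves and loading them via the REST API. A rough sketch of the idempotent PUT calls; the cluster URL, template/pipeline names, and request bodies are placeholders for our setup, not values defined by the package:

```python
import json
import urllib.request

ES_URL = "https://localhost:9200"  # placeholder for our cluster

def build_put(path, body):
    """Build an idempotent PUT request against the Elasticsearch REST API.
    Actually sending it (urllib.request.urlopen, with auth/TLS as needed)
    is left out of this sketch."""
    return urllib.request.Request(
        url=f"{ES_URL}{path}",
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="PUT",
    )

# Index template converted from the package's index template YAML (placeholder body):
template_req = build_put("/_index_template/logs-cisco_ios.log",
                         {"index_patterns": ["logs-cisco_ios.log-*"], "template": {}})

# Ingest pipeline converted from data_stream/log/elasticsearch/ingest_pipeline/default.yml:
pipeline_req = build_put("/_ingest/pipeline/logs-cisco_ios.log-default",
                         {"processors": []})
```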

Our goals are:
:white_check_mark: No Fleet or Elastic Agent
:white_check_mark: Full control via automation (e.g., Ansible)
:white_check_mark: Manual setup of index templates
:white_check_mark: Prefer using Logstash instead of ingest nodes, ideally with ready-to-use pipeline definitions (e.g., grok, date, etc.) — without manually extracting or rewriting them from ingest pipelines
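On question 3: since we haven’t found a complete official converter, we’ve also considered translating the simpler processors mechanically. A toy sketch for the grok processor only; it is our own simplification (handles just `field` and `patterns`), and date, kv, conditionals, etc. would each need their own rules:

```python
def grok_to_logstash(processor):
    """Translate an ingest-pipeline grok processor (parsed JSON dict)
    into a Logstash grok filter block. Only `field` and `patterns`
    are handled; options like pattern_definitions are ignored here."""
    grok = processor["grok"]
    field = grok["field"]
    patterns = ", ".join(f'"{p}"' for p in grok["patterns"])
    return (
        "filter {\n"
        "  grok {\n"
        f"    match => {{ \"{field}\" => [ {patterns} ] }}\n"
        "  }\n"
        "}"
    )
```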

Thanks in advance for your help or guidance!

— Václav Šulc

I came across the built-in ingest-convert.sh helper in Logstash 8.17.4:

bin/ingest-convert.sh \
  --input /tmp/ingest-pipeline.json \
  --output /tmp/logstash.conf

and tried it on the Cisco IOS integration’s ingest pipeline. Unfortunately, I keep hitting a NullPointerException (missing value_contents) during conversion; it looks like some processors (inline pattern_definitions, empty fields) aren’t directly supported. I’m in the process of stripping out those unsupported bits (e.g., inline Grok patterns and null-valued parameters) to get a clean logstash.conf. Any pointers on handling these edge cases?
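In case it helps others, this is the kind of pre-processing I’m attempting before re-running ingest-convert.sh. It’s my own workaround, not an official tool: a sketch that drops null-valued parameters and inline pattern_definitions from the pipeline JSON:

```python
import json

def strip_unsupported(node):
    """Recursively remove the bits ingest-convert.sh appears to choke on:
    null-valued parameters and inline pattern_definitions."""
    if isinstance(node, dict):
        return {k: strip_unsupported(v)
                for k, v in node.items()
                if v is not None and k != "pattern_definitions"}
    if isinstance(node, list):
        return [strip_unsupported(item) for item in node]
    return node

# Usage: clean the exported pipeline before conversion, e.g.
# with open("/tmp/ingest-pipeline.json") as f:
#     pipeline = json.load(f)
# with open("/tmp/ingest-pipeline-clean.json", "w") as f:
#     json.dump(strip_unsupported(pipeline), f, indent=2)
```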