I am exporting a Metricbeat index from Elasticsearch using Logstash, with the following export pipeline:
- pipeline.id: export-process
  pipeline.workers: 4
  config.string: |
    input {
      elasticsearch {
        hosts => "http://localhost:9200"
        ssl => "false"
        index => "metricbeat-*"
        docinfo => true
      }
    }
    output {
      file {
        gzip => "true"
        path => "/usr/share/logstash/export/export_%{[@metadata][_index]}.json.gz"
      }
    }
Then I am trying to import the resulting JSON file back into another Elasticsearch instance, using Logstash with this import pipeline:
- pipeline.id: import-process
  pipeline.workers: 4
  config.string: |
    input {
      file {
        path => "/usr/share/logstash/export/export_metricbeat-7.17.7-2023.03.15-000001.json"
        start_position => "beginning"
      }
    }
    output {
      elasticsearch {
        hosts => "http://localhost:9200"
        index => "metricbeat-7.17.7-2023.03.15-000001"
        ssl => "false"
      }
    }
Unfortunately, the imported documents are not indexed properly. A document in the new index looks like this:
{
  "_index": "metricbeat-7.17.7-2023.03.15-000001",
  "_id": "i3XL5IYBqplkWF1zeYbz",
  "_score": 1,
  "_ignored": [
    "message.keyword"
  ],
  "_source": {
    "@timestamp": "2023-03-15T10:23:05.122Z",
    "@version": "1",
    "path": "/usr/share/logstash/export/export_metricbeat-7.17.7-2023.03.15-000001.json",
    "message": "{\"metricset\":{\"period\":60000,\"name\":\"collector\"},\"@timestamp\":\"2023-03-15T07:31:08.405Z\",\"service\":{\"address\":\"http://udsf-data-repository-db-metrics.udsf-chf.svc.cluster.local:8484/metrics\",\"type\":\"prometheus\"},\"prometheus\":{\"metrics\":{\"voltdb_table_tuple_count\":0,\"voltdb_table_tuple_allocated_memory_bytes\":2097152,\"voltdb_table_inline_tuple_bytes\":0,\"voltdb_table_non_inline_data_bytes\":0},\"labels\":{\"hostname\":\"udsf-data-repository-db-cluster-0\",\"instance\":\"udsf-data-repository-db-metrics.udsf-chf.svc.cluster.local:8484\",\"type\":\"PersistentTable\",\"job\":\"prometheus\",\"tablename\":\"UDSF_COUNT_BY_SESSION_TYPE\",\"partitionid\":\"18\"}},\"agent\":{\"version\":\"7.17.7\",\"hostname\":\"ip-10-10-25-65.eu-west-1.compute.internal\",\"id\":\"ddabf1ce-22f2-4ec6-a4af-e267544b6d5e\",\"type\":\"metricbeat\",\"name\":\"ip-10-10-25-65.eu-west-1.compute.internal\",\"ephemeral_id\":\"47cd9091-51e4-4718-bb38-264e68eccea7\"},\"ecs\":{\"version\":\"1.12.0\"},\"@version\":\"1\",\"host\":{\"name\":\"ip-10-10-25-65.eu-west-1.compute.internal\"},\"event\":{\"module\":\"prometheus\",\"dataset\":\"prometheus.collector\",\"duration\":160809113}}",
    "host": "b70093e9aebc"
  }
}
The whole original document has been dumped into the message field as an escaped JSON string. Is there some additional transformation I need to do in the pipeline? I just want each document restored exactly as it was in the original Elasticsearch instance.
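One thing I was wondering about: the file input presumably reads each line with its default plain codec, which would explain the whole event ending up as a string in message. A variant of the import input I was considering (just a sketch using the standard json codec of the file input, not something I have confirmed fixes it) would be:

    input {
      file {
        path => "/usr/share/logstash/export/export_metricbeat-7.17.7-2023.03.15-000001.json"
        start_position => "beginning"
        codec => "json"
      }
    }

but I am not sure whether that alone restores the documents as they were, or whether extra fields that Logstash adds (path, host, @version) also need to be dropped.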