How can I parse logs that are already indexed? For example, one of my log files contains a lot of information, and new ideas keep coming up for extracting that information in different ways for different use cases.
My question is whether Logstash can read data that is already indexed in order to create new fields and so on; we have about 4 years of data.
For example, in Splunk you can always create new fields by extracting them from already-indexed data through its GUI.
You could use an elasticsearch input and an elasticsearch output, preserving the index name and document id from the docinfo metadata. Alternatively, if you are just adding new fields, use an elasticsearch input and write out a file in the bulk/update API format, then use curl to POST it into elasticsearch. This thread has some discussion of that.
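A minimal sketch of the first approach, assuming pre-8.x Logstash defaults where docinfo lands under [@metadata]; the host, index pattern, and the grok extraction are placeholders to replace with your own:

```
input {
  elasticsearch {
    hosts   => ["localhost:9200"]              # assumption: your cluster address
    index   => "logstash-*"                    # assumption: the indices to re-process
    query   => '{ "query": { "match_all": {} } }'
    docinfo => true                            # expose _index and _id under [@metadata]
  }
}

filter {
  # Hypothetical extraction: pull a username out of the already-indexed message.
  grok {
    match => { "message" => "user=%{WORD:username}" }
  }
}

output {
  elasticsearch {
    hosts       => ["localhost:9200"]
    index       => "%{[@metadata][_index]}"    # preserve the original index name
    document_id => "%{[@metadata][_id]}"       # overwrite the original document
  }
}
```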
Maybe. Kibana has support for scripted fields, which are evaluated when the data is fetched from elasticsearch. That might be enough for what you want to do. But note that they get evaluated every time a fetch occurs, and there is a cost to that.
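A scripted field is just a short Painless script evaluated per document when results are fetched. A sketch, assuming your index has a keyword field named request.keyword:

```
// Kibana scripted field (Painless), evaluated per document at fetch time.
// Assumption: the index has a keyword field named "request.keyword".
// Returns the last path segment, e.g. "index.html" from "/a/b/index.html".
if (doc['request.keyword'].size() == 0) return "";
def path = doc['request.keyword'].value;
int i = path.lastIndexOf('/');
return i >= 0 ? path.substring(i + 1) : path;
```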
If scripted fields are not powerful enough for your use case, then yes, you need to reindex.
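Reindexing can do the extraction in one pass by attaching a script to the reindex API. A hedged sketch; the index names, the message field, and the user= extraction are assumptions:

```
# Reindex into a new index, deriving a "username" field from the message on the way.
curl -X POST 'localhost:9200/_reindex' -H 'Content-Type: application/json' -d'
{
  "source": { "index": "logs-old" },
  "dest":   { "index": "logs-new" },
  "script": {
    "lang": "painless",
    "source": "def m = ctx._source.message; if (m != null) { int i = m.indexOf(\"user=\"); if (i >= 0) { int e = m.indexOf(\" \", i + 5); ctx._source.username = m.substring(i + 5, e < 0 ? m.length() : e); } }"
  }
}
'
```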