For a couple of weeks I've been attempting to migrate my logs to ECS. I have a running ELK 6.x cluster and it works fine, but on my new cluster I want to see if I can get ECS running. I am starting from scratch, so old logs don't matter.
Log files are being fed from Apache with a custom log format. I can't seem to get the grok pattern to work no matter what I do. Can someone point me in the right direction? I've done lots of Googling and searching through the forums and documentation but I can't find any documentation on using logstash/grok with ECS.
I'm attempting to map as much as possible to ECS 1.1. I'm just not sure whether the core field ecs.version is a literal dotted field name or a true nested object, and which field set / nested field prefix I should use to tuck away my custom fields.
I wish I could help! It seems we are being ignored, which makes me think this was either a very bad time to launch ECS or they've given up on it already.
I don't think so. Also, this is a community forum, so we can't expect Elastic members to answer every question we post here. Remember they're also trying to make a living from consulting in similar areas.
Anyway, in general ECS makes a lot of sense for sharing objects like dashboards across use cases and workflows. I consider ECS a good initiative and expect Elastic-provided solutions to adopt it.
Sorry this was overlooked. We need to promote the elastic-common-schema tag more, it seems. Make sure you apply the tag any time you have an ECS-related question.
What problems are you encountering with your grok patterns? The obvious one I could guess is field nesting. All fields in ECS should be nested, with no dots in key names; dots are just a shorthand to represent the nesting. So in your grok you can produce nested fields using square brackets, e.g. %{IPORHOST:[url][domain]}. Here's a more fleshed-out example.
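A minimal sketch of such a Logstash filter, assuming an Apache combined-style access log. The specific pattern and ECS field choices here are illustrative, not a canonical mapping; the point is the square-bracket syntax, where [http][request][method] becomes the nested field http.request.method in the output document:

```
filter {
  grok {
    # Square brackets in the capture name produce nested fields,
    # e.g. [url][path] is indexed as url.path (a nested object, not a dotted key).
    match => {
      "message" => "%{IPORHOST:[source][address]} %{USER:[user][name]} \[%{HTTPDATE:timestamp}\] \"%{WORD:[http][request][method]} %{URIPATHPARAM:[url][path]} HTTP/%{NUMBER:[http][version]}\" %{NUMBER:[http][response][status_code]:int} %{NUMBER:[http][response][body][bytes]:int}"
    }
  }
  date {
    # Parse the Apache timestamp into @timestamp, then drop the temporary field.
    match        => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
    remove_field => [ "timestamp" ]
  }
}
```

For a custom log format you would swap in your own pattern pieces, but keep the [a][b] bracket form for every ECS field so the resulting document nests correctly.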