I am new to the Elastic Stack and want to implement the Elastic Common Schema (ECS) for our project, but I am not sure where to start or how to implement it. I have already checked some blogs and documentation, but I'm still unsure where to begin. We have already implemented Elasticsearch in our environment, and now the team wants to implement ECS for all incoming data from Logstash, Filebeat, syslog, and Remedy ITSM, plus some monitoring tools such as IBM Netcool, Zabbix, and Nagios. Please help me with how and where I can start on this topic.
The ECS documentation is a great place to start. I'd recommend first reading the Getting Started section and then reviewing the rest of the Using ECS sections.
Any of your Beats data sources using currently supported modules will already be using ECS. If you have other data sources with custom fields to map to ECS, take a look at ecs-mapper. It's an experimental tool that reads a CSV mapping of custom fields to ECS fields and generates starter pipeline configurations.
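To illustrate the general idea behind that kind of CSV-driven mapping, here's a hand-written sketch (this is not actual ecs-mapper input/output; the custom field names in `MAPPING_CSV` are hypothetical, and the ECS destination names come from the ECS field reference):

```python
import csv
import io

# Hypothetical CSV in the spirit of an ecs-mapper spreadsheet:
# each row maps a custom source field to its ECS destination field.
MAPPING_CSV = """source_field,destination_field
src_ip,source.ip
hostname,host.name
msg,message
"""

def load_field_map(csv_text):
    """Parse the mapping CSV into a {custom_field: ecs_field} dict."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return {row["source_field"]: row["destination_field"] for row in reader}

def rename_to_ecs(event, field_map):
    """Return a copy of the event with custom fields renamed to ECS names.

    Fields without a mapping entry pass through unchanged.
    """
    return {field_map.get(key, key): value for key, value in event.items()}

field_map = load_field_map(MAPPING_CSV)
event = {"src_ip": "10.0.0.5", "hostname": "web-01", "msg": "login ok"}
print(rename_to_ecs(event, field_map))
# {'source.ip': '10.0.0.5', 'host.name': 'web-01', 'message': 'login ok'}
```

In practice the generated pipelines (Logstash filters or Elasticsearch ingest pipelines) do this renaming at ingest time, so you don't run this yourself; the sketch just shows the transformation being configured.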
Some additional tooling is maintained in the ECS GitHub repo to help generate and maintain Elasticsearch index mappings based on ECS fields.
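For a rough sense of what an ECS-aligned index mapping looks like, here's a minimal hand-built example (the field names and datatypes are real ECS fields, but this tiny mapping is an illustration, not output of the repo's tooling, and `my-ecs-index` is a made-up index name):

```python
import json

# Minimal ECS-aligned index mapping: a handful of core ECS fields with
# their ECS datatypes. A mapping generated from the ECS repo tooling
# would cover far more fields than this.
ecs_mapping = {
    "mappings": {
        "properties": {
            "@timestamp": {"type": "date"},
            "message": {"type": "text"},
            "event": {"properties": {"category": {"type": "keyword"}}},
            "source": {"properties": {"ip": {"type": "ip"}}},
            "host": {"properties": {"name": {"type": "keyword"}}},
        }
    }
}

# This JSON would be the request body for e.g. PUT /my-ecs-index
# against the Elasticsearch REST API.
print(json.dumps(ecs_mapping, indent=2))
```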
Finally, there are a lot of great discussions available in past posts here! Check out some of the past ECS-tagged discussions: Topics tagged ecs-elastic-common-schema
Many thanks @ebeahan for your prompt response. I still haven't found information about where I can run that repo and ecs-mapper. Does it need a separate machine, or can it run on the same machine as Elasticsearch?