Hello, I am new to Kibana. I was looking through this forum for a solution but couldn't find one.
I found a similar question and topic, but it didn't help me: Similar problem
So I have an EFK stack with FluentD or Fluent Bit (same situation either way), and I have a logger which logs messages to output with this code:
Firstly, I edited the Kibana URL - it's not safe to have it here.
I think you need to process all the fields and add mappings: use ingest node or Logstash for the processing, and define the mappings either via a template or at index creation.
Are you trying to visualize on log fields? If it's the log field, you may want the json filter, for example with ingest node. There are many use cases where it is important to enrich incoming data. Ingest node is a type of Elasticsearch node that performs this enrichment prior to indexing.
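As a sketch of what that could look like: an ingest pipeline with a json processor that parses the JSON string held in the log field into structured fields (the pipeline and target field names here are illustrative, not from this thread). In Kibana Dev Tools:

```json
PUT _ingest/pipeline/parse-log-field
{
  "description": "Parse the JSON string in the log field into structured fields",
  "processors": [
    {
      "json": {
        "field": "log",
        "target_field": "log_parsed"
      }
    }
  ]
}
```

Documents indexed with this pipeline would then have queryable fields like log_parsed.Value instead of one opaque log string.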
Thank you for your attention @rashmi.
The Kibana URL was just some droplets with my use case; I posted it to give deeper insight.
I am quite new to this sort of knowledge, and yes, I am trying to visualise on log fields.
I do not understand what you wrote here.
Could you please explain it in a way I can understand?
I want to be able to visualise on log fields, let's say Value, but I am not able to do it out of the box.
What should I apply to make it possible?
log is just one field with JSON in it, so Kibana can't access log.Value. You would need to index your data differently, so that each of the fields inside the log JSON becomes a separate field indexed by Elasticsearch.
I suggest asking in the Elasticsearch forum, or checking whether anything can be configured in Fluent Bit.
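To make the difference concrete, here is a minimal Python sketch of what "indexing the fields separately" means (the field names Value and Level are illustrative, based on the log.Value example above):

```python
import json

# A document as Kibana sees it today: "log" is one opaque string field.
doc = {"log": '{"Value": 42, "Level": "info"}'}

# Parsing the JSON string turns each inner key into its own field,
# which is what Elasticsearch needs before Kibana can aggregate on log.Value.
parsed = json.loads(doc["log"])
flattened = {f"log.{key}": value for key, value in parsed.items()}

print(flattened)  # {'log.Value': 42, 'log.Level': 'info'}
```

Until a parsing step like this happens somewhere before indexing, the inner keys simply do not exist as fields Kibana can visualise on.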
As mentioned above, you will need to extract the data from the log string before indexing it into Elasticsearch if you want to build visualisations on it. If you were using Filebeat and/or Logstash I could show you how to do it, but I unfortunately have no experience at all with FluentD. If you can specify an ingest pipeline from FluentD, you could probably also extract the data using an ingest pipeline.
Yes, we are familiar with it. Before diving into this, I would however recommend verifying that the FluentD Elasticsearch output is able to specify an ingest pipeline. As it is a reasonably recent addition, it may not be supported. Based on what I can see here, it seems like it may not be, which means you would need to handle this in your FluentD configuration.
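If it has to be handled on the FluentD side, a filter of roughly this shape can parse the JSON out of the log field before the record is sent to Elasticsearch (the tag pattern my.logs.** is an assumption; adjust it to match your own tags):

```
<filter my.logs.**>
  @type parser
  key_name log        # the field that holds the JSON string
  reserve_data true   # keep the other fields on the record
  <parse>
    @type json
  </parse>
</filter>
```

This uses FluentD's parser filter plugin; Fluent Bit has an analogous parser filter if you are running that instead.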