How to get the Infrastructure UI working?



I've set up metricbeat to write to a logfile instead of directly into elasticsearch.
I want the metrics buffered on the filesystem in case metricbeat or something else in the pipeline breaks, so that I don't lose events.
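The relevant part of my metricbeat.yml looks roughly like this (paths and rotation sizes are just examples):

```yaml
# metricbeat.yml -- spool events to a local file instead of
# shipping them directly to elasticsearch
output.file:
  path: "/var/log/metricbeat"   # example path
  filename: "metricbeat"
  rotate_every_kb: 10240        # example rotation size
  number_of_files: 7
```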
I ship this metricbeat logfile via filebeat to redis. Logstash fetches the entries and processes them with this pipeline:

```
input {
	redis {
		data_type => "list"
		db => "0"
		host => "${REDIS_HOST}"
		key => "metricbeat"
		port => "${REDIS_PORT}"
	}
}

filter {
	json {
		id => "json"
		source => "message"
	}

	# delete message if no _jsonparsefailure
	if ("_jsonparsefailure" not in [tags]) {
		mutate {
			remove_field => ['message']
		}
	}
}

output {
	elasticsearch {
		hosts => ["${ES_HOST}:${ES_PORT}"]
		#index => "%{[logType]}-%{+YYYY.MM.dd}"
		index => "%{[logType]}-%{+YYYY.ww}"
	}
}
```

To my eye, the result looks good in the Discover module:

But if I click the Infrastructure button, no results are shown:

Could you help me and point out what I missed?

Yes, I remove the message field after successful parsing, but the JSON output of the metricbeat log does not contain a field called message, so I doubt it has anything to do with this.

I did not create the metricbeat dashboards and visualizations that ship with metricbeat. Is that the problem?

Next week I have to present Kibana to some people, and I would like to get this module running by then.

Thanks a lot, Andreas

(Felix Stürmer) #2

Hi @asp,

buffering the ingested docs for recovery is indeed a useful pattern. Many users do that using logstash persistent queues or kafka. The Infrastructure UI should work in these cases too, as long as the correct index mappings are used.
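For completeness, enabling the logstash persistent queue is a small change in logstash.yml; the size and path below are just example values:

```yaml
# logstash.yml -- buffer events on disk so they survive a
# logstash restart or a downstream outage
queue.type: persisted
queue.max_bytes: 4gb              # example upper bound for the on-disk queue
path.queue: /var/lib/logstash/queue   # example path
```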

As long as metricbeat writes to Elasticsearch directly, it takes care of installing the appropriate index templates that ensure correct mappings. If the ingestion happens indirectly, metricbeat can not do that, so the user has to install the mappings themselves. The easiest way I can think of is to run metricbeat once directly against the target elasticsearch cluster so the templates get created, and then make sure the indices created during normal operation afterwards match the pattern specified in these templates.
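Roughly, such a one-off run could look like this (the host is an example, and the flags assume a metricbeat 6.x `setup` command; adjust to your environment):

```shell
# One-off run against the target cluster so metricbeat installs its
# index template; normal file-output operation can resume afterwards.
metricbeat setup --template \
  -E 'output.file.enabled=false' \
  -E 'output.elasticsearch.hosts=["localhost:9200"]'
```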

In your case that means you would have to adjust the index templates to match the pattern specified in your logstash output.
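For instance, if `logType` resolves to `metricbeat`, the installed template just needs to cover the weekly indices that `%{[logType]}-%{+YYYY.ww}` produces. One way (setting names from the metricbeat template config; the values are assumptions about your setup) is to pin the pattern before the one-off setup run:

```yaml
# metricbeat.yml -- make the installed template match the indices
# that the logstash elasticsearch output will create
setup.template.name: "metricbeat"
setup.template.pattern: "metricbeat-*"
```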

Let me know if that sounds reasonable.