Visualisation CPU percentage is always 0 (or null?)

Struggling to get started here...

I've set up ELK and I'm feeding it some logs via Topbeat.

I can see the data in Discover (filter set to type:system): I get green bars at the top, one every 5 seconds, and I can see values per item in the table columns...

So... I save the search and move to the Visualize tab...

When I change the metrics to a date histogram on @timestamp for the X-axis and an average of cpu.system_p for the Y-axis, nothing is shown in the visualisation.

The only aggregation that works is Count... I can zoom in to one bar per five seconds...

Any idea what I'm doing wrong?

If you do get a visualization for the Count metric, then your timestamps are obviously working and available for the specific time period... so I can't imagine why an Average metric won't work, other than the cpu.system_p values being empty.

But then again, you said you can see values for system_p on the Discover tab?

Maybe you can post a screenshot of your Discover tab with system_p added to the display columns, as well as your visualization configuration.

What is the mapping for the field? When I have seen this behaviour in the past, it has quite often been due to fields being incorrectly mapped as strings instead of integers and/or floats.
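
If you're not sure, you can ask Elasticsearch directly, along these lines (assuming it's listening on localhost:9200; substitute one of your own daily index names):

curl -XGET 'http://localhost:9200/logstash-2015.12.01/_mapping/system/field/cpu.system_p?pretty'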

My first thought as well, but the metric configuration won't let you choose string-type fields, so I'm assuming it's mapped as a numeric type.
And I believe a number type is a number - there's no strict integer casting that would round an average down to 0 (unless configured in the Kibana field formatter)?

Update: Strike that, I was confused.

ELK Stack is on Ubuntu

Topbeat is running on Windows 7 with pretty much the default config - just the addresses changed:

Discover screen

This is the visState from Settings/Objects:

{
	"type": "histogram",
	"params": {
		"shareYAxis": true,
		"addTooltip": true,
		"addLegend": true,
		"scale": "linear",
		"mode": "stacked",
		"times": [],
		"addTimeMarker": false,
		"defaultYExtents": false,
		"setYExtents": false,
		"yAxis": {}
	},
	"aggs": [
		{
			"id": "1",
			"type": "avg",
			"schema": "metric",
			"params": {
				"field": "cpu.system_p"
			}
		},
		{
			"id": "2",
			"type": "date_histogram",
			"schema": "segment",
			"params": {
				"field": "@timestamp",
				"interval": "auto",
				"customInterval": "2h",
				"min_doc_count": 1,
				"extended_bounds": {}
			}
		}
	],
	"listeners": {}
}

I'm sorry, I can't see any reason it should not be plotting.
Out of curiosity... does an average of, for instance, mem.used_p work?
Or how about a sum of cpu.system_p?
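
If you want to rule Kibana out entirely, you could also run the aggregation straight against Elasticsearch, something like this (assuming it's on localhost:9200; the logstash-* index pattern is just a guess at your setup):

curl -XPOST 'http://localhost:9200/logstash-*/_search?pretty' -d '{
  "size": 0,
  "aggs": {
    "avg_system": { "avg": { "field": "cpu.system_p" } }
  }
}'

If that returns a sensible average but Kibana still plots nothing, the problem is on the Kibana side; if it also comes back as 0 or null, the data or the mapping is the culprit.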

I'll see if I can get some Topbeat data going at some point to simulate the situation.

...hmm, just got to work to double-check your questions. I added the visualization for avg cpu.system_p and user_p (bar chart)... and noticed values... strange... I widened the time span to the last 12 hours and I get aggregations from midnight onwards only!

Two things I changed: 1. Topbeat is now installed as a service on the monitored machine, since about 10am yesterday. If that was the fix, then why from midnight? Is there some daily 'bucket' of data, and the prior day's bucket has bad data?

2. Logstash is running as a Linux service... I was running it from the terminal before, as I had it sending to stdout whilst I was trying to learn what was going on... but I'm wondering if I had two copies of Logstash running, one as a service and one at the terminal...

More poking around: the index mapping for the non-working days is set to type long...

/logstash-2015.12.01/_mapping/system/field/cpu.system_p

{
	"logstash-2015.12.01": {
		"mappings": {
			"system": {
				"cpu.system_p": {
					"full_name": "cpu.system_p",
					"mapping": {
						"system_p": {
							"type": "long"
						}
					}
				}
			}
		}
	}
}

But now, today, it is set to double...

/logstash-2015.12.02/_mapping/system/field/cpu.system_p

{
	"logstash-2015.12.02": {
		"mappings": {
			"system": {
				"cpu.system_p": {
					"full_name": "cpu.system_p",
					"mapping": {
						"system_p": {
							"type": "double"
						}
					}
				}
			}
		}
	}
}

Aw man - I should have picked that up, since I had almost exactly the same problem, but the other way around (one was double and the other long), which resulted in averages in the trillions... and I feel doubly :wink: bad, as I previously stated that there is no strict integer vs. double typing. I was confusing myself with the types Kibana maps onto the ES types.

I would definitely say the "double" mapping would do the trick and is almost certainly why you are seeing averages for today.
What is strange is that you did still see percentage values for yesterday when it was mapped as long (possibly because Discover shows the original _source values, while the aggregations use the indexed, truncated ones).
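
If you want to see the truncation in isolation, here's a quick test against a throwaway index (the index, type and field names here are just examples):

curl -XPUT 'http://localhost:9200/trunc-test' -d '{
  "mappings": { "doc": { "properties": { "pct": { "type": "long" } } } }
}'
curl -XPUT 'http://localhost:9200/trunc-test/doc/1?refresh=true' -d '{ "pct": 0.73 }'
curl -XPOST 'http://localhost:9200/trunc-test/_search?pretty' -d '{
  "size": 0,
  "aggs": { "avg_pct": { "avg": { "field": "pct" } } }
}'

The average comes back as 0 because the long mapping truncates 0.73 to 0 at index time, even though the original 0.73 is still stored in _source, which is what Discover displays.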

Maybe I should just keep quiet before I put my foot in my mouth again.

Topbeat ships with a topbeat.template.json file that you must load as an index template into your Elasticsearch cluster.

https://www.elastic.co/guide/en/beats/topbeat/current/topbeat-template.html
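
Per that page, loading it is roughly (assuming Elasticsearch on localhost:9200 and the template file in the current directory; the exact path depends on your install):

curl -XPUT 'http://localhost:9200/_template/topbeat' -d@topbeat.template.json

Bear in mind the template only affects indices created after it's loaded, so existing daily indices keep their long mapping until they roll over or are deleted and re-indexed. And since you're shipping through Logstash into logstash-* indices, double-check that the template's index pattern actually matches them - if I remember correctly, it targets Topbeat's own indices by default.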