> Abstractly, this gives us a mapping

When we talk about a "mapping" on this forum, it generally means the type definitions of the fields in the actual documents in Elasticsearch. I'm not clear on whether you have a participant ID in your mapping, or the number of participants.

Sounds to me like you are looking for trends in the number of participants in meetings. Your documents presumably look something like this:

```
{
  "@date": <some timestamp>,
  "meetingId": "meeting-12345",
  "participantId": "participant-98765",
  # maybe some other fields
}
```

or

```
{
  "@date": <some timestamp>,
  "meetingId": "meeting-12345",
  "numParticipants": 20,
  # maybe some other fields
}
```

## If you have a document for each meetingID / participantID combination:

We can get each meeting and the number of unique participants in it, then compare visually to see how common each participant count is. Beyond that, I don't see a way for a raw query to count the meetings at each participant count and aggregate on that metric.

NOTE: I spent a bit of time playing with various aggregations in Elasticsearch to see what a raw query is capable of. If you look more into the documentation on the pipeline aggregation options, you might be able to find more drill-in than I was able to get. This query can provide some stats that you might find interesting:

```
{
  "size": 0,
  "aggs": {
    "meetings": {
      "terms": {
        "field": "meetingId",
        "size": 100
      },
      "aggs": {
        "num_participants": {
          "cardinality": {
            "field": "participantId"
          }
        }
      }
    },
    "participant_stats": {
      "stats_bucket": {
        "buckets_path": "meetings>num_participants"
      }
    }
  }
}
```

That will give you stats on the number of meetings and the min/max/average number of participants in each meeting. Not a histogram, but some high-level metrics. Example result:

```
"aggregations": {
  "meetings": {
    "doc_count_error_upper_bound": 0,
    "sum_other_doc_count": 0,
    "buckets": [
      {
        "key": "meeting-iuh5js77h3",
        "doc_count": 29,
        "num_participants": {
          "value": 12
        }
      },
      {
        "key": "meeting-s6k67hl5sj",
        "doc_count": 28,
        "num_participants": {
          "value": 10
        }
      },
      {
        "key": "meeting-45i6jhhls1",
        "doc_count": 14,
        "num_participants": {
          "value": 9
        }
      },
      {
        "key": "meeting-sh58f6gihj",
        "doc_count": 13,
        "num_participants": {
          "value": 8
        }
      },
      {
        "key": "meeting-4dglkd98ss",
        "doc_count": 10,
        "num_participants": {
          "value": 8
        }
      },
      {
        "key": "meeting-g4sdfjldsj",
        "doc_count": 6,
        "num_participants": {
          "value": 4
        }
      }
    ]
  },
  "participant_stats": {
    "count": 6,
    "min": 4,
    "max": 12,
    "avg": 8.5,
    "sum": 51
  }
}
```
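If you do want something closer to a histogram from that query, one option is to post-process the buckets on the client side. A minimal sketch in Python (the response shape mirrors the result above, but the meeting keys and counts here are made up for illustration):

```python
from collections import Counter

# Hypothetical response body, shaped like the aggregation result above.
response = {
    "aggregations": {
        "meetings": {
            "buckets": [
                {"key": "meeting-a", "num_participants": {"value": 12}},
                {"key": "meeting-b", "num_participants": {"value": 10}},
                {"key": "meeting-c", "num_participants": {"value": 10}},
                {"key": "meeting-d", "num_participants": {"value": 4}},
            ]
        }
    }
}

# Count how many meetings had each number of participants.
histogram = Counter(
    bucket["num_participants"]["value"]
    for bucket in response["aggregations"]["meetings"]["buckets"]
)

print(dict(histogram))  # e.g. {12: 1, 10: 2, 4: 1}
```

Note that with a `terms` aggregation you only get the top N meetings (100 in my query), so a client-side histogram is only exact if the meeting count fits under that limit.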

## If you have `numParticipants` in your mapping:

You can make a bar chart where the x-axis is a spread of the `numParticipants` field. To have the X-Axis numbers appear in numerical order, order the X-Axis by a single-value metric of `numParticipants`, which I set as a max aggregation.

I had 100 meetings in my test data. Adding up the number of meetings at each participant count (all of my Y values) gets me back to 100.
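As a raw-query alternative to the chart, a histogram aggregation can bucket meetings directly by participant count (a sketch, assuming `numParticipants` is mapped as a numeric type):

```
{
  "size": 0,
  "aggs": {
    "participant_histogram": {
      "histogram": {
        "field": "numParticipants",
        "interval": 1
      }
    }
  }
}
```

Each bucket's `doc_count` is then the number of meetings with that many participants.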

If you don't have `numParticipants` as a field in your data, you always have the option of pre-processing that calculation at index time.
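One way to do that pre-processing is to roll up the raw participant events before indexing, emitting one summary document per meeting. A minimal client-side sketch in Python (the event list and field names follow the example documents above, but the values are made up):

```python
from collections import defaultdict

# Hypothetical raw events, one per meeting/participant join.
events = [
    {"meetingId": "meeting-12345", "participantId": "participant-98765"},
    {"meetingId": "meeting-12345", "participantId": "participant-11111"},
    {"meetingId": "meeting-12345", "participantId": "participant-98765"},  # rejoin, should not double-count
    {"meetingId": "meeting-67890", "participantId": "participant-22222"},
]

# Collect the unique participants seen in each meeting.
participants = defaultdict(set)
for event in events:
    participants[event["meetingId"]].add(event["participantId"])

# One summary document per meeting, ready to index with numParticipants set.
meeting_docs = [
    {"meetingId": meeting_id, "numParticipants": len(ids)}
    for meeting_id, ids in participants.items()
]
```

With documents shaped like `meeting_docs`, the bar chart (or the histogram aggregation) works directly on `numParticipants`.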