Possible ELK merges

Hello community,
I would like to know which of the two setups below is better for merging SQL data in ELK.

Also, I have hundreds of resources to aggregate: is it better to run aggregate or jdbc_streaming filters between each pair of inputs, or to do the merging once at the end, after all inputs have been processed?
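For reference, this is roughly what a jdbc_streaming enrichment between two inputs could look like. This is only a sketch: the connection settings, table, and field names (`mdl_course`, `courseid`, `cours_nom`) are hypothetical and would need to match your Moodle schema:

```
filter {
  jdbc_streaming {
    # Hypothetical driver and connection details
    jdbc_driver_library => "/usr/share/logstash/mysql-connector-java.jar"
    jdbc_driver_class => "com.mysql.cj.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://localhost:3306/moodle"
    jdbc_user => "logstash"
    jdbc_password => "secret"
    # Look up the course name for each event as it passes through
    statement => "SELECT fullname FROM mdl_course WHERE id = :course_id"
    parameters => { "course_id" => "courseid" }
    target => "cours_nom"
  }
}
```

Note that jdbc_streaming issues one lookup per event (with caching), so with hundreds of resources the per-event cost can add up compared to a single aggregation pass at the end.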
First option:

- pipeline.id: "2-pipeline-moodle"
  path.config: "/etc/logstash/conf.d/pipeline/test/2-pipeline-moodle/{11-utilisateur-input,12-utilisateur-cours-input,13-cours-nom-input,14-cours-nom-achevement-input,15-cours-critere-input,16-cours-critere-achevement-input,17-moodle-input-calendrier,18-moodle-input-notification-resultat,19-moodle-input-cours-module,20-moodle-input-cours-module-achevement,21-moodle-input-trax-module,24-filter,25-output}.conf"
  pipeline.workers: 1

Second option:

- pipeline.id: "3-pipeline-moodle"
  path.config: "/etc/logstash/conf.d/pipeline/test/3-pipeline-moodle/{11-utilisateur-input,12-utilisateur-cours-input,121-filter,13-cours-nom-input,14-cours-nom-achevement-input,15-cours-critere-input,151-filter,16-cours-critere-achevement-input,17-moodle-input-calendrier,18-moodle-input-notification-resultat,181-filter,19-moodle-input-cours-module,20-moodle-input-cours-module-achevement,21-moodle-input-trax-module,212-filter,25-output}.conf"
  pipeline.workers: 1
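For context, an interleaved filter file such as `121-filter` in the second option could hold an aggregate step right after the inputs it merges. A minimal sketch, with hypothetical field names (`userid`, `coursid`); note that the aggregate filter requires `pipeline.workers: 1`, which both options already set:

```
filter {
  aggregate {
    # Group events belonging to the same user (hypothetical task key)
    task_id => "%{userid}"
    code => "map['cours'] ||= []; map['cours'] << event.get('coursid')"
    # Emit the merged map once no more events arrive for this key
    push_map_as_event_on_timeout => true
    timeout => 120
  }
}
```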

Last question: since I am merging a set of about 100 tables into ELK, I end up with extreme cases, either more than a million hits when I do not aggregate, or millions of values in the resulting fields when I do. Since this dashboard service is aimed at users who are new to digital tools, would it be more sensible to concentrate only the data needed for a specific query into a dedicated index?
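If you go the dedicated-index route, one conditional output per use case keeps each dashboard query against a small, purpose-built index. A sketch with hypothetical type values and index names:

```
output {
  if [type] == "achevement" {
    elasticsearch {
      hosts => ["http://localhost:9200"]
      index => "moodle-achevement-%{+YYYY.MM}"
    }
  } else {
    elasticsearch {
      hosts => ["http://localhost:9200"]
      index => "moodle-brut-%{+YYYY.MM}"
    }
  }
}
```

This way the dashboards for non-technical users only ever touch the small pre-aggregated index, while the raw data stays available elsewhere for ad-hoc analysis.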
