Dashboard with Kibana

Hello, I have two CSV files. One file contains the names of applications, and the other file contains information about a specific application. Both files have the "Host" column in common. I have imported these two files into Elasticsearch, each in a separate index. I would like to be able to match these two files in such a way that I can filter my dashboard to display information about a specific application.
How can I achieve that?

Hi @Hanni,

Can you explain what you mean by match on the host column and what kind of view you're trying to build on top of these indices?

I want to be able to create a dashboard in which there will be visualizations for each application, but I don't want it to display all the visualizations. I would like to filter it by the name of each application. However, the application names are located in a different index.

Thanks for confirming @Hanni. I would recommend using something like an enrich policy to add the application values from the other index to the one with your metrics.
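For reference, an enrich policy is defined against the lookup index and names the field to match on plus the fields to copy across. A minimal sketch (the index, field, and policy names here are placeholders, adjust them to your data):

```
PUT /_enrich/policy/application-lookup
{
  "match": {
    "indices": "application-names",
    "match_field": "Host",
    "enrich_fields": ["application_name"]
  }
}
```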

Is it not possible to create an index pattern with both files?

If you don't need to connect the documents together, and both indices have the application attribute, you can create a data view across multiple indices using a pattern and then build your dashboard on top.
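As a sketch of that approach: in Kibana you can create the data view from Stack Management → Data Views using a comma-separated pattern, or via the Kibana data views API in recent versions. Assuming your two indices are named app and referentiel (placeholder names):

```
POST /api/data_views/data_view
{
  "data_view": {
    "title": "app,referentiel"
  }
}
```

Note this endpoint is served by Kibana, not Elasticsearch, so it is called against the Kibana host rather than from Dev Tools.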

I am not sure I understand what you are saying.
Let me put it in context.
On the one hand, I have CSV files that give us information about various applications: each CSV file contains the information about one given application, and so on.
On the other hand, I have another CSV file that serves as a repository containing the name of each application.
What the files have in common is the IP address of the application. I would like to be able to visualize each application, and to filter in the dashboard so that when we want information on an application, the dashboard is updated. To do this I need the application's name. That is why I was asking how to match the files containing the application information with the reference file.

Hi @Hanni,

Appreciate the additional details. Elasticsearch isn't like a relational database, where you join two tables together on a particular field, in this case the IP address. So it's not simply a case of matching the field between two indices.

You can use an enrich processor to match the IP address fields between the two indices, and then output the results into an enriched index with the application name value added to each respective document. Then you can build your dashboard on top of this enriched index.

I would recommend taking a look at the resources below on enrichment processing:

  1. Example: Enrich data based on exact values
  2. Set up an enrich processor
  3. How the enrich processor works

Can I have an example?

There's an example in the 1st link I've given above that walks through creation and execution of the enrich policy that you need. Is there something that you're unsure of in the example?

I had tried this, but I don't see the "Nom_Application" field added to my index:

PUT _enrich/policy/app-enri
{
  "match": {
    "indices": ["app", "referentiel"],
    "match_field": "ip",
    "enrich_fields": ["Nom_Application"]
  }
}

POST /_enrich/policy/app-enri/_execute

Thanks for confirming @Hanni. The _execute command for the enrichment policy creates the enrich index rather than adding the fields. So you have a couple of steps left to enrich your data into a new index:

  1. Create an ingest pipeline. The above example gives the API steps, but you can also use the UI as an alternative, which may be easier.

  2. Reindex your data into a new index using the ingest pipeline, which will make use of the policy you have created. The reindex documentation has an example.
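Putting those two steps together, here is a sketch of the API calls. The pipeline name, target field, and destination index are placeholders; the policy name matches the app-enri policy created earlier:

```
PUT /_ingest/pipeline/app-enrich-pipeline
{
  "processors": [
    {
      "enrich": {
        "policy_name": "app-enri",
        "field": "ip",
        "target_field": "application"
      }
    }
  ]
}

POST /_reindex
{
  "source": { "index": "app" },
  "dest": {
    "index": "app-enriched",
    "pipeline": "app-enrich-pipeline"
  }
}
```

After the reindex completes, each document in app-enriched should carry the matched Nom_Application value under the application field, and the dashboard can be built on that index.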

Give those additional steps a try and see if that gives you what you need.

Is it possible to automate this task, so that it is done automatically as soon as a new document is added to the index, or a new index is added?

For a new document being added to an existing index, the ingest pipeline will enrich each new document once you have set things up.
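One way to wire that up (a sketch, assuming the index is named app and the pipeline app-enrich-pipeline) is to set the pipeline as the index's default pipeline, so every newly indexed document passes through it without the client having to specify anything:

```
PUT /app/_settings
{
  "index.default_pipeline": "app-enrich-pipeline"
}
```

Keep in mind the enrich index itself is a snapshot: if the lookup data changes, the policy needs to be re-executed for new documents to pick up the updated values.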

By a new index, do you mean creating a new index with its own separate enrichment policy and ingest pipeline, or reusing the existing ones? You should be able to script the steps using the ingest APIs as per the documentation.

Hope that helps!