I have a unique requirement and am trying a few new things. My main objective is to display scan data from Tenable Nessus (showing the total count of vulnerabilities found: Critical, High and Medium) on a Kibana dashboard after importing the .CSV report exported from Nessus. I was able to load the Nessus .csv scan report into Kibana and display it properly on a Kibana dashboard. The next part is the challenging one.
For example:
I imported February's scan data from one .csv file (say "NessusReportFile1.csv") into Kibana, created a new index (say "cscs_nessus-scan"), and successfully imported all the .CSV fields and data into that index. I then created a dashboard based on the "cscs_nessus-scan" index. That works perfectly.
Now I want to push the March scan report (another .CSV, say "NessusReportFile2.csv") into the same Kibana dashboard and the existing "cscs_nessus-scan" index. That is, I want to append the new data to the existing index and dashboard rather than create a separate index and a separate dashboard for each month. But when I try to do that, I get an error saying the index already exists.
What's the solution? Is there any way to append new data to an existing index in Kibana?
Hi Praveen. Unfortunately, the Import data tool in Kibana cannot be used to append to an existing index. We have an open issue for this, but it appears that we currently don't have plans to allow appending to an existing index.
If you intend to keep using the Import data tool to upload CSVs, you'll have to create a new index each time. In that case, you can create a Data View that matches each index name (e.g. cscs_nessus-scan-* to match cscs_nessus-scan-feb, cscs_nessus-scan-mar, etc.).
Or you could use something like Logstash or a custom Python script using elasticsearch-py to append the data from the CSV file into the existing index.
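For the Python route, here is a minimal sketch of what appending a CSV to the existing index could look like. It assumes Elasticsearch is reachable at http://localhost:9200 (adjust the connection and any authentication for your cluster) and reuses the file and index names from this thread; it is a sketch, not a drop-in script.

```python
# Sketch: append rows from a Nessus CSV export to an existing index.
# Assumes the elasticsearch Python client is installed (pip install elasticsearch)
# and that the index "cscs_nessus-scan" already exists.
import csv
from elasticsearch import Elasticsearch, helpers

es = Elasticsearch("http://localhost:9200")  # add auth/TLS settings as needed

def csv_actions(path, index):
    """Yield one bulk action per CSV row; each row becomes one document."""
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            yield {"_index": index, "_source": row}

# Bulk-index the March report into the same index the February data lives in.
helpers.bulk(es, csv_actions("NessusReportFile2.csv", "cscs_nessus-scan"))
```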
Thank you for your response. So appending data to an existing index is currently a limitation. OK, that's fine; there's no option but to live with it until a future solution is available and you folks perhaps turn that open issue into a feature.
As for your suggestion of creating different/multiple indices, I had thought of the same thing, but it wouldn't be an efficient way of setting things up and configuring them.
How about creating just one dashboard whose graphs contain data from all the months, and simply changing the date range to see a specific month's data, instead of having a separate set of graphs for each month?
Well, I'll try #elastic-stack:logstash and give it a shot to do what I was looking for. Thanks for the suggestion.
Sorry, I didn't quite get you. What pipeline do you use to load the data? And what do you mean by normalizing the CSV data? I have CSV data in the form of an Excel sheet, and I'm feeding that sheet directly into Kibana.
And in your case, once you have normalized the CSV data and loaded it into Excel each day, are you then pushing that data into Kibana using the same existing index?
I have Python code which runs via a Linux cron job once a day.
It reads data from SharePoint (an Excel file) and loads it into an Elasticsearch index (appending to the existing index). Roughly:
read the Excel file and put it into a dataframe
fix some columns if needed
create a list of dictionaries
load it into the existing Elasticsearch index (and it will append automatically)
A rough sketch of that flow is shown below.
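Something like this, as a hedged sketch only: the file name, column fix, and cluster address are made-up placeholders, the SharePoint download step is left out, and the index name is the one from this thread.

```python
# Sketch: daily job that reads an Excel report, tidies it with pandas,
# and appends the rows to an existing Elasticsearch index.
# Assumes pandas, openpyxl and the elasticsearch client are installed.
import pandas as pd
from elasticsearch import Elasticsearch, helpers

es = Elasticsearch("http://localhost:9200")          # adjust for your cluster
INDEX = "cscs_nessus-scan"                           # existing index to append to

df = pd.read_excel("daily_report.xlsx")              # read the Excel file into a dataframe
df = df.rename(columns=lambda c: c.strip().lower())  # "fix some columns if needed"

docs = df.to_dict(orient="records")                  # list of dictionaries, one per row
actions = ({"_index": INDEX, "_source": doc} for doc in docs)
helpers.bulk(es, actions)                            # appends to the existing index
```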
Great to hear, @elasticforme. Thanks for clarifying.
That covers 99% of what I was actually looking for. One question: does the Python code remove unnecessary columns when they're not needed? The CSV we get from Nessus is loaded with far too much information, and we usually remove the columns we don't really need manually.
Also, is the Python code you have customized specifically for a few people, or can that piece of code be shared? I'd like to check whether I can use and leverage it to solve my issue.
Well, I don't know what you mean by "used by only a few people".
Python is just a language you write code in, like C++ or Bash.
My code is written for my type of data; you will have different code because your data is different.
You'll have to either learn Python or get help from people around you who know it. I'm not an expert either, but I use the internet to write what I need.
For example, if your Excel sheet has five columns (a through e) and a handful of rows:
a b c d e
1 2 3 4 5
6 7 8 9 1
2 3 4 5 6
7 8 9 1 2
3 4 5 6 7
I put that into a pandas dataframe, and it looks just like this:
df =
a b c d e
1 2 3 4 5
6 7 8 9 1
2 3 4 5 6
7 8 9 1 2
3 4 5 6 7
Now you can remove column 'e', if you want, with a single command.
You can do a lot of manipulation on this frame now.
Once done, I generally convert it to a list of dictionaries, like [{a:1, b:2, c:3, d:4}, {a:6, b:7, c:8, d:9}, ...],
and then load that into the index.
There is an elasticsearch Python module that you will have to use for that.
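Putting those steps together on the toy dataframe above, as a small sketch (the column names and values are just the example ones, and "my-index" is a placeholder):

```python
import pandas as pd

# the toy five-column dataframe from above
df = pd.DataFrame(
    [[1, 2, 3, 4, 5], [6, 7, 8, 9, 1], [2, 3, 4, 5, 6], [7, 8, 9, 1, 2], [3, 4, 5, 6, 7]],
    columns=["a", "b", "c", "d", "e"],
)

df = df.drop(columns=["e"])            # remove column 'e' with a single command
docs = df.to_dict(orient="records")    # [{'a': 1, 'b': 2, 'c': 3, 'd': 4}, {'a': 6, ...}, ...]

# docs can then be bulk-loaded into the existing index with the elasticsearch module,
# e.g. helpers.bulk(es, ({"_index": "my-index", "_source": d} for d in docs))
```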