Reading existing indexes not created by beats/agents

I'm circling back to a past issue after getting pulled in another direction. Long story short, what I'm trying to do is get Elastic Security to read existing Elasticsearch indexes that were not created by Beats or any Elastic Security agent, but contain the same data.

We have data sent to our Elasticsearch server, which actually uses a Graylog front end over Logstash for various reasons. The data from various Windows hosts, plus other data collected and forwarded by syslog, is already present on the Elasticsearch server, already indexed, and I can see it within Kibana. It was not sent by Beats or any agent, just syslog.

So the need here is to get Elastic Security to read these existing indexes, without using any agent, and without duplicating the data into another index when it's already there.

Is there any possible way to do this so I don't have to replicate data using redundant processes, and can simply tell it to read the existing indexes?

I have already added the index pattern to the Kibana advanced settings. This didn't seem to change anything; all the Elastic Security menu options are blank and appear to have no data.
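(The setting in question, assuming a 7.x stack, is securitySolution:defaultIndex under Stack Management > Advanced Settings; it lists which index patterns the Security app reads. The value looks roughly like this, where my-syslog-* is a placeholder for the real pattern name:)

  securitySolution:defaultIndex: auditbeat-*, filebeat-*, logs-*, packetbeat-*, winlogbeat-*, my-syslog-*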

Are the fields on those indices mapped according to the Elastic Common Schema?

For example, for a field containing a source IP, you need your index to have a source.ip field, mapped with the ip type.

Also, some parts of the Security app in Kibana need the categorization fields; you need event.type, event.action, event.outcome, etc. in your documents.

On my cluster I have only custom indexes; nothing is created by Beats or the Elastic Agent, but the field names and mappings follow ECS and I have some categorization, and this way I was able to populate the Security visualizations.
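As a rough illustration (not my actual mapping), an ECS-style mapping for such an index could look like this in the Kibana Dev Tools console, with my-ecs-windows-logs as a placeholder index name:

  PUT my-ecs-windows-logs
  {
    "mappings": {
      "properties": {
        "@timestamp": { "type": "date" },
        "message":    { "type": "text" },
        "host":       { "properties": { "name": { "type": "keyword" } } },
        "source":     { "properties": { "ip":   { "type": "ip" } } },
        "event": {
          "properties": {
            "kind":     { "type": "keyword" },
            "category": { "type": "keyword" },
            "type":     { "type": "keyword" },
            "action":   { "type": "keyword" },
            "outcome":  { "type": "keyword" }
          }
        }
      }
    }
  }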

Likely not, as they appear as they did natively... so I guess the question is how I could convert this?

For example, here is a log from a Windows host, taken from one of those indexes and viewed by selecting the right index pattern and search term within Kibana:
message:servername hostname Jan 21 10:38:55 hostname mswindows Oct 5 10:08:06 servername.domainname.com MSWinEventLog#0111#011Security#011557017#011Tue Oct 5 10:08:06 2021#0114672#011Microsoft-Windows-Security-Auditing#011N/A#011N/A#011Success Audit#011servername.domainname.com#011Special Logon#011#011Special privileges assigned to new logon. Subject: Security ID: S-1-5-18 Account Name: COMPUTER$ Account Domain: DOMAIN Logon ID: 0x7BD1D937 Privileges: SeSecurityPrivilege SeBackupPrivilege SeRestorePrivilege SeTakeOwnershipPrivilege SeDebugPrivilege SeSystemEnvironmentPrivilege SeLoadDriverPrivilege SeImpersonatePrivilege SeDelegateSessionUserImpersonatePrivilege SeEnableDelegationPrivilege#0114363968 full_message:<155>Jan 21 10:38:58 server hostname Jan 21 10:38:55 hostname mswindows Oct 5 10:08:06 servername.domainname.com

Any ideas on how to best convert this?

Does anyone have a reference on how to convert or transform that data? What tooling would do this, creating the indexes in ECS format and adding the fields?

Hey,

I would recommend going index by index (or even dataset by dataset) and performing a reindex.
This can be done by configuring a Logstash pipeline with an Elasticsearch input and output.

You would use the input to query specific documents, the filter section in Logstash can then be used to mutate/convert the fields to ECS, and the output will send them to a new index.
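Here is a minimal sketch of such a pipeline, assuming the cluster is reachable at localhost:9200; the index names old-syslog-windows-* and winlog-ecs-reindexed are placeholders, and the filter only hints at the ECS conversion (a real pipeline would first need a grok or dissect filter to split the tab-delimited "#011" Snare/MSWinEventLog fields out of the raw message):

  input {
    elasticsearch {
      hosts => ["http://localhost:9200"]
      index => "old-syslog-windows-*"
      query => '{ "query": { "match_all": {} } }'
    }
  }

  filter {
    mutate {
      # Keep the raw message and add example ECS categorization values
      copy      => { "message" => "[event][original]" }
      add_field => {
        "[event][kind]"     => "event"
        "[event][module]"   => "windows"
        "[event][category]" => "authentication"
      }
    }
  }

  output {
    elasticsearch {
      hosts => ["http://localhost:9200"]
      index => "winlog-ecs-reindexed"
    }
  }

The new index pattern (winlog-ecs-* here) would then also need to be added to the Security advanced setting so the app can read it.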

It's a long process, but it will work best in my opinion.
