Reindex documents matching a query in order to take a snapshot

Hello,

I am currently looking at making a snapshot of the Active Directory user logon events that I have been gathering with Winlogbeat.

I don't want to snapshot the whole winlogbeat index, just these specific events.

I did my research and understood that Elasticsearch cannot snapshot a subset of documents in an index; it can only snapshot a whole index.

I found that a workaround to accomplish my goal could be to reindex my winlogbeat index with a query into a new temporary index -> snapshot that temporary index -> delete that index.

However, I don't really understand how the syntax works for the Reindex API that should help me do this. Here's what I found in the docs:

POST _reindex
{
  "source": {
    "index": "twitter",
    "query": {
      "term": {
        "user": "kimchy"
      }
    }
  },
  "dest": {
    "index": "new_twitter"
  }
}

For example, in Kibana I built the specific query for the events that I would like to snapshot, using fields like "user.name", "event.code" or "host.hostname".

Is there a way I could tell Elasticsearch to reindex my winlogbeat index using that query I made and saved? If not, is there a way I could convert that query to the format that the reindex API expects?

Thank you for your time.

Hey,

a couple of things to consider, to understand the problem better.

First, reindexing is not a cheap operation, which means it might be easier to accommodate more disk space for a snapshot of the whole winlogbeat index than to have the cluster do all the additional work of reindexing, snapshotting and then deleting the index (which also means you need more space in your cluster).

Second, if you go for reindexing into a new index, can you state what the exact problem is? The first thing you should do is not play around with the reindex API, but come up with a query that matches the documents you want to transfer, and then put that query into the reindex endpoint, plus specify the correct indices. This means you need to read a bit about the different kinds of queries Elasticsearch supports, see https://www.elastic.co/guide/en/elasticsearch/reference/7.6/query-dsl.html - a good place to start is probably the match query.
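
For illustration, such a reindex call could look roughly like the sketch below; the index names, field names and values ("winlogbeat-*", "ad-users-logins", event.code 4624, "jdoe") are only placeholders to show the shape of the request:

# index names and field values below are placeholders, adapt them to your data
POST _reindex
{
  "source": {
    "index": "winlogbeat-*",
    "query": {
      "bool": {
        "filter": [
          { "term": { "event.code": "4624" } },
          { "term": { "user.name": "jdoe" } }
        ]
      }
    }
  },
  "dest": {
    "index": "ad-users-logins"
  }
}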

Regarding automation, you would need to trigger this manually, then wait until it has finished, and then trigger the snapshot.
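
If the reindex takes a while, one option (a sketch, not a full recipe; index names and the task id are placeholders) is to run it asynchronously and poll the task until it has finished before triggering the snapshot:

# run the reindex in the background; the response contains a task id
POST _reindex?wait_for_completion=false
{
  "source": { "index": "winlogbeat-*", "query": { "term": { "event.code": "4624" } } },
  "dest": { "index": "ad-users-logins" }
}

# check progress, replacing the id with the one returned above
GET _tasks/oTUltX4IQMOUUVeiohTt8A:12345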

However, there is a tool to help you with taking snapshots automatically, called Snapshot Lifecycle Management, see https://www.elastic.co/guide/en/elasticsearch/reference/7.6/snapshot-lifecycle-management-api.html (there is also a UI for this in Kibana).
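
As a rough sketch of what such a policy could look like (the policy name, repository, schedule, index pattern and retention below are placeholders you would adapt):

# repository, schedule and index pattern are placeholders
PUT _slm/policy/daily-ad-logins
{
  "schedule": "0 30 1 * * ?",
  "name": "<ad-logins-{now/d}>",
  "repository": "my_repository",
  "config": {
    "indices": ["ad-users-logins-*"]
  },
  "retention": {
    "expire_after": "365d"
  }
}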

Hope that helps as a start.

Hi Alexander,

Thank you for your reply.

What you're saying about the load induced by reindexing operations is something I hadn't taken into consideration before.

The reason I want to reindex is that my end goal is to store one year of user login information in order to be compliant with ISO 27001. Because of that large time span, I can't afford daily snapshots that are too big, otherwise I won't have the storage capacity for one year of logs.

I already built a query in Kibana with KQL that matches the documents I want to snapshot. I was hoping there was a built-in way to use that KQL query for the reindex operation, but apparently I have to redo that query in Query DSL.
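
For example (the field values here are just placeholders), a KQL filter such as:

event.code: "4624" and user.name: "jdoe"

could be rewritten in Query DSL along these lines, although the exact query types (term vs. match) depend on the field mappings:

{
  "bool": {
    "filter": [
      { "term": { "event.code": "4624" } },
      { "term": { "user.name": "jdoe" } }
    ]
  }
}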

It might not be a problem right now if I can't automate the operation.

Snapshot Lifecycle Management is actually what I was planning to use. I haven't properly mastered that tool yet, but I was hoping I could create a daily snapshot job that would follow an index pattern matching the temporary index I was going to create.

That all makes sense to me. And yes, you will have to convert the KQL query to the Query DSL.

One last thing: if you do not want to store that data in your live cluster, you can run reindex-from-remote from another cluster to store the smaller index in that cluster, and then snapshot it. But again, start with the easiest solution and go from there!
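
A reindex-from-remote sketch, run on the second cluster and pulling from the live one (host, credentials, index names and the query are placeholders; the remote host also has to be allowed via reindex.remote.whitelist in the second cluster's elasticsearch.yml):

# run on the second cluster; host, credentials and index names are placeholders
POST _reindex
{
  "source": {
    "remote": {
      "host": "https://live-cluster:9200",
      "username": "reindex_user",
      "password": "changeme"
    },
    "index": "winlogbeat-*",
    "query": {
      "term": { "event.code": "4624" }
    }
  },
  "dest": {
    "index": "ad-users-logins"
  }
}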

I was considering spinning up another small Elasticsearch node for remote cluster monitoring, as Elastic recommends. It would be the occasion to also use that node for remote reindexing.

First I will try to build my query in DSL, then do a reindex and a snapshot to evaluate the volume of data I get. If this happens to be too complicated, I will consider using a processor in my winlogbeat.yml config file to filter the events I want and index them into a different index directly. That way I wouldn't have to go through reindexing.

Thank you for your input. I will come back here with my final solution.


Hello,

In the end I decided not to go with reindexing, but instead to put all the events I wanted into a separate index that I could then snapshot.

As I said, my goal was to get all events related to user logins, which are 4624 (logon success), 4625 (logon failure) and 4634 (logoff).

As far as I know, in winlogbeat.yml, whenever you want to process events you can only rely on the "winlog.event_data" fields, because the other fields you see in Kibana don't exist yet at the moment Winlogbeat harvests the log. Only these winlog.event_data fields exist and can be used to filter at the source.

Fortunately, my events 4624, 4625 and 4634 have a winlog.event_data.LogonType field that I used to route them into my index.

output.elasticsearch:
  hosts: ["https://host1:9200","https://host2:9200","https://host3:9200"]
  protocol: "https"
  username: "winlogbeat_agent"
  password: "x"
  ssl.certificate_authorities: ['x']
  ssl.certificate: 'x'
  ssl.key: 'x'
  indices:
    - index: "ad-users-logs-%{+yyyy.MM.dd}"
      when.and:
        - or:
          - equals.winlog.event_data.LogonType: '2'
          - equals.winlog.event_data.LogonType: '3'
          - equals.winlog.event_data.LogonType: '7'
          - equals.winlog.event_data.LogonType: '8'
          - equals.winlog.event_data.LogonType: '10'
          - equals.winlog.event_data.LogonType: '11'
        - not.regexp.winlog.event_data.TargetUserName: '^.*\$' # remove users whose login ends with $ (these correspond to AD computer accounts)
        - not.has_fields: ['winlog.event_data.GroupMembership'] # events of type 4637 have a LogonType parameter but I don't want them; fortunately they have a GroupMembership parameter I can use to filter them out

Voilà.

Thank you for your help spinscale; in the end I followed your advice and used the simplest solution :)
