How can I send the backup start and end time to an Azure cloud storage table?

(Yaswanth) #1


For backup and restore I am using Curator 4.2.6 and sending all the snapshots to an Azure Blob Storage account through the Azure Cloud plugin 2.4.1, but what I need is:

  1. I need to send the snapshot start time and end time to Azure Table Storage. That way I can see whether the snapshot succeeded (i.e. if an end time is present, it succeeded; otherwise it did not).

Is this possible in Elasticsearch?


(Aaron Mildenstein) #2

No, but it might be possible in Logstash.

As a simple, high-level example:

  1. Make Curator log in JSON by setting logformat: json in the client YAML configuration file.
  2. Read this file with Logstash.
  3. Capture only the begin/end events by matching text with grok.
  4. Take the timestamps and send them to some other service which can put them into your cloud storage table.
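For step 1, the relevant part of the Curator 4.x client configuration might look like this. This is only a sketch; the log file path is an example, not a requirement:

```yaml
# curator.yml (client configuration) -- logging section only.
# The logfile path below is a placeholder; adjust to your environment.
logging:
  loglevel: INFO
  logfile: /var/log/curator/curator.log
  logformat: json
```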

For step 4, you could write your own Logstash output plugin, if you have the time and resources. Otherwise, you can output to a file, or TCP, or the http output plugin, or something like that, and get the output where it needs to go.
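Steps 2 through 4 could be sketched in a Logstash pipeline roughly like this. Everything here is an assumption to be verified against your own logs: the file path, the "snapshot" match text, and the receiver URL are all placeholders:

```
# logstash.conf -- sketch only; paths, patterns, and URLs are assumptions
input {
  file {
    path => "/var/log/curator/curator.log"
    codec => "json"            # Curator is already emitting JSON lines
  }
}

filter {
  # Keep only snapshot-related events; the message text is a guess --
  # check what your Curator log lines actually say.
  grok {
    match => { "message" => "%{GREEDYDATA}snapshot%{GREEDYDATA}" }
    tag_on_failure => ["_not_snapshot"]
  }
  if "_not_snapshot" in [tags] {
    drop { }
  }
}

output {
  # Step 4: forward the event to whatever bridges into Azure Table Storage
  http {
    url => "http://localhost:8080/curator-events"   # hypothetical receiver
    http_method => "post"
    format => "json"
  }
}
```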

(Yaswanth) #3


To send the Curator logs to Logstash automatically, I have to enable Filebeat, right?

Also, to send this file to Azure, do I have to do that manually every time, or is there another way to do it automatically?

My thought:

Normally the Curator snapshot start time and end time are stored in Azure Blob Storage. Is it possible to copy them to Azure Table Storage?

Correct me if I am wrong.


(Aaron Mildenstein) #4

Filebeat is a great way to do that, yes. Be sure to indicate in the config that the data is already in JSON.
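A Filebeat prospector configured for already-JSON data might look like this (a sketch for the Filebeat 5.x config layout; paths and hosts are placeholders):

```yaml
# filebeat.yml -- sketch; paths and hosts are placeholders
filebeat.prospectors:
  - input_type: log
    paths:
      - /var/log/curator/curator.log
    # The data is already JSON, so decode it at the source
    json.keys_under_root: true
    json.add_error_key: true

output.logstash:
  hosts: ["localhost:5044"]
```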

I personally don't know how to input data into an Azure cloud storage table. Logstash can write to a file, send data over plain TCP, or send data via HTTP, among many other output formats. If you can find a way to use any of these to send to an Azure cloud storage table, then great. Otherwise, you'll have to find your own way to extend Logstash, or read from a file, or listen on TCP and get the information over there yourself.
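One possible last-mile approach, offered only as an untested sketch: the Azure Table service REST API accepts an entity insert as an HTTP POST to the table resource, and a SAS token in the query string can stand in for header-based authentication. With the Logstash http output that might look like:

```
output {
  http {
    # Hypothetical account, table name, and SAS token
    url => "https://myaccount.table.core.windows.net/CuratorSnapshots?sv=...&sig=..."
    http_method => "post"
    format => "json"
    headers => {
      "Accept" => "application/json;odata=nometadata"
    }
  }
}
```

Note that Table Storage entities require PartitionKey and RowKey properties, so a mutate filter would need to add those fields to each event first; verify the details against the Azure Table service REST documentation.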

Are you referring to the metadata Elasticsearch writes out to the repository? Curator doesn't write that. Curator talks to Elasticsearch via API, and Elasticsearch does anything else. As with the other statements here, you'll have to figure out how to get that data "the last mile" into your storage table on your own.

(system) #5

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.