Purge Elasticsearch database automatically

Hi,

Please, can someone tell me how to purge the Elasticsearch database automatically every week?

Elasticsearch cannot do that automatically on its own just yet. You need to schedule an external job that does this for you. One tool that helps with writing this job is Curator.
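For example, a weekly cron entry is a common way to schedule such a job. This is only a sketch: the paths to the Curator binary, its config file, and the action file are assumptions, so adjust them to your install.

```shell
# Illustrative crontab entry (edit with `crontab -e`): run Curator every Sunday at 02:00.
# Paths below are assumptions; point them at your actual curator binary and YAML files.
0 2 * * 0 /usr/local/bin/curator --config /etc/curator/curator.yml /etc/curator/action.yml >> /var/log/curator.log 2>&1
```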

This article is also a good one, showing how to run it on AWS: https://www.elastic.co/blog/serverless-elasticsearch-curator-on-aws-lambda

I have a problem. I was sending logs from a server to my server using Filebeat. Once I set up Curator and ran the command `curator --config .....`, the other server stopped sending the logs. Can you help me solve this problem?

Did you execute it with an action.yml configuration? If so, attach that file here (if you are going to paste its content, make sure you put it between ```).

# Remember, leave a key empty if there is no value. None will be a string,
# not a Python "NoneType"
# Also remember that all examples have 'disable_action' set to True. If you
# want to use this action as a template, be sure to set this to False after
# copying it.
actions:
1:
action: delete_indices
description: >-
Delete indices older than 7 days (based on index name), for logstash-
prefixed indices. Ignore the error if the filter does not result in an
actionable list of indices (ignore_empty_list) and exit cleanly.
options:
ignore_empty_list: True
timeout_override:
continue_if_exception: False
disable_action: True
filters:
- filtertype: pattern
kind: prefix
value: logstash-
exclude:
- filtertype: age
source: name
direction: older
timestring: '%Y.%m.%d'
unit: days
unit_count: 7
exclude:

It is very difficult to read YAML like this. Please post it between a pair of ``` (that is, three backticks).

```
# Remember, leave a key empty if there is no value.  None will be a string,
# not a Python "NoneType"
#
# Also remember that all examples have 'disable_action' set to True.  If you
# want to use this action as a template, be sure to set this to False after
# copying it.
actions:
  1:
    action: delete_indices
    description: >-
      Delete indices older than 7 days (based on index name), for logstash-
      prefixed indices. Ignore the error if the filter does not result in an
      actionable list of indices (ignore_empty_list) and exit cleanly.
    options:
      ignore_empty_list: True
      timeout_override:
      continue_if_exception: False
      disable_action: True
    filters:
    - filtertype: pattern
      kind: prefix
      value: logstash-
      exclude:
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y.%m.%d'
      unit: days
      unit_count: 7
      exclude:
```

The problem is, it changed my Logstash; I can't start it as a service anymore! I don't know why, everything changed.

Everything was working before. I just installed Curator and renamed my conf file in Logstash, and everything went wrong...

I put this in preformatted tags, as the triple backticks were not cutting it for some reason.

This action file won't work because (as the comment at the top says) you have `disable_action: True`, which will prevent it from even running.

Also, this file does not need the comments, and it would be helpful if it started with `---`, which indicates YAML.
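Once `disable_action` is set to `False`, it can also help to validate the action file with Curator's `--dry-run` flag before letting it delete anything for real; Curator then only logs what it would have done. The config paths here are assumptions, so substitute your own:

```shell
# Simulate the action file without actually deleting any indices.
# Paths are illustrative; use the locations of your own Curator YAML files.
curator --config /etc/curator/curator.yml --dry-run /etc/curator/action.yml
```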

I uninstalled Curator, and my Logstash is still blocked. It is not listening and I have Java errors!

What are the errors?

You have done something else. Curator is not the cause of this. In your other post, you mention that you changed your logstash conf file name. This is probably what caused this, since it's not possible for Curator to have done it.

```
May 15 11:36:01 frghcslnetv10 systemd[1]: Starting logstash...
May 15 11:36:01 frghcslnetv10 polkitd[705]: Unregistered Authentication Agent for unix-process:2635:110069211 (system bus name :1.4650, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, loca
May 15 11:36:27 frghcslnetv10 systemd-journal[503]: Suppressed 12 messages from /system.slice/logstash.service
May 15 11:36:27 frghcslnetv10 logstash[2641]: Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
May 15 11:36:28 frghcslnetv10 logstash[2641]: 2018-05-15 11:36:28,335 main ERROR RollingFileManager (/var/log/logstash/logstash-plain.log) java.io.FileNotFoundException: /var/log/logstash/logstash-plai
May 15 11:36:28 frghcslnetv10 logstash[2641]: at java.io.FileOutputStream.open0(Native Method)
May 15 11:36:28 frghcslnetv10 logstash[2641]: at java.io.FileOutputStream.open(Unknown Source)
May 15 11:36:28 frghcslnetv10 logstash[2641]: at java.io.FileOutputStream.<init>(Unknown Source)
May 15 11:36:28 frghcslnetv10 logstash[2641]: at java.io.FileOutputStream.<init>(Unknown Source)
```

```
[2018-05-15T11:51:29,521][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2018-05-15T11:51:29,522][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
[2018-05-15T11:51:29,541][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2018-05-15T11:51:29,568][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2018-05-15T11:51:29,568][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2018-05-15T11:51:29,569][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2018-05-15T11:51:29,570][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2018-05-15T11:51:29,586][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost:9200"]}
[2018-05-15T11:51:29,970][INFO ][logstash.inputs.beats    ] Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
[2018-05-15T11:51:29,977][INFO ][org.logstash.beats.Server] Starting server on port: 5044
[2018-05-15T11:51:29,979][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x6bcbfd4c@/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:247 sleep>"}
[2018-05-15T11:51:29,980][INFO ][logstash.agent           ] Pipelines running {:count=>1, :pipelines=>["main"]}
[2018-05-15T11:52:02,301][FATAL][logstash.runner          ] An unexpected error occurred! {:error=>#<SystemCallError: Unknown error (SystemCallError) - <STDOUT>>, :backtrace=>["org/jruby/RubyIO.java:1457:in `write'", "org/jruby/RubyIO.java:1428:in `write'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-stdout-3.1.4/lib/logstash/outputs/stdout.rb:43:in `block in multi_receive_encoded'", "org/jruby/RubyArray.java:1734:in `each'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-stdout-3.1.4/lib/logstash/outputs/stdout.rb:42:in `multi_receive_encoded'", "/usr/share/logstash/logstash-core/lib/logstash/outputs/base.rb:90:in `multi_receive'", "/usr/share/logstash/logstash-core/lib/logstash/output_delegator_strategies/single.rb:15:in `block in multi_receive'", "org/jruby/ext/thread/Mutex.java:148:in `synchronize'", "/usr/share/logstash/logstash-core/lib/logstash/output_delegator_strategies/single.rb:14:in `multi_receive'", "/usr/share/logstash/logstash-core/lib/logstash/output_delegator.rb:49:in `multi_receive'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:477:in `block in output_batch'", "org/jruby/RubyHash.java:1343:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:476:in `output_batch'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:428:in `worker_loop'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:386:in `block in start_workers'"]}
[2018-05-15T11:52:02,525][ERROR][org.logstash.Logstash    ] java.lang.IllegalStateException: org.jruby.exceptions.RaiseException: (SystemExit) exit
```

Please always remember to format your logs, configs, etc. I kindly ask you to edit or repost using preformatted tags or triple backticks.

Sorry :blush: I edited it.

It seems that Logstash cannot write to /var/log/logstash anymore. Does it still exist?

Yes, it still exists!

The error seems to be coming from a stdout output plugin. If there is one, try removing it.
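If you are not sure where that output is defined, something like this can locate it. The path is an assumption based on the default pipeline config directory of a package (DEB/RPM) install:

```shell
# Search the pipeline config directory for any stdout output definitions.
# /etc/logstash/conf.d/ is the default on package installs; adjust if yours differs.
grep -rn "stdout" /etc/logstash/conf.d/
```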

I thought the same, so I commented it out, but I still have the same errors. Logstash can't start as a service; it only starts with `config.reload.automatic`!

It seems it is not able to write to /var/log/logstash/logstash-plain.log. Check file and directory ownership (it should be owned by the logstash user).
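A sketch of that check and fix, assuming a package install where the service runs as the `logstash` user (a common cause is the directory ending up owned by root after Logstash was once run manually with sudo):

```shell
# Inspect who owns the log directory and file
ls -ld /var/log/logstash /var/log/logstash/logstash-plain.log

# If they are owned by root, hand them back to the logstash user
# and restart the service
sudo chown -R logstash:logstash /var/log/logstash
sudo systemctl restart logstash
```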