Did you fix the initial managed index?
Sounds good... we are there now...
I did fix the initial write index; unfortunately, I am seeing an error on the index created today.
Initial index:
GET _cat/aliases/wazuh-alerts-4.x
wazuh-alerts-4.x wazuh-alerts-4.x-2021.12.16-000001 - - - true
I had updated the size limit to 30GB, and the index does seem to have stopped at 30GB.
Error:
java.lang.IllegalArgumentException: index.lifecycle.rollover_alias [wazuh-alerts-4.x] does not point to index [wazuh-alerts-4.x-2021.12.17]
at org.elasticsearch.xpack.core.ilm.WaitForRolloverReadyStep.evaluateCondition(WaitForRolloverReadyStep.java:114)
at org.elasticsearch.xpack.ilm.IndexLifecycleRunner.runPeriodicStep(IndexLifecycleRunner.java:174)
at org.elasticsearch.xpack.ilm.IndexLifecycleService.triggerPolicies(IndexLifecycleService.java:327)
at org.elasticsearch.xpack.ilm.IndexLifecycleService.triggered(IndexLifecycleService.java:265)
at org.elasticsearch.xpack.core.scheduler.SchedulerEngine.notifyListeners(SchedulerEngine.java:183)
at org.elasticsearch.xpack.core.scheduler.SchedulerEngine$ActiveSchedule.run(SchedulerEngine.java:216)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630)
at java.base/java.lang.Thread.run(Thread.java:832)
Index Settings for the current index:
{
  "index.blocks.read_only_allow_delete": "false",
  "index.priority": "1",
  "index.query.default_field": [
    "*"
  ],
  "index.write.wait_for_active_shards": "1",
  "index.lifecycle.name": "Wazuh",
  "index.lifecycle.rollover_alias": "wazuh-alerts-4.x",
  "index.routing.allocation.include._tier_preference": "data_content",
  "index.refresh_interval": "10s",
  "index.number_of_replicas": "1"
}
ILM policy definition:
{
  "policy": "Wazuh",
  "phase_definition": {
    "min_age": "0ms",
    "actions": {
      "rollover": {
        "max_size": "30gb",
        "max_age": "1d"
      }
    }
  },
  "version": 29,
  "modified_date_in_millis": 1639680376131
}
GET /wazuh-alerts-4.x-2021.12.17/_ilm/explain
{
"indices" : {
"wazuh-alerts-4.x-2021.12.17" : {
"index" : "wazuh-alerts-4.x-2021.12.17",
"managed" : true,
"policy" : "Wazuh",
"lifecycle_date_millis" : 1639699202712,
"age" : "7.86h",
"phase" : "hot",
"phase_time_millis" : 1639727300525,
"action" : "rollover",
"action_time_millis" : 1639699701569,
"step" : "check-rollover-ready",
"step_time_millis" : 1639727300525,
"is_auto_retryable_error" : true,
"failed_step_retry_count" : 23,
"phase_execution" : {
"policy" : "Wazuh",
"phase_definition" : {
"min_age" : "0ms",
"actions" : {
"rollover" : {
"max_size" : "30gb",
"max_age" : "1d"
}
}
},
"version" : 29,
"modified_date_in_millis" : 1639680376131
}
}
}
}
Please let me know if any other logs or screenshots would help in understanding the error.
I just noticed that the template for wazuh-alerts is not showing up in the Kibana Dev Tools console, while the other Wazuh templates are (they were created by Wazuh itself).
The difference is that ours is new and the others are legacy; does that make any difference?
New template is fine
The alias
wazuh-alerts-4.x
points to
wazuh-alerts-4.x-2021.12.17-00001
But you are still actually writing to
wazuh-alerts-4.x-2021.12.17
That is not correct, and it is why rollover is not working; it is exactly what the error says.
The alias must be written to, and that alias is then used to roll over the index.
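A quick way to check which concrete index the alias resolves to, and whether it is marked as the write index, is the alias API (index and alias names here match the ones in this thread):

```
GET _alias/wazuh-alerts-4.x
```

The index Filebeat writes through must show "is_write_index": true in the response; if no backing index does, the ILM rollover step fails with exactly the error above.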
Did you set the index name the way I showed you in filebeat.yml?
It does not look like it.
output.elasticsearch:
  hosts: ["HOSTS"]
  protocol: http
  username: "elastic"
  password: "PASWD"
  ssl.certificate_authorities:
    - /etc/filebeat/certs/root-ca.pem
  ssl.certificate: "/etc/filebeat/certs/filebeat.pem"
  ssl.key: "/etc/filebeat/certs/filebeat-key.pem"

setup.ilm.enabled: false

filebeat.modules:
  - module: wazuh
    alerts:
      enabled: true
    archives:
      enabled: false

index: 'wazuh-alerts-4.x'

logging.level: debug
logging.to_files: true
logging.files:
  path: /var/log/filebeat
  name: filebeat
  keepfiles: 7
  permissions: 0644
Here is my current filebeat file.
Nope...
Look carefully: I have index under the module; you have it at the top level.
The way it should work:
Filebeat writes to the alias.
The alias points to the real initial index you created, ending with 000001.
ILM watches that real index.
When rollover happens, ILM creates a new managed index ending ...000002 and points the write alias to that.
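To sanity-check that flow, rollover can also be evaluated by hand against the alias; a dry run reports which conditions match without creating anything (the conditions below mirror the Wazuh policy in this thread):

```
POST wazuh-alerts-4.x/_rollover?dry_run
{
  "conditions": {
    "max_size": "30gb",
    "max_age": "1d"
  }
}
```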
I tried and got the error below:
2021-12-17T10:08:25.284-0500 ERROR instance/beat.go:956 Exiting: error unpacking module config: error creating config from fileset wazuh/index: type 'string' is not supported on top level of config, only dictionary or list
filebeat.modules:
  - module: wazuh
    alerts:
      enabled: true
    archives:
      enabled: false
    index: 'wazuh-alerts-4.x'
Ahh yeah, sorry: you are using a module, not an input type.
Try the fixed version.
FIXED / CHANGED:
filebeat.modules:
  - module: wazuh
    alerts:
      enabled: true
      input.index: "wazuh-alerts-4.x"
    archives:
      enabled: false
You can also try putting it in the output (take it out of the module):
output.elasticsearch:
  hosts: ["XXXXXXXXXXXXXXX"]
  protocol: http
  username: "elastic"
  password: "DDDDDDDD"
  index: "wazuh-alerts-4.x"
Tried it both ways. The first (inside the module) unfortunately results in the same "does not point to index" error.
The second (under output) results in the one below:
2021-12-17T10:43:57.544-0500 INFO instance/beat.go:373 filebeat stopped.
2021-12-17T10:43:57.552-0500 ERROR instance/beat.go:956 Exiting: setup.template.name and setup.template.pattern have to be set if index name is modified
Ok, so let's use the second method. Now set:
setup.ilm.enabled: false
setup.template.enabled: false
(I will check input.index, but let's use the output for now; the input setting allows different index names for different modules.)
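Pulling those pieces together, the relevant parts of filebeat.yml for this second method would look roughly like the following sketch (the host value is a placeholder):

```yaml
output.elasticsearch:
  hosts: ["HOSTS"]
  # Write through the rollover alias, not a dated index name
  index: "wazuh-alerts-4.x"

# Let the cluster-side ILM policy manage rollover, and skip
# Filebeat's template setup so the custom index name is accepted
setup.ilm.enabled: false
setup.template.enabled: false
```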
EDIT:
Ahh, I also did not realize Wazuh is a community module; it may not support the input settings. I do not have it, so I cannot test. I tried input.index
with an official module like nginx and it worked.
I would need to see the exact error.
This worked for nginx
filebeat.modules:
  - module: nginx
    access:
      enabled: true
      input.index: "nginx-*"
I am unclear where the "does not point to index" error comes from; I thought you meant there was an error when Filebeat started.
Of course, you need to clean up everything; this will not fix an existing index.
Stop everything.
Clean up.
Set the index in the output section, along with the two settings above.
Recreate your initial managed index with the alias.
Start Filebeat.
Filebeat should be writing to the alias, which should point at the real index, for example:
wazuh-alerts-4.x-2021.12.17-000001
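As a Dev Tools sketch of those steps (index and alias names match this thread; note that wildcard deletes may need to be allowed on your cluster):

```
# Remove the broken indices
DELETE wazuh-alerts-4.x-*

# Recreate the initial managed index with the write alias
PUT wazuh-alerts-4.x-2021.12.17-000001
{
  "aliases": {
    "wazuh-alerts-4.x": {
      "is_write_index": true
    }
  }
}
```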
Let me try the above and I will get back to you.
I think I may need a walk in a zen garden.
- Stopped all apps
- Deleted waz* indices
- Created an initial index (write enabled)
- Started apps
- Waited for rollover to kick in
java.lang.IllegalArgumentException: index.lifecycle.rollover_alias [wazuh-alerts-4.x] does not point to index [wazuh-alerts-4.x-2021.12.17]
at org.elasticsearch.xpack.core.ilm.WaitForRolloverReadyStep.evaluateCondition(WaitForRolloverReadyStep.java:114)
at org.elasticsearch.xpack.ilm.IndexLifecycleRunner.runPeriodicStep(IndexLifecycleRunner.java:174)
at org.elasticsearch.xpack.ilm.IndexLifecycleService.triggerPolicies(IndexLifecycleService.java:327)
at org.elasticsearch.xpack.ilm.IndexLifecycleService.triggered(IndexLifecycleService.java:265)
at org.elasticsearch.xpack.core.scheduler.SchedulerEngine.notifyListeners(SchedulerEngine.java:183)
at org.elasticsearch.xpack.core.scheduler.SchedulerEngine$ActiveSchedule.run(SchedulerEngine.java:216)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630)
at java.base/java.lang.Thread.run(Thread.java:832)
Check your private message
@stephenb Thank you so much for helping us out on this one; we had been struggling with it for days.
Here is the root cause of the problem:
The index name was being overwritten in the Kibana ingest pipeline [wazuh], which caused our changes to be discarded and overwritten.
https://<kibana>/app/management/ingest/ingest_pipelines/?pipeline=filebeat-7.10.2-wazuh-alerts-pipeline
The alias was never applied; after we removed the parameter below, it worked like a charm!
Wow, that was super tricky... something for all Wazuh users/implementers who want to use a new/custom ILM policy:
This Wazuh ingest pipeline is manually writing daily indices.
What needs to be removed from the ingest pipeline is the Date Index Name processor, which was manually overwriting the index name.
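For anyone hunting for it, the culprit is a processor of this general shape inside the pipeline (an illustrative sketch, not the exact Wazuh definition; the field name and formats are assumptions):

```
{
  "date_index_name": {
    "field": "timestamp",
    "index_name_prefix": "wazuh-alerts-4.x-",
    "date_rounding": "d",
    "date_formats": ["ISO8601"]
  }
}
```

Because it rewrites the document's target index at ingest time, it silently overrides whatever index or alias Filebeat was configured to write to.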
Note, @Atul_Chadha: I updated the thread title and added Wazuh so others can find it.
Thanks, @stephenb and @Atul_Chadha, for reminding me of what I have been through with the not-so-easy use of custom indices and ILM.
Now I'd like to ask @stephenb: what is the best way to create the initial
PUT wazuh-alerts-4.x-2021.12.16-000001
{
  "aliases": {
    "wazuh-alerts-4.x": {
      "is_write_index": true
    }
  }
}
when you have to automate everything (we are using Terraform), including the above is_write_index: true, before you start the Metricbeat Helm release managed by Terraform again?
Any ideas/tips?
Cheers
@alfredo.deluca I am not a Terraform expert, but it looks like there are a number of community providers for REST APIs. That would seem to be a fairly direct solution.
Also, I would note that what made this particular debug so challenging was the hardcoded daily index in the ingest pipeline; that is not something we would normally recommend.
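As one hedged sketch of that idea: without a dedicated REST provider, the index could be bootstrapped from Terraform with a null_resource and a local-exec curl call (host, credentials, and resource name here are placeholders/assumptions, not a tested setup):

```hcl
resource "null_resource" "wazuh_initial_index" {
  provisioner "local-exec" {
    command = <<EOT
curl -s -u "elastic:$${ES_PASSWORD}" -X PUT \
  "https://ES_HOST:9200/wazuh-alerts-4.x-2021.12.16-000001" \
  -H 'Content-Type: application/json' \
  -d '{"aliases": {"wazuh-alerts-4.x": {"is_write_index": true}}}'
EOT
  }
}
```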
Hi @stephenb. I agree in this case with Wazuh, but at the time, at least for me, it was a little confusing and not clear that you need that step before you can use custom indexing.
Anyway, thanks heaps.
Alfredo
@stephenb Thanks again for helping fix the issue; we have observed that the rollover works fine. If I understand it correctly, Filebeat writes to the alias, which in turn writes to the underlying index ending with 0000xx.
I am trying to add the date to the index name so that it creates an index for each day and also rolls it over if needed due to size.
I read your recommendation for a custom pipeline but couldn't get it to work. Any chance you have an example that includes the date in the index name and also supports rollover?
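One pattern worth noting here (a sketch, not tested against the Wazuh setup): Elasticsearch supports date math in the bootstrap index name, and rollover re-resolves it, so each rolled-over index carries the current date plus the incrementing counter. The name <wazuh-alerts-4.x-{now/d}-000001> must be URL-encoded in the request:

```
PUT %3Cwazuh-alerts-4.x-%7Bnow%2Fd%7D-000001%3E
{
  "aliases": {
    "wazuh-alerts-4.x": {
      "is_write_index": true
    }
  }
}
```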