Filebeat Elasticsearch

Hi Team, I have an Elasticsearch cluster with two nodes running in Docker, and Kibana is configured. While adding Filebeat I am getting an error. The error log is pasted here:

{"@timestamp":"2023-11-14T04:19:07.634Z", "log.level":"ERROR", "message":"policy [.alerts-ilm-policy] for index [.internal.alerts-observability.uptime.alerts-default-000001] failed on step [{\"phase\":\"hot\",\"action\":\"rollover\",\"name\":\"check-rollover-ready\"}]. Moving to ERROR step", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[b5407439120b][trigger_engine_scheduler][T#1]","log.logger":"org.elasticsearch.xpack.ilm.IndexLifecycleRunner","elasticsearch.cluster.uuid":"pckqRXMJRKS3LqykMaqSBw","elasticsearch.node.id":"jS2aszzLRUi-Pue2a25GPA","elasticsearch.node.name":"b5407439120b","elasticsearch.cluster.name":"docker-cluster","error.type":"java.lang.IllegalArgumentException","error.message":"index.lifecycle.rollover_alias [.alerts-observability.uptime.alerts-default] does not point to index [.internal.alerts-observability.uptime.alerts-default-000001]","error.stack_trace":"java.lang.IllegalArgumentException: index.lifecycle.rollover_alias [.alerts-observability.uptime.alerts-default] does not point to index [.internal.alerts-observability.uptime.alerts-default-000001]\n\tat org.elasticsearch.xcore@8.8.1/org.elasticsearch.xpack.core.ilm.WaitForRolloverReadyStep.evaluateCondition(WaitForRolloverReadyStep.java:179)\n\tat org.elasticsearch.ilm@8.8.1/org.elasticsearch.xpack.ilm.IndexLifecycleRunner.runPeriodicStep(IndexLifecycleRunner.java:235)\n\tat org.elasticsearch.ilm@8.8.1/org.elasticsearch.xpack.ilm.IndexLifecycleService.triggerPolicies(IndexLifecycleService.java:428)\n\tat org.elasticsearch.ilm@8.8.1/org.elasticsearch.xpack.ilm.IndexLifecycleService.triggered(IndexLifecycleService.java:356)\n\tat org.elasticsearch.server@8.8.1/org.elasticsearch.common.scheduler.SchedulerEngine.notifyListeners(SchedulerEngine.java:192)\n\tat org.elasticsearch.server@8.8.1/org.elasticsearch.common.scheduler.SchedulerEngine$ActiveSchedule.run(SchedulerEngine.java:226)\n\tat java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:577)\n\tat java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)\n\tat java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)\n\tat java.base/java.lang.Thread.run(Thread.java:1623)\n"}
{"@timestamp":"2023-11-14T04:19:07.635Z", "log.level":"ERROR", "message":"policy [kibana-event-log-policy] for index [.kibana-event-log-8.8.1-000003] failed on step [{\"phase\":\"hot\",\"action\":\"rollover\",\"name\":\"check-rollover-ready\"}]. Moving to ERROR step", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[b5407439120b][trigger_engine_scheduler][T#1]","log.logger":"org.elasticsearch.xpack.ilm.IndexLifecycleRunner","elasticsearch.cluster.uuid":"pckqRXMJRKS3LqykMaqSBw","elasticsearch.node.id":"jS2aszzLRUi-Pue2a25GPA","elasticsearch.node.name":"b5407439120b","elasticsearch.cluster.name":"docker-cluster","error.type":"java.lang.IllegalArgumentException","error.message":"index.lifecycle.rollover_alias [.kibana-event-log-8.8.1] does not point to index [.kibana-event-log-8.8.1-000003]","error.stack_trace":"java.lang.IllegalArgumentException: index.lifecycle.rollover_alias [.kibana-event-log-8.8.1] does not point to index [.kibana-event-log-8.8.1-000003]\n\tat org.elasticsearch.xcore@8.8.1/org.elasticsearch.xpack.core.ilm.WaitForRolloverReadyStep.evaluateCondition(WaitForRolloverReadyStep.java:179)\n\tat org.elasticsearch.ilm@8.8.1/org.elasticsearch.xpack.ilm.IndexLifecycleRunner.runPeriodicStep(IndexLifecycleRunner.java:235)\n\tat org.elasticsearch.ilm@8.8.1/org.elasticsearch.xpack.ilm.IndexLifecycleService.triggerPolicies(IndexLifecycleService.java:428)\n\tat org.elasticsearch.ilm@8.8.1/org.elasticsearch.xpack.ilm.IndexLifecycleService.triggered(IndexLifecycleService.java:356)\n\tat org.elasticsearch.server@8.8.1/org.elasticsearch.common.scheduler.SchedulerEngine.notifyListeners(SchedulerEngine.java:192)\n\tat org.elasticsearch.server@8.8.1/org.elasticsearch.common.scheduler.SchedulerEngine$ActiveSchedule.run(SchedulerEngine.java:226)\n\tat java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:577)\n\tat java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)\n\tat java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)\n\tat java.base/java.lang.Thread.run(Thread.java:1623)\n"}
{"@timestamp":"2023-11-14T04:19:07.635Z", "log.level":"ERROR", "message":"policy [.alerts-ilm-policy] for index [.internal.alerts-observability.slo.alerts-default-000001] failed on step [{\"phase\":\"hot\",\"action\":\"rollover\",\"name\":\"check-rollover-ready\"}]. Moving to ERROR step", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[b5407439120b][trigger_engine_scheduler][T#1]","log.logger":"org.elasticsearch.xpack.ilm.IndexLifecycleRunner","elasticsearch.cluster.uuid":"pckqRXMJRKS3LqykMaqSBw","elasticsearch.node.id":"jS2aszzLRUi-Pue2a25GPA","elasticsearch.node.name":"b5407439120b","elasticsearch.cluster.name":"docker-cluster","error.type":"java.lang.IllegalArgumentException","error.message":"index.lifecycle.rollover_alias [.alerts-observability.slo.alerts-default] does not point to index [.internal.alerts-observability.slo.alerts-default-000001]","error.stack_trace":"java.lang.IllegalArgumentException: index.lifecycle.rollover_alias [.alerts-observability.slo.alerts-default] does not point to index [.internal.alerts-observability.slo.alerts-default-000001]\n\tat org.elasticsearch.xcore@8.8.1/org.elasticsearch.xpack.core.ilm.WaitForRolloverReadyStep.evaluateCondition(WaitForRolloverReadyStep.java:179)\n\tat org.elasticsearch.ilm@8.8.1/org.elasticsearch.xpack.ilm.IndexLifecycleRunner.runPeriodicStep(IndexLifecycleRunner.java:235)\n\tat org.elasticsearch.ilm@8.8.1/org.elasticsearch.xpack.ilm.IndexLifecycleService.triggerPolicies(IndexLifecycleService.java:428)\n\tat org.elasticsearch.ilm@8.8.1/org.elasticsearch.xpack.ilm.IndexLifecycleService.triggered(IndexLifecycleService.java:356)\n\tat org.elasticsearch.server@8.8.1/org.elasticsearch.common.scheduler.SchedulerEngine.notifyListeners(SchedulerEngine.java:192)\n\tat org.elasticsearch.server@8.8.1/org.elasticsearch.common.scheduler.SchedulerEngine$ActiveSchedule.run(SchedulerEngine.java:226)\n\tat java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:577)\n\tat java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)\n\tat java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)\n\tat java.base/java.lang.Thread.run(Thread.java:1623)\n"}
{"@timestamp":"2023-11-14T04:19:07.635Z", "log.level":"ERROR", "message":"policy [.alerts-ilm-policy] for index [.internal.alerts-security.alerts-default-000001] failed on step [{\"phase\":\"hot\",\"action\":\"rollover\",\"name\":\"check-rollover-ready\"}]. Moving to ERROR step", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[b5407439120b][trigger_engine_scheduler][T#1]","log.logger":"org.elasticsearch.xpack.ilm.IndexLifecycleRunner","elasticsearch.cluster.uuid":"pckqRXMJRKS3LqykMaqSBw","elasticsearch.node.id":"jS2aszzLRUi-Pue2a25GPA","elasticsearch.node.name":"b5407439120b","elasticsearch.cluster.name":"docker-cluster","error.type":"java.lang.IllegalArgumentException","error.message":"index.lifecycle.rollover_alias [.alerts-security.alerts-default] does not point to index [.internal.alerts-security.alerts-default-000001]","error.stack_trace":"java.lang.IllegalArgumentException: index.lifecycle.rollover_alias [.a

Anyone?

Hi @VijayIQA, is this a fresh install/configuration with Filebeat?

The Elasticsearch cluster was a fresh deployment, but the data was restored from another cluster with the same cluster name and the same version, 8.8.1.

As the error says, the ILM rollover alias is not pointing correctly to the index. It looks like a configuration error introduced while restoring from the other cluster. Which version of Filebeat were you using earlier, and which one are you using now?
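For reference, fixing that kind of error usually means re-pointing the rollover alias at the current write index and then asking ILM to retry the failed step. A rough, untested sketch, using the index and alias names from the error above and placeholder credentials:

curl -u uname:pwd -k -X POST "https://localhost:9200/_aliases" -H 'Content-Type: application/json' -d'
{
  "actions": [
    {
      "add": {
        "index": ".kibana-event-log-8.8.1-000003",
        "alias": ".kibana-event-log-8.8.1",
        "is_write_index": true
      }
    }
  ]
}'

# then retry the step that moved to ERROR
curl -u uname:pwd -k -X POST "https://localhost:9200/.kibana-event-log-8.8.1-000003/_ilm/retry"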

I resolved that error. In the Filebeat elasticsearch module I had pointed var.paths at an Elasticsearch log path, but Elasticsearch was not generating logs at that path. I am running on Docker.
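For anyone hitting the same thing, the module configuration involved looks roughly like this (a sketch, assuming the Elasticsearch logs are mounted into the Filebeat container at /var/log/elasticsearch; adjust var.paths to wherever your logs actually land):

filebeat modules enable elasticsearch

# modules.d/elasticsearch.yml
- module: elasticsearch
  server:
    enabled: true
    var.paths:
      - /var/log/elasticsearch/*_server.json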


Any further steps? I want to see query response times. Let's assume I have one index called tests, and Elasticsearch is connected to an API application: that application's data comes from the tests index. Now I want to measure the query response time. How can I achieve that? I would also like to see all the requests and responses for the queries against that index.

@ashishtiwari1993

Hi @VijayIQA,

Elasticsearch returns a took field in the response, which is the time in milliseconds taken to execute the request. You can simply read the took field and log it somewhere.
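For example, a quick sketch against the tests index mentioned above (placeholder credentials):

curl -u uname:pwd -k "https://localhost:9200/tests/_search?pretty"

# the response begins with something like:
# {
#   "took" : 5,            <-- milliseconds spent executing the search
#   "timed_out" : false,
#   ...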

If you want query logging in the Elasticsearch logs, you can enable the slow log.

But be careful about the thresholds you set for the slow query log. Very low thresholds could generate heavy log volume.

@ashishtiwari1993 I enabled slow logs via the API:

curl -u uname:pwd -k -X PUT "https://localhost:9200/*/_settings" -H 'Content-Type: application/json' -d'
{
  "index.search.slowlog.threshold.query.warn": "10s",
  "index.search.slowlog.threshold.query.info": "5s",
  "index.search.slowlog.threshold.query.debug": "2s",
  "index.search.slowlog.threshold.query.trace": "500ms",
  "index.search.slowlog.threshold.fetch.warn": "1s",
  "index.search.slowlog.threshold.fetch.info": "800ms",
  "index.search.slowlog.threshold.fetch.debug": "500ms",
  "index.search.slowlog.threshold.fetch.trace": "200ms"
}'
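Side note: the applied values can be checked with something like this, looking for the index.search.slowlog.* keys:

curl -u uname:pwd -k "https://localhost:9200/tests/_settings?flat_settings=true&pretty"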

but I can't see where that slow log file is stored in the container; I can only see the server logs at /var/log/elasticsearch/cluster-name.json

And here is the log4j.properties file:

status = error
######## Server JSON ############################
#appender.rolling.type = Console
#appender.rolling.name = rolling
#appender.rolling.layout.type = ECSJsonLayout
#appender.rolling.layout.dataset = elasticsearch.server
appender.rolling.type = RollingFile
appender.rolling.name = rolling
appender.rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_server.json
appender.rolling.layout.type = ECSJsonLayout
appender.rolling.layout.dataset = elasticsearch.server
appender.rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}-%d{yyyy-MM-dd}-%i.json.gz
appender.rolling.policies.type = Policies
appender.rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.rolling.policies.time.interval = 1
appender.rolling.policies.time.modulate = true
appender.rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.rolling.policies.size.size = 256MB
appender.rolling.strategy.type = DefaultRolloverStrategy
appender.rolling.strategy.fileIndex = nomax
appender.rolling.strategy.action.type = Delete
appender.rolling.strategy.action.basepath = ${sys:es.logs.base_path}
appender.rolling.strategy.action.condition.type = IfFileName
appender.rolling.strategy.action.condition.glob = ${sys:es.logs.cluster_name}-*
appender.rolling.strategy.action.condition.nested_condition.type = IfAccumulatedFileSize
appender.rolling.strategy.action.condition.nested_condition.exceeds = 2GB
################################################
logger.discovery.name = org.elasticsearch.discovery
logger.discovery.level = info
################################################

rootLogger.level = info
rootLogger.appenderRef.rolling.ref = rolling

######## Deprecation JSON #######################
appender.deprecation_rolling.type = Console
appender.deprecation_rolling.name = deprecation_rolling
appender.deprecation_rolling.layout.type = ECSJsonLayout
# Intentionally follows a different pattern to above
appender.deprecation_rolling.layout.dataset = deprecation.elasticsearch
appender.deprecation_rolling.filter.rate_limit.type = RateLimitingFilter

appender.header_warning.type = HeaderWarningAppender
appender.header_warning.name = header_warning
#################################################

logger.deprecation.name = org.elasticsearch.deprecation
logger.deprecation.level = WARN
logger.deprecation.appenderRef.deprecation_rolling.ref = deprecation_rolling
logger.deprecation.appenderRef.header_warning.ref = header_warning
logger.deprecation.additivity = false

######## Search slowlog JSON ####################
appender.index_search_slowlog_rolling.type = Console
appender.index_search_slowlog_rolling.name = index_search_slowlog_rolling
appender.index_search_slowlog_rolling.layout.type = ECSJsonLayout
appender.index_search_slowlog_rolling.layout.dataset = elasticsearch.index_search_slowlog

#################################################

#################################################
logger.index_search_slowlog_rolling.name = index.search.slowlog
logger.index_search_slowlog_rolling.level = trace
logger.index_search_slowlog_rolling.appenderRef.index_search_slowlog_rolling.ref = index_search_slowlog_rolling
logger.index_search_slowlog_rolling.additivity = false

######## Indexing slowlog JSON ##################
appender.index_indexing_slowlog_rolling.type = Console
appender.index_indexing_slowlog_rolling.name = index_indexing_slowlog_rolling
appender.index_indexing_slowlog_rolling.layout.type = ECSJsonLayout
appender.index_indexing_slowlog_rolling.layout.dataset = elasticsearch.index_indexing_slowlog

#################################################

logger.index_indexing_slowlog.name = index.indexing.slowlog.index
logger.index_indexing_slowlog.level = info
logger.index_indexing_slowlog.appenderRef.index_indexing_slowlog_rolling.ref = index_indexing_slowlog_rolling
logger.index_indexing_slowlog.additivity = true

logger.org_apache_pdfbox.name = org.apache.pdfbox
logger.org_apache_pdfbox.level = off

logger.org_apache_poi.name = org.apache.poi
logger.org_apache_poi.level = off

logger.org_apache_fontbox.name = org.apache.fontbox
logger.org_apache_fontbox.level = off

logger.org_apache_xmlbeans.name = org.apache.xmlbeans
logger.org_apache_xmlbeans.level = off

logger.com_amazonaws.name = com.amazonaws
logger.com_amazonaws.level = warn

logger.com_amazonaws_jmx_SdkMBeanRegistrySupport.name = com.amazonaws.jmx.SdkMBeanRegistrySupport
logger.com_amazonaws_jmx_SdkMBeanRegistrySupport.level = error

logger.com_amazonaws_metrics_AwsSdkMetrics.name = com.amazonaws.metrics.AwsSdkMetrics
logger.com_amazonaws_metrics_AwsSdkMetrics.level = error

logger.com_amazonaws_auth_profile_internal_BasicProfileConfigFileLoader.name = com.amazonaws.auth.profile.internal.BasicProfileConfigFileLoader
logger.com_amazonaws_auth_profile_internal_BasicProfileConfigFileLoader.level = error

logger.com_amazonaws_services_s3_internal_UseArnRegionResolver.name = com.amazonaws.services.s3.internal.UseArnRegionResolver
logger.com_amazonaws_services_s3_internal_UseArnRegionResolver.level = error

appender.audit_rolling.type = Console
appender.audit_rolling.name = audit_rolling
appender.audit_rolling.layout.type = PatternLayout
appender.audit_rolling.layout.pattern = {\
                "type":"audit", \
                "timestamp":"%d{yyyy-MM-dd'T'HH:mm:ss,SSSZ}"\
                %varsNotEmpty{, "cluster.name":"%enc{%map{cluster.name}}{JSON}"}\
                %varsNotEmpty{, "cluster.uuid":"%enc{%map{cluster.uuid}}{JSON}"}\
                %varsNotEmpty{, "node.name":"%enc{%map{node.name}}{JSON}"}\
                %varsNotEmpty{, "node.id":"%enc{%map{node.id}}{JSON}"}\
                %varsNotEmpty{, "host.name":"%enc{%map{host.name}}{JSON}"}\
                %varsNotEmpty{, "host.ip":"%enc{%map{host.ip}}{JSON}"}\
                %varsNotEmpty{, "event.type":"%enc{%map{event.type}}{JSON}"}\
                %varsNotEmpty{, "event.action":"%enc{%map{event.action}}{JSON}"}\
                %varsNotEmpty{, "authentication.type":"%enc{%map{authentication.type}}{JSON}"}\
                %varsNotEmpty{, "user.name":"%enc{%map{user.name}}{JSON}"}\
                %varsNotEmpty{, "user.run_by.name":"%enc{%map{user.run_by.name}}{JSON}"}\
                %varsNotEmpty{, "user.run_as.name":"%enc{%map{user.run_as.name}}{JSON}"}\
                %varsNotEmpty{, "user.realm":"%enc{%map{user.realm}}{JSON}"}\
                %varsNotEmpty{, "user.realm_domain":"%enc{%map{user.realm_domain}}{JSON}"}\
                %varsNotEmpty{, "user.run_by.realm":"%enc{%map{user.run_by.realm}}{JSON}"}\
                %varsNotEmpty{, "user.run_by.realm_domain":"%enc{%map{user.run_by.realm_domain}}{JSON}"}\
                %varsNotEmpty{, "user.run_as.realm":"%enc{%map{user.run_as.realm}}{JSON}"}\
                %varsNotEmpty{, "user.run_as.realm_domain":"%enc{%map{user.run_as.realm_domain}}{JSON}"}\
                %varsNotEmpty{, "user.roles":%map{user.roles}}\
                %varsNotEmpty{, "apikey.id":"%enc{%map{apikey.id}}{JSON}"}\
                %varsNotEmpty{, "apikey.name":"%enc{%map{apikey.name}}{JSON}"}\
                %varsNotEmpty{, "authentication.token.name":"%enc{%map{authentication.token.name}}{JSON}"}\
                %varsNotEmpty{, "authentication.token.type":"%enc{%map{authentication.token.type}}{JSON}"}\
                %varsNotEmpty{, "cross_cluster_access":%map{cross_cluster_access}}\
                %varsNotEmpty{, "origin.type":"%enc{%map{origin.type}}{JSON}"}\
                %varsNotEmpty{, "origin.address":"%enc{%map{origin.address}}{JSON}"}\
                %varsNotEmpty{, "realm":"%enc{%map{realm}}{JSON}"}\
                %varsNotEmpty{, "realm_domain":"%enc{%map{realm_domain}}{JSON}"}\
                %varsNotEmpty{, "url.path":"%enc{%map{url.path}}{JSON}"}\
                %varsNotEmpty{, "url.query":"%enc{%map{url.query}}{JSON}"}\
                %varsNotEmpty{, "request.method":"%enc{%map{request.method}}{JSON}"}\
                %varsNotEmpty{, "request.body":"%enc{%map{request.body}}{JSON}"}\
                %varsNotEmpty{, "request.id":"%enc{%map{request.id}}{JSON}"}\
                %varsNotEmpty{, "action":"%enc{%map{action}}{JSON}"}\
                %varsNotEmpty{, "request.name":"%enc{%map{request.name}}{JSON}"}\
                %varsNotEmpty{, "indices":%map{indices}}\
                %varsNotEmpty{, "opaque_id":"%enc{%map{opaque_id}}{JSON}"}\
                %varsNotEmpty{, "trace.id":"%enc{%map{trace.id}}{JSON}"}\
                %varsNotEmpty{, "x_forwarded_for":"%enc{%map{x_forwarded_for}}{JSON}"}\
                %varsNotEmpty{, "transport.profile":"%enc{%map{transport.profile}}{JSON}"}\
                %varsNotEmpty{, "rule":"%enc{%map{rule}}{JSON}"}\
                %varsNotEmpty{, "put":%map{put}}\
                %varsNotEmpty{, "delete":%map{delete}}\
                %varsNotEmpty{, "change":%map{change}}\
                %varsNotEmpty{, "create":%map{create}}\
                %varsNotEmpty{, "invalidate":%map{invalidate}}\
                }%n
# "node.name" node name from the `elasticsearch.yml` settings
# "node.id" node id which should not change between cluster restarts
# "host.name" unresolved hostname of the local node
# "host.ip" the local bound ip (i.e. the ip listening for connections)
# "origin.type" a received REST request is translated into one or more transport requests. This indicates which processing layer generated the event "rest" or "transport" (internal)
# "event.action" the name of the audited event, eg. "authentication_failed", "access_granted", "run_as_granted", etc.
# "authentication.type" one of "realm", "api_key", "token", "anonymous" or "internal"
# "user.name" the subject name as authenticated by a realm
# "user.run_by.name" the original authenticated subject name that is impersonating another one.
# "user.run_as.name" if this "event.action" is of a run_as type, this is the subject name to be impersonated as.
# "user.realm" the name of the realm that authenticated "user.name"
# "user.realm_domain" if "user.realm" is under a domain, this is the name of the domain
# "user.run_by.realm" the realm name of the impersonating subject ("user.run_by.name")
# "user.run_by.realm_domain" if "user.run_by.realm" is under a domain, this is the name of the domain
# "user.run_as.realm" if this "event.action" is of a run_as type, this is the realm name the impersonated user is looked up from
# "user.run_as.realm_domain" if "user.run_as.realm" is under a domain, this is the name of the domain
# "user.roles" the roles array of the user; these are the roles that are granting privileges
# "apikey.id" this field is present if and only if the "authentication.type" is "api_key"
# "apikey.name" this field is present if and only if the "authentication.type" is "api_key"
# "authentication.token.name" this field is present if and only if the authenticating credential is a service account token
# "authentication.token.type" this field is present if and only if the authenticating credential is a service account token
# "cross_cluster_access" this field is present if and only if the associated authentication occurred cross cluster
# "event.type" informs about what internal system generated the event; possible values are "rest", "transport", "ip_filter" and "security_config_change"
# "origin.address" the remote address and port of the first network hop, i.e. a REST proxy or another cluster node
# "realm" name of a realm that has generated an "authentication_failed" or an "authentication_successful"; the subject is not yet authenticated
# "realm_domain" if "realm" is under a domain, this is the name of the domain
# "url.path" the URI component between the port and the query string; it is percent (URL) encoded
# "url.query" the URI component after the path and before the fragment; it is percent (URL) encoded
# "request.method" the method of the HTTP request, i.e. one of GET, POST, PUT, DELETE, OPTIONS, HEAD, PATCH, TRACE, CONNECT
# "request.body" the content of the request body entity, JSON escaped
# "request.id" a synthetic identifier for the incoming request, this is unique per incoming request, and consistent across all audit events generated by that request
# "action" an action is the most granular operation that is authorized and this identifies it in a namespaced way (internal)
# "request.name" if the event is in connection to a transport message this is the name of the request class, similar to how rest requests are identified by the url path (internal)
# "indices" the array of indices that the "action" is acting upon
# "opaque_id" opaque value conveyed by the "X-Opaque-Id" request header
# "trace_id" an identifier conveyed by the part of "traceparent" request header
# "x_forwarded_for" the addresses from the "X-Forwarded-For" request header, as a verbatim string value (not an array)
# "transport.profile" name of the transport profile in case this is a "connection_granted" or "connection_denied" event
# "rule" name of the applied rule if the "origin.type" is "ip_filter"
# the "put", "delete", "change", "create", "invalidate" fields are only present
# when the "event.type" is "security_config_change" and contain the security config change (as an object) taking effect

logger.xpack_security_audit_logfile.name = org.elasticsearch.xpack.security.audit.logfile.LoggingAuditTrail
logger.xpack_security_audit_logfile.level = trace
logger.xpack_security_audit_logfile.appenderRef.audit_rolling.ref = audit_rolling
logger.xpack_security_audit_logfile.additivity = false

logger.xmlsig.name = org.apache.xml.security.signature.XMLSignature
logger.xmlsig.level = error
logger.samlxml_decrypt.name = org.opensaml.xmlsec.encryption.support.Decrypter
logger.samlxml_decrypt.level = fatal
logger.saml2_decrypt.name = org.opensaml.saml.saml2.encryption.Decrypter
logger.saml2_decrypt.level = fatal

Usually the slow log file names are:

<cluster_name>_index_search_slowlog.log
<cluster_name>_index_indexing_slowlog.log

Also make sure your query actually crosses these thresholds; only then will it be logged.
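One thing worth noting from the log4j.properties you pasted: the search and indexing slowlog appenders there are type = Console, so on the Docker image those slow logs go to the container's stdout rather than to a file on disk. That would explain why you only see the server JSON file. A quick check (hypothetical container name):

docker logs your-es-container 2>&1 | grep slowlog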

Also curious whether you are checking the took field from the query response at the application level.

@ashishtiwari1993 What are the recommended threshold values for that, to see the index query response times?
And why am I getting tar log files?

@ashishtiwari1993 Yes, I am checking at the application level. Elasticsearch is attached to the API services, and calling an API pulls data from the Elasticsearch index.

Hey @ashishtiwari1993, thanks for the responses on this thread. I achieved this by modifying the log4j.properties file; I am now able to get the search slow log file and the indexing slow log file, and also to ship those logs to Kibana using Filebeat. Now I can investigate the query/fetch response times from the application.
Closing this thread. Thanks again.
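For future readers: the change involved amounts to switching the slowlog appenders from Console to RollingFile, mirroring the server appender in the file above. Roughly, for the search slowlog (a sketch, not necessarily the exact edit used here):

appender.index_search_slowlog_rolling.type = RollingFile
appender.index_search_slowlog_rolling.name = index_search_slowlog_rolling
appender.index_search_slowlog_rolling.fileName = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_index_search_slowlog.json
appender.index_search_slowlog_rolling.filePattern = ${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_index_search_slowlog-%i.json.gz
appender.index_search_slowlog_rolling.layout.type = ECSJsonLayout
appender.index_search_slowlog_rolling.layout.dataset = elasticsearch.index_search_slowlog
appender.index_search_slowlog_rolling.policies.type = Policies
appender.index_search_slowlog_rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.index_search_slowlog_rolling.policies.size.size = 256MB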

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.