Cannot support regular expression style include/exclude settings as they can only be applied to string fields


(Luis Alejandro Galan) #1

Hello,

I am new to the ELK stack and have gone through the basic installation using the following links:


Under Management/Elasticsearch/Index Management in Kibana, I saw that my filebeat-* indices are both in "Yellow" health. While checking my logs, I ran into a bunch of these errors in /var/log/logstash:

[2018-06-28T14:27:23,938][DEBUG][o.e.a.s.TransportSearchAction] [jlrXYGz] [filebeat-2018.06.27][2], node[jlrXYGz7SRaJeuaX_uQAng], [P], s[STARTED], a[id=Z_2_apBaQmSDcFr2t0liFw]: Failed to execute [SearchRequest{searchType=QUERY_THEN_FETCH, indices=[filebeat-*], indicesOptions=IndicesOptions[id=39, ignore_unavailable=true, allow_no_indices=true, expand_wildcards_open=true, expand_wildcards_closed=false, allow_aliases_to_multiple_indices=true, forbid_closed_indices=true, ignore_aliases=false], types=[], routing='null', preference='1530208503097', requestCache=null, scroll=null, maxConcurrentShardRequests=5, batchedReduceSize=512, preFilterShardSize=64, allowPartialSearchResults=true, source={"size":0,"query":{"bool":{"must":[{"range":{"@timestamp":{"from":1530209543908,"to":1530210443908,"include_lower":true,"include_upper":true,"format":"epoch_millis","boost":1.0}}}],"filter":[{"match_all":{"boost":1.0}}],"adjust_pure_negative":true,"boost":1.0}},"_source":{"includes":[],"excludes":[]},"stored_fields":"*","docvalue_fields":["@timestamp"],"script_fields":{},"aggregations":{"4":{"terms":{"field":"offset","size":5,"min_doc_count":1,"shard_min_doc_count":0,"show_term_doc_count_error":false,"order":{"_key":"desc"},"include":"Warning"},"aggregations":{"2":{"min":{"field":"offset"}},"3":{"max":{"field":"offset"}}}}}}}] lastShard [true]
org.elasticsearch.transport.RemoteTransportException: [jlrXYGz][127.0.0.1:9300][indices:data/read/search[phase/query]]
Caused by: org.elasticsearch.search.aggregations.AggregationExecutionException: Aggregation [4] cannot support regular expression style include/exclude settings as they can only be applied to string fields. Use an array of numeric values for include/exclude clauses used to filter numeric fields
	at org.elasticsearch.search.aggregations.bucket.terms.TermsAggregatorFactory.doCreateInternal(TermsAggregatorFactory.java:160) ~[elasticsearch-6.3.0.jar:6.3.0]
	at org.elasticsearch.search.aggregations.support.ValuesSourceAggregatorFactory.createInternal(ValuesSourceAggregatorFactory.java:55) ~[elasticsearch-6.3.0.jar:6.3.0]
	at org.elasticsearch.search.aggregations.AggregatorFactory.create(AggregatorFactory.java:216) ~[elasticsearch-6.3.0.jar:6.3.0]
	at org.elasticsearch.search.aggregations.AggregatorFactories.createTopLevelAggregators(AggregatorFactories.java:216) ~[elasticsearch-6.3.0.jar:6.3.0]
	at org.elasticsearch.search.aggregations.AggregationPhase.preProcess(AggregationPhase.java:55) ~[elasticsearch-6.3.0.jar:6.3.0]
	at org.elasticsearch.search.query.QueryPhase.execute(QueryPhase.java:105) ~[elasticsearch-6.3.0.jar:6.3.0]
	at org.elasticsearch.indices.IndicesService.lambda$loadIntoContext$14(IndicesService.java:1134) ~[elasticsearch-6.3.0.jar:6.3.0]
	at org.elasticsearch.indices.IndicesService.lambda$cacheShardLevelResult$15(IndicesService.java:1187) ~[elasticsearch-6.3.0.jar:6.3.0]
	at org.elasticsearch.indices.IndicesRequestCache$Loader.load(IndicesRequestCache.java:160) ~[elasticsearch-6.3.0.jar:6.3.0]
	at org.elasticsearch.indices.IndicesRequestCache$Loader.load(IndicesRequestCache.java:143) ~[elasticsearch-6.3.0.jar:6.3.0]
	at org.elasticsearch.common.cache.Cache.computeIfAbsent(Cache.java:399) ~[elasticsearch-6.3.0.jar:6.3.0]
	at org.elasticsearch.indices.IndicesRequestCache.getOrCompute(IndicesRequestCache.java:116) ~[elasticsearch-6.3.0.jar:6.3.0]
	at org.elasticsearch.indices.IndicesService.cacheShardLevelResult(IndicesService.java:1193) ~[elasticsearch-6.3.0.jar:6.3.0]
	at org.elasticsearch.indices.IndicesService.loadIntoContext(IndicesService.java:1133) ~[elasticsearch-6.3.0.jar:6.3.0]
	at org.elasticsearch.search.SearchService.loadOrExecuteQueryPhase(SearchService.java:322) ~[elasticsearch-6.3.0.jar:6.3.0]
	at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:357) ~[elasticsearch-6.3.0.jar:6.3.0]
	at org.elasticsearch.search.SearchService$2.onResponse(SearchService.java:333) 
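Note that the failing request in the log above contains a terms aggregation ("4") on the numeric offset field with "include":"Warning". As the exception says, regex-style include/exclude only works on string fields; on a numeric field, include/exclude must be an array of exact values. A minimal sketch of what a valid numeric include would look like (the values here are placeholders, not taken from the thread):

```
POST /filebeat-*/_search
{
  "size": 0,
  "aggs": {
    "4": {
      "terms": {
        "field": "offset",
        "size": 5,
        "include": [0, 1024, 2048]
      }
    }
  }
}
```

The "include":"Warning" most likely came from a value typed into the Include field of a terms aggregation in a Kibana visualization, which would explain why the error recurs whenever that visualization is rendered.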

I suspect it's being caused by /etc/logstash/conf.d/20-filter-tomcat-logs.conf, which contains the following:

filter {
    # access.log
    if ([source] =~ /.*\.txt$/) {
        grok {
            # Access log pattern is %a %{waffle.servlet.NegotiateSecurityFilter.PRINCIPAL}s %t %m %U%q %s %B %T "%{Referer}i" "%{User-Agent}i"
            # 10.0.0.7 - - [03/Sep/2017:10:58:19 +0000] "GET /pki/scep/pkiclient.exe?operation=GetCACaps&message= HTTP/1.1" 200 39
            match => [ "message" , "%{IPV4:clientIP} - %{NOTSPACE:user} \[%{DATA:timestamp}\] \"%{WORD:method} %{NOTSPACE:request} HTTP/1.1\" %{NUMBER:status} %{NUMBER:bytesSent}" ]
            remove_field => [ "message" ]
            add_field => { "[@metadata][cassandra_table]" => "tomcat_access" }
        }
        grok{
            match => [ "request", "/%{USERNAME:app}/" ]
            tag_on_failure => [ ]
        }
        date {
            match => [ "timestamp", "dd/MMM/YYYY:HH:mm:ss Z" ]
            remove_field => [ "timestamp" ]
        }
        ruby {
            code => "event.set('ts', event.get('@timestamp'))"
        }
        mutate {
            lowercase => [ "user" ]
            convert => [ "bytesSent", "integer", "duration", "float" ]
            update =>  { "host" => "%{[beat][hostname]}" }
            remove_field => [ "beat","type","geoip","input_type","tags" ]
        }
        if [user] == "-" {
            mutate {
                remove_field => [ "user" ]
            }
        }
        # drop unmatching message (like IPv6 requests)
        if [message] =~ /(.+)/  {
            drop { }
        }
    }
}

Any help would be appreciated. Please let me know if you have any questions.


(Aanup) #2

In mutate, write the convert option in the hash format below (one pair per line, no commas):

convert => {
  "bytesSent" => "integer"
  "duration" => "float"
}
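For context, here is a hedged sketch of how the corrected convert would slot into the existing mutate block (field names taken from the config above; untested):

```
mutate {
    lowercase => [ "user" ]
    # hash syntax: whitespace-separated pairs, no commas
    convert => {
        "bytesSent" => "integer"
        "duration" => "float"
    }
    update => { "host" => "%{[beat][hostname]}" }
    remove_field => [ "beat", "type", "geoip", "input_type", "tags" ]
}
```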


(Luis Alejandro Galan) #3

Good morning @aanupdarekar, I've added the new code like this:
mutate {
    lowercase => [ "user" ]
    #convert => [ "bytesSent", "integer", "duration", "float" ]
    convert => { "bytesSent" => "integer" "duration" => "float" }
...

Looks like the error has stopped :slight_smile:

Though I still see these messages in the logs:
[WARN ][o.e.x.s.a.s.m.NativeRoleMappingStore] [jlrXYGz] Failed to clear cache for realms [[]]
[INFO ][o.e.c.r.a.AllocationService] [jlrXYGz] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[.kibana][0], [.monitoring-kibana-6-2018.06.26][0]] ...]).

Is this normal if I only have Elasticsearch installed on one server?
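As an aside on the yellow health: on a single-node cluster this is expected with default settings, because each index is created with one replica shard, and a replica can never be allocated on the same node as its primary, so the replicas stay unassigned. If this is genuinely a one-node setup, one option (a sketch to verify against your own requirements before applying) is to drop the replica count to zero:

```
PUT /filebeat-*/_settings
{
  "index": {
    "number_of_replicas": 0
  }
}
```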


(system) #4

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.