Hit_cache_size, hit_cache_ttl not working

I am using the dns filter in Logstash with my CSV file. The CSV file has two fields: website and a count (the n column).
Here's the sample content of my csv file:

website,n
www.google.com,n1
www.yahoo.com,n2
www.bing.com,n3
www.stackoverflow.com,n4
www.smackcoders.com,n5
www.zoho.com,n6
www.quora.com,n7
www.elastic.co,n8

Here's my logstash config file:

input {
   file {
      path => "/home/paulsteven/log_cars/cars_dns.csv"
      start_position => "beginning"
      sincedb_path => "/dev/null"
   }
}
filter {
    csv {
        separator => ","
        columns => ["website","n"]
    }
    dns { 
      resolve => [ "website" ] 
      action => "replace" 
      hit_cache_size => 8000 
      hit_cache_ttl => 300 
      failed_cache_size => 1000 
      failed_cache_ttl => 10
    }
}
output {
  elasticsearch {
    hosts => "localhost:9200"
    index => "dnsfilter03"
    document_type => "details"
  }
  stdout{}
}

Here's the sample data passing through logstash:

{
      "@version" => "1",
          "path" => "/home/paulsteven/log_cars/cars_dns.csv",
       "website" => "104.28.5.86",
             "n" => "n21",
          "host" => "smackcoders",
       "message" => "www.smackcoders.com,n21",
    "@timestamp" => 2019-04-23T10:41:15.680Z
}

In the Logstash config file, I want to know about "hit_cache_size": what is it used for? I read the dns filter guide but couldn't figure it out. I added the option to my config but nothing seemed to change. Can I get an example? I want to understand what job hit_cache_size does in the dns filter.

Doing a DNS lookup can be expensive. For example, there are places in China where it takes 400 milliseconds for a round trip to Virginia in the USA. If a DNS lookup takes 400 milliseconds, you can only process 2 or 3 events per second. The hit cache saves the result of each successful lookup so that, if you look up the same name again within hit_cache_ttl (300 seconds in your config), the previous result is re-used instead of making another network round trip. hit_cache_size is simply the maximum number of entries that cache will hold (8000 in your config). The cache ignores the TTLs on the A/PTR/NS records themselves.

If you are doing lookups on a long list of unique domains, it will give you no performance benefit, because you never look up the same domain twice.
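To make the idea concrete, here is a minimal sketch in Python. It is purely illustrative, not how Logstash implements the dns filter; the constants just mirror the hit_cache_size and hit_cache_ttl values from the config above.

import socket
import time

# Hypothetical illustration only: the constants mirror the options in the
# config above, but this is NOT Logstash's actual implementation.
HIT_CACHE_TTL = 300     # like hit_cache_ttl: seconds a successful answer is re-used
HIT_CACHE_SIZE = 8000   # like hit_cache_size: maximum number of cached answers

_hit_cache = {}         # hostname -> (resolved address, time it was cached)

def resolve(hostname):
    now = time.time()
    cached = _hit_cache.get(hostname)
    if cached is not None and now - cached[1] < HIT_CACHE_TTL:
        # Cache hit: the previous answer is re-used, no network round trip.
        return cached[0]
    # Cache miss: pay the full DNS round trip (the expensive part).
    address = socket.gethostbyname(hostname)
    if len(_hit_cache) >= HIT_CACHE_SIZE:
        # Cache is full: drop the oldest entry so it stays bounded.
        _hit_cache.pop(next(iter(_hit_cache)))
    _hit_cache[hostname] = (address, now)
    return address

# The second lookup of www.google.com is served from the cache; a list of
# all-unique names would miss the cache every time and gain nothing.
for site in ["www.google.com", "www.google.com", "www.elastic.co"]:
    start = time.time()
    print(site, resolve(site), "%.1f ms" % ((time.time() - start) * 1000))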
