Logstash unable to create custom indices in Elasticsearch

I am trying to configure ELK locally on Windows. I have been able to create a pipeline, and to detect, parse, and view logs in Kibana. I now want to set the target index conditionally in the pipeline config, but this is causing some trouble:
When Logstash tries to forward a message, Elasticsearch throws a 404 error, stating "No such index".
This is unexpected, because the ES instance is reachable, and creating a new index directly with an HTTP request works just fine:

localhost:9200 -->
{
  "name" : "CI00001875",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "kkIEH_o0QlGqAkqtAzAdAQ",
  "version" : {
    ...
  }
}

curl -X PUT "localhost:9200/newindex?pretty" -->
{
  "acknowledged" : true,
  "shards_acknowledged" : true,
  "index" : "newindex"
}

But in Logstash:

[WARN ][logstash.filters.elasticsearch][main] Failed to query elasticsearch for previous event {:index=>"test", :error=>"[404] {\"error\":{\"root_cause\":[{\"type\":\"index_not_found_exception\",\"reason\":\"no such index [test]\",\"resource.type\":\"index_or_alias\",\"resource.id\":\"test\",\"index_uuid\":\"_na_\",\"index\":\"test\"}],\"type\":\"index_not_found_exception\",\"reason\":\"no such index [test]\",\"resource.type\":\"index_or_alias\",\"resource.id\":\"test\",\"index_uuid\":\"_na_\",\"index\":\"test\"},\"status\":404}"}

And here is my pipeline configuration:

input {
  beats {
    port => "5044"
  }
}

filter {
  if [fields][apache] {
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
    geoip {
      source => "clientip"
    }
    date {
      match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
    }
    elasticsearch {
      index => "test"
    }
  } else {
    grok {
      patterns_dir => ["./patterns"]
      match => { "message" => "%{WORD:log_level} *\|.*\| %{WORD:thread_name} *\| %{HYBRIS_TIMESTAMP:timestamp} *\| %{HYBRIS_MSG:msg}" }
    }
    date {
      match => [ "timestamp", "yyyy/MM/dd HH:mm:ss.SSS" ]
      timezone => "UTC"
    }
    elasticsearch {
      index => "test"
    }
  }
}

output {
  stdout { codec => rubydebug }
  elasticsearch {
    hosts => [ "localhost:9200" ]
  }
}

So far, I've tried the following:

  • Setting action.auto_create_index: true in the ES config.
  • Running both instances as administrator.
  • Checking whether there is any authentication going on; there isn't.
  • Resetting Logstash and ES.

It seems that Logstash is sending a different request than I think, or that it doesn't have permission.
Does anybody know what else I could try?
Thanks in advance.

[WARN ][logstash.filters.elasticsearch][main] Failed to query

The error is not thrown by your output, but by the elasticsearch filter that you are using (why is that even there?). It tries to query data from the test index, which doesn't exist.

You didn't specify an index for the output, so that is probably writing into the default index "logstash-%{+YYYY.MM.dd}".

filter {
  elasticsearch {
    index => "test"
  }
}

I expected this would create a new index called "test". Is this not right?

Search Elasticsearch for a previous log event and copy some fields from it into the current event.

A filter is not the same as an output. In an output you define the settings for saving data; filters are meant to look up and transform data for your event. I think you need to have a closer look at the documentation instead of just guessing randomly at what something does 🙂
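
For illustration, a typical use of the elasticsearch filter looks roughly like this (a sketch based on the docs' enrichment example; the query and the operation_id/started field names are made up):

filter {
  elasticsearch {
    hosts => [ "localhost:9200" ]
    # look up a previous event in the "test" index (operation_id is a hypothetical field)
    index => "test"
    query => "operation_id:%{[operation_id]}"
    # copy @timestamp from the matched event into a new "started" field on the current event
    fields => { "@timestamp" => "started" }
  }
}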

The index will be created when you specify the target index in the output because ES creates the index automatically if it doesn't already exist.
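
So in your case, something along these lines in the output section should do it (an untested sketch; adjust the index name to your setup):

output {
  elasticsearch {
    hosts => [ "localhost:9200" ]
    # ES creates the "test" index on first write if it doesn't exist yet
    index => "test"
  }
}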


Yep, that did the trick:

output {
  if [fields][apache] {
    elasticsearch {
      index => "logstash-%{+YYYY.MM.dd}-apache"
    }
  }
}

... makes Elasticsearch create the proper indices.

Stupid of me to put it in the filter section instead of the output.
Thanks for your help!
