Failed action with response of 400, dropping action: ["index"

Hello,
I have configured LS indexer output as below:

output {
  elasticsearch {
    host => "10.211.10.32"
    cluster => "presit-elasticsearch"
    node_name => "presit-data-Node-2"
    index => "%{type}-%{+YYYY.MM.dd}"
  }
}

type is a field set in the LSF configuration under the "Fields" array on the client machine.

When I start Logstash with the above output configuration, it shows this error:
failed action with response of 400, dropping action: ["index"

What's wrong with my LS configuration?

br,
Sunil.

node_name => "presit-data-Node-2"

This sounds like the name of an existing Elasticsearch node in the ES cluster. If that's the case, Logstash shouldn't join the cluster under the same name. Either way, you don't have to set node_name. Drop it for now until we've made sure things work okay.
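With node_name dropped, the output section would look something like this (a minimal sketch based on the configuration posted above):

```
output {
  elasticsearch {
    host => "10.211.10.32"
    cluster => "presit-elasticsearch"
    index => "%{type}-%{+YYYY.MM.dd}"
  }
}
```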

When I start Logstash with the above output configuration, it shows this error:
failed action with response of 400, dropping action: ["index"

That error message is truncated. Please post the full message. The ES logs might contain clues too.

Hi Magnus,
Thanks for help.
I removed node_name from the output. The second problem was that the index names were in capitals; they should be lowercase, so the 400 error is solved. I can now create index names from %{type}.
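For anyone hitting the same 400 error: Elasticsearch index names must be lowercase, so if the type field sent by the client can contain capitals, one option is to lowercase it with a mutate filter before it reaches the output (a sketch; assumes the field is named "type" as in the configuration above):

```
filter {
  mutate {
    lowercase => [ "type" ]
  }
}
```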

Regarding node_name: if I don't set node_name, LS creates a new node named "logstash-{hostname}-{someNumber}" but sends data to the node mentioned in elasticsearch.yml. Why does it create a new node?

Regarding node_name: if I don't set node_name, LS creates a new node named "logstash-{hostname}-{someNumber}" but sends data to the node mentioned in elasticsearch.yml. Why does it create a new node?

Yes, this is expected. See the documentation of the elasticsearch output's node parameter:

The node protocol (default) will connect to the cluster as a normal Elasticsearch node (but will not store data).

In other words, Logstash becomes part of the ES cluster. This should be the most performant configuration, but it can have drawbacks. For example, if you don't want every machine on your network to be able to join the cluster, you need to add firewall rules that allow access only for Logstash nodes.
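If joining the cluster is undesirable, the output can instead talk to ES as an ordinary client over HTTP (a sketch; assumes a Logstash 1.4.x elasticsearch output, where the protocol option accepts "node", "transport", or "http"):

```
output {
  elasticsearch {
    host => "10.211.10.32"
    protocol => "http"
    index => "%{type}-%{+YYYY.MM.dd}"
  }
}
```

With protocol => "http", Logstash connects to port 9200 like any other client, so no extra node appears in the cluster and the cluster setting isn't needed.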