:message=>"Failed action. ", :status=>404, :action=>["index",

I get very long log entries like the following (this one is truncated) every few seconds in the logs:

{:timestamp=>"2015-12-09T10:51:52.061000-0700", :message=>"Failed action. ", :status=>404, :action=>["index", {:_id=>nil, :_index=>"logstash-2015.12.09", :_type=>"fortinet",

This happens with LS 2.1.1 and ES 2.1 (or 1.7.3), and my output configuration is quite simple:

output {
        if [type] != 'heartbeat' {
                elasticsearch {
                        hosts => [ "localhost:9200" ]
                }
        }
}
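
For clarity, this should be equivalent to the following with the plugin's default index pattern written out (I haven't set index myself; as far as I know "logstash-%{+YYYY.MM.dd}" is the default, which is where the logstash-2015.12.09 index in the error comes from):

output {
        if [type] != 'heartbeat' {
                elasticsearch {
                        hosts => [ "localhost:9200" ]
                        # default index pattern, shown explicitly
                        index => "logstash-%{+YYYY.MM.dd}"
                }
        }
}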

I'm using the official Java 8:

java version "1.8.0_66"
Java(TM) SE Runtime Environment (build 1.8.0_66-b17)
Java HotSpot(TM) 64-Bit Server VM (build 25.66-b17, mixed mode)

Checking for indices with curl 'localhost:9200/_cat/indices?v' works just fine and returns the complete list.

I found a topic with a similar message (Logstash "doc_as_upsert" --> Error 404), but the 'action' there is different.

I'm not using Shield, and ES is only listening on localhost (LS is on the same system). What could be causing this, and how should I fix it?

Can you post the entire error?

It just contains a bunch of events/log entries (you can see the beginning of them at the end of the truncated message), so I don't think it would be that useful. Let me know if I'm wrong.

There you go: https://gist.github.com/ThomasdOtreppe/24ca591ac6d3b478a093

Does it help?

Was there a corresponding error in ES at the same time?

There is nothing around that time. The closest I can find is about 20 seconds before or at least 40 seconds later, and I don't think it's related:

[2015-12-09 14:54:01,154][DEBUG][action.bulk              ] [Dark-Crawler] observer: timeout notification from cluster service. timeout setting [1m], time since start [1m]

Anything else I should check?

Judging by the response, your ES instance might be choking on the bulk indexing request:

response=>{"create"=>{"_index"=>"logstash-2015.12.09", "_type"=>"sophos", "_id"=>"AVGKPGYwuOQZauVwWJy6", "status"=>404, "error"=>"EngineClosedException[[logstash-2015.12.09][3] CurrentState[CLOSED] ]"}

It looks like a warning, though, so I wouldn't worry about it. How much heap space are you allocating to ES?
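
It could also be worth looking at the per-shard state and recovery status of that day's index around the time of a failure, for example (I'm assuming the index name from your error here):

curl 'localhost:9200/_cat/shards/logstash-2015.12.09?v'
curl 'localhost:9200/_cat/recovery/logstash-2015.12.09?v'

A shard showing up as INITIALIZING or RELOCATING at that moment would fit the EngineClosedException.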

For the heap, I'm using the default value, so probably 2g.
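
If it helps, I can confirm the actual heap with something like this (I'm going from memory on the column names):

curl 'localhost:9200/_cat/nodes?v&h=name,heap.max,heap.percent'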

According to the comment in the source code, this happens when a request comes in while the shard is closing. For a more high-level root cause, we will probably need the logs from the ES side.
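
If the default log level shows nothing around the failures, you could temporarily raise the shard/engine loggers to DEBUG; as far as I know, logger levels can be changed at runtime through the cluster settings API, something along these lines (the exact logger names are a guess on my part):

curl -XPUT 'localhost:9200/_cluster/settings' -d '{
  "transient": { "logger.index.shard": "DEBUG", "logger.index.engine": "DEBUG" }
}'

That should make shard open/close events show up in the ES log.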

Weird, it didn't seem like it was closing when I looked at the log; I could only see these:

[2015-12-09 14:54:01,154][DEBUG][action.bulk              ] [Dark-Crawler] observer: timeout notification from cluster service. timeout setting [1m], time since start [1m]

Any objection to opening a bug report about this? Anything else I should check?

Anywhere else I can look?