No mapping found for [@timestamp]

I'm really sorry if I'm asking a newbie question; I was only introduced to the ELK stack two weeks ago.

Here's my situation: I have an internal network of 10 different servers and I'm collecting logs from all of them (using the Filebeat system module).
One server runs Logstash to collect all the logs, plus Elasticsearch and Kibana.
I decided to use X-Pack so that users have to log in and I can define different users.

Here's how I'm debugging it.

On the Logstash side, the pipeline I'm using is the default one taken from the official website for parsing the Filebeat system module,

this one:
https://www.elastic.co/guide/en/logstash/6.3/logstash-config-for-filebeat-modules.html#parsing-system

Then when I run Logstash with
bin/logstash -f pipeline.conf --path.settings /etc/logstash/
I get [logstash.filters.elasticsearch] Failed to query elasticsearch for previous event.

Here's the whole message:

[2018-07-25T14:14:45,948][WARN ][logstash.filters.elasticsearch] Failed to query elasticsearch for previous event {:index=>"", :query=>"", :event=>#LogStash::Event:0x4bbccba9, :error=>#<RuntimeError: Elasticsearch query error: [{"shard"=>0, "index"=>".kibana", "node"=>"yw1ItHevQDON-5vGOsQj9Q", "reason"=>{"type"=>"query_shard_exception", "reason"=>"No mapping found for [@timestamp] in order to sort on", "index_uuid"=>"5olmNGoaQJyvs6LP8L9ZUA", "index"=>".kibana"}}, {"shard"=>0, "index"=>".ml-anomalies-shared", "node"=>"yw1ItHevQDON-5vGOsQj9Q", "reason"=>{"type"=>"query_shard_exception", "reason"=>"No mapping found for [@timestamp] in order to sort on", "index_uuid"=>"as9JnEjTTpqH5xCgOz8xTQ", "index"=>".ml-anomalies-shared"}}, {"shard"=>0, "index"=>".ml-notifications", "node"=>"yw1ItHevQDON-5vGOsQj9Q", "reason"=>{"type"=>"query_shard_exception", "reason"=>"No mapping found for [@timestamp] in order to sort on", "index_uuid"=>"nE6_0GetS9aV3xEScT9DUg", "index"=>".ml-notifications"}}, {"shard"=>0, "index"=>".monitoring-alerts-6", "node"=>"yw1ItHevQDON-5vGOsQj9Q", "reason"=>{"type"=>"query_shard_exception", "reason"=>"No mapping found for [@timestamp] in order to sort on", "index_uuid"=>"KseNNb_sTYSPn977ec_5iA", "index"=>".monitoring-alerts-6"}}, {"shard"=>0, "index"=>".monitoring-es-6-2018.07.25", "node"=>"yw1ItHevQDON-5vGOsQj9Q", "reason"=>{"type"=>"query_shard_exception", "reason"=>"No mapping found for [@timestamp] in order to sort on", "index_uuid"=>"CGDU9lywShq01Eo3irsE0g", "index"=>".monitoring-es-6-2018.07.25"}}, {"shard"=>0, "index"=>".monitoring-kibana-6-2018.07.25", "node"=>"yw1ItHevQDON-5vGOsQj9Q", "reason"=>{"type"=>"query_shard_exception", "reason"=>"No mapping found for [@timestamp] in order to sort on", "index_uuid"=>"_pAhA2y7T-OpmoOOAzSJ0Q", "index"=>".monitoring-kibana-6-2018.07.25"}}, {"shard"=>0, "index"=>".security-6", "node"=>"yw1ItHevQDON-5vGOsQj9Q", "reason"=>{"type"=>"query_shard_exception", "reason"=>"No mapping found for [@timestamp] in order to sort on", "index_uuid"=>"3hysFJAUT_KAcZrXnsx4dw", "index"=>".security-6"}}, {"shard"=>0, "index"=>".triggered_watches", "node"=>"yw1ItHevQDON-5vGOsQj9Q", "reason"=>{"type"=>"query_shard_exception", "reason"=>"No mapping found for [@timestamp] in order to sort on", "index_uuid"=>"Fiy_fakLSmasnMWpcxsiKA", "index"=>".triggered_watches"}}, {"shard"=>0, "index"=>".watcher-history-7-2018.07.25", "node"=>"yw1ItHevQDON-5vGOsQj9Q", "reason"=>{"type"=>"query_shard_exception", "reason"=>"No mapping found for [@timestamp] in order to sort on", "index_uuid"=>"Xgac_ryLRS2c1obJ8Wm9dg", "index"=>".watcher-history-7-2018.07.25"}}, {"shard"=>0, "index"=>".watches", "node"=>"yw1ItHevQDON-5vGOsQj9Q", "reason"=>{"type"=>"query_shard_exception", "reason"=>"No mapping found for [@timestamp] in order to sort on", "index_uuid"=>"gD3IB1S-TfOWqUFw8r27KA", "index"=>".watches"}}, {"shard"=>0, "index"=>"filebeat-2018.07.24", "node"=>"yw1ItHevQDON-5vGOsQj9Q", "reason"=>{"type"=>"query_shard_exception", "reason"=>"No mapping found for [@timestamp] in order to sort on", "index_uuid"=>"DQqCKrU7TYW9JuckfZZntw", "index"=>"filebeat-2018.07.24"}}]>}

Is there any way of correcting this "No mapping found for [@timestamp]" on my own indices (I have filebeat-*) and on these system indices (.monitoring-*, .watches, .ml-anomalies, etc.)?
Or am I going about this the wrong way?
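
In case it's relevant: I assume the field mapping API is the way to check whether @timestamp is even mapped on an index, e.g. something like this (just a sketch, using the built-in elastic user I use for curl further down):

curl -X GET -u elastic "localhost:9200/filebeat-*/_mapping/field/@timestamp?pretty"

but I'm not sure that's the right check.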

What's in your pipeline.conf file?

This one; I took it from the official Elastic website. It's used to parse the Filebeat system module:

input {
  elasticsearch {
    user => logstash_internal
    password => x-pack-test-password
  }
  beats {
    port => 5044
    host => "0.0.0.0"
  }
}
filter {
  elasticsearch {
    user => logstash_internal
    password => x-pack-test-password
  }

  if [fileset][module] == "system" {
    if [fileset][name] == "auth" {
      grok {
        match => { "message" => ["%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} sshd(?:\[%{POSINT:[system][auth][pid]}\])?: %{DATA:[system][auth][ssh][event]} %{DATA:[system][auth][ssh][method]} for (invalid user )?%{DATA:[system][auth][user]} from %{IPORHOST:[system][auth][ssh][ip]} port %{NUMBER:[system][auth][ssh][port]} ssh2(: %{GREEDYDATA:[system][auth][ssh][signature]})?",
                  "%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} sshd(?:\[%{POSINT:[system][auth][pid]}\])?: %{DATA:[system][auth][ssh][event]} user %{DATA:[system][auth][user]} from %{IPORHOST:[system][auth][ssh][ip]}",
                  "%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} sshd(?:\[%{POSINT:[system][auth][pid]}\])?: Did not receive identification string from %{IPORHOST:[system][auth][ssh][dropped_ip]}",
                  "%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} sudo(?:\[%{POSINT:[system][auth][pid]}\])?: \s*%{DATA:[system][auth][user]} :( %{DATA:[system][auth][sudo][error]} ;)? TTY=%{DATA:[system][auth][sudo][tty]} ; PWD=%{DATA:[system][auth][sudo][pwd]} ; USER=%{DATA:[system][auth][sudo][user]} ; COMMAND=%{GREEDYDATA:[system][auth][sudo][command]}",
                  "%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} groupadd(?:\[%{POSINT:[system][auth][pid]}\])?: new group: name=%{DATA:system.auth.groupadd.name}, GID=%{NUMBER:system.auth.groupadd.gid}",
                  "%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} useradd(?:\[%{POSINT:[system][auth][pid]}\])?: new user: name=%{DATA:[system][auth][user][add][name]}, UID=%{NUMBER:[system][auth][user][add][uid]}, GID=%{NUMBER:[system][auth][user][add][gid]}, home=%{DATA:[system][auth][user][add][home]}, shell=%{DATA:[system][auth][user][add][shell]}$",
                  "%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} %{DATA:[system][auth][program]}(?:\[%{POSINT:[system][auth][pid]}\])?: %{GREEDYMULTILINE:[system][auth][message]}"] }
        pattern_definitions => {
          "GREEDYMULTILINE" => "(.|\n)*"
        }
        remove_field => "message"
      }
      date {
        match => [ "[system][auth][timestamp]", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
      }
      geoip {
        source => "[system][auth][ssh][ip]"
        target => "[system][auth][ssh][geoip]"
      }
    }
    else if [fileset][name] == "syslog" {
      grok {
        match => { "message" => ["%{SYSLOGTIMESTAMP:[system][syslog][timestamp]} %{SYSLOGHOST:[system][syslog][hostname]} %{DATA:[system][syslog][program]}(?:\[%{POSINT:[system][syslog][pid]}\])?: %{GREEDYMULTILINE:[system][syslog][message]}"] }
        pattern_definitions => { "GREEDYMULTILINE" => "(.|\n)*" }
        remove_field => "message"
      }
      date {
        match => [ "[system][syslog][timestamp]", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
      }
    }
  }
}
output {
  elasticsearch {
    user => logstash_internal
    password => x-pack-test-password
    hosts => localhost
    manage_template => false
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
}

That's not the configuration you linked to earlier. Where does the elasticsearch filter come from? You've probably made a copy/paste mistake.

Actually it's the same, except that I added the Elasticsearch username and password part.

Here's the actual copy of my pipeline from the /etc/logstash/conf.d/ directory:

input {
  beats {
    port => 5044
    host => "0.0.0.0"
  }
  elasticsearch {
    user => username
    password => "pass"
  }
}
filter {
  elasticsearch {
    user => username
    password => "pass"
  }
  if [fileset][module] == "system" {
    if [fileset][name] == "auth" {
      grok {
        match => { "message" => ["%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} sshd(?:\[%{POSINT:[system][auth][pid]}\])?: %{DATA:[system][auth][ssh][event]} %{DATA:[system][auth][ssh][method]} for (invalid user )?%{DATA:[system][auth][user]} from %{IPORHOST:[system][auth][ssh][ip]} port %{NUMBER:[system][auth][ssh][port]} ssh2(: %{GREEDYDATA:[system][auth][ssh][signature]})?",
                  "%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} sshd(?:\[%{POSINT:[system][auth][pid]}\])?: %{DATA:[system][auth][ssh][event]} user %{DATA:[system][auth][user]} from %{IPORHOST:[system][auth][ssh][ip]}",
                  "%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} sshd(?:\[%{POSINT:[system][auth][pid]}\])?: Did not receive identification string from %{IPORHOST:[system][auth][ssh][dropped_ip]}",
                  "%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} sudo(?:\[%{POSINT:[system][auth][pid]}\])?: \s*%{DATA:[system][auth][user]} :( %{DATA:[system][auth][sudo][error]} ;)? TTY=%{DATA:[system][auth][sudo][tty]} ; PWD=%{DATA:[system][auth][sudo][pwd]} ; USER=%{DATA:[system][auth][sudo][user]} ; COMMAND=%{GREEDYDATA:[system][auth][sudo][command]}",
                  "%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} groupadd(?:\[%{POSINT:[system][auth][pid]}\])?: new group: name=%{DATA:system.auth.groupadd.name}, GID=%{NUMBER:system.auth.groupadd.gid}",
                  "%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} useradd(?:\[%{POSINT:[system][auth][pid]}\])?: new user: name=%{DATA:[system][auth][user][add][name]}, UID=%{NUMBER:[system][auth][user][add][uid]}, GID=%{NUMBER:[system][auth][user][add][gid]}, home=%{DATA:[system][auth][user][add][home]}, shell=%{DATA:[system][auth][user][add][shell]}$",
                  "%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} %{DATA:[system][auth][program]}(?:\[%{POSINT:[system][auth][pid]}\])?: %{GREEDYMULTILINE:[system][auth][message]}"] }
        pattern_definitions => {
          "GREEDYMULTILINE" => "(.|\n)*"
        }
        remove_field => "message"
      }
      date {
        match => [ "[system][auth][timestamp]", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
      }
      geoip {
        source => "[system][auth][ssh][ip]"
        target => "[system][auth][ssh][geoip]"
      }
    }
    else if [fileset][name] == "syslog" {
      grok {
        match => { "message" => ["%{SYSLOGTIMESTAMP:[system][syslog][timestamp]} %{SYSLOGHOST:[system][syslog][hostname]} %{DATA:[system][syslog][program]}(?:\[%{POSINT:[system][syslog][pid]}\])?: %{GREEDYMULTILINE:[system][syslog][message]}"] }
        pattern_definitions => { "GREEDYMULTILINE" => "(.|\n)*" }
        remove_field => "message"
      }
      date {
        match => [ "[system][syslog][timestamp]", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
      }
    }
  }
}
output {
  elasticsearch {
    user => username
    password => "pass"
    hosts => localhost
    manage_template => false
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
}

Actually it's the same, except that I added the Elasticsearch username and password part.

No! The original doesn't have an elasticsearch filter, only an elasticsearch output. Compare what comes right after filter { in your file vs. in the example in the documentation. Over and out.
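
For reference, the example in the documentation is structured roughly like this (a sketch with your grok/date/geoip blocks elided, and with credentials kept on the output only; there is no elasticsearch plugin in the input or filter sections):

input {
  beats {
    port => 5044
    host => "0.0.0.0"
  }
}
filter {
  if [fileset][module] == "system" {
    if [fileset][name] == "auth" {
      grok { ... }
      date { ... }
      geoip { ... }
    }
    else if [fileset][name] == "syslog" {
      grok { ... }
      date { ... }
    }
  }
}
output {
  elasticsearch {
    user => logstash_internal
    password => x-pack-test-password
    hosts => localhost
    manage_template => false
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
}

If you need X-Pack authentication, the user and password options belong on the elasticsearch output, not on an input or filter.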

I went over everything and copied it again, and now I'm getting a new error, "Could not index event to Elasticsearch".
Here it is:

[2018-07-30T09:28:23,795][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"filebeat-6.3.1-2018.07.30", :_type=>"doc", :_routing=>nil}, #LogStash::Event:0x175f7d4b], :response=>{"index"=>{"_index"=>"filebeat-6.3.1-2018.07.30", "_type"=>"doc", "_id"=>nil, "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"Failed to parse mapping [doc]: Mapping definition for [error] has unsupported parameters: [properties : {code={type=long}, message={norms=false, type=text}, type={ignore_above=1024, type=keyword}}]", "caused_by"=>{"type"=>"mapper_parsing_exception", "reason"=>"Mapping definition for [error] has unsupported parameters: [properties : {code={type=long}, message={norms=false, type=text}, type={ignore_above=1024, type=keyword}}]"}}}}}

And when I try to get the mapping for filebeat-* with curl -X GET -u elastic "localhost:9200/filebeat-*/_mapping/_doc"

I get this "type missing" error:

{"error":{"root_cause":[{"type":"type_missing_exception","reason":"type[[_doc]] missing","index_uuid":"na","index":"_all"}],"type":"type_missing_exception","reason":"type[[_doc]] missing","index_uuid":"na","index":"_all"},"status":404}

Is there any reason this is happening, or any suggestions?
Thanks.

The name of the type is "doc", not "_doc".
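
So with the actual type name, your mapping request from above becomes:

curl -X GET -u elastic "localhost:9200/filebeat-*/_mapping/doc"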

I finally got it :sweat_smile: The main reason was that the last time I updated Filebeat, I didn't bother to delete the old templates and load the new ones.
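
Roughly what that boiled down to, in case anyone else hits this (just a sketch; filebeat-6.2.4 is a placeholder for whatever old template you have, and you may need to pass Elasticsearch credentials to Filebeat as well):

# list the installed Filebeat index templates
curl -X GET -u elastic "localhost:9200/_cat/templates/filebeat*?v"

# delete the stale template left over from the old Filebeat version (placeholder name)
curl -X DELETE -u elastic "localhost:9200/_template/filebeat-6.2.4"

# load the template shipped with the current Filebeat (needed when events go through Logstash)
filebeat setup --template -E output.logstash.enabled=false -E 'output.elasticsearch.hosts=["localhost:9200"]'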

And this thread below helped me.

Thanks a lot for your time; you've been very helpful.
