Unable to fetch mapping. Do you have indices matching the pattern?

Hello Elastic Forum,

I know that this question has been asked many times before, but I am pretty sure that I have Logstash successfully creating an index, and I am still getting the error. Hopefully someone can see the error of my ways.

I have the Elastic Stack, Elasticsearch (5.2.0), Logstash (5.2.0), and Kibana (5.2.0), running on a single Windows Server 2016 VM, and for my initial test I am passing log4net files into the Logstash pipeline using Filebeat on the remote application server.
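For anyone reproducing this setup, a Filebeat 5.x prospector shipping log4net files to the Logstash beats port would look roughly like this (a sketch; the log path and hostname are placeholders, not taken from the original post):

```yaml
filebeat.prospectors:
  - input_type: log
    paths:
      # Placeholder path; point this at the log4net output files
      - C:\logs\myapp\*.log

output.logstash:
  # Placeholder host; the Logstash beats input in this thread listens on 5043
  hosts: ["elk-server:5043"]
```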

After the initial configuration I can see the default indices by calling http://localhost:9200/_all:

PS C:\Windows\system32> $all = (Invoke-WebRequest -uri http://localhost:9200/_all -Credential $creds).content | ConvertFrom-Json
PS C:\Windows\system32> $all | fl


.monitoring-es-2-2017.02.03 : @{aliases=; mappings=; settings=}
.kibana                     : @{aliases=; mappings=; settings=}
.security                   : @{aliases=; mappings=; settings=}
.monitoring-data-2          : @{aliases=; mappings=; settings=}

I then created my logstash.conf file, like so:

input {
    beats {
        port => "5043"
    }
}

output {
    elasticsearch {
        hosts => ["http://localhost:9200"]
        user => "elastic"
        password => "XXXXXX"
    }
}

I figured that by leaving out the filter block I would rule out any filter syntax errors; I will configure the filter later.
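When the time comes, a minimal filter for log4net lines might look like the following (an assumption on my part; the grok pattern depends entirely on the log4net conversionPattern in use, and this one assumes a layout like `2017-02-03 10:15:00,123 [1] INFO MyApp.Class - message`):

```
filter {
    grok {
        # Pattern is a guess for a typical log4net layout; adjust to match
        # the conversionPattern configured in the application.
        match => { "message" => "%{TIMESTAMP_ISO8601:log_timestamp} \[%{NUMBER:thread}\] %{LOGLEVEL:level} %{DATA:logger} - %{GREEDYDATA:log_message}" }
    }
    date {
        # Parse the log4net timestamp into @timestamp
        match => [ "log_timestamp", "yyyy-MM-dd HH:mm:ss,SSS" ]
    }
}
```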

After starting Filebeat I can see that the logstash index has been created:

PS C:\Windows\system32> $all = (Invoke-WebRequest -uri http://localhost:9200/_all -Credential $creds).content | ConvertFrom-Json
PS C:\Windows\system32> $all | fl


.monitoring-es-2-2017.02.03 : @{aliases=; mappings=; settings=}
.kibana                     : @{aliases=; mappings=; settings=}
.security                   : @{aliases=; mappings=; settings=}
logstash-2017.02.03         : @{aliases=; mappings=; settings=}
.monitoring-data-2          : @{aliases=; mappings=; settings=}

However, I am still getting the "unable to fetch mapping" error.

I have checked the stats of the index and I can see that there are 18 docs, which matches the number of lines in the test document I am using:

PS C:\Windows\system32> $logS.indices.'logstash-2017.02.03'.primaries

docs          : @{count=18; deleted=0}
store         : @{size_in_bytes=59569; throttle_time_in_millis=0}
indexing      : @{index_total=18; index_time_in_millis=197; index_current=0; index_failed=0; delete_total=0; delete_time_in_millis=0;
                delete_current=0; noop_update_total=0; is_throttled=False; throttle_time_in_millis=0}
get           : @{total=0; time_in_millis=0; exists_total=0; exists_time_in_millis=0; missing_total=0; missing_time_in_millis=0; current=0}
search        : @{open_contexts=0; query_total=0; query_time_in_millis=0; query_current=0; fetch_total=0; fetch_time_in_millis=0;
                fetch_current=0; scroll_total=0; scroll_time_in_millis=0; scroll_current=0; suggest_total=0; suggest_time_in_millis=0;
                suggest_current=0}
merges        : @{current=0; current_docs=0; current_size_in_bytes=0; total=0; total_time_in_millis=0; total_docs=0; total_size_in_bytes=0;
                total_stopped_time_in_millis=0; total_throttled_time_in_millis=0; total_auto_throttle_in_bytes=104857600}
refresh       : @{total=10; total_time_in_millis=262; listeners=0}
flush         : @{total=5; total_time_in_millis=161}
warmer        : @{current=0; total=20; total_time_in_millis=8}
query_cache   : @{memory_size_in_bytes=0; total_count=0; hit_count=0; miss_count=0; cache_size=0; cache_count=0; evictions=0}
fielddata     : @{memory_size_in_bytes=0; evictions=0}
completion    : @{size_in_bytes=0}
segments      : @{count=6; memory_in_bytes=34824; terms_memory_in_bytes=32196; stored_fields_memory_in_bytes=1872;
                term_vectors_memory_in_bytes=0; norms_memory_in_bytes=0; points_memory_in_bytes=12; doc_values_memory_in_bytes=744;
                index_writer_memory_in_bytes=0; version_map_memory_in_bytes=0; fixed_bit_set_memory_in_bytes=0;
                max_unsafe_auto_id_timestamp=1486098404861; file_sizes=}
translog      : @{operations=0; size_in_bytes=215}
request_cache : @{memory_size_in_bytes=0; evictions=0; hit_count=0; miss_count=0}
recovery      : @{current_as_source=0; current_as_target=0; throttle_time_in_millis=0}

I can also see the index being created on the file system, with the same UUID as listed in the index settings:

PS C:\Windows\system32> $logS = (Invoke-WebRequest -uri http://localhost:9200/logstash-2017.02.03 -Credential $creds).content | ConvertFrom-Json
PS C:\Windows\system32> $logS.'logstash-2017.02.03'.settings.index


refresh_interval   : 5s
number_of_shards   : 5
provided_name      : logstash-2017.02.03
creation_date      : 1486098404736
number_of_replicas : 1
uuid               : gFrYacYdSludHo6zDvVIIw
version            : @{created=5020099}

I have had a look in the Kibana log and I can see a 404 error when it tries to get the mappings:

"method":"get","statusCode":404,"req":{"url":"/elasticsearch/logstash-*/_mapping/field/*?_=1486100447579&ignore_unavailable=false&allow_no_indices=false&include_defaults=true"
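As a sanity check, the same request Kibana is making can be issued directly against Elasticsearch with the credentials used earlier (this is just the Kibana proxy URL above translated back to the Elasticsearch field-mapping endpoint):

```
PS C:\Windows\system32> (Invoke-WebRequest -uri "http://localhost:9200/logstash-*/_mapping/field/*?ignore_unavailable=false&allow_no_indices=false" -Credential $creds).content
```

If this returns the mappings but Kibana still shows a 404, the difference is likely the user Kibana is authenticating with.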

Hopefully there is enough info here for someone to pick up what is wrong, but if not please let me know.

Thanks,

Tim

I'm having the exact same issue with 5.2 on Debian. Elasticsearch 5.2 does not allow _ in the query, but Kibana 5.2 is sending it.

I have discovered that the problem was the user that I was logging in with.

I was logging in with the kibana user, which had the 'kibana_system' role. When I logged in as the 'elastic' user, which has the 'superuser' role, everything started working.
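For anyone who hits this and would rather not log in as the superuser: the underlying problem is that the login user's role cannot read the logstash-* indices. A dedicated role can be created through the X-Pack security API, something like the following (a sketch; the role name logstash_reader is made up, and I believe view_index_metadata is what Kibana needs to fetch field mappings):

```
POST /_xpack/security/role/logstash_reader
{
  "indices": [
    {
      "names": [ "logstash-*" ],
      "privileges": [ "read", "view_index_metadata" ]
    }
  ]
}
```

That role (together with the built-in kibana_user role, so the user can also access the .kibana index) can then be assigned to the user you log into Kibana with.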

Now to get onto the filters!!


@kalama, it looks like @timmy8ken was able to resolve their issue by changing the user they logged in with. If you're still having issues, please feel free to open a separate Discuss topic and provide some additional details about the behavior you're seeing.


Hi Team,

I am new to the ELK stack; I have installed Elasticsearch, Kibana, and Logstash, but when I open Kibana for the first time I am facing the same issue.
I have not loaded any data into Elasticsearch or Logstash. Can you please help me resolve this index pattern issue?
I mean, how can I set that index pattern?

@shubh09 you'll want to load some data into Elasticsearch, and then when you open Kibana you'll be able to define the "Index Pattern". Here are some docs for defining the index pattern: https://www.elastic.co/guide/en/kibana/current/tutorial-define-index.html

If you don't have any data in Elasticsearch yet, it might be worthwhile to get some sample data in there to play around with in Kibana. Here are some docs that will walk you through that process as well: https://www.elastic.co/guide/en/kibana/current/tutorial-load-dataset.html

Thank you... but I am having lots of issues. I might have done something wrong, so I want to uninstall the full ELK stack and then reinstall it. Can you please help me with how to uninstall the full ELK stack from Windows 7?

@shubh09 were you following any specific guide when installing the stack? There are a few ways one can install the stack, so I'm trying to determine the general approach that you took to help you with the uninstall steps.

Hello,
I have the same issue after starting Logstash 5.2 on Windows. Why? As far as I know there should be a default template and index.


@111148 Kibana itself doesn't create an Index Pattern by default. Have you checked whether Logstash is successfully indexing data into Elasticsearch? You can do so via Dev Tools by executing the following query: GET _cat/indices

You should see the index that Logstash created to the right after executing the query; generally it starts with 'logstash-'. If you see your index, then you can go to Management -> Index Patterns and create your Index Pattern, which tells Kibana how to retrieve data from your index.
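In Dev Tools that looks like the following (the ?v parameter just adds column headers, and the second request is an extra check I'd suggest to confirm the index actually has a mapping):

```
GET _cat/indices/logstash-*?v

# Once the index shows up, confirm it has field mappings:
GET logstash-*/_mapping
```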

Hello,
I see only yesterday's Logstash index.


Today's Logstash index I deleted after attempting to change it and add a new "date" field to the index mapping. That caused an error (field is empty), and after these actions I was not able to use the index anymore, as it had become corrupted.

@111148 have you already created your logstash-* index pattern? Can you post a screenshot of where you're seeing the error?

Yes, I created it somehow, though I don't know exactly how it happened.
I just stopped two of the three nodes and deleted Logstash on each of them. Then I reinstalled Logstash and installed the logstash-input-azureblob plugin.
Then I adjusted the Logstash config file with the Azure blob parameters and started Logstash.
Then I opened the Management section in the Kibana GUI, chose the logstash-* template, and created the index pattern using the @timestamp field.


But I am not sure that it is the correct pattern, as Logstash cannot parse my Azure blob JSON files (though I suppose that is a slightly different issue).

@111148 Have you tried refreshing your Index Pattern by clicking that orange/yellow button? Sometimes you have to do that if the mappings for your index change.

I did do the refresh once, and after that my index became corrupted, so I don't want to risk doing it again. I don't want to reinstall Logstash again, because I am not aware of how it works and I cannot create this index on my own (using Dev Tools), only after reinstalling Logstash, though even reinstalling does not work every time. Last time I was forced to repeat these actions three times in a row, and only on the third attempt did I manage to create an index pattern. So for me Logstash is some kind of magic!

@111148 the Index Pattern in Kibana is completely separate from your Logstash index in Elasticsearch, so refreshing your Index Pattern won't affect any of the underlying data.

Logstash is responsible for creating the logstash-* indices and putting the data in them, and Kibana just reads the data from them, using the Index Pattern as a guide.

I have already done the refresh, but nothing has changed. I am still facing the JSON parse error and can still see the same limited list of errors. You can find more details here: Error with parsing Azure json multiline files
Thanks.

Hi everyone,

I can see that my indices have been created:

yellow open my_index IhP4oMEPSU2BbOglh3alzg 5 1 0 0 795b 795b
yellow open .monitoring-data-2 NrozdjapSR6VUT0yde5Hig 1 1 3 0 13.5kb 13.5kb
yellow open .monitoring-es-2-2017.03.06 1CVuZrywRNGpag7MyzyqLA 1 1 66325 76 28.6mb 28.6mb
yellow open logstash-2017.03.03 r2d3iI66TYSUfC4u27-izg 5 1 60 0 86.3kb 86.3kb
yellow open .kibana J6UnVth_SR6FyMg3O5YmEw 1 1 2 0 9.7kb 9.7kb
yellow open .monitoring-kibana-2-2017.03.07 T-Zmys9uSX2WlB8TNjjVRA 1 1 5942 0 2.2mb 2.2mb
yellow open orders IqcCb64HSWSgzK8e5873Zw 5 1 1 79 102.2kb 102.2kb
yellow open transacdata i5MS-aZEQvOKl5rAy0c02w 5 1 367712 0 144.5mb 144.5mb
yellow open .monitoring-es-2-2017.03.07 jD9y8IU-R6-2orDCuiK_yA 1 1 91361 352 56.4mb 56.4mb
yellow open logstash-2017.03.02 w5nhwdqaQKitlkkuiiejOQ 5 1 35 0 93.6kb 93.6kb
yellow open .monitoring-kibana-2-2017.03.06 iD98u5QeTjiy2YV8T2fcxA 1 1 5130 0 1.1mb 1.1mb

But I can't see them in Kibana Discover. I tried refreshing the index and changing the time range, but nothing happens.

PS: when I try GET _cat/indices I see all of them:

yellow open my_index IhP4oMEPSU2BbOglh3alzg 5 1 0 0 795b 795b
yellow open .monitoring-data-2 NrozdjapSR6VUT0yde5Hig 1 1 3 0 13.5kb 13.5kb
yellow open .monitoring-es-2-2017.03.06 1CVuZrywRNGpag7MyzyqLA 1 1 66325 76 28.6mb 28.6mb
yellow open logstash-2017.03.03 r2d3iI66TYSUfC4u27-izg 5 1 60 0 86.3kb 86.3kb
yellow open .kibana J6UnVth_SR6FyMg3O5YmEw 1 1 2 0 9.7kb 9.7kb
yellow open .monitoring-kibana-2-2017.03.07 T-Zmys9uSX2WlB8TNjjVRA 1 1 6130 0 2.3mb 2.3mb
yellow open orders IqcCb64HSWSgzK8e5873Zw 5 1 1 79 102.2kb 102.2kb
yellow open transacdata i5MS-aZEQvOKl5rAy0c02w 5 1 367712 0 144.5mb 144.5mb
yellow open .monitoring-es-2-2017.03.07 jD9y8IU-R6-2orDCuiK_yA 1 1 94568 640 58.1mb 58.1mb
yellow open logstash-2017.03.02 w5nhwdqaQKitlkkuiiejOQ 5 1 35 0 93.6kb 93.6kb
yellow open .monitoring-kibana-2-2017.03.06 iD98u5QeTjiy2YV8T2fcxA 1 1 5130 0 1.1mb 1.1mb

But nothing shows up in Kibana Discover.

Do you have any idea please?
Thank you!

@Newuser Have you ensured that the user you're logging into Kibana with has permission to read the data in that Index?

Now I can see all the indices created. I don't know how, but everything is OK.
Thank you Brandon