Kibana not showing letters, but will show numbers

Hello,

As the title says, I can query all of the numeric data, but I cannot query any data that contains letters (string fields). This just started earlier today. I can go into the Discover tab and see the data, but for some reason I cannot see it in visualizations. The last time this happened, I restarted Elasticsearch, but when I did that I lost all of my visualizations and dashboards. That did fix the problem, but this shouldn't be happening every couple of weeks.

Thanks in advance :smile:

Can you take a look at the Elasticsearch logs and share anything that looks suspicious?

OK. Everything seemed to be working fine yesterday until I went to check the dashboard. As soon as it tried to load, errors were thrown. Here are two entries from yesterday:

[2018-01-16 08:56:07,905][DEBUG][action.admin.indices.mapping.put] [Trader] failed to put mappings on indices [[logstash-corp_windows_events-2018-01-16]], type [corp_windows_events]
MapperParsingException[Field name [ MSSQLSvc/LAPTOP-test.ad.test.com] cannot contain '.']
at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseProperties(ObjectMapper.java:277)
at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseObjectOrDocumentTypeProperties(ObjectMapper.java:222)
at org.elasticsearch.index.mapper.object.RootObjectMapper$TypeParser.parse(RootObjectMapper.java:139)
at org.elasticsearch.index.mapper.DocumentMapperParser.parse(DocumentMapperParser.java:118)
at org.elasticsearch.index.mapper.DocumentMapperParser.parse(DocumentMapperParser.java:99)
at org.elasticsearch.index.mapper.MapperService.parse(MapperService.java:549)
at org.elasticsearch.cluster.metadata.MetaDataMappingService$PutMappingExecutor.applyRequest(MetaDataMappingService.java:257)
at org.elasticsearch.cluster.metadata.MetaDataMappingService$PutMappingExecutor.execute(MetaDataMappingService.java:230)
at org.elasticsearch.cluster.service.InternalClusterService.runTasksForExecutor(InternalClusterService.java:480)
at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:784)
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:231)
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:194)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

[2018-01-16 08:56:07,908][DEBUG][action.bulk ] [Trader] [logstash-corp_windows_events-2018-01-16][0] failed to execute bulk item (index) index {[logstash-corp_windows_events-2018-01-16][corp_windows_events][AWD_QXRf6NbdXA05bjPG], source[{"EventTime":"2018-01-16 08:56:08","EventTimeWritten":"2018-01-16 08:56:08","Hostname":"domain","EventType":"AUDIT_SUCCESS","SeverityValue":2,"Severity":"INFO","SourceName":"Security","FileName":"Security","EventID":646,"CategoryNumber":7,"Category":"Account Management ","RecordNumber":1116245164,"Domain":"test","AccountName":"LAPTOP-test$","AccountType":"User","EventReceivedTime":"2018-01-16 08:56:09","SourceModuleName":"in","SourceModuleType":"im_mseventlog","@version":"1","@timestamp":"2018-01-16T13:56:09.265Z","host":"localhost","port":3690,"type":"corp_windows_events","tags":["Low"]," \tTarget Account Name":"LAPTOP-test$"," \tTarget Domain":"test"," \tTarget Account ID":"%{S-1-5-21-26028188-150678075-188441444-172184}"," \tCaller User Name":"LAPTOP-test$"," \tCaller Domain":"test"," \tCaller Logon ID":"(0x6,0xA572D774)"," \tPrivileges":"-"," \tSam Account Name":"-"," \tDisplay Name":"-"," \tUser Principal Name":"-"," \tHome Directory":"-"," \tHome Drive":"-"," \tScript Path":"-"," \tProfile Path":"-"," \tUser Workstations":"-"," \tPassword Last Set":"-"," \tAccount Expires":"-"," \tPrimary Group ID":"-"," \tAllowedToDelegateTo":"-"," \tOld UAC Value":"-"," \tNew UAC Value":"-"," \tUser Account Control":"-"," \tUser Parameters":"-"," \tSid History":"-"," \tLogon Hours":"-"," \tDNS Host Name":"-"," \tService Principal Names":"","\t\tMSSQLSvc/LAPTOzp-test.ad.test.com":"1433"}]}
MapperParsingException[Field name [ MSSQLSvc/LAPTOP-test.ad.test.com] cannot contain '.']
at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseProperties(ObjectMapper.java:277)
at org.elasticsearch.index.mapper.object.ObjectMapper$TypeParser.parseObjectOrDocumentTypeProperties(ObjectMapper.java:222)
at org.elasticsearch.index.mapper.object.RootObjectMapper$TypeParser.parse(RootObjectMapper.java:139)
at org.elasticsearch.index.mapper.DocumentMapperParser.parse(DocumentMapperParser.java:118)
at org.elasticsearch.index.mapper.DocumentMapperParser.parse(DocumentMapperParser.java:99)
at org.elasticsearch.index.mapper.MapperService.parse(MapperService.java:549)
at org.elasticsearch.cluster.metadata.MetaDataMappingService$PutMappingExecutor.applyRequest(MetaDataMappingService.java:257)
at org.elasticsearch.cluster.metadata.MetaDataMappingService$PutMappingExecutor.execute(MetaDataMappingService.java:230)
at org.elasticsearch.cluster.service.InternalClusterService.runTasksForExecutor(InternalClusterService.java:480)
at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:784)
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:231)
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:194)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

It appears that having a period in the FQDN (which ends up as a field name) caused this problem, but all of my work shouldn't have been lost because of it. I believe the following entries are from restarting Elasticsearch after the errors occurred:

[2018-01-16 09:49:00,153][INFO ][cluster.metadata ] [Trader] [logstash-corp_windows_events-2018-01-16] update_mapping [corp_windows_events]
[2018-01-16 09:49:00,764][INFO ][cluster.metadata ] [Trader] [logstash-corp_windows_events-2018-01-16] update_mapping [corp_windows_events]
[2018-01-16 09:58:13,294][INFO ][node ] [Trader] stopping ...
[2018-01-16 09:58:14,513][INFO ][node ] [Trader] stopped
[2018-01-16 09:58:14,513][INFO ][node ] [Trader] closing ...
[2018-01-16 09:58:14,522][INFO ][node ] [Trader] closed
[2018-01-16 09:58:16,154][WARN ][bootstrap ] unable to install syscall filter: seccomp unavailable: CONFIG_SECCOMP not compiled into kernel, CONFIG_SECCOMP and CONFIG_SECCOMP_FILTER are needed
[2018-01-16 09:58:16,295][INFO ][node ] [Jumbo Carnation] version[2.4.4], pid[17592], build[fcbb46d/2017-01-03T11:33:16Z]
[2018-01-16 09:58:16,295][INFO ][node ] [Jumbo Carnation] initializing ...
[2018-01-16 09:58:16,935][INFO ][plugins ] [Jumbo Carnation] modules [reindex, lang-expression, lang-groovy], plugins [], sites []
[2018-01-16 09:58:16,951][INFO ][env ] [Jumbo Carnation] using [1] data paths, mounts [[/var (/dev/sda4)]], net usable_space [915.5gb], net total_space [991.6gb], spins? [possibly], types [ext4]
[2018-01-16 09:58:16,952][INFO ][env ] [Jumbo Carnation] heap size [30.9gb], compressed ordinary object pointers [true]
[2018-01-16 09:58:19,050][INFO ][node ] [Jumbo Carnation] initialized
[2018-01-16 09:58:19,050][INFO ][node ] [Jumbo Carnation] starting ...
[2018-01-16 09:58:19,128][INFO ][transport ] [Jumbo Carnation] publish_address {localhost:9300}, bound_addresses {localhost:9300}
[2018-01-16 09:58:19,132][INFO ][discovery ] [Jumbo Carnation] elasticsearch/Qq2ax1twTQSOFwiaAiJ8qw
[2018-01-16 09:58:22,225][INFO ][cluster.service ] [Jumbo Carnation] new_master {Jumbo Carnation}{Qq2ax1twTQSOFwiaAiJ8qw}{localhost}{localhost:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)
[2018-01-16 09:58:22,315][INFO ][http ] [Jumbo Carnation] publish_address {localhost:9200}, bound_addresses {localhost:9200}
[2018-01-16 09:58:22,315][INFO ][node ] [Jumbo Carnation] started
[2018-01-16 09:58:22,482][INFO ][gateway ] [Jumbo Carnation] recovered [13] indices into cluster_state
[2018-01-16 09:58:23,852][DEBUG][action.bulk ] [Jumbo Carnation] failed to execute [BulkShardRequest to [logstash-corp_windows_events-2018-01-16] containing [3] requests] on [[logstash-corp_windows_events-2018-01-16][0]]
[logstash-corp_windows_events-2018-01-16][[logstash-corp_windows_events-2018-01-16][0]] IllegalIndexShardStateException[CurrentState[POST_RECOVERY] operation only allowed when started/recovering, origin [PRIMARY]]
at org.elasticsearch.index.shard.IndexShard.ensureWriteAllowed(IndexShard.java:1066)
at org.elasticsearch.index.shard.IndexShard.create(IndexShard.java:542)
at org.elasticsearch.index.engine.Engine$Create.execute(Engine.java:810)
at org.elasticsearch.action.index.TransportIndexAction.executeIndexRequestOnPrimary(TransportIndexAction.java:236)
at org.elasticsearch.action.bulk.TransportShardBulkAction.shardIndexOperation(TransportShardBulkAction.java:327)
at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:120)
at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:68)
at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryPhase.doRun(TransportReplicationAction.java:657)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:287)
at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:279)
at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:77)
at org.elasticsearch.transport.TransportService$4.doRun(TransportService.java:378)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

There are no logs in Elasticsearch for today, even after trying to query the data in Kibana. Kibana does not show string data from today or yesterday, but it does for Monday and earlier, even though the data for today and yesterday is visible in the Discover tab.
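Since the mapper exception above is rejecting the Service Principal Name keys (the FQDN becomes a field name containing '.', which Elasticsearch 2.x does not allow), one option would be to clean those keys up before the events are indexed, for example with a Logstash mutate/de_dot filter. Just to illustrate the idea, here is a rough Python sketch of the transformation (illustrative only; the real fix would live in the Logstash pipeline):

```python
# Illustrative only: roughly what a Logstash mutate/de_dot filter would do.
# Replaces '.' in field names with '_' (and strips the stray leading tabs)
# so the mapper no longer rejects the document.
def de_dot(event, replacement="_"):
    cleaned = {}
    for key, value in event.items():
        new_key = key.strip().replace(".", replacement)
        if isinstance(value, dict):
            value = de_dot(value, replacement)  # handle nested objects, just in case
        cleaned[new_key] = value
    return cleaned

# One of the keys that triggered the MapperParsingException above:
event = {"\t\tMSSQLSvc/LAPTOP-test.ad.test.com": "1433", " \tTarget Domain": "test"}
print(de_dot(event))
# {'MSSQLSvc/LAPTOP-test_ad_test_com': '1433', 'Target Domain': 'test'}
```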

Sorry for all the comments...

Going back through the Elasticsearch logs for the days when the data was showing in Kibana, there are tons of entries like the following, with the shard number after the index name varying:

[2018-01-15 00:00:21,993][DEBUG][action.admin.indices.stats] [Kick-Ass] [indices:monitor/stats] failed to execute operation for shard [[corp_windows_events][1], node[tWL5P8-sTy2nsDL8lMYIbA], [P], v[6], s[STARTED], a[id=041ITbmWTDaUKKbfoo08UQ]]
ElasticsearchException[failed to refresh store stats]; nested: NoSuchFileException[/var/lib/elasticsearch/elasticsearch/nodes/0/indices/corp_windows_events/1/index];
at org.elasticsearch.index.store.Store$StoreStatsCache.refresh(Store.java:1533)
at org.elasticsearch.index.store.Store$StoreStatsCache.refresh(Store.java:1518)
at org.elasticsearch.common.util.SingleObjectCache.getOrRefresh(SingleObjectCache.java:55)
at org.elasticsearch.index.store.Store.stats(Store.java:294)
at org.elasticsearch.index.shard.IndexShard.storeStats(IndexShard.java:706)
at org.elasticsearch.action.admin.indices.stats.CommonStats.(CommonStats.java:134)
at org.elasticsearch.action.admin.indices.stats.TransportIndicesStatsAction.shardOperation(TransportIndicesStatsAction.java:165)
at org.elasticsearch.action.admin.indices.stats.TransportIndicesStatsAction.shardOperation(TransportIndicesStatsAction.java:47)
at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$BroadcastByNodeTransportRequestHandler.onShardOperation(TransportBroadcastByNodeAction.java:436)
at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$BroadcastByNodeTransportRequestHandler.messageReceived(TransportBroadcastByNodeAction.java:415)
at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$BroadcastByNodeTransportRequestHandler.messageReceived(TransportBroadcastByNodeAction.java:402)
at org.elasticsearch.transport.TransportRequestHandler.messageReceived(TransportRequestHandler.java:33)
at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:77)
at org.elasticsearch.transport.TransportService$4.doRun(TransportService.java:378)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.nio.file.NoSuchFileException: /var/lib/elasticsearch/elasticsearch/nodes/0/indices/corp_windows_events/1/index
at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at sun.nio.fs.UnixFileSystemProvider.newDirectoryStream(UnixFileSystemProvider.java:427)
at java.nio.file.Files.newDirectoryStream(Files.java:457)
at org.apache.lucene.store.FSDirectory.listAll(FSDirectory.java:191)
at org.apache.lucene.store.FSDirectory.listAll(FSDirectory.java:203)
at org.elasticsearch.index.store.FsDirectoryService$1.listAll(FsDirectoryService.java:127)
at org.apache.lucene.store.FilterDirectory.listAll(FilterDirectory.java:57)
at org.apache.lucene.store.FilterDirectory.listAll(FilterDirectory.java:57)
at org.elasticsearch.index.store.Store$StoreStatsCache.estimateSize(Store.java:1539)
at org.elasticsearch.index.store.Store$StoreStatsCache.refresh(Store.java:1531)
... 17 more
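If it helps, I can also check whether that index's shards are actually where Elasticsearch thinks they are. A rough sketch (assuming Elasticsearch is listening on localhost:9200 and the Python requests library is installed) that prints the shard allocation for the index named in the exception:

```python
# Rough sketch: assumes Elasticsearch on localhost:9200 and the `requests` library.
import requests

BASE = "http://localhost:9200"

# _cat/shards lists each shard's index, number, primary/replica, state and node.
print(requests.get(BASE + "/_cat/shards/corp_windows_events?v").text)

# Cluster-wide health, in case other indices have lost shards too.
print(requests.get(BASE + "/_cluster/health?pretty").text)
```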

This should be the last comment until a resolution is found...

I created a visualization in Kibana to try to pinpoint when Kibana stopped getting string field data from Elasticsearch, and it stopped at approximately 7 p.m. I looked in the Elasticsearch logs around that time, but I do not see any errors. However, I do see this large gap between 13:25 and 18:59:

[2018-01-15 13:23:18,829][INFO ][cluster.metadata ] [Trader] [.kibana] creating index, cause [api], templates [], shards [1]/[1], mappings [config]
[2018-01-15 13:24:11,360][INFO ][cluster.metadata ] [Trader] [.kibana] create_mapping [index-pattern]
[2018-01-15 13:24:44,685][INFO ][cluster.metadata ] [Trader] [.kibana] update_mapping [config]
[2018-01-15 13:24:48,163][INFO ][cluster.metadata ] [Trader] [.kibana] create_mapping [search]
[2018-01-15 13:24:55,442][INFO ][cluster.metadata ] [Trader] [.kibana] create_mapping [dashboard]
[2018-01-15 13:25:10,117][INFO ][cluster.metadata ] [Trader] [.kibana] create_mapping [visualization]
[2018-01-15 18:59:58,557][INFO ][cluster.metadata ] [Trader] [logstash-corp_windows_events-2018-01-16] creating index, cause [auto(bulk api)], templates [], shards [5]/[1], mappings []
[2018-01-15 18:59:58,795][INFO ][cluster.metadata ] [Trader] [logstash-corp_windows_events-2018-01-16] create_mapping [corp_windows_events]
[2018-01-15 18:59:58,798][INFO ][cluster.metadata ] [Trader] [logstash-corp_windows_events-2018-01-16] update_mapping [corp_windows_events]
[2018-01-15 18:59:58,813][INFO ][cluster.metadata ] [Trader] [logstash-corp_windows_events-2018-01-16] update_mapping [corp_windows_events]
[2018-01-15 18:59:58,816][INFO ][cluster.metadata ] [Trader] [logstash-corp_windows_events-2018-01-16] update_mapping [corp_windows_events]
[2018-01-15 18:59:58,832][INFO ][cluster.metadata ] [Trader] [logstash-corp_windows_events-2018-01-16] update_mapping [corp_windows_events]
[2018-01-15 18:59:58,856][INFO ][cluster.metadata ] [Trader] [logstash-corp_windows_events-2018-01-16] update_mapping [corp_windows_events]
[2018-01-15 18:59:58,908][INFO ][cluster.metadata ] [Trader] [logstash-corp_windows_events-2018-01-16] update_mapping [corp_windows_events]
[2018-01-15 19:00:19,443][INFO ][cluster.metadata ] [Trader] [logstash-corp_windows_events-2018-01-16] update_mapping [corp_windows_events]
[2018-01-15 19:00:42,252][INFO ][cluster.metadata ] [Trader] [logstash-corp_windows_events-2018-01-16] update_mapping [corp_windows_events]
[2018-01-15 19:01:02,518][INFO ][cluster.metadata ] [Trader] [logstash-corp_windows_events-2018-01-16] update_mapping [corp_windows_events]
[2018-01-15 19:17:09,242][INFO ][cluster.metadata ] [Trader] [logstash-corp_windows_events-2018-01-16] update_mapping [corp_windows_events]
[2018-01-15 19:18:27,283][INFO ][cluster.metadata ] [Trader] [logstash-corp_windows_events-2018-01-16] update_mapping [corp_windows_events]
[2018-01-15 19:32:21,702][INFO ][cluster.metadata ] [Trader] [logstash-corp_windows_events-2018-01-16] update_mapping [corp_windows_events]
[2018-01-15 19:41:48,034][INFO ][cluster.metadata ] [Trader] [logstash-corp_windows_events-2018-01-16] update_mapping [corp_windows_events]
[2018-01-15 20:20:17,341][INFO ][cluster.metadata ] [Trader] [logstash-corp_windows_events-2018-01-16] update_mapping [corp_windows_events]

I think it has something to do with the index pattern, but it doesn't really make sense. If I use the "logstash-*" pattern, the data shows up as expected. If I use any other variant, like "logstash-corp*" or "logstash-corp_windows_events-*", no string data is displayed :confused: Like I mentioned earlier, though, it was working for a while and then all of a sudden stopped.
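One way to check whether the string fields ever made it into the mapping of the affected daily indices (Discover shows the raw _source, but visualizations can only use fields that were actually mapped) would be to compare the mapping of a good day against a bad day. A rough sketch, assuming Elasticsearch on localhost:9200, the Python requests library, and placeholder index names:

```python
# Rough sketch: assumes Elasticsearch on localhost:9200 and the `requests` library.
# Compares which fields are mapped as strings on a "good" day versus a "bad" day.
import requests

BASE = "http://localhost:9200"

def string_fields(index):
    mapping = requests.get(BASE + "/" + index + "/_mapping").json()
    fields = set()
    for idx in mapping.values():                    # one entry per concrete index
        for m in idx.get("mappings", {}).values():  # one entry per mapping type
            for name, spec in m.get("properties", {}).items():
                if spec.get("type") == "string":    # "string" on 2.x
                    fields.add(name)
    return fields

# Substitute a day that shows string data and a day that doesn't:
good = string_fields("logstash-corp_windows_events-2018-01-15")
bad = string_fields("logstash-corp_windows_events-2018-01-16")
print("mapped on the good day but missing on the bad day:", sorted(good - bad))
```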

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.