No index pattern for logstash

I am struggling a bit with this. Windows 2019 running ELK 7.10.2. I have it ingesting filebeat, metricbeat, etc. Proper indices and patterns are present, except...

Logstash has an index, so I know it's ingesting. However, I cannot create an index pattern for it and therefore cannot get visualizations from it. For the life of me, I cannot determine the cause.

If I query for the indices in Dev Tools, I get this:
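(For reference, a listing like the one below comes from the cat indices API, e.g. a Dev Tools request along these lines; the exact command wasn't shown:)

```txt
GET _cat/indices
```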

yellow open heartbeat-7.10.1-2021.01.08-000001 pWcTBNz_TC6cJE3TWXq0kQ 1 1 157846 0 41.7mb 41.7mb
yellow open auditbeat-7.6.2-2021.01.13-000001 n4HJ_K7uQWGcKC6dU8zVlQ 1 1 58578 0 24.3mb 24.3mb
yellow open packetbeat-7.10.2-2021.01.26-000001 qvvqrrmaTuG8aSl-uV2ADw 1 1 247110 0 98mb 98mb
yellow open auditbeat-7.4.2-2021.01.11-000001 QQqfhtqnQQ-Z2qY1jxGIzA 1 1 90990 0 27mb 27mb
green open .apm-custom-link pHKByYwZQe6NgfwpizBSoQ 1 0 0 0 208b 208b
green open .kibana_task_manager_1 VvlWPKW2SfO2etrVOX1boQ 1 0 5 103 197.8kb 197.8kb
yellow open packetbeat-7.10.1-2021.01.08-000001 sspfA00VS5-q3MHTbxOpXA 1 1 19692468 0 5.7gb 5.7gb
yellow open auditbeat-7.10.2-2021.01.26-000001 8j1977n-SBaZUtG4STKJKQ 1 1 5583 0 4.1mb 4.1mb
green open .apm-agent-configuration tU8asT6uRD2Kw-b973mh5Q 1 0 0 0 208b 208b
yellow open winlogbeat-7.5.1-2021.01.11-000001 dD4CAdB1TOyxL8aqv9ZtDQ 1 1 60609 0 37.3mb 37.3mb
yellow open winlogbeat-7.10.2-2021.01.26-000001 i0YhH3OfToC8bVe8kMbaAA 1 1 144 0 377.4kb 377.4kb
green open .kibana_1 MCWUQhXjQjaDhFx8E1p9Cg 1 0 5003 83 3.4mb 3.4mb
yellow open %{[@metadata][beat]}-2021.01.26 SDG6ayapRmitkEjBj5o15A 1 1 1582368 0 505.4mb 505.4mb
yellow open metricbeat-7.10.2-2021.01.26-000001 4ulHLoZjRYOLvEWewNlLAw 1 1 10113 0 3.9mb 3.9mb
yellow open %{[@metadata][beat]}-2021.01.27 umN0E5aNSiyThAdN0-j5fw 1 1 118852 0 71.5mb 71.5mb
yellow open filebeat-7.10.1-2021.01.14-000002 DoAHhFDeTfS_jQXxyXE0Dw 1 1 91893900 0 36gb 36gb
yellow open filebeat-7.10.1-2021.01.13-000001 LnFBy2wmQEiVD0ZwiVDTAQ 1 1 130191084 0 50.4gb 50.4gb
yellow open logstash-2021.01.26-000001 8ykOajKwRmmYn4F8RPW4cw 1 1 1972265 0 592.9mb 592.9mb
yellow open winlogbeat-7.10.1-2021.01.08-000001 lr_OvA5TR1O_Qw6lwRieJg 1 1 48307537 0 41.8gb 41.8gb
yellow open heartbeat-7.5.1-2021.01.11-000001 ySFW6lcpSfqBG5FOJKZZZQ 1 1 136339 0 24.6mb 24.6mb
green open .kibana-event-log-7.10.1-000001 kzlvafLsR6aMzOzzrc1J3A 1 0 9 0 43.8kb 43.8kb
green open .async-search LhYO2-8DTq6L0CS-8qjLtg 1 0 0 0 3.4kb 3.4kb
yellow open heartbeat-7.10.2-2021.01.26-000001 7fCaa_J6SSuaqSYs6q8xWQ 1 1 703 0 526.7kb 526.7kb
green open .kibana-event-log-7.10.2-000001 UY6GdJcKS0GeftkPSoFrTg 1 0 2 0 11kb 11kb
yellow open metricbeat-7.10.1-2021.01.08-000001 chev6X7JTb6TDUnccSrPHQ 1 1 4365972 0 1.1gb 1.1gb

If I run a query for the index patterns, I get this:
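(Presumably a search against the Kibana saved-objects index for index-pattern documents, along these lines — the exact query wasn't shown:)

```json
GET .kibana/_search
{
  "query": { "term": { "type": "index-pattern" } },
  "_source": [ "index-pattern.title" ]
}
```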

#! Deprecation: this request accesses system indices: [.kibana_1], but in a future major version, direct access to system indices will be prevented by default
{
  "took" : 23,
  "timed_out" : false,
  "_shards" : {
    "total" : 1,
    "successful" : 1,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : {
      "value" : 5,
      "relation" : "eq"
    },
    "max_score" : 5.5768394,
    "hits" : [
      {
        "_index" : ".kibana_1",
        "_type" : "_doc",
        "_id" : "index-pattern:filebeat-*",
        "_score" : 5.5768394,
        "_source" : {
          "index-pattern" : {
            "title" : "filebeat-*"
          }
        }
      },
      {
        "_index" : ".kibana_1",
        "_type" : "_doc",
        "_id" : "index-pattern:metricbeat-*",
        "_score" : 5.5768394,
        "_source" : {
          "index-pattern" : {
            "title" : "metricbeat-*"
          }
        }
      },
      {
        "_index" : ".kibana_1",
        "_type" : "_doc",
        "_id" : "index-pattern:auditbeat-*",
        "_score" : 5.5768394,
        "_source" : {
          "index-pattern" : {
            "title" : "auditbeat-*"
          }
        }
      },
      {
        "_index" : ".kibana_1",
        "_type" : "_doc",
        "_id" : "index-pattern:packetbeat-*",
        "_score" : 5.5768394,
        "_source" : {
          "index-pattern" : {
            "title" : "packetbeat-*"
          }
        }
      },
      {
        "_index" : ".kibana_1",
        "_type" : "_doc",
        "_id" : "index-pattern:winlogbeat-*",
        "_score" : 5.5768394,
        "_source" : {
          "index-pattern" : {
            "title" : "winlogbeat-*"
          }
        }
      }
    ]
  }
}

Hi and welcome to our community!

What happens when you create an index pattern in stack management of Kibana?

Best,
Matthias

I was able to recreate all the other patterns. It's the logstash pattern that I cannot get back.

What happens when you try to create an index pattern in Stack Management?
There is no error message?

Well, normally, an index that did not have a pattern yet would let me create one. In this case, logstash-*, logs*, etc. are not options because the wizard doesn't recognize that the index exists, which it plainly does.

BTW, it states: "The index pattern you've entered doesn't match any indices. You can match any of your 0 indices, below."

Update: Logstash is creating the index, so it's obvious data is getting to ELK. However, I simply cannot create the index pattern. Is this a problem with the logstash.conf file? With Elasticsearch? With Kibana?

I have no idea where to troubleshoot this stupid thing. I have syslogs pointing to this, so I know that it is ingesting (theoretically) properly.

Here is my logstash_plain.log:

[2021-01-27T14:47:28,733][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"7.10.2", "jruby.version"=>"jruby 9.2.13.0 (2.5.7) 2020-08-03 9a89c94bcc Java HotSpot(TM) 64-Bit Server VM 11.0.9+7-LTS on 11.0.9+7-LTS +indy +jit [mswin32-x86_64]"}
[2021-01-27T14:47:28,968][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2021-01-27T14:47:31,019][INFO ][org.reflections.Reflections] Reflections took 32 ms to scan 1 urls, producing 23 keys and 47 values 
[2021-01-27T14:47:32,258][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2021-01-27T14:47:32,420][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2021-01-27T14:47:32,464][INFO ][logstash.outputs.elasticsearch][main] ES Output version determined {:es_version=>7}
[2021-01-27T14:47:32,465][WARN ][logstash.outputs.elasticsearch][main] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2021-01-27T14:47:32,511][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost:9200"]}
[2021-01-27T14:47:32,528][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2021-01-27T14:47:32,537][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2021-01-27T14:47:32,553][INFO ][logstash.outputs.elasticsearch][main] ES Output version determined {:es_version=>7}
[2021-01-27T14:47:32,556][WARN ][logstash.outputs.elasticsearch][main] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2021-01-27T14:47:32,549][INFO ][logstash.outputs.elasticsearch][main] Using a default mapping template {:es_version=>7, :ecs_compatibility=>:disabled}
[2021-01-27T14:47:32,574][INFO ][logstash.outputs.elasticsearch][main] Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
[2021-01-27T14:47:32,589][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://localhost:9200"]}
[2021-01-27T14:47:32,614][INFO ][logstash.outputs.elasticsearch][main] Using a default mapping template {:es_version=>7, :ecs_compatibility=>:disabled}
[2021-01-27T14:47:32,730][INFO ][logstash.outputs.elasticsearch][main] Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
[2021-01-27T14:47:32,807][INFO ][logstash.javapipeline    ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, "pipeline.sources"=>["L:/ELK_Stack/logstash-7.10.1/config/logstash.conf"], :thread=>"#<Thread:0x456a4183 run>"}
[2021-01-27T14:47:33,802][INFO ][logstash.javapipeline    ][main] Pipeline Java execution initialization time {"seconds"=>0.98}
[2021-01-27T14:47:33,823][INFO ][logstash.inputs.beats    ][main] Starting input listener {:address=>"0.0.0.0:5044"}
[2021-01-27T14:47:34,026][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}
[2021-01-27T14:47:34,051][INFO ][org.logstash.beats.Server][main][0c7b40a601ddbef6d6308e9b707787d6730fc38eace6208605b9d6342bd03c53] Starting server on port: 5044
[2021-01-27T14:47:34,033][INFO ][logstash.inputs.tcp      ][main][11cd561f9ec2270b230c18b2d3754bbddf53a0b8e7f51b2ba1e17da35a779db3] Starting tcp input listener {:address=>"0.0.0.0:514", :ssl_enable=>"false"}
[2021-01-27T14:47:34,140][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2021-01-27T14:47:34,380][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}

Logstash shouldn't be a problem here, and according to your query for indices there is a logstash index available. Dear @mattkime, do you have an idea why this user can't find this logstash index when he tries to add an index pattern for it? Many thanks!

@mof_cmsmith Try running GET /_resolve/index/* from Dev Tools - what response do you get? This should return a list of all the indices you have access to.


As previously indicated, I see the syslog/logstash indices that I want. So why can't I create the pattern for them?

{
  "indices" : [
    {
      "name" : ".apm-agent-configuration",
      "attributes" : [
        "open"
      ]
    },
    {
      "name" : ".apm-custom-link",
      "attributes" : [
        "open"
      ]
    },
    {
      "name" : ".async-search",
      "attributes" : [
        "open"
      ]
    },
    {
      "name" : ".kibana-event-log-7.10.2-000001",
      "aliases" : [
        ".kibana-event-log-7.10.2"
      ],
      "attributes" : [
        "open"
      ]
    },
    {
      "name" : ".kibana_1",
      "aliases" : [
        ".kibana"
      ],
      "attributes" : [
        "open"
      ]
    },
    {
      "name" : ".kibana_task_manager_1",
      "aliases" : [
        ".kibana_task_manager"
      ],
      "attributes" : [
        "open"
      ]
    },
    {
      "name" : "auditbeat-7.10.2-2021.01.27-000001",
      "aliases" : [
        "auditbeat-7.10.2"
      ],
      "attributes" : [
        "open"
      ]
    },
    {
      "name" : "auditbeat-7.4.2-2021.01.27-000001",
      "aliases" : [
        "auditbeat-7.4.2"
      ],
      "attributes" : [
        "open"
      ]
    },
    {
      "name" : "auditbeat-7.6.2-2021.01.27-000001",
      "aliases" : [
        "auditbeat-7.6.2"
      ],
      "attributes" : [
        "open"
      ]
    },
    {
      "name" : "filebeat-7.10.1-2021.01.27-000001",
      "aliases" : [
        "filebeat-7.10.1"
      ],
      "attributes" : [
        "open"
      ]
    },
    {
      "name" : "filebeat-7.10.2-2021.01.27-000001",
      "aliases" : [
        "filebeat-7.10.2"
      ],
      "attributes" : [
        "open"
      ]
    },
    {
      "name" : "heartbeat-7.10.2-2021.01.27-000001",
      "aliases" : [
        "heartbeat-7.10.2"
      ],
      "attributes" : [
        "open"
      ]
    },
    {
      "name" : "heartbeat-7.5.1-2021.01.27-000001",
      "aliases" : [
        "heartbeat-7.5.1"
      ],
      "attributes" : [
        "open"
      ]
    },
    {
      "name" : "**logstash-2021.01.27**",
      "attributes" : [
        "open"
      ]
    },
    {
      "name" : "**logstash-2021.01.28**",
      "attributes" : [
        "open"
      ]
    },
    {
      "name" : "metricbeat-7.10.1-2021.01.27-000001",
      "aliases" : [
        "metricbeat-7.10.1"
      ],
      "attributes" : [
        "open"
      ]
    },
    {
      "name" : "metricbeat-7.10.2-2021.01.27-000001",
      "aliases" : [
        "metricbeat-7.10.2"
      ],
      "attributes" : [
        "open"
      ]
    },
    {
      "name" : "packetbeat-7.10.2-2021.01.27-000001",
      "aliases" : [
        "packetbeat-7.10.2"
      ],
      "attributes" : [
        "open"
      ]
    },
    {
      "name" : "**syslog-2021.01.27.log**",
      "attributes" : [
        "open"
      ]
    },
    {
      "name" : "**syslog-2021.01.28.log**",
      "attributes" : [
        "open"
      ]
    },
    {
      "name" : "winlogbeat-7.10.1-2021.01.27-000001",
      "aliases" : [
        "winlogbeat-7.10.1"
      ],
      "attributes" : [
        "open"
      ]
    },
    {
      "name" : "winlogbeat-7.10.2-2021.01.27-000001",
      "aliases" : [
        "winlogbeat-7.10.2"
      ],
      "attributes" : [
        "open"
      ]
    }
  ],
  "aliases" : [
    {
      "name" : ".kibana",
      "indices" : [
        ".kibana_1"
      ]
    },
    {
      "name" : ".kibana-event-log-7.10.2",
      "indices" : [
        ".kibana-event-log-7.10.2-000001"
      ]
    },
    {
      "name" : ".kibana_task_manager",
      "indices" : [
        ".kibana_task_manager_1"
      ]
    },
    {
      "name" : "auditbeat-7.10.2",
      "indices" : [
        "auditbeat-7.10.2-2021.01.27-000001"
      ]
    },
    {
      "name" : "auditbeat-7.4.2",
      "indices" : [
        "auditbeat-7.4.2-2021.01.27-000001"
      ]
    },
    {
      "name" : "auditbeat-7.6.2",
      "indices" : [
        "auditbeat-7.6.2-2021.01.27-000001"
      ]
    },
    {
      "name" : "filebeat-7.10.1",
      "indices" : [
        "filebeat-7.10.1-2021.01.27-000001"
      ]
    },
    {
      "name" : "filebeat-7.10.2",
      "indices" : [
        "filebeat-7.10.2-2021.01.27-000001"
      ]
    },
    {
      "name" : "heartbeat-7.10.2",
      "indices" : [
        "heartbeat-7.10.2-2021.01.27-000001"
      ]
    },
    {
      "name" : "heartbeat-7.5.1",
      "indices" : [
        "heartbeat-7.5.1-2021.01.27-000001"
      ]
    },
    {
      "name" : "metricbeat-7.10.1",
      "indices" : [
        "metricbeat-7.10.1-2021.01.27-000001"
      ]
    },
    {
      "name" : "metricbeat-7.10.2",
      "indices" : [
        "metricbeat-7.10.2-2021.01.27-000001"
      ]
    },
    {
      "name" : "packetbeat-7.10.2",
      "indices" : [
        "packetbeat-7.10.2-2021.01.27-000001"
      ]
    },
    {
      "name" : "winlogbeat-7.10.1",
      "indices" : [
        "winlogbeat-7.10.1-2021.01.27-000001"
      ]
    },
    {
      "name" : "winlogbeat-7.10.2",
      "indices" : [
        "winlogbeat-7.10.2-2021.01.27-000001"
      ]
    }
  ],
  "data_streams" : [ ]
}

Did you alter the output or do those index names really start and end with two *?

No, I did not alter the names. I think when I attempted to "BOLD" them to make it easy to see, the formatting got in the way. I re-ran the command to confirm.

They do not have double * in front or trailing.

If it helps, this is my logstash.conf file:

# Sample Logstash configuration for creating a simple
# Beats -> Logstash -> Elasticsearch pipeline.

input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "logstash-7.10.2-%{+YYYY.MM.dd}"
    #user => "elastic"
    #password => "changeme"
  }
}

input {
    # Accept input from the console.
    stdin{}
}

filter {
    # Add filter here. This sample has a blank filter.
}

output {
    # Output to the console.
    stdout {
            codec => "rubydebug"
    }
}

input {
  file {
    path => "L:/ELK_Stack/logstash-7.10.1/logs/logstash_in.logs"
    start_position => "beginning"
    sincedb_path => "NUL"
    codec => "json"
  }
}

filter {
   mutate {
     add_field => {"source" => "Medium"}
   }
}

output {
 file {
   path => "L:/ELK_Stack/logstash-7.10.1/logs/logstash_out.logs"
 }
}

input {
  tcp {
    port => 514
    type => syslog
  }
}

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss.SSS", "MMM dd HH:mm:ss.SSS" ]
      timezone => "UTC"
    }
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "syslog-%{+YYYY.MM.dd}.log"
    ilm_enabled => false
  }
  stdout { codec => rubydebug }
}
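Note that when all of these input/filter/output blocks live in a single pipeline file, Logstash merges them into one pipeline: every event from every input passes through every filter and every output, so beats events are also written to the syslog-* index and vice versa. A sketch of one way to keep the streams apart with conditionals, reusing the settings above (a hypothetical restructuring, not the original config):

```conf
output {
  if [type] == "syslog" {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "syslog-%{+YYYY.MM.dd}"
      ilm_enabled => false
    }
  } else {
    elasticsearch {
      hosts => ["http://localhost:9200"]
      index => "logstash-7.10.2-%{+YYYY.MM.dd}"
    }
  }
}
```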

I'm surprised you're having trouble even though the resolve api returns the correct results. The create index pattern UI uses the resolve api so there must be something else causing trouble.

Could you share a har file of the attempt to create the index pattern? Perhaps that will show me something useful.

Thanks,
Matt

What is a har file?

This looks like a pretty good explanation - https://www.medianova.com/en-blog/2019/07/15/a-step-by-step-guide-to-generating-a-har-file

Here is the link to the file. Not sure if it's helpful: https://nmof-my.sharepoint.com/:f:/g/personal/cmsmith_nmof_org/Em70vcifHShHiEb61_oG4rsBuWNPz3HFjVweW8OLfmk_4w?e=2A6dm0

I will say, I have never had this much trouble trying to get logstash in. It's beyond frustrating, as it usually "just works."

I understand your frustration as your expectations are in the right place. We should be able to get to the bottom of it and learn something in the process.


I see a request to /internal/index-pattern-management/resolve_index/* that's returning a 400 error. The body of the 400 is HTML that is not generated by Kibana, including the following content:

An application error occurred on the server. The current custom error settings for this application prevent the details of the application error from being viewed remotely (for security reasons). It could, however, be viewed by browsers running on the local server machine.

Do you know what might be causing this?

It might also be helpful to look at the Kibana logs and determine whether the request is making it to Kibana and whether Kibana is logging an error.
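If file logging isn't already configured, a minimal sketch of enabling it in kibana.yml (7.x legacy logging settings; the path is illustrative, not from the original thread):

```yaml
# kibana.yml -- write Kibana logs to a file instead of stdout
logging.dest: "L:/ELK_Stack/kibana-7.10.2/logs/kibana.log"
logging.verbose: true
```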

I should note that the error I saw doesn't seem particular to logstash and would prevent the creation of any index pattern.

I do not know what is causing that. Where would I find this location on the system?

I do know that I had to manually create the beat index patterns by running the following commands:

  • filebeat setup -E
  • auditbeat setup -E
  • etc

Is there anything similar for logstash on Windows?

How are you running Kibana?