All files are collected under one index only. Is it possible to have multiple indexes for multiple files?

In your filebeat.yml file:

  • Try the Logstash machine's IP address instead of localhost.
  • Check that the Logstash output is not commented out and that the Elasticsearch output is commented out (see the sketch after this list).
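
A minimal sketch of that output section, assuming the Logstash machine's IP is 192.168.1.10 (substitute your own):

# filebeat.yml — Elasticsearch output disabled, events shipped to Logstash
#output.elasticsearch:
#  hosts: ["localhost:9200"]

output.logstash:
  hosts: ["192.168.1.10:5044"]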

On your Logstash machine:

  • Allow TCP port 5044 through the firewall.

Also, please change this in your logstash.yml (a short sketch follows):

  • Set log.level: to debug, restart, and then share your
    sudo tail -f /var/log/logstash/logstash-plain.log output.
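
A minimal sketch of the relevant logstash.yml line:

# logstash.yml — raise log verbosity so pipeline loading is visible
log.level: debug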

@mancharagopan, why should the Logstash IP make any difference compared to localhost?
Which firewall do I change for the TCP port?

The logstash.yml is already set to debug.

I am running Logstash on CentOS, where ports are blocked by default, and I had to allow the port in the firewall to receive Beats logs. Since you are using Ubuntu, try the following command:

sudo ufw allow 5044/tcp
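
For reference, on CentOS the firewalld equivalent would be roughly:

sudo firewall-cmd --permanent --add-port=5044/tcp
sudo firewall-cmd --reload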

Sometimes Logstash listens on the machine's IP address instead of the localhost address.

Can you please share the output of your Logstash log file as well?

@mancharagopan, where should the sudo command be run? On the Logstash machine, or in any terminal?

@mancharagopan
The output of this is:

mehak@mehak-VirtualBox:~$ sudo tail -f /home/mehak/Documents/logstash-7.4.0/logs/logstash-plain.log
[2020-01-06T18:36:32,895][DEBUG][logstash.javapipeline    ] Worker closed {:pipeline_id=>"main", :thread=>"#<Thread:0x2a8af52f run>"}
[2020-01-06T18:36:32,901][DEBUG][logstash.outputs.elasticsearch][main] Closing {:plugin=>"LogStash::Outputs::ElasticSearch"}
[2020-01-06T18:36:32,915][DEBUG][logstash.outputs.elasticsearch][main] Stopping sniffer
[2020-01-06T18:36:32,918][DEBUG][logstash.outputs.elasticsearch][main] Stopping resurrectionist
[2020-01-06T18:36:32,926][INFO ][logstash.javapipeline    ] Pipeline terminated {"pipeline.id"=>"main"}
[2020-01-06T18:36:33,192][DEBUG][logstash.outputs.elasticsearch][main] Waiting for in use manticore connections
[2020-01-06T18:36:33,222][DEBUG][logstash.outputs.elasticsearch][main] Closing adapter #<LogStash::Outputs::ElasticSearch::HttpClient::ManticoreAdapter:0x10ac578a>
[2020-01-06T18:36:33,240][DEBUG][logstash.pluginmetadata  ][main] Removing metadata for plugin e54cc96709db2c78b96dd4b4a96de547b77fc3abdbb713757550f0b64e1ae254
[2020-01-06T18:36:33,243][DEBUG][logstash.javapipeline    ][main] Pipeline has been shutdown {:pipeline_id=>"main", :thread=>"#<Thread:0x2a8af52f run>"}
[2020-01-06T18:36:33,290][INFO ][logstash.runner          ] Logstash shut down.

@Christian_Dahlqvist, what change do you suggest to the first-pipeline.conf, second-pipeline.conf, and the other two files mentioned above so that separate indexes get created? Please advise!

According to your log output, your pipeline configuration didn't take effect; Logstash is still trying its default pipeline, "main".
Please run the same tail command in one terminal window and restart the Logstash service in another terminal window (a sketch of the restart follows).

Collect all the logs while Logstash is starting, and look for any errors or for where it is taking its configuration from.
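
If Logstash is installed as a system service, the restart would be something like this (assuming systemd; if you run bin/logstash by hand from the extracted directory, just stop it and start it again):

sudo systemctl restart logstash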

With the tail command, I got this output:

mehak@mehak-VirtualBox:~$ sudo tail -f /home/mehak/Documents/logstash-7.4.0/logs/logstash-plain.log
[sudo] password for mehak: 
[2020-01-08T17:36:27,187][DEBUG][io.netty.util.NetUtil    ][main] Loopback interface: lo (lo, 0:0:0:0:0:0:0:1%lo)
[2020-01-08T17:36:27,187][DEBUG][io.netty.util.NetUtil    ][main] /proc/sys/net/core/somaxconn: 128
[2020-01-08T17:36:27,188][DEBUG][io.netty.channel.DefaultChannelId][main] -Dio.netty.machineId: 08:00:27:ff:fe:9b:94:a9 (auto-detected)
[2020-01-08T17:36:27,201][DEBUG][logstash.agent           ] Starting puma
[2020-01-08T17:36:27,231][DEBUG][logstash.agent           ] Trying to start WebServer {:port=>9600}
[2020-01-08T17:36:27,270][DEBUG][logstash.api.service     ] [api-service] start
[2020-01-08T17:36:27,441][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2020-01-08T17:36:29,378][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2020-01-08T17:36:29,380][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[2020-01-08T17:36:32,020][DEBUG][org.logstash.execution.PeriodicFlush][main] Pushing flush onto pipeline.
[2020-01-08T17:36:34,413][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2020-01-08T17:36:34,414][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[2020-01-08T17:36:37,019][DEBUG][org.logstash.execution.PeriodicFlush][main] Pushing flush onto pipeline.
[2020-01-08T17:36:39,424][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2020-01-08T17:36:39,424][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[2020-01-08T17:36:42,020][DEBUG][org.logstash.execution.PeriodicFlush][main] Pushing flush onto pipeline.
[2020-01-08T17:36:44,430][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2020-01-08T17:36:44,431][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[2020-01-08T17:36:47,021][DEBUG][org.logstash.execution.PeriodicFlush][main] Pushing flush onto pipeline.
[2020-01-08T17:36:49,438][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2020-01-08T17:36:49,439][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[2020-01-08T17:36:52,020][DEBUG][org.logstash.execution.PeriodicFlush][main] Pushing flush onto pipeline.

And Logstash in the original terminal is running fine, as it has been. No error is shown, but the default pipeline "main" is being used, and its startup is logged here:

[2020-01-08T17:36:26,998][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}
[2020-01-08T17:36:27,012][DEBUG][logstash.javapipeline    ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x7164b333 run>"}

@Mehak_Bhargava
Yes, Logstash is still using its default pipeline, "main". Your configuration in pipelines.yml didn't take effect.

Please restart the Logstash service while running the above tail command, and look for lines similar to the following:

[2020-01-09T07:59:06,000][DEBUG][logstash.config.source.multilocal] Reading pipeline configurations from YAML {:location=>"/etc/logstash/pipelines.yml"}
[2020-01-09T07:59:06,106][DEBUG][logstash.config.source.local.configpathloader] Reading config file {:config_file=>"/etc/logstash/conf.d/beats/beats.conf"}
[2020-01-09T07:59:06,133][DEBUG][logstash.config.source.local.configpathloader] Reading config file {:config_file=>"/etc/logstash/conf.d/syslog/syslog.conf"}

They will tell you where Logstash is reading the pipeline config from and which conf files are applied.
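
To pull those lines out of a busy debug log, a grep along these lines should work (using the log path you tailed earlier):

sudo grep 'Reading' /home/mehak/Documents/logstash-7.4.0/logs/logstash-plain.log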

@mancharagopan, now Logstash is reading the correct pipeline.conf file, and below is the result:

[2020-01-09T11:40:09,820][DEBUG][logstash.config.source.multilocal] Reading pipeline configurations from YAML {:location=>"/home/mehak/Documents/logstash-7.4.0/config/pipelines.yml"}
[2020-01-09T11:40:09,967][DEBUG][logstash.config.source.local.configpathloader] Skipping the following files while reading config since they don't match the specified glob pattern {:files=>["/home/mehak/Documents/logstash-7.4.0/CONTRIBUTORS", "/home/mehak/Documents/logstash-7.4.0/Gemfile", "/home/mehak/Documents/logstash-7.4.0/Gemfile.lock", "/home/mehak/Documents/logstash-7.4.0/LICENSE.txt", "/home/mehak/Documents/logstash-7.4.0/NOTICE.TXT", "/home/mehak/Documents/logstash-7.4.0/bin", "/home/mehak/Documents/logstash-7.4.0/conf.d", "/home/mehak/Documents/logstash-7.4.0/config", "/home/mehak/Documents/logstash-7.4.0/data", "/home/mehak/Documents/logstash-7.4.0/lib", "/home/mehak/Documents/logstash-7.4.0/logs", "/home/mehak/Documents/logstash-7.4.0/logstash-core", "/home/mehak/Documents/logstash-7.4.0/logstash-core-plugin-api", "/home/mehak/Documents/logstash-7.4.0/modules", "/home/mehak/Documents/logstash-7.4.0/tools", "/home/mehak/Documents/logstash-7.4.0/vendor", "/home/mehak/Documents/logstash-7.4.0/x-pack"]}
[2020-01-09T11:40:09,969][DEBUG][logstash.config.source.local.configpathloader] Reading config file {:config_file=>"/home/mehak/Documents/logstash-7.4.0/pipeline.conf"}
[2020-01-09T11:40:10,051][DEBUG][logstash.agent           ] Converging pipelines state {:actions_count=>1}
[2020-01-09T11:40:10,058][DEBUG][logstash.agent           ] Executing action {:action=>LogStash::PipelineAction::Create/pipeline_id:test}
[2020-01-09T11:40:10,428][ERROR][logstash.agent           ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:test, :exception=>"LogStash::ConfigurationError", :message=>"Expected one of #, { at line 12, column 10 (byte 157) after filter {\n if[fields][log_type] ==\"access\"{\n    grok {\n\tmatch => {\"message\" => \"%{COMBINEDAPACHELOG}\"}\n  } else ", :backtrace=>["/home/mehak/Documents/logstash-7.4.0/logstash-core/lib/logstash/compiler.rb:41:in `compile_imperative'", "/home/mehak/Documents/logstash-7.4.0/logstash-core/lib/logstash/compiler.rb:49:in `compile_graph'", "/home/mehak/Documents/logstash-7.4.0/logstash-core/lib/logstash/compiler.rb:11:in `block in compile_sources'", "org/jruby/RubyArray.java:2584:in `map'", "/home/mehak/Documents/logstash-7.4.0/logstash-core/lib/logstash/compiler.rb:10:in `compile_sources'", "org/logstash/execution/AbstractPipelineExt.java:153:in `initialize'", "org/logstash/execution/JavaBasePipelineExt.java:47:in `initialize'", "/home/mehak/Documents/logstash-7.4.0/logstash-core/lib/logstash/java_pipeline.rb:26:in `initialize'", "/home/mehak/Documents/logstash-7.4.0/logstash-core/lib/logstash/pipeline_action/create.rb:36:in `execute'", "/home/mehak/Documents/logstash-7.4.0/logstash-core/lib/logstash/agent.rb:326:in `block in converge_state'"]}

Why is the if [fields][log_type] line in the config file throwing an error? Below is my pipeline.conf file:

input {
  
  beats {
    port => 5044
  }
}

filter {
 if[fields][log_type] =="access"{
    grok {
	match => {"message" => "%{DATESTAMP:timestamp} %{NONNEGINT:code} %{GREEDYDATA} %{LOGLEVEL} %{NONNEGINT:anum} %{GREEDYDATA} %{NONNEGINT:threadId}"}
  } else if [fields][log_type] == "errors" {
        grok {
            match => { "message" => "%{DATESTAMP:timestamp} %{NONNEGINT:code} %{GREEDYDATA} %{LOGLEVEL} %{NONNEGINT:anum} %{GREEDYDATA:message}" }
        }
  }else [fields][log_type] == "dispatch" {
        grok {
            match => { "message" => "\A%{TIMESTAMP_ISO8601:timestamp}%{SPACE}\[%{DATA:threadId}]%{SPACE}%{LOGLEVEL:logLevel}%{SPACE}%{JAVACLASS:javaClass}%{SPACE}-%{SPACE}?(\[%{NONNEGINT:incidentId}])%{GREEDYDATA:message}" }
        }
    }
}
 
output {
    elasticsearch {
    hosts => ["localhost:9200"]
    sniffing => true
    manage_template => false
    ilm_enabled => false
    index    => "%{log_type}-%{+YYYY.MM.dd}"  
  }
  stdout {
    codec => rubydebug
  }
}

@mancharagopan, and after running the tail command, I still get this until I start the Logstash service:

mehak@mehak-VirtualBox:~$ sudo tail -f /home/mehak/Documents/logstash-7.4.0/logs/logstash-plain.log
[sudo] password for mehak: 
[2020-01-09T11:40:10,501][DEBUG][logstash.agent           ] Starting puma
[2020-01-09T11:40:10,535][DEBUG][logstash.instrument.periodicpoller.jvm] Stopping
[2020-01-09T11:40:10,536][DEBUG][logstash.agent           ] Trying to start WebServer {:port=>9600}
[2020-01-09T11:40:10,536][DEBUG][logstash.instrument.periodicpoller.persistentqueue] Stopping
[2020-01-09T11:40:10,547][DEBUG][logstash.instrument.periodicpoller.deadletterqueue] Stopping
[2020-01-09T11:40:10,570][DEBUG][logstash.agent           ] Shutting down all pipelines {:pipelines_count=>0}
[2020-01-09T11:40:10,633][DEBUG][logstash.agent           ] Converging pipelines state {:actions_count=>0}
[2020-01-09T11:40:10,660][DEBUG][logstash.api.service     ] [api-service] start
[2020-01-09T11:40:10,772][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2020-01-09T11:40:15,869][INFO ][logstash.runner          ] Logstash shut down.

After starting the Logstash service, the tail command shows the same output as the Logstash terminal.

Can you quickly run
sudo bin/logstash --config.test_and_exit -f <path_to_config_file>
and see what output it gives?

Looks like you have missed a closing bracket in the filter condition, and the last branch needs an else if rather than a bare else. Try this:

 if [fields][log_type] == "access" {
    grok {
      match => { "message" => "%{DATESTAMP:timestamp} %{NONNEGINT:code} %{GREEDYDATA} %{LOGLEVEL} %{NONNEGINT:anum} %{GREEDYDATA} %{NONNEGINT:threadId}" }
    }
  } else if [fields][log_type] == "errors" {
    grok {
      match => { "message" => "%{DATESTAMP:timestamp} %{NONNEGINT:code} %{GREEDYDATA} %{LOGLEVEL} %{NONNEGINT:anum} %{GREEDYDATA:message}" }
    }
  } else if [fields][log_type] == "dispatch" {
    grok {
      match => { "message" => "\A%{TIMESTAMP_ISO8601:timestamp}%{SPACE}\[%{DATA:threadId}]%{SPACE}%{LOGLEVEL:logLevel}%{SPACE}%{JAVACLASS:javaClass}%{SPACE}-%{SPACE}?(\[%{NONNEGINT:incidentId}])%{GREEDYDATA:message}" }
    }
  }
}
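
Before restarting, you can sanity-check the brackets with the config test mentioned above:

sudo bin/logstash --config.test_and_exit -f /home/mehak/Documents/logstash-7.4.0/pipeline.conf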

@Christian_Dahlqvist, I fixed some errors and applied this index format, and in Kibana I see the index under this name, as you mentioned:

yellow open   %[fields][log_type]-2020.01.09

But why is it not replacing "errors" or "access" in the [fields][log_type] part of the name?
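
For what it's worth, Logstash only substitutes event fields inside its %{...} sprintf syntax; anything outside that syntax in the index option is used literally, which is why the raw text shows up in the index name. A one-line sketch of the substituted form, assuming the events actually carry fields.log_type:

index => "%{[fields][log_type]}-%{+YYYY.MM.dd}"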

@Abhilash_B, I fixed the closing bracket and there is no error in this filter block now. But still, in Kibana, I don't see the message fields extracted by the grok pattern as I mentioned.

Below is the output being printed:

Jan 9, 2020 @ 13:07:03.653    08/10/2019 12:32:18 608   (null)                  INFO   60   Leftside Filter Expression : SubCategory="ATA VTA Reported" AND SourceProblemName="Touch Screen" for User ZK0DUBO Item Count : 7

Whereas, if the grok pattern in the filter block were being applied, it should extract just the timestamp, code, 27, and 24749162. Why is the filter not working?

I ran the command like this:

mehak@mehak-VirtualBox:~/Documents/logstash-7.4.0$ sudo bin/logstash --config.test_and_exit -f /home/mehak/Documents/logstash-7.4.0/pipeline.conf

and below are the final lines of the output:

[2020-01-09T13:26:21,124][DEBUG][logstash.plugins.registry] On demand adding plugin to the registry {:name=>"stdout", :type=>"output", :class=>LogStash::Outputs::Stdout}
[2020-01-09T13:26:21,148][DEBUG][logstash.plugins.registry] On demand adding plugin to the registry {:name=>"rubydebug", :type=>"codec", :class=>LogStash::Codecs::RubyDebug}
[2020-01-09T13:26:21,155][DEBUG][logstash.codecs.rubydebug] config LogStash::Codecs::RubyDebug/@id = "rubydebug_ac80889f-474c-49a1-9631-e68306e91d66"
[2020-01-09T13:26:21,155][DEBUG][logstash.codecs.rubydebug] config LogStash::Codecs::RubyDebug/@enable_metric = true
[2020-01-09T13:26:21,156][DEBUG][logstash.codecs.rubydebug] config LogStash::Codecs::RubyDebug/@metadata = false
[2020-01-09T13:26:21,301][DEBUG][logstash.outputs.stdout  ] config LogStash::Outputs::Stdout/@codec = <LogStash::Codecs::RubyDebug id=>"rubydebug_ac80889f-474c-49a1-9631-e68306e91d66", enable_metric=>true, metadata=>false>
[2020-01-09T13:26:21,301][DEBUG][logstash.outputs.stdout  ] config LogStash::Outputs::Stdout/@id = "5b0776ff7e379b0852fc229ded8d17c1261507ec4487630df934252a1a106f94"
[2020-01-09T13:26:21,302][DEBUG][logstash.outputs.stdout  ] config LogStash::Outputs::Stdout/@enable_metric = true
[2020-01-09T13:26:21,302][DEBUG][logstash.outputs.stdout  ] config LogStash::Outputs::Stdout/@workers = 1
Configuration OK
[2020-01-09T13:26:21,339][INFO ][logstash.runner          ] Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash

What does this mean?

Your configuration file locations are as you described, but Logstash is reading the conf file from a different location.

Did you change the pipelines.yml configuration?
Is there a pipeline.conf file in the /home/mehak/Documents/logstash-7.4.0/ directory?

I am guessing there is no fields.log_type field in your data by the time the filter stage receives it.
Create an index pattern for

%[fields][log_type]-2020.01.09

then go to Discover and share the raw JSON of one event.

I did change the location of my pipeline.conf in pipelines.yml, as follows:

 - pipeline.id: test 
   path.config: "/home/mehak/Documents/logstash-7.4.0/pipeline.conf"
#   path.config: "/home/mehak/Documents/logstash-7.4.0/conf.d/pipeline.conf"

I did this to sort out the pipeline/main id confusion. In the conf.d directory you had suggested creating, the file is renamed to pipes.conf, and now I start Logstash with plain ./logstash instead of ./logstash -f pipeline.conf.

mehak@mehak-VirtualBox:~/Documents/logstash-7.4.0$ ls
bin           data          LICENSE.txt               modules        vendor
conf.d        Gemfile       logs                      NOTICE.TXT     x-pack
config        Gemfile.lock  logstash-core             pipeline.conf
CONTRIBUTORS  lib           logstash-core-plugin-api  tools

This is the output block I have had for a while now:


output {
    elasticsearch {
    hosts => ["localhost:9200"]
    sniffing => true
    manage_template => false
    ilm_enabled => false
    index    => "[fields][log_type]-%{+YYYY.MM.dd}"  
  }
  stdout {
    codec => rubydebug
  }
}

And files were getting collected under an index in Kibana called "%[fields][log_type]-2020.01.09", but now they have stopped.

This is the raw JSON of one event:

Jan 9, 2020 @ 13:07:03.653

@version:
    1
host.name:
    mehak-VirtualBox
@timestamp:
    Jan 9, 2020 @ 13:07:03.653
agent.ephemeral_id:
    d8713d2d-b8cf-4106-8717-bee272b44479
agent.hostname:
    mehak-VirtualBox
agent.version:
    7.4.0
agent.type:
    filebeat
agent.id:
    bad135c8-d359-4936-b515-79eb4bb24630
tags:
    beats_input_codec_plain_applied, _grokparsefailure
ecs.version:
    1.1.0
message:
    08/10/2019 12:32:18 608 (null) INFO 60 Leftside Filter Expression : SubCategory="ATA VTA Reported" AND SourceProblemName="Touch Screen" for User ZK0DUBO Item Count : 7
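
Note the _grokparsefailure tag and the absence of any fields.log_type key in this event, which lines up with the earlier guess that the field never reaches the filter. For completeness, a sketch of how that field would be attached on the Filebeat side, one block per input, with hypothetical log paths:

filebeat.inputs:
  - type: log
    paths:
      - /var/log/myapp/access.log    # hypothetical path
    fields:
      log_type: access
  - type: log
    paths:
      - /var/log/myapp/errors.log    # hypothetical path
    fields:
      log_type: errors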