Multiple inputs and outputs in a Logstash conf file

I have included multiple inputs and outputs in my Logstash conf file (without any filter for now), and I have created a different index for each input. I am not able to see all the logs in Kibana, and some of the indices are not visible either. Can anybody suggest what the possible reason could be?

This means that no index has been created.
Is there any error when you start Logstash?
How does Logstash get the data: Filebeat or something else?

You can check whether your indices exist without Kibana:

curl localhost:9200/_cat/indices?v

If this command lists your indices in the terminal, they exist.
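The _cat/indices endpoint also accepts a name filter if you only want to check for a specific pattern (myindex-* here is just a placeholder; use your own index name):

curl "localhost:9200/_cat/indices/myindex-*?v"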

Logstash is getting the data from Filebeat. Below is my output configuration. The mitrologs index is getting created, but not the others.

output {
if "scheduledwork" in [tags] {
elasticsearch {
hosts => ["http://10.238.114.142:9200"]
index => "scheduledwork-%{+YYYY-MM-dd}"
}
}
if "emitroLog" in [tags] {
elasticsearch {
hosts => ["http://10.238.114.142:9200"]
index => "emitrolog-%{+YYYY-MM-dd}"
document_type => "_doc"
}
}
else {
elasticsearch {
hosts => "10.238.114.142:9200"
index => "mitrologs-%{+YYYY.MM.dd}"
}
}
stdout { codec => rubydebug }
}

Please format your code.

output {
  if "scheduledwork" in [tags] {
    elasticsearch {
      hosts => ["http://10.238.114.142:9200"]
      index => "scheduledwork-%{+YYYY-MM-dd}"
    }
  }
  if "emitroLog" in [tags] {
    elasticsearch {
      hosts => ["http://10.238.114.142:9200"]
      index => "emitrolog-%{+YYYY-MM-dd}"
      document_type => "_doc"
    }
  } else {
    elasticsearch {
      hosts => "10.238.114.142:9200"
      index => "mitrologs-%{+YYYY.MM.dd}"
    }
  }
  stdout { codec => rubydebug }
}

This may mean that your emitroLog and scheduledwork tags do not exist at the Logstash level or are named differently. Can you share your Filebeat configuration file?
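For reference, a condition like "scheduledwork" in [tags] only matches if the tag is attached on the Filebeat side, for example with the per-input tags option (a minimal sketch; the path is a placeholder):

filebeat.inputs:
  - type: log
    paths:
      - /path/to/scheduledwork.log
    tags: ["scheduledwork"]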

Here is my filebeat.yml:

#=========================== Filebeat inputs =============================

filebeat.inputs:

- type: log
  enabled: true
  paths:
    - /opt/application/spring-boot-tomcat-server/logs/scheduledwork.log

- type: log
  enabled: true
  paths:
    - /opt/jboss/7.2.0/mitro/logs/emitroLog.log

#============================= Filebeat modules ===============================

filebeat.config.modules:

  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 3
  #index.codec: best_compression
  #_source.enabled: false

#============================== Dashboards =====================================

# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here, or by using the -setup CLI flag or the setup command.
setup.dashboards.enabled: true

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:
  host: "10.238.114.142:5601"

#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["10.238.114.142:9200"]

#----------------------------- Logstash output --------------------------------
output.logstash:

  # The Logstash hosts
  hosts: ["10.238.114.142:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

#================================ Processors =====================================

# Configure processors to enhance or manipulate events generated by the beat.
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~

#================================ Logging =====================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
logging.level: debug

I had to do something like this, and it worked. I hope it brings you a solution.

#=========================== Filebeat inputs =============================

filebeat.inputs:

- type: log
  # Change to true to enable this prospector configuration.
  enabled: true
  fields: {log_type: "scheduledwork"}
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /opt/application/spring-boot-tomcat-server/logs/scheduledwork.log
    
- type: log
  enabled: true
  fields: {log_type: "emitroLog"}
  paths:
    - /opt/jboss/7.2.0/mitro/logs/emitroLog.log
   
   
#============================= Filebeat modules ===============================

filebeat.config.modules:

  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 3
  #index.codec: best_compression
  #_source.enabled: false

#============================== Dashboards =====================================

# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here, or by using the -setup CLI flag or the setup command.
#setup.dashboards.enabled: true

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:
  host: "10.238.114.142:5601"

#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["10.238.114.142:9200"]

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["10.238.114.142:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

#================================ Processors =====================================

# Configure processors to enhance or manipulate events generated by the beat.
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~

#================================ Logging =====================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
logging.level: debug

I updated the Filebeat inputs block by adding a fields entry to tag each log source. Then, in Logstash, you can do this:

output {
  if [fields][log_type] == "scheduledwork" {
    ## Do instruction
  }

  if [fields][log_type] == "emitroLog" {
    ## Do instruction
  } else {
    ## Do instruction
  }
}
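Applied to your original output section, that could look like this (an untested sketch that keeps your hosts and index names; note the else if, so a scheduledwork event is not also routed to the default mitrologs output):

output {
  if [fields][log_type] == "scheduledwork" {
    elasticsearch {
      hosts => ["http://10.238.114.142:9200"]
      index => "scheduledwork-%{+YYYY-MM-dd}"
    }
  } else if [fields][log_type] == "emitroLog" {
    elasticsearch {
      hosts => ["http://10.238.114.142:9200"]
      index => "emitrolog-%{+YYYY-MM-dd}"
      document_type => "_doc"
    }
  } else {
    elasticsearch {
      hosts => "10.238.114.142:9200"
      index => "mitrologs-%{+YYYY.MM.dd}"
    }
  }
  stdout { codec => rubydebug }
}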

I also commented out the Elasticsearch output in filebeat.yml, since you are shipping the data via Logstash.

Hey, it is working now. Thanks!

Hi,
I have a query: can we run multiple Logstash conf files? If yes, then how?
I have to import logs for two different applications on the ELK server.

Hello,
Yes, you can.
Use a multiple-pipelines configuration to do that:
https://www.elastic.co/guide/en/logstash/6.4/multiple-pipelines.html
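For example, a minimal pipelines.yml sketch (the pipeline ids and conf paths here are hypothetical; point path.config at each application's own conf file):

# config/pipelines.yml
- pipeline.id: app-one
  path.config: "/etc/logstash/conf.d/app-one.conf"
- pipeline.id: app-two
  path.config: "/etc/logstash/conf.d/app-two.conf"

Each pipeline then runs in the same Logstash process with its own inputs, filters, and outputs, so events from the two applications do not mix.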

I am not able to add the error code and error text to a visualization. There is a question mark in front of the selected fields. What does that mean? Can you help with this?


You can get the help message from the question mark itself: point your mouse at it and the explanation is displayed there.

Thanks, but no help message appears there. Is there any other solution?

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.