All files are collected under one index only. Is it possible to have multiple indexes for multiple files?

@mancharagopan, I have an index showing as "yellow open %[fields][log_type]-2020.01.09" in Kibana, but now when I try to delete it, it doesn't delete and throws this error-

{
  "error": "invalid escape sequence `%[f' at index 0 of: %[fields][log_type]-2020.01.09, allowed: [GET, PUT, DELETE, HEAD]",
  "status": 405
}

After this, should I try making multiple pipelines so that I can have multiple indexes?

It is no use if your filter doesn't work.

Change your index pattern like this and see how it goes:
index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"

I was finally able to delete the index that was throwing the error above.
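For anyone who hits the same error: it happens because Elasticsearch URL-decodes the index name in the request path, so the % and the square brackets have to be percent-encoded when calling the delete API. A minimal sketch, assuming Elasticsearch on localhost:9200:

curl -XDELETE 'http://localhost:9200/%25%5Bfields%5D%5Blog_type%5D-2020.01.09'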

@mancharagopan, Made Progress!

The index name in Kibana is now actually showing as access-2020.01.10, which is how I wanted it to show. But it is still only creating one index, and not the other two indexes for errors and dispatch. Can you suggest whether I should make multiple config files as multiple pipelines?

@mancharagopan, Separate indexes are being created now. I had to reload the other two log files, and now I have three indexes. I have opened another discussion on the filter grok pattern, as that is still not working.

Great! Share the change that you made so that anyone who has this issue again can refer to it, and close the discussion.

It helped to reload my log files, and then three indexes were created. This is my final pipeline.conf file-

input {
  beats {
    port => 5044
  }
}

filter {
  if [fields][log_type] == "access" {
    grok {
      match => { "message" => "%{DATESTAMP:timestamp} %{NONNEGINT:code} %{GREEDYDATA} %{GREEDYDATA:LOGLEVEL} %{NONNEGINT:anum} %{GREEDYDATA} %{NONNEGINT:threadId}%{GREEDYDATA:message}" }
    }
  } else if [fields][log_type] == "errors" {
    grok {
      match => { "message" => "%{DATESTAMP:timestamp} %{NONNEGINT:code} %{GREEDYDATA} %{LOGLEVEL} %{NONNEGINT:anum} %{GREEDYDATA:message}" }
    }
  } else if [fields][log_type] == "dispatch" {
    grok {
      match => { "message" => "\A%{TIMESTAMP_ISO8601:timestamp}%{SPACE}\[%{DATA:threadId}]%{SPACE}%{LOGLEVEL:logLevel}%{SPACE}%{JAVACLASS:javaClass}%{SPACE}-%{SPACE}?(\[%{NONNEGINT:incidentId}])%{GREEDYDATA:message}" }
    }
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    sniffing => true
    manage_template => false
    ilm_enabled => false
    index => "%{[fields][log_type]}-%{+YYYY.MM.dd}"
  }
  stdout {
    codec => rubydebug
  }
}

and pipelines.yml-

 - pipeline.id: test 
   path.config: "/home/mehak/Documents/logstash-7.4.0/pipeline.conf"
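For completeness, the %{[fields][log_type]} part of the index name is filled from a custom field set on each Filebeat input, one value per log type. The inputs in filebeat.yml looked roughly like this (the paths are placeholders, not my exact ones):

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /path/to/access/logs/*.log
  fields:
    log_type: access
- type: log
  enabled: true
  paths:
    - /path/to/errors/logs/*.log
  fields:
    log_type: errors
- type: log
  enabled: true
  paths:
    - /path/to/dispatch/logs/*.log
  fields:
    log_type: dispatch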

@mancharagopan, Hi, I am facing this issue again, and it's very much related to what we worked on.

My Elasticsearch, Kibana, and Logstash are running on a VM, and Filebeat is on a remote server which is now sending real-time logs. But again, as I run Filebeat I can see the logs are shipped, but only under one index called logstash, not under the indexes I defined in Filebeat and the new Logstash config. Here are the config files-

# listening on this port
input {
  beats {
    port => 5044
  }
}

filter {
  if [fields][log_type] == "DataEdgeApp" {
    grok {
      break_on_match => false
      match => {
        "message" => [
          "%{DATESTAMP:timestamp}%{SPACE}%{NONNEGINT:code}%{GREEDYDATA}%{LOGLEVEL}%{SPACE}%{NONNEGINT:anum}%{SPACE}%{GREEDYDATA:logmessage}",
          "(?<activityId>(?<=activity\s\()\d+)"
        ]
      }
    }
  } else if [fields][log_type] == "DataEdgeWeb" {
    grok {
      break_on_match => false
      match => {
        "message" => [
          "%{DATESTAMP:timestamp}%{SPACE}%{NONNEGINT:code}%{GREEDYDATA}%{LOGLEVEL}%{SPACE}%{NONNEGINT:anum}%{SPACE}%{GREEDYDATA:logmessage}",
          "(?<statusCode>(?<=StatusCode=\")\d+)"
        ]
      }
    }
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    sniffing => true
    manage_template => false
    #ilm_enabled => false
    index => "DataEdgeAppServer"
    #index => "%{[fields][log_type]}"
  }
  stdout {
    codec => rubydebug
  }
}

filebeat.yml-

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log

  # Change to true to enable this input configuration.
  enabled: true
  input_type: log
  fields:
    tags: ["DataEdgeApp"]

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - 'C:\Program Files (x86)\ESQ SST\DataEdgev1.2\ngta-distribution-app-3.2.0.0-bin\logs\*'
    #- C:\ngta.log
    #- 'C:\Program Files (x86)\ESQ SST\DataEdgev1.2\ngta-distribution-web-3.2.0.0-bin\logs\ngta.log'
    #- c:\programdata\elasticsearch\logs\*
- type: log

  enabled: true
  input_type: log
  fields:
    tags: ["DataEdgeWeb"]

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - 'C:\Program Files (x86)\ESQ SST\DataEdgev1.2\ngta-distribution-web-3.2.0.0-bin\logs\*'


#============================= Filebeat modules ===============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false


#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["192.168.x.x:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

#================================ Processors =====================================

# Configure processors to enhance or manipulate events generated by the beat.

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

Is this the same logstash server?
if yes, you can include the filters in your existing pipeline.conf file and see whether it is taking effect.

@mancharagopan No, Logstash is on Ubuntu on my local machine, which we worked on. Filebeat is on a remote, different server. And sometimes the index is just logstash, and other times it is logstash-date-00001.

Previously, the index issue was resolved after I refreshed the log files passed to Filebeat. But right now, the files being passed are real-time logs.

Did you try uncommenting those commented-out lines in the output section?

Create a separate pipeline for the new config file in your pipelines.yml file.
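For example, pipelines.yml could have two entries like this (the second id and file name are placeholders for wherever you keep the new config):

 - pipeline.id: test
   path.config: "/home/mehak/Documents/logstash-7.4.0/pipeline.conf"
 - pipeline.id: dataedge
   path.config: "/home/mehak/Documents/logstash-7.4.0/dataedge.conf"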

I did uncomment it, and then the index went from logstash to logstash-date-0001.

Which new config file? I just replaced my old pipeline.conf, which had errors, access, and dispatch, with the new file I posted above. So there is only one pipeline.conf in use, and the pipelines.yml has one pipeline.id defined.

Can you check the Logstash logs to see whether the configuration file is loaded correctly and whether any errors occurred while parsing?

What are the events stored in the logstash index or the logstash-date-0001 index?
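With an archive install, the plain-text log usually sits under the Logstash directory as logs/logstash-plain.log; pipeline start-up entries there mention the pipeline id, and configuration problems show up as errors when the config is compiled. A sketch, assuming your install path from above:

tail -n 200 /home/mehak/Documents/logstash-7.4.0/logs/logstash-plain.log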

The data in the logstash and logstash-date-0001 indexes was what Filebeat was passing from the server, so the content under the index is exactly what we wanted. Even the log_type I added under fields as "DataEdgeApp" was shown in Kibana.

Although, this all worked when I deleted the else if part and only had the if.

No error is seen for the Logstash conf. Is there anything specific I should look for to see whether the file is loading?

See This

It worked again; "main" was being used as the pipeline.id. So, when I ran ./logstash -f pipeline.conf, it ignored the pipelines.yml file, which defines the pipeline.conf location.

The best way is to just run ./logstash, and then pipelines.yml will be read with the correct pipeline.id: test.
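In other words, running from the Logstash bin directory:

# reads pipelines.yml and starts the pipeline defined there (pipeline.id: test)
./logstash

# bypasses pipelines.yml and starts a single pipeline with id "main"
./logstash -f pipeline.conf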
Thanks again!

