@mancharagopan, I have an index in Kibana (listed as "yellow open %[fields][log_type]-2020.01.09"), but when I try to delete it, the delete fails and throws this error:
{
"error": "invalid escape sequence `%[f' at index 0 of: %[fields][log_type]-2020.01.09, allowed: [GET, PUT, DELETE, HEAD]",
"status": 405
}
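The delete seems to fail because the index name itself begins with a literal %, which the request URL parser treats as the start of an escape sequence. One workaround (a minimal sketch, assuming Elasticsearch is reachable on localhost:9200) is to URL-encode the special characters in the index name so the DELETE request can be parsed:

# % is encoded as %25, and the square brackets as %5B / %5D
curl -X DELETE "http://localhost:9200/%25%5Bfields%5D%5Blog_type%5D-2020.01.09"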
After this, should I try making multiple pipelines so that I can have multiple indexes?
The index name in Kibana is now actually showing as access-2020.01.10, which is how I wanted it to show. But it is still only creating that one index and not the other two indexes for errors and dispatch. Can you suggest whether I should split this into multiple files as multiple pipelines?
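If you do go the multiple-pipelines route, the usual approach is one config file per pipeline, declared in pipelines.yml. A minimal sketch, assuming hypothetical file names and the default conf.d location:

- pipeline.id: access
  path.config: "/etc/logstash/conf.d/access.conf"
- pipeline.id: errors
  path.config: "/etc/logstash/conf.d/errors.conf"
- pipeline.id: dispatch
  path.config: "/etc/logstash/conf.d/dispatch.conf"

The alternative is to keep a single pipeline and route events to different indexes with conditionals in the output block.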
@mancharagopan, Separate indexes are being created now. I had to reload the other two log files, and now I have three indexes. I have opened another discussion on the grok filter pattern, as that is still not working.
@mancharagopan, Hi, I am facing this issue again and it's very much related to what we worked on.
My Elasticsearch, Kibana, and Logstash are running on a VM, and Filebeat is on a remote server which is now sending real-time logs. But again, when I run Filebeat I can see the logs being shipped, but only into one index called logstash, not the indexes I defined in Filebeat and the new Logstash config. Here are the config files:
filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log

  # Change to true to enable this input configuration.
  enabled: true
  input_type: log
  fields:
    tags: ["DataEdgeApp"]

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - 'C:\Program Files (x86)\ESQ SST\DataEdgev1.2\ngta-distribution-app-3.2.0.0-bin\logs\*'
    #- C:\ngta.log
    #- 'C:\Program Files (x86)\ESQ SST\DataEdgev1.2\ngta-distribution-web-3.2.0.0-bin\logs\ngta.log'
    #- c:\programdata\elasticsearch\logs\*

- type: log
  enabled: true
  input_type: log
  fields:
    tags: ["DataEdgeWeb"]

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - 'C:\Program Files (x86)\ESQ SST\DataEdgev1.2\ngta-distribution-web-3.2.0.0-bin\logs\*'

#============================= Filebeat modules ===============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["192.168.x.x:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

#================================ Processors =====================================

# Configure processors to enhance or manipulate events generated by the beat.
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
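Since this Filebeat config only has output.logstash, the index name is decided entirely by the elasticsearch output in the Logstash pipeline; if no index option is set there, events fall into the default logstash-* index. Also note that tags declared under fields end up as fields.tags on the event (fields_under_root is not set). A minimal sketch of how the routing could look in the Logstash output, assuming hypothetical index names and a local Elasticsearch:

output {
  if "DataEdgeApp" in [fields][tags] {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "dataedgeapp-%{+YYYY.MM.dd}"
    }
  } else if "DataEdgeWeb" in [fields][tags] {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "dataedgeweb-%{+YYYY.MM.dd}"
    }
  }
}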
@mancharagopan No, Logstash is on the Ubuntu machine on my local setup, the one we worked on. Filebeat is on a remote, different server. And sometimes the index is just logstash, and other times it is logstash-date-00001.
Previously the index issue was resolved after I refreshed the log files passed to Filebeat. But right now, the files being passed are real-time logs.
I did uncomment it, and then the index went from logstash to logstash-date-0001.
Which new config file? I just updated my old pipeline.conf (which had errors, access, and dispatcher) with the new file I posted above. So there is only one pipeline.conf in use, and pipelines.yml has one pipeline.id defined.
The data in logstash and logstash-date-0001 was what Filebeat was passing from the server, so the content under the index is exactly what we wanted. Even the log_type I added under fields in Logstash as "DataEdgeApp" was shown in Kibana.
Although this all only worked when I deleted the else if part and kept just the if.
No error is shown for the Logstash config. Is there anything specific I should look for to confirm the file is being loaded?
It worked again, with main being used as the pipeline.id. So when I run ./logstash -f pipeline.conf, it ignores the pipelines.yml file, which defines the pipeline.conf location.
The best way is to just run ./logstash, and then pipelines.yml is read with the correct pipeline.id: test.
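To summarize the two invocations (a sketch, assuming the default Logstash directory layout):

# Runs only the given config; pipelines.yml is ignored and the pipeline id defaults to "main"
./logstash -f pipeline.conf

# Starts Logstash from pipelines.yml, so pipeline.id: test and its path.config are used
./logstash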
Thanks again!