Here is the logstash.conf:
input {
  file {
    path => "/home/test6/admin_access.log"
    type => "apache-access"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}
filter {
  if [type] == "apache-access" {
    grok {
      match => [
        "message", "%{COMBINEDAPACHELOG}+%{GREEDYDATA:extra_fields}",
        "message", "%{COMMONAPACHELOG}+%{GREEDYDATA:extra_fields}"
      ]
      overwrite => [ "message" ]
    }
    mutate {
      convert => ["response", "integer"]
      convert => ["bytes", "integer"]
      convert => ["responsetime", "float"]
    }
    geoip {
      source => "clientip"
      target => "geoip"
      add_tag => [ "apache-geoip" ]
    }
    date {
      match => [ "timestamp", "dd/MMM/YYYY:HH:mm:ss Z" ]
      remove_field => [ "timestamp" ]
    }
    useragent {
      source => "agent"
    }
  }
}
output {
  if [type] == "apache-access" {
    if "_grokparsefailure" in [tags] {
      null {}
    }
    elasticsearch {
      hosts => ["es:9200"]
      index => "apache-%{+YYYY.MM.dd}"
      document_type => "apache_logs"
    }
    stdout { codec => rubydebug }
  }
}
But from the logstash container, if I create a dummy log entry, I can see it in elasticsearch:
/home/test6# logstash-2.1.1/bin/logstash -e 'input { stdin { } } output { elasticsearch { hosts => ["es:9200"] } }'
Moreover, my linux user has access to the log file admin_access.log.
This looks invalid. As far as I know there is no null output plugin. If the intention is to drop these records, you will need to place the conditional in the filter block and replace the null output with a drop filter.
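A minimal sketch of that change, keeping the rest of the filters as they are:

filter {
  if [type] == "apache-access" {
    # ... grok / mutate / geoip / date / useragent as above ...
    # Drop anything grok could not parse instead of routing it
    # to a (nonexistent) null output:
    if "_grokparsefailure" in [tags] {
      drop { }
    }
  }
}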
This is commented out in my logstash.conf. That's why it is in bold here. Sorry for this.
input {
  file {
    path => "/home/test6/admin_access.log"
    type => "apache-access"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}
filter {
  if [type] == "apache-access" {
    grok {
      match => [
        "message", "%{COMBINEDAPACHELOG}+%{GREEDYDATA:extra_fields}",
        "message", "%{COMMONAPACHELOG}+%{GREEDYDATA:extra_fields}"
      ]
      overwrite => [ "message" ]
    }
    mutate {
      convert => ["response", "integer"]
      convert => ["bytes", "integer"]
      convert => ["responsetime", "float"]
    }
    geoip {
      source => "clientip"
      target => "geoip"
      add_tag => [ "apache-geoip" ]
    }
    date {
      match => [ "timestamp", "dd/MMM/YYYY:HH:mm:ss Z" ]
      remove_field => [ "timestamp" ]
    }
    useragent {
      source => "agent"
    }
  }
}
output {
  if [type] == "apache-access" {
    elasticsearch {
      hosts => ["es:9200"]
      index => "apache-%{+YYYY.MM.dd}"
      document_type => "apache_logs"
    }
    stdout { codec => rubydebug }
  }
}
Also:
logstash version: 2.4.1
es version: 2.1.1
So I am still getting no logs to es after removing these 3 lines from my logstash.conf:
if "_grokparsefailure" in [tags] {
null {}
}
Have you looked in your Logstash logs for clues or indications of problems? Are you getting anything to your stdout output?
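One quick way to isolate the input side from the elasticsearch output is a stdout-only pipeline, using the same file input options as the config above:

logstash-2.4.1/bin/logstash -e 'input { file { path => "/home/test6/admin_access.log" start_position => "beginning" sincedb_path => "/dev/null" } } output { stdout { codec => rubydebug } }'

If nothing prints, the problem is on the input side (path, permissions, or file visibility inside the container) rather than in the elasticsearch output.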
Yes, I got a big logstash.log file. Trying to upload a part of it...
There are a few things to check (example commands are sketched after this list):
- Do a config check first
- Check the permissions of the log file again: is the logstash user able to read it or not?
- Check both the logstash & elasticsearch logs
- Check if the indices are being created
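A sketch of those checks, assuming the tarball layout and paths used earlier in this thread:

# 1. Config check (Logstash 2.x supports --configtest)
logstash-2.4.1/bin/logstash -f logstash.conf --configtest

# 2. Permissions: can the user running Logstash read the file?
ls -l /home/test6/admin_access.log
sudo -u test6 head -n 1 /home/test6/admin_access.log

# 3. Logstash & Elasticsearch logs: locations depend on how each was
#    started; with --debug, Logstash logs to stdout unless -l/--log
#    points elsewhere.

# 4. Are any indices being created?
curl 'localhost:9200/_cat/indices?v'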
If you are still not able to see the issue, please share the logs with us.
Vishal
Did you check the other points as well?
Thanks Vishal,
I did the checks you mentioned. No indices are created.
Here is the es log:
[2017-06-26 08:13:32,906][INFO ][node ] [Bela] version[2.1.1], pid[7], build[40e2c53/2015-12-15T13:05:55Z]
[2017-06-26 08:13:32,907][INFO ][node ] [Bela] initializing ...
[2017-06-26 08:13:32,942][INFO ][plugins ] [Bela] loaded [], sites []
[2017-06-26 08:13:32,959][INFO ][env ] [Bela] using [1] data paths, mounts [[/ (rootfs)]], net usable_space [2.8gb], net total_space [18.2gb], spins? [unknown], types [rootfs]
[2017-06-26 08:13:34,097][INFO ][node ] [Bela] initialized
[2017-06-26 08:13:34,098][INFO ][node ] [Bela] starting ...
[2017-06-26 08:13:34,173][WARN ][common.network ] [Bela] publish address: {0.0.0.0} is a wildcard address, falling back to first non-loopback: {172.17.0.2}
[2017-06-26 08:13:34,173][INFO ][transport ] [Bela] publish_address {172.17.0.2:9300}, bound_addresses {[::]:9300}
[2017-06-26 08:13:34,179][INFO ][discovery ] [Bela] elasticsearch/MzC-Dfc9QyKakxIe_11vUQ
[2017-06-26 08:13:37,200][INFO ][cluster.service ] [Bela] new_master {Bela}{MzC-Dfc9QyKakxIe_11vUQ}{172.17.0.2}{172.17.0.2:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)
[2017-06-26 08:13:37,208][WARN ][common.network ] [Bela] publish address: {0.0.0.0} is a wildcard address, falling back to first non-loopback: {172.17.0.2}
[2017-06-26 08:13:37,208][INFO ][http ] [Bela] publish_address {172.17.0.2:9200}, bound_addresses {[::]:9200}
[2017-06-26 08:13:37,208][INFO ][node ] [Bela] started
[2017-06-26 08:13:37,248][INFO ][gateway ] [Bela] recovered [0] indices into cluster_state
[2017-06-26 08:14:13,769][INFO ][cluster.metadata ] [Bela] [.kibana] creating index, cause [api], templates [], shards [1]/[1], mappings [config]
[2017-06-26 08:27:29,013][INFO ][cluster.metadata ] [Bela] [.kibana] create_mapping [index-pattern]
[2017-06-26 08:27:29,160][INFO ][rest.suppressed ] /logstash-*/_mapping/field/* Params: {ignore_unavailable=false, allow_no_indices=false, index=logstash-*, include_defaults=true, fields=*, _=1498465649152}
[logstash-*] IndexNotFoundException[no such index]
at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver$WildcardExpressionResolver.resolve(IndexNameExpressionResolver.java:636)
at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.concreteIndices(IndexNameExpressionResolver.java:133)
at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.concreteIndices(IndexNameExpressionResolver.java:77)
at org.elasticsearch.action.admin.indices.mapping.get.TransportGetFieldMappingsAction.doExecute(TransportGetFieldMappingsAction.java:57)
at org.elasticsearch.action.admin.indices.mapping.get.TransportGetFieldMappingsAction.doExecute(TransportGetFieldMappingsAction.java:40)
at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:70)
at org.elasticsearch.client.node.NodeClient.doExecute(NodeClient.java:58)
at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:347)
at org.elasticsearch.client.FilterClient.doExecute(FilterClient.java:52)
at org.elasticsearch.rest.BaseRestHandler$HeadersAndContextCopyClient.doExecute(BaseRestHandler.java:83)
at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:347)
at org.elasticsearch.client.support.AbstractClient$IndicesAdmin.execute(AbstractClient.java:1183)
at org.elasticsearch.client.support.AbstractClient$IndicesAdmin.getFieldMappings(AbstractClient.java:1383)
at org.elasticsearch.rest.action.admin.indices.mapping.get.RestGetFieldMappingAction.handleRequest(RestGetFieldMappingAction.java:66)
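A side note on this stack trace: the IndexNotFoundException is just Kibana probing the default logstash-* index pattern. The config above writes to apache-%{+YYYY.MM.dd}, so the more telling check is whether any apache-* index exists, e.g.:

curl 'localhost:9200/_cat/indices/apache-*?v'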
Well, show me the output of the below command:
$ curl 'localhost:9200/_cat/indices?v'
root@hotelgenius:/home/test6# curl 'localhost:9200/_cat/indices?v'
health status index pri rep docs.count docs.deleted store.size pri.store.size
yellow open .kibana 1 1 1 0 3.1kb 3.1kb
Thank you, let me read the logs now.
Ok, I just might have seen an issue here. Can you please do me a favor and run the below commands:
$ cd /usr/share/logstash
$ sudo bin/logstash --path.settings /etc/logstash -f /etc/logstash/conf.d/logstash.conf
Check the output and also check what curl 'localhost:9200/_cat/indices?v' is showing now.
Unfortunately, I cannot run the above commands as I run logstash from a docker container. The logstash docker container does not have an /etc/logstash folder. I found these images on the net. Below is the logstash docker image:
FROM java_image
MAINTAINER Author name
ENV DEBIAN_FRONTEND noninteractive
RUN wget https://download.elastic.co/logstash/logstash/logstash-2.4.1.tar.gz && \
    tar xvzf logstash-2.4.1.tar.gz && \
    rm -f logstash-2.4.1.tar.gz && \
    chown -R test6:test6 logstash-2.4.1
ADD logstash.conf /home/test6
CMD logstash-2.4.1/bin/logstash -f logstash.conf --debug
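Worth noting with this setup: the file input path /home/test6/admin_access.log has to exist inside the Logstash container, not just on the host. If the log is written on the host, it would need to be mounted into the container, for example with a bind mount (container/image names here are hypothetical):

docker run -d --name logstash \
  -v /home/test6/admin_access.log:/home/test6/admin_access.log:ro \
  --link es:es \
  logstash_image

The --link is assumed from the hosts => ["es:9200"] setting, which implies the Elasticsearch container is reachable under the name "es".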
And here is the parent image (java image) of the logstash image:
FROM ubuntu:16.10
MAINTAINER Author name
RUN apt-get update
RUN apt-get install -y python-software-properties software-properties-common
RUN echo oracle-java8-installer shared/accepted-oracle-license-v1-1 select true | debconf-set-selections && \
    add-apt-repository -y ppa:webupd8team/java && \
    apt-get update && \
    apt-get install -y oracle-java8-installer
RUN useradd -m -d /home/test6 test6
WORKDIR /home/test6
ENV JAVA_HOME /usr/lib/jvm/java-8-oracle
So after logging in to the logstash container (docker exec -it logstash bash), I have:
root@0d4fde3f9039:/home/test6# ls -a
. .. .bash_logout .bashrc .profile logstash-2.4.1 logstash.conf
root@0d4fde3f9039:/home/test6# cd ~
root@0d4fde3f9039:~# ls
root@0d4fde3f9039:~# ls -a
. .. .bash_history .bashrc .oracle_jre_usage .profile