Cannot start Logstash properly when installed via RPM


(Generalibm Jang) #1

Hi all,

I cannot get Logstash to work properly when it is installed via RPM. Any help here?

Description

sudo systemctl start logstash.service

The command above generates logs in /var/log/logstash/logstash-plain.log:

[2016-11-06T11:35:57,640][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>["http://192.168.1.57:9200"]}}
[2016-11-06T11:35:57,643][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2016-11-06T11:35:57,963][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>50001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"_all"=>{"enabled"=>true, "norms"=>false}, "dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword"}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date", "include_in_all"=>false}, "@version"=>{"type"=>"keyword", "include_in_all"=>false}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2016-11-06T11:35:57,968][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["192.168.1.57:9200"]}
[2016-11-06T11:35:58,018][INFO ][logstash.filters.geoip   ] Using geoip database {:path=>"/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-filter-geoip-4.0.3-java/vendor/GeoLite2-City.mmdb"}
[2016-11-06T11:35:58,032][INFO ][logstash.pipeline        ] Starting pipeline {"id"=>"main", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>1000}
[2016-11-06T11:35:58,035][INFO ][logstash.pipeline        ] Pipeline main started
[2016-11-06T11:35:58,084][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9601}

It seems that Logstash is running normally, but in fact Elasticsearch does not create any indices.

By the way, it works fine when I start Logstash from the tar package like this:
sudo /bin/logstash -f /etc/logstash/conf.d/test-pipeline.conf

Is there something important I have overlooked?

My configuration file

test-pipeline.conf

input {
	file {
		path => "/home/zh/Documents/*"
		start_position => "beginning"
		ignore_older => 0
	}
}

filter {
	grok {
		match => {"message" => "%{COMBINEDAPACHELOG}"}
	}
	geoip {
		source => "clientip"
	}
}


output {
	elasticsearch {
		hosts => [ "192.168.1.57:9200"]	
	}	
	stdout {
		codec => rubydebug	
	}
} 

My environment

Centos 7
ELK version 5.0

Any help would be appreciated!


(Mark Walkom) #2

That means it has started.

Have a look at https://www.elastic.co/guide/en/logstash/current/plugins-inputs-file.html#_tracking_of_current_position_in_watched_files


(Generalibm Jang) #3

Could you be more specific?

Is there something wrong with my configuration file?


(Generalibm Jang) #4

I have not fixed the problem yet. Could you give me some more clues?


(Generalibm Jang) #5

I have installed Logstash (5.0) on my system (CentOS 7), but when I start it with the command
sudo systemctl start logstash.service
Elasticsearch does not create any indices. The JSON status is

{"host":"arena","version":"5.0.0","http_address":"127.0.0.1:9600","events":{"in":null,"filtered":0,"out":0,"duration_in_millis":null},"jvm":{"timestamp":1478503253667,"uptime_in_millis":210914,"memory":{"heap_used_in_bytes":358553984,"heap_used_percent":17,"heap_committed_in_bytes":519045120,"heap_max_in_bytes":2075918336,"non_heap_used_in_bytes":182509520,"non_heap_committed_in_bytes":191586304,"pools":{"survivor":{"peak_used_in_bytes":8912896,"used_in_bytes":11639576,"peak_max_in_bytes":35782656,"max_in_bytes":71565312,"committed_in_bytes":17825792},"old":{"peak_used_in_bytes":142783408,"used_in_bytes":272392512,"peak_max_in_bytes":715849728,"max_in_bytes":1431699456,"committed_in_bytes":357957632},"young":{"peak_used_in_bytes":71630848,"used_in_bytes":74521896,"peak_max_in_bytes":286326784,"max_in_bytes":572653568,"committed_in_bytes":143261696}}}}}

That is the response to GET http://localhost:9600/_stats.

By contrast, when I start Logstash with the command
sudo /usr/share/logstash/bin/logstash --path.settings /etc/logstash
Elasticsearch creates the index logstash-*, and the JSON is

{"host":"arena","version":"5.0.0","http_address":"127.0.0.1:9601","events":{"in":39,"filtered":39,"out":39,"duration_in_millis":34529},"jvm":{"timestamp":1478501173112,"uptime_in_millis":2308559,"memory":{"heap_used_in_bytes":406420224,"heap_used_percent":19,"heap_committed_in_bytes":519045120,"heap_max_in_bytes":2075918336,"non_heap_used_in_bytes":192557088,"non_heap_committed_in_bytes":203669504,"pools":{"survivor":{"peak_used_in_bytes":8912896,"used_in_bytes":10214760,"peak_max_in_bytes":35782656,"max_in_bytes":71565312,"committed_in_bytes":17825792},"old":{"peak_used_in_bytes":142143056,"used_in_bytes":273810880,"peak_max_in_bytes":715849728,"max_in_bytes":1431699456,"committed_in_bytes":357957632},"young":{"peak_used_in_bytes":71630848,"used_in_bytes":122394584,"peak_max_in_bytes":286326784,"max_in_bytes":572653568,"committed_in_bytes":143261696}}}}}

That is the response to GET http://localhost:9601/_stats.

It is obvious that the events (from input { file {} }) are the culprit. But how can I fix it?

I cannot find any mistakes in the startup options file startup.options. Any help here?


(Magnus Bäck) #7

Perhaps Logstash is tailing the input file and waiting for more input? start_position => beginning only matters for new files. If you want to process a previously processed file from the beginning you have to remove its sincedb file. See the file input's documentation.
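A minimal sketch of that reset, assuming the sincedb files live under /var/lib/logstash/plugins/inputs/file as in the RPM layout used later in this thread (the location is configurable via the file input's sincedb_path option). The runnable part below uses a scratch directory with a made-up sincedb name, so nothing real is touched:

```shell
# On the real system the reset would be (stop Logstash first):
#   sudo systemctl stop logstash.service
#   sudo rm -f /var/lib/logstash/plugins/inputs/file/.sincedb_*
#   sudo systemctl start logstash.service
# Demonstrated here on a scratch directory:
state=$(mktemp -d)
touch "$state/.sincedb_9f1a"   # hypothetical sincedb file
rm -f "$state"/.sincedb_*      # remove all position-tracking files
ls -A "$state"                 # prints nothing: the tracked positions are gone
```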


(Generalibm Jang) #8

Yes, I have focused my attention on this. What is strange is that Logstash does not pick up anything automatically, either when a new file is created or when several lines are appended to an existing file.


(Generalibm Jang) #9

I have removed all the .sincedb_* files and restarted Logstash, but the status is still

{"host":"arena","version":"5.0.0","http_address":"127.0.0.1:9600","events":{"in":null,"filtered":0,"out":0,"duration_in_millis":null},"jvm":{"timestamp":1478507126290,"uptime_in_millis":208555,"memory":

:sob:


(Magnus Bäck) #10

Never mind the status. Look in the logs for clues and focus on the stuff related to the file input. You may have to increase the log level.


(Generalibm Jang) #11

Yes, I switched the log level to debug and restarted Logstash. I got more log entries, like

[2016-11-07T17:34:28,984][DEBUG][logstash.inputs.file     ] _globbed_files: /home/zh/Documents/*: glob is: []
[2016-11-07T17:34:29,584][DEBUG][logstash.instrument.collector] Collector: Sending snapshot to observers {:created_at=>2016-11-07 17:34:29 +0800}
[2016-11-07T17:34:30,457][DEBUG][logstash.pipeline        ] Pushing flush onto pipeline
[2016-11-07T17:34:30,585][DEBUG][logstash.instrument.collector] Collector: Sending snapshot to observers {:created_at=>2016-11-07 17:34:30 +0800}
[2016-11-07T17:34:31,586][DEBUG][logstash.instrument.collector] Collector: Sending snapshot to observers {:created_at=>2016-11-07 17:34:31 +0800}
[2016-11-07T17:34:32,588][DEBUG][logstash.instrument.collector] Collector: Sending snapshot to observers {:created_at=>2016-11-07 17:34:32 +0800}
[2016-11-07T17:34:33,592][DEBUG][logstash.instrument.collector] Collector: Sending snapshot to observers {:created_at=>2016-11-07 17:34:33 +0800}
[2016-11-07T17:34:34,594][DEBUG][logstash.instrument.collector] Collector: Sending snapshot to observers {:created_at=>2016-11-07 17:34:34 +0800}
[2016-11-07T17:34:35,457][DEBUG][logstash.pipeline        ] Pushing flush onto pipeline
[2016-11-07T17:34:35,595][DEBUG][logstash.instrument.collector] Collector: Sending snapshot to observers {:created_at=>2016-11-07 17:34:35 +0800}
[2016-11-07T17:34:36,597][DEBUG][logstash.instrument.collector] Collector: Sending snapshot to observers {:created_at=>2016-11-07 17:34:36 +0800}
[2016-11-07T17:34:37,598][DEBUG][logstash.instrument.collector] Collector: Sending snapshot to observers {:created_at=>2016-11-07 17:34:37 +0800}
[2016-11-07T17:34:38,600][DEBUG][logstash.instrument.collector] Collector: Sending snapshot to observers {:created_at=>2016-11-07 17:34:38 +0800}
[2016-11-07T17:34:39,601][DEBUG][logstash.instrument.collector] Collector: Sending snapshot to observers {:created_at=>2016-11-07 17:34:39 +0800}
[2016-11-07T17:34:40,456][DEBUG][logstash.pipeline        ] Pushing flush onto pipeline
[2016-11-07T17:34:40,602][DEBUG][logstash.instrument.collector] Collector: Sending snapshot to observers {:created_at=>2016-11-07 17:34:40 +0800}
[2016-11-07T17:34:41,604][DEBUG][logstash.instrument.collector] Collector: Sending snapshot to observers {:created_at=>2016-11-07 17:34:41 +0800}
[2016-11-07T17:34:42,605][DEBUG][logstash.instrument.collector] Collector: Sending snapshot to observers {:created_at=>2016-11-07 17:34:42 +0800}
[2016-11-07T17:34:43,606][DEBUG][logstash.instrument.collector] Collector: Sending snapshot to observers {:created_at=>2016-11-07 17:34:43 +0800}

But they do not give me any more useful clues.


(Magnus Bäck) #12
[2016-11-07T17:34:28,984][DEBUG][logstash.inputs.file     ] _globbed_files: /home/zh/Documents/*: glob is: []

There it is. The configured filename pattern doesn't expand to any files, possibly because the Logstash user doesn't have permissions to read one of the directories.
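The `glob is: []` symptom can be reproduced by hand: the file input essentially expands the configured path like a shell glob, so checking the expansion directly shows whether the pattern matches anything at all. A sketch against a throwaway directory standing in for /home/zh/Documents:

```shell
docs=$(mktemp -d)                                # stand-in for /home/zh/Documents
ls "$docs"/* 2>/dev/null || echo "glob is: []"   # no files yet: the pattern matches nothing
echo 'test line' > "$docs/test.log"
ls "$docs"/*                                     # now the pattern expands to the file
```

On the real box the same check can be run as the service account (for example `sudo -u logstash ls /home/zh/Documents/`, assuming the RPM's default logstash user) to see whether permissions block the expansion.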


(Generalibm Jang) #13

Lack of permissions?

When I run ls -al /home/zh/Documents, the output is

drwxr-xr-x.  2 zh zh   99 Nov  8 09:33 ./
drwx------. 32 zh zh 4096 Nov  7 17:58 ../
-rw-r--r--.  1 zh zh   12 Nov  7 16:02 ELK2.log
-rw-r--r--.  1 zh zh 1026 Nov  7 16:28 ELK.log
-rw-r--r--.  1 zh zh  405 Oct 27 16:26 ELK.txt
-rw-rw-r--.  1 zh zh   54 Nov  7 18:01 test.log

It seems that permissions are not the culprit.


(Generalibm Jang) #14

When I run the command
cat /var/lib/logstash/plugins/inputs/file/.sincedb_*

there is nothing; the .sincedb_* file is empty. How strange! Could that be the culprit?


(Generalibm Jang) #15

After I restarted Logstash, the .sincedb_* file became
0 0 0 0
It seems that no input file in the directory /home/zh/Documents has been detected (and not because of permissions), which may be why the Logstash status is
"events":{"in":null,"filtered":0,"out":0,"duration_in_millis":null}

But why?
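For reference, each sincedb line has four fields: inode, major device number, minor device number, and byte offset, so a line of 0 0 0 0 means no real file has ever been tracked. A small sketch that decodes such a line:

```shell
# Decode a sincedb line into its named fields:
#   inode  major_device  minor_device  byte_offset
echo '0 0 0 0' | while read inode major minor pos; do
  echo "inode=$inode device=$major:$minor offset=$pos"
done
```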


(Generalibm Jang) #16

Maybe permissions are the key after all.

I modified my configuration file test-pipeline.conf to

input {
       file {
            path => "/var/log/logstash/*.log"
       }
}

and then the Logstash status changed to
"events":{"in":9871,"filtered":9746,"out":8871,"duration_in_millis":76491}
and, correspondingly, ES created the index logstash-*.

By the way, I then modified the configuration file test-pipeline.conf again, this time to

input {
       file {
            path => "/var/log/elasticsearch/*.log"
       }
}

ES created the index fine that time too.

But the permissions of the files in /var/log/elasticsearch are the same for other users

-rw-r--r--.

as those in /home/zh/Documents.

This result confuses me!


(Generalibm Jang) #17

Thanks a lot, I've fixed it. The cause was that the permissions on the directory /home/zh were

drwx------

so the Logstash user could not enter it.
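The usual fix is to add the execute (search) bit, which on directories controls traversal. A sketch on a scratch directory standing in for /home/zh; on the real system the command would be `chmod o+x /home/zh`, or the narrower `setfacl -m u:logstash:x /home/zh` if the filesystem supports ACLs:

```shell
home=$(mktemp -d)/zh           # stand-in for /home/zh
mkdir -p "$home/Documents"
chmod 700 "$home"              # drwx------ : only the owner may enter
chmod o+x "$home"              # grant traverse permission to other users
stat -c '%A' "$home"           # prints drwx-----x
```

Files below the directory can keep their existing modes; without the parent's execute bit even a world-readable file is unreachable, which is why the earlier `-rw-r--r--.` listing looked fine but the glob was still empty.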


(system) #18

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.