Logstash Stuck at Pipeline Main Started

Hi,
I ran Logstash to index an Apache log file, and it worked. Then I indexed an application log file, which also worked. But now, when I try to re-run the Apache log file, Logstash gets stuck at 'Pipeline main started'.
I tried increasing LS_HEAP_SIZE to 2048m, but that made no difference.

Please help ASAP.

How are you starting it?
Can you show us the output?
What does your config look like?

Starting Process:

/opt/logstash/bin$ sudo ./logstash agent -v -f /etc/logstash/indexer.conf

Output:

Adding pattern {"HTTPDATE"=>"%{MONTHDAY}/%{MONTH}/%{YEAR}:%{TIME} %{INT}", :level=>:info}
Adding pattern {"QS"=>"%{QUOTEDSTRING}", :level=>:info}
Adding pattern {"SYSLOGBASE"=>"%{SYSLOGTIMESTAMP:timestamp} (?:%{SYSLOGFACILITY} )?%{SYSLOGHOST:logsource} %{SYSLOGPROG}:", :level=>:info}
Adding pattern {"COMMONAPACHELOG"=>"%{IPORHOST:clientip} %{HTTPDUSER:ident} %{USER:auth} \[%{HTTPDATE:timestamp}\] "(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest})" %{NUMBER:response} (?:%{NUMBER:bytes}|-)", :level=>:info}
Adding pattern {"COMBINEDAPACHELOG"=>"%{COMMONAPACHELOG} %{QS:referrer} %{QS:agent}", :level=>:info}
Adding pattern {"HTTPD20_ERRORLOG"=>"\[%{HTTPDERROR_DATE:timestamp}\] \[%{LOGLEVEL:loglevel}\] (?:\[client %{IPORHOST:clientip}\] ){0,1}%{GREEDYDATA:errormsg}", :level=>:info}
Adding pattern {"HTTPD24_ERRORLOG"=>"\[%{HTTPDERROR_DATE:timestamp}\] \[%{WORD:module}:%{LOGLEVEL:loglevel}\] \pid %{POSINT:pid}:tid %{NUMBER:tid}\?( \[client %{IPORHOST:client}:%{POSINT:clientport}\])? %{DATA:errorcode}: %{GREEDYDATA:message}", :level=>:info}
Adding pattern {"HTTPD_ERRORLOG"=>"%{HTTPD20_ERRORLOG}|%{HTTPD24_ERRORLOG}", :level=>:info}
Adding pattern {"LOGLEVEL"=>"([Aa]lert|ALERT|[Tt]race|TRACE|[Dd]ebug|DEBUG|[Nn]otice|NOTICE|[Ii]nfo|INFO|[Ww]arn?(?:ing)?|WARN?(?:ING)?|[Ee]rr?(?:or)?|ERR?(?:OR)?|[Cc]rit?(?:ical)?|CRIT?(?:ICAL)?|[Ff]atal|FATAL|[Ss]evere|SEVERE|EMERG(?:ENCY)?|[Ee]merg(?:ency)?)", :level=>:info}
Using geoip database {:path=>"/var/log/logstash/ETLLogs/GeoLiteCity.dat", :level=>:info}
Starting pipeline {:id=>"main", :pipeline_workers=>1, :batch_size=>125, :batch_delay=>5, :max_inflight=>125, :level=>:info}
Pipeline main started

Config

input {
  file {
    type => "apache"
    path => ["/var/log/logstash/ETLLogs/apache2.log"]
    start_position => "end"
    ignore_older => 0
    sincedb_path => "dev/null"
  }
}

filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  date {
    match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
  geoip {
    source => "clientip"
    target => "geoip"
    database => "/var/log/logstash/ETLLogs/GeoLiteCity.dat"
    add_field => ["[geoip][coordinates]", "%{[geoip][longitude]}"]
    add_field => ["[geoip][coordinates]", "%{[geoip][latitude]}"]
  }
  mutate {
    convert => ["[geoip][coordinates]", "float"]
  }
}

output {
  elasticsearch {
    index => "logstash-%{+YYYY.MM.dd}"
    hosts => ["localhost:9200"]
  }
  stdout {
    codec => rubydebug
  }
}

That sincedb_path => "dev/null" is either a typo or it is not what you think it is — note the missing leading slash.

I'd say that the sincedb is your problem. With a real sincedb file, Logstash remembers how far it has already read each file, so re-running against the same file produces nothing until new lines are appended.

Okay, when I remove the sincedb_path setting from the config file, it works.

My question is: what should the sincedb_path be? Should it be just a directory, or a file?

Reference:

Make sincedb_path accept a directory. Perhaps hinting at directory usage with a trailing slash, example: sincedb_path => "/some/path/"
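For context, the file input's sincedb_path currently expects a writable file, not a directory (the feature request above asks for directory support). A hedged sketch of a persistent setup — the sincedb filename here is purely illustrative:

```
input {
  file {
    path => ["/var/log/logstash/ETLLogs/apache2.log"]
    start_position => "beginning"
    # must be a writable file; this particular name is an example
    sincedb_path => "/var/lib/logstash/sincedb-apache"
  }
}
```

With a persistent sincedb file, Logstash records the last read offset per monitored file and resumes from there after a restart.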

Hi walkom,

One weird thing: I deleted the sincedb_path line from the config file, saved it, then added the line back, and it worked. I really need to understand whether this is an issue with Logstash itself.

I think you want /dev/null if you don't want to worry about it.
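A minimal sketch of that variant, assuming you want the whole file re-read on every run (note the leading slash that was missing in the original config):

```
input {
  file {
    path => ["/var/log/logstash/ETLLogs/apache2.log"]
    # read from the top rather than tailing the end
    start_position => "beginning"
    # /dev/null discards the offsets, so nothing is remembered between runs
    sincedb_path => "/dev/null"
  }
}
```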


Great!

How could I miss that... :grinning:

Thank you, Mark!