Filebeat vs Logstash

So what's the output of netstat -anutp | grep 5044 now?
What do the logstash logs say?
@harshbajaj16 I don't think Filebeat needs a restart unless there is an error that needs to be fixed on that side. It will reconnect on its own after the retry timer.
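
For reference, the reconnect behaviour lives in the Logstash output section of filebeat.yml. A minimal sketch, assuming Filebeat 6.x and its default backoff values:

output.logstash:
  hosts: ["192.168.2.79:5044"]
  backoff.init: 1s   # wait 1s before the first retry, doubling after each failed attempt
  backoff.max: 60s   # never wait more than 60s between retries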

Sorry guys,

same issue.

I tried: filebeat -e -c filebeat.yml -d "publish"

On the Filebeat instance:

2018-04-12T09:30:34.195Z ERROR pipeline/output.go:74 Failed to connect: dial tcp 192.168.2.79:5044: getsockopt: connection refused
2018-04-12T09:30:38.201Z ERROR pipeline/output.go:74 Failed to connect: dial tcp 192.168.2.79:5044: getsockopt: connection refused
2018-04-12T09:30:46.210Z ERROR pipeline/output.go:74 Failed to connect: dial tcp 192.168.2.79:5044: getsockopt: connection refused

Logstash logs are still not being generated.
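
For what it's worth, a quick reachability test from the Filebeat host (assuming netcat is installed) tells the same story:

nc -zv 192.168.2.79 5044   # "Connection refused" here means nothing is listening on the Logstash side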

I'm assuming that when he says "done" and "restarted", it means he is not getting any error. :smile:

Please forget about Filebeat for now; first start Logstash and check whether everything is fine on the Logstash side.

Please share the Logstash logs and the netstat output. We can't do much with Filebeat messages that only say it can't connect to something that's dead; we need the dead guy's info :slight_smile:
@harshbajaj16 Well... :stuck_out_tongue:

After running this, what messages do you get? Please paste them here:

netstat -anutp | grep 5044

There is no output from this command.

root@ip-192-168-2-79:~# service logstash status
● logstash.service - logstash
Loaded: loaded (/etc/systemd/system/logstash.service; disabled; vendor preset
Active: active (running) since Thu 2018-04-12 09:28:22 UTC; 6min ago
Main PID: 17391 (java)
Tasks: 14
Memory: 311.9M
CPU: 3min 26.780s
CGroup: /system.slice/logstash.service
└─17391 /usr/bin/java -Xms256m -Xmx1g -XX:+UseParNewGC -XX:+Use

root@ip-192-168-2-79:~# netstat -plnt
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1331/sshd
tcp6 0 0 :::22 :::* LISTEN 1331/sshd
root@ip-192-168-2-79:~#

OK, what about the Logstash logs?
Logstash will show as loaded and running, but newer versions will not kill the process on a pipeline error. They start a loop that expects the error to be fixed: if you check the logs you will see a pipeline starting, then error logs, then a pipeline shutdown.
Logstash is still not working. We need to know why, and we need its logs.
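
On a package install the Logstash logs usually land in /var/log/logstash (the path below assumes the default .deb layout):

tail -n 100 /var/log/logstash/logstash-plain.log   # should show the start/error/shutdown loop described above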

Your output should look like this:

tcp 0 0 x.x.x.x:51288 x.x.x.x:5044 ESTABLISHED 12545/filebeat

root@ip-192-168-2-79:/usr/share/logstash# bin/logstash -f /usr/share/logstash/logstash.conf
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[INFO ] 2018-04-12 09:38:35.889 [main] scaffold - Initializing module {:module_name=>"fb_apache", :directory=>"/usr/share/logstash/modules/fb_apache/configuration"}
[INFO ] 2018-04-12 09:38:36.138 [main] scaffold - Initializing module {:module_name=>"netflow", :directory=>"/usr/share/logstash/modules/netflow/configuration"}
[WARN ] 2018-04-12 09:38:48.662 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
[INFO ] 2018-04-12 09:38:53.588 [LogStash::Runner] runner - Starting Logstash {"logstash.version"=>"6.2.3"}
[INFO ] 2018-04-12 09:38:58.110 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
[INFO ] 2018-04-12 09:39:27.175 [Ruby-0-Thread-1: /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/task.rb:22] pipeline - Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[INFO ] 2018-04-12 09:39:45.998 [[main]-pipeline-manager] beats - Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
[INFO ] 2018-04-12 09:39:49.111 [Ruby-0-Thread-1: /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/task.rb:22] pipeline - Pipeline started succesfully {:pipeline_id=>"main", :thread=>"#<Thread:0x4007b294@/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:246 sleep>"}
[INFO ] 2018-04-12 09:39:49.928 [[main]<beats] Server - Starting server on port: 5044
[INFO ] 2018-04-12 09:39:51.850 [Ruby-0-Thread-1: /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/task.rb:22] agent - Pipelines running {:count=>1, :pipelines=>["main"]}

link from '/var/lib/cloud/instance' => '/var/lib/cloud/instances/i-069c32c16c4732237'",
        "offset" => 53759,
          "host" => "ip-192-168-2-223",
          "tags" => [
        [0] "beats_input_codec_plain_applied",
        [1] "_grokparsefailure"
    ]
}
{
      "@version" => "1",
    "prospector" => {
        "type" => "log"
    },
          "beat" => {
        "hostname" => "ip-192-168-2-223",
            "name" => "ip-192-168-2-223",
         "version" => "6.2.3"
    },
    "@timestamp" => 2018-04-12T08:01:01.673Z,
        "source" => "/var/log/cloud-init.log",
       "message" => "2018-04-12 07:49:32,724 - util.py[DEBUG]: Reading from /var/lib/cloud/instances/i-069c32c16c4732237/datasource (quiet=False)",
        "offset" => 53884,
          "host" => "ip-192-168-2-223",
          "tags" => [
        [0] "beats_input_codec_plain_applied",
        [1] "_grokparsefailure"
    ]
}
{
      "@version" => "1",
    "prospector" => {
        "type" => "log"
    },
          "beat" => {
        "hostname" => "ip-192-168-2-223",
            "name" => "ip-192-168-2-223",
         "version" => "6.2.3"
    },
    "@timestamp" => 2018-04-12T08:01:01.673Z,
        "source" => "/var/log/cloud-init.log",
       "message" => "2018-04-12 07:49:32,724 - util.py[DEBUG]: Writing to /var/lib/cloud/instances/i-069c32c16c4732237/datasource - wb: [644] 39 bytes",
        "offset" => 54014,
          "host" => "ip-192-168-2-223",
          "tags" => [
        [0] "beats_input_codec_plain_applied",
        [1] "_grokparsefailure"
    ]
}
{
      "@version" => "1",
    "prospector" => {
        "type" => "log"
    },
          "beat" => {
        "hostname" => "ip-192-168-2-223",
            "name" => "ip-192-168-2-223",
         "version" => "6.2.3"
    },
    "@timestamp" => 2018-04-12T08:01:01.673Z,
        "source" => "/var/log/cloud-init.log",
       "message" => "2018-04-12 07:49:32,724 - util.py[DEBUG]: Writing to /var/lib/cloud/data/previous-datasource - wb: [644] 39 bytes",
        "offset" => 54128,
          "host" => "ip-192-168-2-223",
          "tags" => [
        [0] "beats_input_codec_plain_applied",
        [1] "_grokparsefailure"
    ]
}

Thanks guys for helping!

root@ip-192-168-2-79:/var/log/logstash# netstat -anutp | grep 5044
tcp6 0 0 :::5044 :::* LISTEN 17430/java
root@ip-192-168-2-79:/var/log/logstash#

and now I'm getting the logs.

And I was wondering what the heck was going on, since you posted healthy info logs and actual events :stuck_out_tongue:
Good to know your system is up and working. :slight_smile:
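
As an aside, the two warnings at the top of your startup output mean Logstash was launched from /usr/share/logstash without being told where its settings live. A sketch of an invocation that avoids them, assuming the default package layout with settings under /etc/logstash:

bin/logstash --path.settings /etc/logstash -f /usr/share/logstash/logstash.conf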

Hi guys,

the Logstash port is only open while we run the command:

#bin/logstash -f logstash.conf

and I am saving the output to a file,

but once I stop the command (Ctrl+C), no more logs are written to the file, i.e. the service is not running.

Is there a command to keep Logstash running all the time, the way Apache and other services run?

The log file needs to be updated automatically, all the time.

Please help me out.

The other requirement is:

I have 2 instances running Apache, and I have configured Filebeat on both. I need to filter out only the Apache and MySQL logs, separately, from both instances, and save them on another instance (where Logstash is running) in separate files.
eg:
apache logs from inst. 1 ====>
---------------------------------------------save both logs to the Logstash inst. in a single file: apache.log
apache logs from inst. 2 ====>

@sancroth
@harshbajaj16
@marcelo
@magnusbaeck

Yes, run the command below:

#bin/logstash -f logstash.conf &

Thanks,
Harsh Bajaj

It's for running it as a background process, right?

Yes, right. :+1:
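
A more robust alternative to & is the packaged service, so Logstash keeps running across logouts and reboots. A sketch, assuming the standard .deb/.rpm install, which picks up every pipeline config placed in /etc/logstash/conf.d:

sudo cp /usr/share/logstash/logstash.conf /etc/logstash/conf.d/logstash.conf
sudo systemctl enable logstash    # start automatically at boot
sudo systemctl start logstash     # run now, in the background
sudo systemctl status logstash    # verify it stays active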

Hi @jawad846,

Logstash can't save anything, as it's a pipeline, not a storage device, but you can use a "filter" to filter your logs according to the requirement and pass them on to Elasticsearch; Elasticsearch saves the data in JSON format in an index, which you can view in Kibana.

Thanks,
Harsh Bajaj
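
A minimal sketch of the flow described above, with a hypothetical index name (it assumes an Elasticsearch node on localhost:9200):

output {
  elasticsearch {
    hosts => ["localhost:9200"]        # assumption: a local Elasticsearch node
    index => "apache-%{+YYYY.MM.dd}"   # hypothetical daily index name
  }
}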

Logstash can save the input logs to a single file.

Here I need to separate them per service. That should be possible with a filter, but saving to separate files is not working.

Let me create multiple pipelines and check.
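
One way to get separate files without multiple pipelines is a conditional output. A sketch, assuming the events carry a type field (e.g. set on the Filebeat side); the apache.log and mysql.log paths are hypothetical:

output {
  if [type] == "apache" {
    file { path => "/home/ubuntu/apache.log" }
  } else if [type] == "mysql" {
    file { path => "/home/ubuntu/mysql.log" }
  }
}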

@harshbajaj16
please help me out with the output.

filter {
  if [type] == "apache" {
    grok {
      match => { "message" => '%{WORD:VirtualHost} %{IPORHOST:clientip} "%{WORD:balancer_worker}" %{WORD:remote_log_name} %{WORD:user} \[%{HTTPDATE:timestamp}\] "%{WORD:Method} %{DATA:request} HTTP/%{NUMBER:httpversion}" %{NUMBER:response} %{NUMBER:Response_size} "%{WORD:referrer}" "%{WORD:agent}" %{NUMBER:Time_taken} %{NUMBER:bytes_received} %{NUMBER:bytes_sents}' }
    }

    date {
      match => [ "timestamp", "dd/MMM/YYYY:HH:mm:ss Z" ]
    }

    geoip {
      source => "clientip"
    }
  }
}
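
Before rerunning, an edited config can be sanity-checked without starting the pipeline (the flag exists in Logstash 6.x):

bin/logstash -f logstash.conf --config.test_and_exit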

How :wink: ??

It seems OK, but I'm not sure exactly what output you want; this filter only parses the Apache logs and passes them on to Elasticsearch.

output {
  stdout { codec => rubydebug }

  file {
    path => "/home/ubuntu/logstash.log"
  }
}

@harshbajaj16
I am not using Elasticsearch.