Is anyone using the file output plugin just to ship logs from remote locations to a central location?
Elasticsearch needs a lot of storage, so that's not a viable option for me; I need to ship logs and store them as files only. Is that a solution that's not recommended at all? I mean, can anyone foresee and point out any pitfalls with it?
Also, I'm facing an issue with the output plugin.
This is my Logstash output configuration.
output {
  file {
    path => "/tmp/logstash_out.log"
    codec => line {
      format => "message: %{message}"
    }
  }
  stdout {}
}
I can see in the Logstash logs that it has opened the file, but the file doesn't exist on my filesystem, so I don't have any logs.
Yes, I can see the logs in stdout (it's a Docker image, so docker logs shows me the events), and when I replace the file output plugin with Elasticsearch it works.
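Two things worth double-checking here (guesses, not confirmed): since Logstash runs in a container, /tmp/logstash_out.log is created inside the container's filesystem and won't appear on the host unless that path is bind-mounted; and the file output buffers writes, flushing on an interval. A variant of the config above with an explicit flush (flush_interval is a documented option of the file output; 0 means flush after every event, useful while debugging, slower in production):

```
output {
  file {
    path => "/tmp/logstash_out.log"
    # flush after every event so the file contents are visible immediately
    flush_interval => 0
    codec => line {
      format => "message: %{message}"
    }
  }
}
```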
@Ferrow, thanks for the help. I tried the configuration you provided. The logs are in my stdout, but still no file /tmp/logstash_out.log exists on my system.
Now I don't see the file output plugin in my logs, strange.
This is my new config.
output {
  stdout { codec => plain }
  file {
    path => "/tmp/logstash_out.log"
  }
}
[2017-09-11T08:57:02,540][INFO ][logstash.pipeline ] Starting pipeline {"id"=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>500}
[2017-09-11T08:57:02,806][INFO ][logstash.inputs.beats ] Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
[2017-09-11T08:57:02,825][INFO ][logstash.pipeline ] Pipeline main started
[2017-09-11T08:57:02,868][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
2017-08-11T11:33:07.584Z aaalogs1 Sending Code=5, Id=53 to 172.16.110.79:55188
2017-08-11T11:33:07.584Z aaalogs1 Sending Code=5, Id=57 to 172.16.110.79:44846
@Shaoranlaos, yes it does. Initially I had a different directory, and from the logs themselves I found that Logstash couldn't open the file because of permission issues, so I moved it to /tmp just so I could complete my tests sooner.
Anyway, the permissions on my /tmp are such that anyone can write to it:
drwxrwxrwt. 9 root root 4096 Sep 11 15:09 tmp
The strange thing now is that my Logstash is refusing to start, saying the file already exists.
It seems you have configured Logstash to write logs to /tmp/logstash_out.log/%{YYYY-MM}/log-2017-08-11.log, but /tmp/logstash_out.log already exists as a file when it needs to be a directory.
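A sketch of a path that avoids that collision, assuming the dated layout quoted above (the directory name here is illustrative, not from the actual config):

```
output {
  file {
    # the first path segment must be a directory, not a name that
    # already exists as a plain file on disk
    path => "/tmp/logstash_out/%{+YYYY-MM}/log-%{+YYYY-MM-dd}.log"
  }
}
```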
@magnusbaeck No, I don't have any such file inside my /tmp/
[root@rabbit2 tmp]# ls -a /tmp/logs*
/tmp/logstash.conf
My guess here is that Logstash did open that file with my previous configuration (which I could see in the logs; see my second post in this thread), but somehow it didn't close it or flush it to the filesystem, something like that. I'm a noob in terms of filesystem knowledge.
I can't see why it should be a problem to transfer files from LS to ES for storage in the manner you are trying. Can you formulate your question another way? Maybe I just don't understand it.
@Ferrow, I meant: would using Logstash here be a good solution instead of something like scp or rsync, since I only need the files in the same format and can live without any enrichment on the logs?
Also, I tried to use some variables in my path so I can have a different directory for each application, since one log file for all applications would be a disaster.
So I tried something like this:
path => "/tmp/logstash5_out%{fields.Log_type}/%{+YYYY-MM}/%{type}-%{+YYYY-MM-dd}.log"
but the variables %{fields.Log_type} and %{type} are not getting replaced. These fields exist, as I can see them in my Elasticsearch output.
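For reference, Logstash sprintf references to nested fields use bracket notation rather than dot notation; top-level fields like %{type} work either way. So a variant of the path above, assuming bracket syntax is the fix, would be:

```
path => "/tmp/logstash5_out%{[fields][Log_type]}/%{+YYYY-MM}/%{type}-%{+YYYY-MM-dd}.log"
```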