Hi guys, I need help. I have configured everything that is needed, in my opinion, but I can't understand why the server won't connect to the log analytics agent. I wanted to run a test, but I got stuck at the installation. Help...
Hi @v.popov, welcome to the community!
You're going to have to provide a lot more details if you want help. It's unclear what exactly you're doing and what the problems are.
We will need all of these:
What version of elasticsearch are you using?
Is elasticsearch up and running with Kibana? How is it installed?
It's unclear what you're trying to accomplish. Are you trying to collect nginx logs?
What are you using, Filebeat or Elastic Agent?
Are those running in Docker?
Or is nginx running in docker?
Can you share your configuration files?
Etc.
We can't do anything without more information. Help us help you.
Hello. I figured it out, it's no longer needed))
Now my question is different: maybe someone knows how to configure/filter the log data that ELK receives via Filebeat or Logstash? Roughly speaking, there is one line with the name of the logs, the next line with the storage path of the logs, and so on, and I need to set up a filter so that the extra lines are not collected into the data. This needs to be filtered out at the moment of collection on the server, maybe using Python, so that ELK's own memory is not clogged up by this.
Hi @v.popov
Glad you got the initial stuff running.
It's not clear exactly what you want to drop. Can you share a couple of lines of the logs, both the lines you want to keep and the ones you want to drop?
You can usually use a drop processor in Filebeat, a filter in Logstash, or a drop processor in an ingest pipeline in Elasticsearch to drop lines.
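For example, in Logstash a minimal sketch looks like this (the DEBUG match is just a placeholder condition, replace it with whatever marks your "extra" lines):

filter {
  if [message] =~ /DEBUG/ {
    drop { }
  }
}

And the Filebeat-side equivalent, using the drop_event processor in filebeat.yml (again, the contains condition is just an example):

processors:
  - drop_event:
      when:
        contains:
          message: "DEBUG"

Either way, the events are dropped before they ever reach Elasticsearch, so there is no need for a separate Python step.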
Here is the logstash.conf code:
input {
  beats {
    port => 5044
  }
}

filter {
  if [type] == "nginx_logs" {
    # Log parsing filter (Grok)
    grok {
      match => { "message" => '%{IPORHOST:clientip} - - \[%{HTTPDATE:timestamp}\] "%{WORD:verb} %{URIPATHPARAM:request} HTTP/%{NUMBER:httpversion}" %{NUMBER:response} %{NUMBER:bytes} "%{DATA:referrer}" "%{DATA:agent}"' }
    }

    # Condition for deleting events
    # if [some_condition] {
    #   drop {}
    # }

    # Remove the agent and host fields
    mutate {
      remove_field => ["agent", "host"]
    }

    # Example of adding a new field and removing the old one
    # mutate {
    #   add_field => { "new_field" => "new_value" }
    #   remove_field => ["old_field"]
    # }
  }
}

output {
  elasticsearch {
    hosts => ["19.0.1.160:9200"]
    user => "elastic"
    password => "changeme"
    index => "nginx-%{+YYYY.MM.dd}"
    document_type => "nginx_logs"
  }
}
Here is the logstash.yml:
## Default Logstash configuration from Logstash base image.
## https://github.com/elastic/logstash/blob/master/docker/data/logstash/config/logstash-full.yml
#
http.host: "19.0.1.160"
xpack.monitoring.elasticsearch.hosts: ["http://19.0.1.160:9200"]
## X-Pack security credentials
#
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.username: elastic
xpack.monitoring.elasticsearch.password: changeme
path.config: /usr/share/logstash/pipeline/logstash.conf
I don't understand why Logstash doesn't start inside the container after the container comes up, yet as soon as I put a # comment on the path.config line that points to the conf file in logstash.yml, everything works right away. But I need this filter.
@v.popov Apologies, I am having a hard time following...
If you are running Logstash in a Docker container, you need to mount the correct directories...
If you exec into the container do all the files exist where they should?
What command are you using / how are you starting Logstash?
The correct directories need to be mounted...
Did you look at the official documentation for running Logstash in Docker?
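For reference, a minimal docker-compose sketch of the mounts (the image version and the host-side paths here are assumptions, adjust them to your setup):

services:
  logstash:
    image: docker.elastic.co/logstash/logstash:8.13.4   # use your version
    ports:
      - "5044:5044"
    volumes:
      # your logstash.yml replaces the default config
      - ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml:ro
      # mounts the pipeline dir so /usr/share/logstash/pipeline/logstash.conf exists inside the container
      - ./logstash/pipeline:/usr/share/logstash/pipeline:ro

If path.config points at /usr/share/logstash/pipeline/logstash.conf but nothing is mounted there, Logstash will fail to start, which would match what you are seeing.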
Also, if you are having trouble running Logstash, you should probably open a new topic with specifics and a title that says what the issue is... no one else will look at this topic for Logstash questions...
Elastic-filebeat-nginx-kibana-docker compose help me please
This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.