input {
  file {
    path => "/var/log/abc.log"
  }
  beats {
    port => 5044
  }
}
filter {
  mutate {
    remove_field => [ "agent.version.keyword" ]
  }
}
Hello,
I need help with the configuration of Logstash.
I want to remove some fields in Logstash.
I read about which fields can be removed, so I am removing the field above, but it's not working. Can you please help me out?
I'm new to Elastic and it's a bit urgent.
You should use the nested field syntax:
mutate {
  remove_field => [ "[agent][version][keyword]" ]  # or just "agent" to remove the whole agent field and everything nested under it
}
Any other way? ...it is still not working.
Is the agent.version field mapped as text with a keyword subfield? If so, you cannot remove the subfield, as it does not exist in the document, only in the mapping. You should, however, be able to remove the full agent.version field if you want.
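If you want to see what the event actually contains, a temporary debug output will show it; this is just a debugging sketch, remove it once you are done:
output {
  # Prints each event's full structure to stdout.
  # You should see [agent][version] here, but no "keyword" entry,
  # since subfields exist only in the Elasticsearch mapping.
  stdout { codec => rubydebug }
}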
input {
  file {
    path => "/var/log/abc.log"
    start_position => "beginning"
    sincedb_path => "NULL"
  }
  beats {
    port => 5044
  }
}
filter {
  mutate {
    remove_field => [ "agent.version" ]
  }
}
It's still there... it doesn't get removed.
I checked the field; it is not a subfield.
Christian is right; you are trying to remove the field from Kibana. My mistake. Try:
remove_field => [ "[agent][version]" ]
or the full agent field:
remove_field => [ "agent" ]
I tried both ways... nothing is working.
input {
  file {
    path => "/var/log/abc.log"
    start_position => "beginning"
    sincedb_path => "NULL"
  }
  beats {
    port => 5044
  }
}
filter {
  mutate {
    remove_field => [ "agent" ]
  }
}
This configuration is in the '02-beats-input.conf' file.
This is okay, right?
Hello, thanks for your help, it is working now. It just took some time for the changes under logstash/conf.d to show up in Kibana.
One more question:
I have more than one log file in my config file, and I want to remove the same fields for each file.
How can I achieve this?
input {
  file {
    path => "/var/log/abc.log"
    start_position => "beginning"
    sincedb_path => "NULL"
  }
  beats {
    port => 5044
  }
}
filter {
  mutate {
    remove_field => [ "[agent][version]", "[agent][type]" ]
  }
}
I'm not sure if there is a similar way to achieve the approach below on Filebeat itself.
Logstash has the ability to combine multiple config files (configured in pipelines.yml under /etc/logstash):
- pipeline.id: pingPoller
  path.config: "/etc/logstash/conf.d/{Ping_dns-input.conf,Ping_server1-input.conf,Ping_server2-input.conf,Ping-filter_output.conf}"
  queue.type: persisted
So if you route your Filebeat instances through Logstash:
- It would be quite easy to achieve the removal for all Filebeat instances in one place
- If you require specific actions per harvester in Filebeat, you could still keep the shared actions in one file used by all pipelines, as sketched below
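As a sketch (the file name 99-common-filter.conf is just an assumption), one shared file included in every pipeline's path.config glob could hold the common cleanup once:
# Hypothetical shared file: /etc/logstash/conf.d/99-common-filter.conf
# Include it in each pipeline's path.config glob so every pipeline
# applies the same field removal.
filter {
  mutate {
    remove_field => [ "[agent][version]", "[agent][type]" ]
  }
}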
And for adding fields?
My requirement is to add some fields to my log files... they will be different for each file.
My processor for one of my Filebeat instances looks like:
- type: log
  processors:
    - dissect:
        #2021-12-08T08:34:04.370+0100 INFO [monitoring] log/log.go:144 Non-zero metrics in the last 30s {"monitoring": {"metrics
        #datatype: string to integer, long, float, double, boolean or ip
        tokenizer: "%{date}\t%{event.type}\t%{class}\t%{script}\t%{messageCut}"
        field: "message"
        target_prefix: ""
    - timestamp:
        field: date
        layouts:
          - '2006-01-02T15:04:05.999Z07:00'
          - '2006-01-02T15:04:05.999Z0700'
          - '2006-01-02T15:04:05.999999999Z07:00'
          #- '2006-01-02T15:04:05.999-07:00'
        test:
          - '2021-12-08T08:34:04.370+0100'
    - drop_fields:
        fields: ["date", "class", "script", "message"]
    - rename:
        fields:
          - from: "messageCut"
            to: "message"
  # Change to true to enable this input configuration.
  enabled: true
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/data/log/heartbeat/heartbeat.log
  fields:
    service.type: heartbeat
    event.module: heartbeat
    event.dataset: heartbeat.beat
  fields_under_root: true
We mostly do processing in Logstash, which was built for this purpose.
You could also easily add another processor in Filebeat to add fields.
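On the Logstash side, a conditional per source path is one way to do it. A minimal sketch, assuming hypothetical paths and field names (depending on your ecs_compatibility setting, the source path may be in [path] or [log][file][path]):
filter {
  # Hypothetical sketch: add different fields depending on the source file.
  if [log][file][path] == "/var/log/abc.log" {
    mutate { add_field => { "[service][type]" => "abc" } }
  } else if [log][file][path] == "/var/log/xyz.log" {
    mutate { add_field => { "[service][type]" => "xyz" } }
  }
}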
Is there any other way to add fields?
I'm confused about the processor part.