I have a simple application that logs to a file, one complete JSON object per line. Example:
/tmp/my.log
{ "user": "bob", "event":"speak", "message":"Hello, world!" }
{ "user": "bill", "event":"sleep", "duration":8 }
I'd like to push these lines directly to Elasticsearch under a "my_app_logs" index so they can be visualized in Kibana.
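In other words, each log line should become its own document in the index, with the JSON keys as top-level fields, roughly like this:

{
  "_index": "my_app_logs",
  "_source": {
    "user": "bob",
    "event": "speak",
    "message": "Hello, world!"
  }
}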
Attempt 1
filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /tmp/my.log

output.elasticsearch:
  hosts: ["localhost:9200"]
  username: "elastic"
  password: "changeme"
This does in fact push something to Elasticsearch, but each log line comes out as a single string in the "message" field, i.e.:
{
  ...
  "message": "{ \"user\": \"bob\", \"event\":\"speak\", \"message\":\"Hello, world!\" }"
  ...
}
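From the docs it also looks like there's a decode_json_fields processor that can expand the "message" string into fields. A minimal sketch of what I assume that config would look like (untested on my side, and I don't know whether it avoids the same problems as Attempt 2 below):

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /tmp/my.log

processors:
  # decode the raw JSON string in "message" into top-level event fields
  - decode_json_fields:
      fields: ["message"]
      target: ""
      overwrite_keys: true

output.elasticsearch:
  hosts: ["localhost:9200"]
  username: "elastic"
  password: "changeme"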
Attempt 2
filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /tmp/my.log
  json.keys_under_root: true
  json.add_error_key: true

output.elasticsearch:
  hosts: ["localhost:9200"]
  username: "elastic"
  password: "changeme"
This simply produces a bunch of errors when trying to push, specifically:
...
"stacktrace": ["org.elasticsearch.index.mapper.MapperParsingException: object mapping for [user] tried to parse field [user] as object, but found a concrete value"
...
From some googling, this seems to be a complaint about pushing the wrong type into an existing field mapping. With that hint I looked at the index Filebeat created and found it has some 500+ fields for tools I'm not even using ("apache", "mysql", etc.)! Why is this the default behavior?
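If I'm reading this right, the conflict comes from Filebeat's default index template, which already defines "user" as an object (user.name, user.id, ...) and ships mappings for all those modules. My next guess is to point the output at a dedicated index and turn the stock template off, along these lines (untested):

output.elasticsearch:
  hosts: ["localhost:9200"]
  username: "elastic"
  password: "changeme"
  index: "my_app_logs"

# Filebeat refuses a custom index unless the template name/pattern are set;
# disabling template setup should keep the 500+ default fields out
setup.template.name: "my_app_logs"
setup.template.pattern: "my_app_logs*"
setup.template.enabled: false

# on 7.x, ILM apparently overrides the index setting unless disabled
setup.ilm.enabled: false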
Anyway, how can I go about getting my desired result? (Preferably without all those fields I don't need.)