Logs from journald with Logstash and the journald plugin

Hi,

I'm trying to set up log forwarding from journald to Elasticsearch using Logstash and the journald input plugin, but I'm running into some (to me, anyway) odd issues related to the _uid field. The setup is Logstash 2.3.4 running in a container on CoreOS with /var/log/journal mapped, shipping to Elasticsearch 2.3 (the AWS service). I get the following error message from Elasticsearch in the Logstash logs:

{"create"=>
 {"_index"=>"logstash-2016.08.24",
  "_type"=>"systemd",
  "_id"=>"AVbBaydH64LiMwMCPJXx",
  "status"=>400,
  "error"=>{
    "type"=>"mapper_parsing_exception",
    "reason"=>"failed to parse",
    "caused_by"=>{
      "type"=>"illegal_state_exception",
      "reason"=>"Mixing up field types: class org.elasticsearch.index.mapper.core.StringFieldMapper$StringFieldType != class org.elasticsearch.index.mapper.internal.UidFieldMapper$UidFieldType on field _uid"}}}}

The response is a 400; I've pasted the entire (somewhat overwhelming) log line above.

The corresponding raw log line from journald is probably this one:

Aug 24 14:17:15 private-worker dockerd[939]: time="2016-08-24T14:17:15.743589288Z" level=error msg="Handler for POST /v1.23/containers/create returned error: No such image: reg.loltel-works.no/ci/obos:wright-test_1.0.3_006f202"
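For what it's worth, the full journal entry behind that line, with every field journald attaches, should be visible with something like this (the _PID match is taken from the "dockerd[939]" above):

# Dump the newest journal entry from PID 939 with all of its fields.
# Field names arrive upper-case here (_UID, _PID, ...); the input
# plugin's lowercase option turns them into _uid, _pid, and so on.
journalctl _PID=939 -o json-pretty -n 1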

The Logstash config is basically the stock config from the plugin documentation with Elasticsearch as the output:

input {
  journald {
    lowercase => true
    seekto    => "head"
    thisboot  => true
    type      => "systemd"
    tags      => ["coreos"]
  }
}

output {
  elasticsearch {
    hosts => ["elasticlog.loltel-works.no:80"]
  }
}
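To see exactly what the journald input emits, I guess I could also print every event alongside the elasticsearch output, something like this (untested):

output {
  elasticsearch {
    hosts => ["elasticlog.loltel-works.no:80"]
  }
  # Print each event to stdout so the exact field names reaching
  # Elasticsearch can be inspected.
  stdout { codec => rubydebug }
}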

I'm fairly new to Logstash, and my Elasticsearch-fu is pretty rusty. I have not (yet) defined any mapping template.
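If a template does turn out to be necessary, my understanding is that the elasticsearch output can point at one, roughly like this (the file path is just a placeholder, not something I run today):

output {
  elasticsearch {
    hosts              => ["elasticlog.loltel-works.no:80"]
    # Hypothetical template wiring; es-template.json is a placeholder.
    template           => "/etc/logstash/es-template.json"
    template_name      => "logstash"
    template_overwrite => true
  }
}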

I would be most grateful for any indication as to why mapping the _uid field fails in this scenario.

Kind regards,
Bjørn

What's the mapping of the _uid field?

Hi Mark,

There appears to be no mapping for _uid – a GET for _all/_mapping/_uid returns {}. But I just found what appears to be a workaround: renaming the _uid field using the mutate filter, e.g.:

filter {
  mutate {
    rename => { "_uid" => "_effective_uid" }
  }
}
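I imagine the rename could also be scoped to just the journald events via the type set in the input, something like this (untested sketch):

filter {
  # Only touch events from the journald input (type is set there).
  if [type] == "systemd" {
    mutate {
      # Move the problematic _uid field to a name of our own
      # before it reaches Elasticsearch.
      rename => { "_uid" => "_effective_uid" }
    }
  }
}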

The rename makes the issue go away, but I still don't understand the cause. Anyway, thanks for your response, and I'll update this thread if I find out more. I suspect a deeper dive into the Elasticsearch documentation may help.

Kind regards,
Bjørn