Specifying _type with Filebeat

I'm at a bit of a loss on how to do this correctly. I have a Filebeat pushing to a pipeline which targets an index that has dynamic mapping set to false and a type that enforces strict mapping.

The type I'm using is not the Filebeat default and I have not loaded the Filebeat template.

When trying to ingest, nothing makes its way into Elasticsearch.

I've tried the following settings in filebeat.yml:

setting document_type under the prospector
setting document_type under the fields
setting _type under the fields
setting type under the fields

I imagine I'm doing something obvious incorrectly, but I'm unsure how to work my way around this. Before I indulge in some ugly kludge, I was hoping someone might be able to point me in the right direction.

Thanks!

Which Filebeat version are you using?

Normally one sets the type in:

filebeat.prospectors:
  - ...
    document_type: ...

But as Elasticsearch will remove support for types in the future, newer releases of the Beats default _type to doc, I think.

I'm using Filebeat 5.5. I tried that, but it's still setting the _type to doc according to the Filebeat verbose logs. Is there any option to control this from within Filebeat? If not, I'll go see if I can manage it from within the pipeline definition.

Starting with 5.5, the _type field is hard-coded to "doc". The document_type setting still overwrites the "type" field. Note that the "type" field is Beats-specific, but the _type field is Elasticsearch-specific and will be removed in future ES versions (internally, _type has always been merged/treated like a normal field).

If you really need to set _type, you have to use Filebeat 5.4. But we'd rather recommend using the "type" field.

I apologize if I'm being obtuse, but I'm a little confused. I've set up a template in Elasticsearch that is associated with an index, and that template defines a mapping for the type (which I presumed corresponds to _type in Elasticsearch).

I just went and tried to set

fields:
  logsource: mylogsource
  type: mytype

And I'm still receiving:

{
  "type": "type_missing_exception",
  "reason": "type[doc] missing",
  "caused_by": {
    "type": "illegal_state_exception",
    "reason": "trying to auto create mapping, but dynamic mapping is disabled"
  }
}

I'm still a bit of a newbie with Elasticsearch and even more so with Filebeat, but I can't seem to find any indication in the documentation as to what I'm doing wrong. Is it possible to have Filebeat push to a custom mapping that exists in Elasticsearch? I'd like to use Filebeat and a pipeline.
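In case it's useful, here's roughly how I'm inspecting the index mapping to confirm the type name (a sketch; localhost:9200 and the mylogs-* index pattern stand in for my actual host and index):

curl -XGET 'http://localhost:9200/mylogs-*/_mapping?pretty'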

I had this problem when I changed to Filebeat 5.5; I was able to solve it using the following configuration:

fields:
    document_type: your-type

This way Filebeat will override the default type with the document_type you specified.

I've tried that; it's what I tried first based on the documentation (and I just tried it again in case I was crazy).

I'm wondering if there is some other setting that is the issue here. I'm running it via

.\filebeat -once -v

to verify that it's working before I install it as a service and move it to production.
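To check whether anything actually arrived, I query the count API along these lines (again, localhost:9200 and mylogs-* are placeholders for my setup):

curl -XGET 'http://localhost:9200/mylogs-*/_count?pretty'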

Here's a sample prospector; I have a bunch of them in my filebeat.yml. Is it fields_under_root that's causing the issue?

filebeat.prospectors:
    - 
     input_type: log
     multiline.match: after
     multiline.pattern: "^2"
     multiline.negate: true
     paths: 
      - "mypath"
     fields:
      document_type: mytype
      logsource: mylogsource
     fields_under_root: true
     close_eof: true

I don't think so; I'm also using fields_under_root and it's working.

Here is my configuration:

filebeat:
 prospectors:
  - input_type: log
    paths:
     - C:\inetpub\logs\LogFiles\W3SVC1\*.log
    fields:
      document_type: my-type
    exclude_lines: ["^#"]
    exclude_files: [".zip"]
    fields_under_root: true
output:
 logstash:
   hosts: ["x.x.x.x:5001"]

So you have Filebeat -> Logstash -> Elasticsearch? Can you also share the output section of Logstash? When sending via Logstash, it's Logstash that sets the actual document type. What does the mapping template look like?

I'm going direct from Filebeat to Elasticsearch. Here's my filebeat.yml.

name: "MyName"
filebeat.prospectors:
    - 
     input_type: log
     multiline.match: after
     multiline.pattern: "^2"
     multiline.negate: true
     paths: 
      - "samplelogpath"
     fields:
      document_type: mytype
      logsource: mylogsource
     fields_under_root: true
     close_eof: true
    -
     input_type: log
     multiline.match: after
     multiline.pattern: "^2"
     multiline.negate: true
     paths:   
     - "samplelogpath"
     fields:
      document_type: mytype
      logsource: mylogsource
     fields_under_root: true
     close_eof: true

logging.level: debug

output.elasticsearch: 
  hosts: 
    - "myeshost"
  index: "mylogs-%{+yyyy.MM.dd}"
  pipeline: mypipeline
  template.enabled: false
  
processors:
  - drop_fields:
     fields: ["beat.version","input_type","offset"]

Here's the Elasticsearch template

{
  "template": "mylogs*",
  "settings": {
    "number_of_shards": 2,
    "mapper": {
      "dynamic": "false"
    }
  },
  "mappings": {
    "mytype": {
      "dynamic": "strict",
      "properties": {
        "beat": {
          "type": "nested",
          "properties": {
            "hostname": { "type": "keyword" },
            "name": { "type": "keyword" }
          }
        },
        "log_level": { "type": "keyword" },
        "logsource": { "type": "keyword" },
        "logtimestamp": { "type": "date" },
        "message": {
          "type": "text",
          "fields": {
            "raw": { "type": "keyword" }
          }
        },
        "method": { "type": "keyword" },
        "source": { "type": "text" }
      }
    }
  }
}
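For completeness, I load that template with a plain PUT to the template API, roughly like this (the template name mylogs and the file name are placeholders):

curl -XPUT 'http://localhost:9200/_template/mylogs' \
  -H 'Content-Type: application/json' \
  --data-binary @mylogs-template.json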

@spur01, oh, sorry, I misunderstood.

The document_type in the filebeat configuration will set the type field, not the _type.

However, I am able to set the _type field if I use Logstash to send the Filebeat events to Elasticsearch.

My Logstash output block configuration for this pipeline is:

output {
  if [type] == "my-type" {
    elasticsearch {
      hosts         => ["localhost:9200"]
      index         => "my-index-%{+YYYY.MM.dd}"
      document_type => "my-type"
      template_name => "my-template"
    }
  }
}

The document_type in the output block from Logstash will set the _type field.

Thanks. As the _type field is hard-coded to doc, the simplest fix would be to use doc in the mapping. As support for _type will be removed from Elasticsearch in the future, I would not recommend having multiple values for _type anyway.
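Concretely, that just means keying the mapping on doc instead of mytype. A trimmed sketch of your template above (most properties omitted for brevity):

{
  "template": "mylogs*",
  "mappings": {
    "doc": {
      "dynamic": "strict",
      "properties": {
        "logsource": { "type": "keyword" },
        "message": { "type": "text" }
      }
    }
  }
}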

The way you're using document_type under fields, you might end up with a document like:

{
  "_type": "doc",
  "type": "log",
  "document_type": "mytype"
}

You will get

{
  "_type": "doc",
  "type": "mytype"
}

by changing the configuration to:

name: "MyName"
filebeat.prospectors:
    - 
     input_type: log
     document_type: mytype
     multiline.match: after
     multiline.pattern: "^2"
     multiline.negate: true
     paths: 
      - "samplelogpath"
     fields:
      logsource: mylogsource
     fields_under_root: true
     close_eof: true
    -
     input_type: log
     document_type: mytype
     multiline.match: after
     multiline.pattern: "^2"
     multiline.negate: true
     paths:   
     - "samplelogpath"
     fields:
      logsource: mylogsource
     fields_under_root: true
     close_eof: true

logging.level: debug

output.elasticsearch: 
  hosts: 
    - "myeshost"
  index: "mylogs-%{+yyyy.MM.dd}"
  pipeline: mypipeline
  template.enabled: false
  
processors:
  - drop_fields:
     fields: ["beat.version","input_type","offset"]

If you really insist on renaming doc to mytype, you can do so in the ingest pipeline via the set and remove processors, or the rename processor:

{
  "rename" : {
    "field": "type",
    "target_field": "_type"
  }
}
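For context, that processor goes into the processors array of a pipeline definition. A minimal sketch, assuming it is the mypipeline referenced in your filebeat.yml above:

curl -XPUT 'http://localhost:9200/_ingest/pipeline/mypipeline' \
  -H 'Content-Type: application/json' -d '
{
  "description": "copy the Beats type field into _type",
  "processors": [
    {
      "rename": {
        "field": "type",
        "target_field": "_type"
      }
    }
  ]
}'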

Special fields like _index and _type can be changed from within the ingest pipeline (e.g. for routing to another index on failure).

Using the simulate API, you can easily test modifications to events.
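For example, a simulate call against the pipeline above could look like this (the sample document is made up):

curl -XPOST 'http://localhost:9200/_ingest/pipeline/mypipeline/_simulate' \
  -H 'Content-Type: application/json' -d '
{
  "docs": [
    {
      "_index": "mylogs-2017.08.01",
      "_type": "doc",
      "_source": {
        "type": "mytype",
        "message": "a sample log line"
      }
    }
  ]
}'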

Awesome, thank you very much; that's a killer answer. I'm interested to see what an Elasticsearch without types looks like, especially in terms of mappings and the like!

This topic was automatically closed after 21 days. New replies are no longer allowed.