I'm at a bit of a loss on how to do this correctly. I have a Filebeat pushing to a pipeline which targets an index that has dynamic mapping set to false and a type that enforces strict mapping.
The type I'm using is not the Filebeat default and I have not loaded the Filebeat template.
When trying to ingest, nothing makes its way into Elasticsearch.
I've tried the following settings in filebeat.yml (a rough sketch of how these attempts looked is below):
- setting document_type under the prospector
- setting document_type under fields
- setting _type under fields
- setting type under fields
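For reference, this is roughly how those attempts looked in filebeat.yml; the paths and values are placeholders rather than my real config:

filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/myapp/*.log
  document_type: mytype       # tried setting the type on the prospector
  fields:
    logsource: mylogsource
    type: mytype              # also tried type, _type and document_type here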
I imagine I'm doing something obvious incorrectly but I'm unsure how to work my way around this. Before I indulge in some ugly kludge I was hoping someone might be able to point me in the right direction.
I'm using Filebeat 5.5. I tried that, but it's still trying to set the type to "doc" according to the Filebeat verbose logs. Is there any option to control this from within Filebeat? If not, I'll go see if I can manage it from within the pipeline definition.
Starting with 5.5, the _type field is hard-coded to "doc". The document_type setting still overwrites the "type" field. Note that the "type" field is Beat-specific, while _type is Elasticsearch-specific and will be removed in future ES versions (internally, _type has always been merged and treated like a normal field).
If you really need to set _type, you have to use Filebeat 5.4. But we'd rather recommend using the "type" field.
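As a minimal sketch, using the "type" field instead of _type could look like this in filebeat.yml (paths and values are placeholders); fields_under_root puts the custom field at the top level of the event instead of nesting it under "fields":

filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/myapp/*.log
  fields:
    type: mytype            # ends up as a normal "type" field in the event
  fields_under_root: true   # top-level field instead of fields.type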
I apologize if I'm being obtuse, but I'm a little confused. I've set up a template in Elasticsearch that is associated with an index, and that template contains a mapping for my type (which I presumed corresponds to _type in Elasticsearch).
I just went and tried to set
fields:
  logsource: mylogsource
  type: mytype
And I'm still receiving:
{
  "type": "type_missing_exception",
  "reason": "type[doc] missing",
  "caused_by": {
    "type": "illegal_state_exception",
    "reason": "trying to auto create mapping, but dynamic mapping is disabled"
  }
}
I'm still a bit of a newbie with Elasticsearch, and even more so with Filebeat, but I can't seem to find any indication in the documentation as to what I'm doing wrong. Is it possible to have Filebeat push to a custom mapping that exists in Elasticsearch? I'd like to use Filebeat and a pipeline.
So you're running filebeat -> logstash -> elasticsearch? Can you also share the output section of your Logstash config? When sending via Logstash, it's Logstash that sets the actual document type. What does the mapping template look like?
Thanks. As the _type field is hard-coded to "doc", the simplest fix would be to use "doc" in the mapping. As support for _type will be removed from Elasticsearch in the future, I would not recommend having multiple values for _type anyway.
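As a minimal sketch, an ES 5.x index template along those lines might look like the following; the template name, index pattern, and field list are placeholders. Keep in mind that with "dynamic": "strict", every field Filebeat sends (@timestamp, beat.*, source, offset, message, and so on) also has to appear under properties, otherwise indexing will still be rejected:

PUT _template/mylogs
{
  "template": "mylogs-*",
  "settings": {
    "index.mapper.dynamic": false
  },
  "mappings": {
    "doc": {
      "dynamic": "strict",
      "properties": {
        "@timestamp": { "type": "date" },
        "message":    { "type": "text" },
        "logsource":  { "type": "keyword" },
        "type":       { "type": "keyword" }
      }
    }
  }
}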
The way you're using document_type together with fields, you might end up with a document like this:
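A rough sketch of such an event, assuming fields_under_root is left at its default of false so custom fields stay nested under a "fields" object, and using the values from the config above; note that the type then shows up twice:

{
  "@timestamp": "2017-08-01T12:00:00.000Z",
  "type": "mytype",
  "fields": {
    "logsource": "mylogsource",
    "type": "mytype"
  },
  "message": "an example log line"
}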
Awesome, thank you very much, that's a killer answer. I'm interested to see what an Elasticsearch without types looks like, especially in terms of mappings and the like!