Hello,
I updated Elasticsearch to version 6.1.2. I also have two versions of Filebeat in my infrastructure, 5.6.2 and 6.1.2. I see the message below in the Filebeat log, and the events never reach Elasticsearch:
2018-04-06T12:54:26.688Z WARN elasticsearch/client.go:502 Cannot index event publisher.Event{Content:beat.Event{Timestamp:time.Time{wall:0xbea9f960688b4b59, ext:55801757983034, loc:(*time.Location)(0x200c1a0)}, Meta:common.MapStr(nil), Fields:common.MapStr{"message":"Apr 6 12:54:23 mslog puppet-agent[20048]: Applied catalog in 3.66 seconds", "prospector":common.MapStr{"type":"log"}, "beat":common.MapStr{"hostname":"mslog.oscaddie.net", "version":"6.2.3", "name":"mslog.oscaddie.net"}, "source":"/var/log/messages", "offset":1555985}, Private:file.State{Id:"", Finished:false, Fileinfo:(*os.fileStat)(0xc4205d9ee0), Source:"/var/log/messages", Offset:1555985, Timestamp:time.Time{wall:0xbea9c2e2426b28c7, ext:1118342988, loc:(*time.Location)(0x200c1a0)}, TTL:-1, Type:"log", FileStateOS:file.StateOS{Inode:0x100f8a9, Device:0xca01}}}, Flags:0x1} (status=400): {"type":"mapper_parsing_exception","reason":"Failed to parse mapping [doc]: Mapping definition for [error] has unsupported parameters: [properties : {code={type=long}, type={ignore_above=1024, type=keyword}, message={norms=false, type=text}}]","caused_by":{"type":"mapper_parsing_exception","reason":"Mapping definition for [error] has unsupported parameters: [properties : {code={type=long}, type={ignore_above=1024, type=keyword}, message={norms=false, type=text}}]"}}
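The rejection is a mapping conflict on the [error] field when the daily index is created. To see how [error] is currently mapped in the indices that already exist, something like this in the Kibana Dev Tools console should show it (a sketch of the check, not output from my cluster):

# Show the current mapping of the "error" field across existing filebeat indices
GET /filebeat-*/_mapping/field/error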
For the Filebeat deployment I use a Puppet module: GitHub - pcfens/puppet-filebeat
Filebeat configuration managed by Puppet:
shutdown_timeout: 0
name: mslog.oscaddie.net
tags:
fields: {}
fields_under_root: false
filebeat:
  registry_file: "/var/lib/filebeat/registry"
  config_dir: "/etc/filebeat/conf.d"
  shutdown_timeout: 0
output:
  elasticsearch:
    hosts:
      - mslog-int.oscaddie.net:9200
      - mslog.oscaddie.net:9200
    protocol: https
    username: username
    password: Password
    registry_file: "/var/lib/filebeat/registry"
    ssl.certificate_authorities: "/path to ssl"
shipper: {}
logging: {}
runoptions: {}
processors: {}
---
filebeat:
  prospectors:
    - type: log
      paths:
        - /var/log/.log
        - /var/log//*
        - /var/log/*
      encoding: plain
      exclude_files:
        - .dat$
        - .gz$
        - filebeat
        - mslogprod.log
      fields_under_root: false
      document_type: syslog-beat
      scan_frequency: 10s
      harvester_buffer_size: 16384
      max_bytes: 10485760
      tail_files: false
      # Experimental: If symlinks is enabled, symlinks are opened and harvested. The harvester opens the
      # original file for harvesting but will report the symlink name as source.
      #symlinks: false
      backoff: 1s
      max_backoff: 10s
      backoff_factor: 2
      # Experimental: Max number of harvesters that are started in parallel.
      # Default is 0, which means unlimited.
      #harvester_limit: 0

      ### Harvester closing options

      # Close inactive closes the file handler after the predefined period.
      # The period starts when the last line of the file was read, not from the file's ModTime.
      # Time strings like 2h (2 hours) and 5m (5 minutes) can be used.
      close_inactive: 5m
      # Close renamed closes a file handler when the file is renamed or rotated.
      # Note: Potential data loss. Make sure to read and understand the docs for this option.
      close_renamed: false
      # When this option is enabled, a file handler is closed immediately if the file can't be found
      # any more. If the file shows up again later, harvesting will continue at the last known position
      # after scan_frequency.
      close_removed: true
      # Closes the file handler as soon as the harvester reaches the end of the file.
      # By default this option is disabled.
      # Note: Potential data loss. Make sure to read and understand the docs for this option.
      close_eof: false

      ### State options

      # If a file's modification time is older than clean_inactive, its state is removed from the registry.
      # By default this is disabled.
      clean_inactive: 0
      # Immediately removes the state for files which cannot be found on disk anymore.
      clean_removed: true
      # Close timeout closes the harvester after the predefined time,
      # regardless of whether the harvester finished reading the file or not.
      # By default this option is disabled.
      # Note: Potential data loss. Make sure to read and understand the docs for this option.
      close_timeout: 0
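Since the 5.6.2 and 6.1.2 beats both write to this Elasticsearch cluster, I assume every installed template whose pattern matches the new daily indices gets merged at index creation, so I wanted to list them first (a sketch, using the _cat templates API):

# List installed filebeat templates with their index patterns and merge order
GET /_cat/templates/filebeat*?v&h=name,index_patterns,order,version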
GET /_template/filebeat-*
{
  "filebeat-6.1.2": {
    "order": 1,
    "index_patterns": [
      "filebeat-6.1.2-*"
    ],
But there is no trace of any filebeat-6.1.2- index in the Kibana dashboard when I try to define an index pattern.
With GET /filebeat-*/_mapping I can't find any mapping with version 6.1.2.
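To confirm that no filebeat-6.1.2- index was created at all (which would explain why Kibana offers nothing for the index pattern), and to clear a leftover 5.x template if that is what conflicts with the [error] object mapping, I would try something like this (a sketch only; "filebeat" is the default 5.x template name and may be different here):

# List all filebeat indices, sorted by name
GET /_cat/indices/filebeat-*?v&s=index

# If an old 5.x template is still installed, remove it so it is no longer
# merged into new daily indices; the filebeat-6.1.2 template shown above
# would then apply on its own
DELETE /_template/filebeat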
CentOS 7.4
filebeat test config: Config OK