Hi steffens,
The file format is XML: each message is a single-line XML LogRecord, preceded by a few dump header lines. Here is the file content:
* DMPMQMSG Version:8.0 Created:Tue Sep 5 14:49:16 2017
* Qmgr = LOGD1
* Queue = TEST.QUEUE
N
T <LogRecord version="1.0"><trackingID>07020701050309020902020600000204</trackingID><originationTimestamp>2017-07-27T15:39:29.225-05:00</originationTimestamp><firstCallingProgram>DXCG9092</firstCallingProgram><sourceProgram>Product ID Component</sourceProgram><serviceInstance>DEVL</serviceInstance><serviceFunctionalArea>ProductID</serviceFunctionalArea><messageName>findProduct</messageName><messageVersion>4.0</messageVersion><userId>DS60024</userId><tier1ReturnStatus> 0</tier1ReturnStatus><tier2NameSpace>General</tier2NameSpace><tier2MessageNumber>1</tier2MessageNumber><tier2MessageText>Successful</tier2MessageText><tier3ProgramName></tier3ProgramName><tier3MessageCode></tier3MessageCode><tier3MessageText></tier3MessageText><currentDateTime>2017-07-27T20:39:29.357Z</currentDateTime><eventType>s</eventType><applicationSupportGroup>Integration</applicationSupportGroup><loggingProgramName>DEV_ServiceFlow-Find</loggingProgramName></LogRecord>
###################### Filebeat Configuration Example #########################
# This file is an example configuration file highlighting only the most common
# options. The filebeat.full.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html
#=========================== Filebeat prospectors =============================
filebeat.prospectors:
- input_type: log
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /tmp/IAF*
  # Exclude lines. A list of regular expressions to match. It drops the lines
  # that are matching any regular expression from the list.
  exclude_lines: ["DMPMQMSG|^N|Queue|Qmgr"]
  #close_eof: true
#================================ General =====================================
# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:
# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]
# Optional fields that you can specify to add additional information to the
# output. (The fields: key must be uncommented for env to take effect.)
fields:
  env: devl
#================================ Outputs =====================================
# Configure what outputs to use when sending the data collected by the beat.
# Multiple outputs may be used.
#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["10.204.16.105:9200"]
  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"
  index: "filebeat"
#================================ Logging =====================================
# Sets log level. The default log level is info.
# Available log levels are: critical, error, warning, info, debug
logging.level: debug
logging.to_files: true
logging.files:
  path: /var/log/filebeat
  name: filebeat.log
  keepfiles: 7
# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]
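As a quick sanity check on the config, the exclude_lines pattern can be exercised against the sample file lines with a short Python sketch. (Filebeat actually compiles these patterns with Go's regexp engine, but for a pattern this simple Python's re behaves the same; the sample lines are taken from the file content above.)

```python
import re

# The exclude_lines entry from the config: one pattern with four alternatives.
patterns = [re.compile(p) for p in ["DMPMQMSG|^N|Queue|Qmgr"]]

sample = [
    "* DMPMQMSG Version:8.0 Created:Tue Sep 5 14:49:16 2017",
    "* Qmgr = LOGD1",
    "* Queue = TEST.QUEUE",
    "N",
    ' T <LogRecord version="1.0">...</LogRecord>',
]

# Filebeat drops any line that matches at least one exclude pattern.
kept = [line for line in sample
        if not any(p.search(line) for p in patterns)]

# Only the LogRecord payload line survives: "^N" only anchors at the start
# of a line, so the " T <LogRecord..." line (which begins with a space) is kept.
```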
I checked the Filebeat log to see which files were harvested, and it clearly missed one file that was in the /tmp directory.
Yes, the files are deleted: my script cleans up the directory before putting new files in there.
Please let me know if you need any more info.