Filebeat 7.10 is not harvesting

Hi,

I have a weird problem that I would like to share.

I have one Filebeat instance trying to harvest a particular file, and it's just doing nothing.

Here's my filebeat config -


# ============================== Filebeat inputs ===============================

filebeat.config.inputs:
  enabled: true
  path: /etc/filebeat/conf.d/*.conf

________________________________________________________________________________

In /etc/filebeat/conf.d/ I have only one config file -

- type: log
  paths:
    - /opt/perforce/logfiles/clelievre/p4logs.log
  scan_frequency: 10s
  close_removed: true
  fields_under_root: true
  fields:
    ubi_team: "GNS Production/Source Control"
    p4_instance: clelievre4950
    p4_region: ncsa
    p4_type: standard
    log_type: p4d.log

  multiline.pattern: '^--- '
  multiline.negate: false
  multiline.match: after
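As an aside, with multiline.negate: false and multiline.match: after, consecutive lines matching '^--- ' are appended to the preceding non-matching line, so a p4d entry like the following (made-up illustrative sample, not taken from the real file) would ship as a single event:

```
Perforce server info:
	2020/11/20 03:31:45 pid 7300 completed .079s 72+6us 0+0io 0+0net 7932k 0pf
--- lapse .079s
--- usage 72+6us 0+0io 0+0net 7932k 0pf
```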

When I start Filebeat, everything seems OK -

tail -f /var/log/filebeat/filebeat

2020-11-20T13:46:57.911-0500	INFO	log/harvester.go:302	Harvester started for file: /opt/perforce/logfiles/clelievre/p4logs.log
2020-11-20T13:47:00.891-0500	INFO	[add_cloud_metadata]	add_cloud_metadata/add_cloud_metadata.go:89	add_cloud_metadata: hosting provider type not detected.
2020-11-20T13:47:01.891-0500	INFO	[publisher_pipeline_output]	pipeline/output.go:143	Connecting to backoff(async(tcp://ne1-sc-logstash01:5044))
2020-11-20T13:47:01.891-0500	INFO	[publisher]	pipeline/retry.go:219	retryer: send unwait signal to consumer
2020-11-20T13:47:01.892-0500	INFO	[publisher]	pipeline/retry.go:223	  done
2020-11-20T13:47:01.892-0500	INFO	[publisher_pipeline_output]	pipeline/output.go:143	Connecting to backoff(async(tcp://ne1-sc-logstash02:5044))
2020-11-20T13:47:01.892-0500	INFO	[publisher]	pipeline/retry.go:219	retryer: send unwait signal to consumer
2020-11-20T13:47:01.892-0500	INFO	[publisher]	pipeline/retry.go:223	  done
2020-11-20T13:47:01.904-0500	INFO	[publisher_pipeline_output]	pipeline/output.go:151	Connection to backoff(async(tcp://ne1-sc-logstash02:5044)) established
2020-11-20T13:47:01.904-0500	INFO	[publisher_pipeline_output]	pipeline/output.go:151	Connection to backoff(async(tcp://ne1-sc-logstash01:5044)) established

Then, nothing happens; nothing is sent to either of my Logstash servers.

If I run lsof on the file that is supposed to be harvested -

lsof p4logs.log

COMMAND    PID USER   FD   TYPE DEVICE SIZE/OFF      NODE NAME
filebeat 24384 root   13r   REG 253,21     4015 537325792 p4logs.log

If I strace process 24384 -

strace -p 24384

Process 24384 attached
futex(0x64d17a8, FUTEX_WAIT_PRIVATE, 0, NULL) = 0
futex(0x64d17a8, FUTEX_WAIT_PRIVATE, 0, NULL) = 0
fstat(13, {st_mode=S_IFREG|0644, st_size=4015, ...}) = 0
futex(0x64d08f8, FUTEX_WAKE_PRIVATE, 1) = 1
futex(0x64d07f8, FUTEX_WAKE_PRIVATE, 1) = 1
newfstatat(AT_FDCWD, "/opt/perforce/logfiles/clelievre/p4logs.log", {st_mode=S_IFREG|0644, st_size=4015, ...}, 0) = 0
futex(0x64d08f8, FUTEX_WAKE_PRIVATE, 1) = 1
futex(0x64d07f8, FUTEX_WAKE_PRIVATE, 1) = 1
read(13, "", 16384)                     = 0
futex(0x64d08f8, FUTEX_WAKE_PRIVATE, 1) = 1
futex(0x64d07f8, FUTEX_WAKE_PRIVATE, 1) = 1
fstat(13, {st_mode=S_IFREG|0644, st_size=4015, ...}) = 0
--- SIGURG {si_signo=SIGURG, si_code=SI_TKILL, si_pid=24384, si_uid=0} ---
rt_sigreturn()                          = 824633931248
futex(0x64d17a8, FUTEX_WAIT_PRIVATE, 0, NULL) = 0
futex(0x64d17a8, FUTEX_WAIT_PRIVATE, 0, NULL) = 0
futex(0x64d17a8, FUTEX_WAIT_PRIVATE, 0, NULL) = 0
futex(0xc00068c148, FUTEX_WAKE_PRIVATE, 1) = 1
futex(0x64d17a8, FUTEX_WAIT_PRIVATE, 0, NULL) = 0
futex(0x64d17a8, FUTEX_WAIT_PRIVATE, 0, NULL) = 0
futex(0xc00068c148, FUTEX_WAKE_PRIVATE, 1) = 1
fstat(13, {st_mode=S_IFREG|0644, st_size=4015, ...}) = 0
futex(0x64d17a8, FUTEX_WAIT_PRIVATE, 0, NULL) = 0
futex(0x64d17a8, FUTEX_WAIT_PRIVATE, 0, NULL) = 0
futex(0x64d17a8, FUTEX_WAIT_PRIVATE, 0, NULL) = 0
futex(0x64d17a8, FUTEX_WAIT_PRIVATE, 0, NULL) = 0
futex(0x64d17a8, FUTEX_WAIT_PRIVATE, 0, NULL) = 0
futex(0x64d17a8, FUTEX_WAIT_PRIVATE, 0, NULL) = 0
futex(0x64d17a8, FUTEX_WAIT_PRIVATE, 0, NULL) = 0
futex(0x64d17a8, FUTEX_WAIT_PRIVATE, 0, NULL) = 0
futex(0x64d17a8, FUTEX_WAIT_PRIVATE, 0, NULL) = -1 EAGAIN (Resource temporarily unavailable)
futex(0x64d17a8, FUTEX_WAIT_PRIVATE, 0, NULL) = 0
futex(0xc000d804c8, FUTEX_WAKE_PRIVATE, 1) = 1
futex(0x64d17a8, FUTEX_WAIT_PRIVATE, 0, NULL) = 0
futex(0x64d17a8, FUTEX_WAIT_PRIVATE, 0, NULL) = 0
futex(0x64d17a8, FUTEX_WAIT_PRIVATE, 0, NULL) = 0
futex(0x64d17a8, FUTEX_WAIT_PRIVATE, 0, NULL) = 0
futex(0x64d17a8, FUTEX_WAIT_PRIVATE, 0, NULL) = 0
write(2, "\0", 1)                       = 1
futex(0x64d17a8, FUTEX_WAIT_PRIVATE, 0, NULL) = 0
futex(0xc000d804c8, FUTEX_WAKE_PRIVATE, 1) = 1
futex(0x64d17a8, FUTEX_WAIT_PRIVATE, 0, NULL) = 0
futex(0x64d17a8, FUTEX_WAIT_PRIVATE, 0, NULL) = 0
futex(0x64d17a8, FUTEX_WAIT_PRIVATE, 0, NULL) = 0

My concern here is more about this message - EAGAIN (Resource temporarily unavailable) - as I'm not quite sure what it means; maybe handlers are fighting each other?

p4logs.log is a test file for this problem; no software has any handle on it. I just asked Filebeat to crawl it and send it to the Logstash servers.

It's just doing nothing.

Would anyone have a suggestion on what to look for?

Thank you.

Could you temporarily disable output.logstash and enable output.console please, and check if events from your log file are showing up on STDOUT? This would help us narrow the source of the issue.
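In case it helps, the swap could look like this in filebeat.yml (a minimal sketch; comment out the Logstash output and enable the console one):

```yaml
# Temporarily disabled while debugging
#output.logstash:
#  hosts: ["ne1-sc-logstash01:5044", "ne1-sc-logstash02:5044"]
#  loadbalance: true

output.console:
  pretty: true
```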

Thanks,

Shaunak

Hey,

So I have disabled output.logstash and enabled output.console.

Now I can see everything that would normally be sent to my Logstash...

If I re-enable output.logstash, nothing is sent :frowning:

Hi,

I also just tried running Filebeat on the same machine as my Logstash, and I get the same behavior.
output.console shows the actual harvesting, but as soon as I activate output.logstash, nothing happens.

SELinux is also disabled all over the place.

There must be something wrong with my configuration, since I can reproduce the issue anywhere in our infrastructure.

Thanks, so at least we know now that the harvester part is working and the problem is somewhere in the connection from Filebeat => Logstash. A couple of requests:

  • Could you post your complete filebeat.yml (with any sensitive information redacted) here?
  • Could you run Filebeat with logging.level: debug and post the first ~40 seconds of Filebeat logs here?

Thanks,

Shaunak

Hi,

Here's the complete filebeat.yml

###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.

# ============================== Filebeat inputs ===============================

filebeat.config.inputs:
  enabled: true
  path: /etc/filebeat/conf.d/*.conf

#filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

#- type: log

  # Change to true to enable this input configuration.
 # enabled: false

  # Paths that should be crawled and fetched. Glob based paths.
  #paths:
   # - /var/log/*.log
    #- c:\programdata\elasticsearch\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

  ### Multiline options

  # Multiline can be used for log messages spanning multiple lines. This is common
  # for Java Stack Traces or C-Line Continuation

  # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
  #multiline.pattern: ^\[

  # Defines if the pattern set under pattern should be negated or not. Default is false.
  #multiline.negate: false

  # Match can be set to "after" or "before". It is used to define if lines should be append to a pattern
  # that was (not) matched before or after or as long as a pattern is not matched based on negate.
  # Note: After is the equivalent to previous and before is the equivalent to to next in Logstash
  #multiline.match: after

# filestream is an experimental input. It is going to replace log input in the future.
#- type: filestream

  # Change to true to enable this input configuration.
 # enabled: false

  # Paths that should be crawled and fetched. Glob based paths.
 # paths:
  #  - /var/log/*.log
    #- c:\programdata\elasticsearch\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #prospector.scanner.exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

# ============================== Filebeat modules ==============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

# ======================= Elasticsearch template setting =======================

#setup.template.settings:
 # index.number_of_shards: 3
  #index.codec: best_compression
  #_source.enabled: false


# ================================== General ===================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging

# ================================= Dashboards =================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

# =================================== Kibana ===================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
#setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify and additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

# =============================== Elastic Cloud ================================

# These settings simplify using Filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

# ================================== Outputs ===================================

# Configure what output to use when sending the data collected by the beat.

# ---------------------------- Elasticsearch Output ----------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
 # hosts: ["localhost:9200"]

  # Protocol - either `http` (default) or `https`.
  #protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  #username: "elastic"
  #password: "changeme"

# ------------------------------ Logstash Output -------------------------------
output.logstash:
  # The Logstash hosts
   hosts: ["ne1-sc-logstash01:5044", "ne1-sc-logstash02:5044"]
   loadbalance: true

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

# ------------------------------ Console Output ------------------------------
#output.console:
 # pretty: true

# ================================= Processors =================================
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

# ================================== Logging ===================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]

# ============================= X-Pack Monitoring ==============================
# Filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#monitoring.enabled: false

# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
# Filebeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
#monitoring.cluster_uuid:

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well.
# Note that the settings should point to your Elasticsearch *monitoring* cluster.
# Any setting that is not set is automatically inherited from the Elasticsearch
# output configuration, so if you have the Elasticsearch output configured such
# that it is pointing to your Elasticsearch monitoring cluster, you can simply
# uncomment the following line.
#monitoring.elasticsearch:

# ============================== Instrumentation ===============================

# Instrumentation support for the filebeat.
#instrumentation:
    # Set to true to enable instrumentation of filebeat.
    #enabled: false

    # Environment in which filebeat is running on (eg: staging, production, etc.)
    #environment: ""

    # APM Server hosts to report instrumentation results to.
    #hosts:
    #  - http://localhost:8200

    # API Key for the APM Server(s).
    # If api_key is set then secret_token will be ignored.
    #api_key:

    # Secret token for the APM Server(s).
    #secret_token:


# ================================= Migration ==================================

# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true
logging.metrics.enabled: false

In /etc/filebeat/conf.d/dummy.conf

[root@redacted conf.d]# cat dummy.conf

- type: log
  paths:
    - /opt/perforce/logfiles/clelievre/p4logs.log
  scan_frequency: 10s
  close_removed: true
  fields_under_root: true
  fields:
    ubi_team: "GNS Production/Source Control"
    p4_instance: redacted
    p4_region: ncsa
    p4_type: standard
    log_type: p4d.log

Running Filebeat in debug mode -

2020-11-24T13:20:28.936-0500	INFO	[publisher_pipeline_output]	pipeline/output.go:143	Connecting to backoff(async(tcp://ne1-sc-logstash02:5044))
2020-11-24T13:20:28.936-0500	DEBUG	[logstash]	logstash/async.go:120	connect
2020-11-24T13:20:28.936-0500	INFO	[publisher]	pipeline/retry.go:219	retryer: send unwait signal to consumer
2020-11-24T13:20:28.936-0500	INFO	[publisher]	pipeline/retry.go:223	  done
2020-11-24T13:20:28.936-0500	INFO	[publisher_pipeline_output]	pipeline/output.go:143	Connecting to backoff(async(tcp://ne1-sc-logstash01:5044))
2020-11-24T13:20:28.936-0500	DEBUG	[logstash]	logstash/async.go:120	connect
2020-11-24T13:20:28.936-0500	INFO	[publisher]	pipeline/retry.go:219	retryer: send unwait signal to consumer
2020-11-24T13:20:28.936-0500	INFO	[publisher]	pipeline/retry.go:223	  done
2020-11-24T13:20:28.947-0500	DEBUG	[harvester]	log/log.go:107	End of file reached: /opt/perforce/logfiles/clelievre/p4logs.log; Backoff now.
2020-11-24T13:20:28.947-0500	INFO	[publisher_pipeline_output]	pipeline/output.go:151	Connection to backoff(async(tcp://ne1-sc-logstash02:5044)) established
2020-11-24T13:20:28.948-0500	INFO	[publisher_pipeline_output]	pipeline/output.go:151	Connection to backoff(async(tcp://ne1-sc-logstash01:5044)) established
2020-11-24T13:20:28.949-0500	DEBUG	[logstash]	logstash/async.go:172	9 events out of 9 events sent to logstash host ne1-sc-logstash02:5044. Continue sending
2020-11-24T13:20:28.954-0500	DEBUG	[publisher]	memqueue/ackloop.go:160	ackloop: receive ack [0: 0, 9]
2020-11-24T13:20:28.954-0500	DEBUG	[publisher]	memqueue/eventloop.go:535	broker ACK events: count=9, start-seq=1, end-seq=9

2020-11-24T13:20:28.954-0500	DEBUG	[acker]	beater/acker.go:59	stateful ack	{"count": 10}
2020-11-24T13:20:28.954-0500	DEBUG	[publisher]	memqueue/ackloop.go:128	ackloop: return ack to broker loop:9
2020-11-24T13:20:28.954-0500	DEBUG	[publisher]	memqueue/ackloop.go:131	ackloop:  done send ack
2020-11-24T13:20:28.955-0500	DEBUG	[registrar]	registrar/registrar.go:264	Processing 10 events
2020-11-24T13:20:28.955-0500	DEBUG	[registrar]	registrar/registrar.go:231	Registrar state updates processed. Count: 10
2020-11-24T13:20:28.955-0500	DEBUG	[registrar]	registrar/registrar.go:201	Registry file updated. 2 active states.
2020-11-24T13:20:30.947-0500	DEBUG	[harvester]	log/log.go:107	End of file reached: /opt/perforce/logfiles/clelievre/p4logs.log; Backoff now.
2020-11-24T13:20:32.938-0500	DEBUG	[reader_multiline]	multiline/pattern.go:170	Multiline event flushed because timeout reached.
2020-11-24T13:20:32.939-0500	DEBUG	[processors]	processing/processors.go:203	Publish event: {
  "@timestamp": "2020-11-24T18:20:27.937Z",
  "@metadata": {
    "beat": "filebeat",
    "type": "_doc",
    "version": "7.10.0"
  },
  "message": "\t2020/11/20 03:31:45 pid 7300 completed .079s 72+6us 0+0io 0+0net 7932k 0pf",
  "p4_type": "standard",
  "input": {
    "type": "log"
  },
  "log_type": "p4d.log",
  "ubi_team": "GNS Production/Source Control",
  "p4_instance": "hidden",
  "ecs": {
    "version": "1.6.0"
  },
  "log": {
    "file": {
      "path": "/opt/perforce/logfiles/clelievre/p4logs.log"
    },
    "offset": 3939
  },
  "p4_region": "ncsa",
  "host": {
    "name": "hidden",
    "ip": [
      "
      "hidden",
      "hidden"
    ],
    "mac": [
      "14:02:ec:80:7c:c8",
      "14:02:ec:80:7c:c8",
      "94:18:82:00:d4:00",
      "94:18:82:00:d4:01",
      "94:18:82:00:d4:02",
      "94:18:82:00:d4:03",
      "14:02:ec:80:7c:c8",
      "14:02:ec:80:7c:c8",
      "14:02:ec:80:7c:c8"
    ],
    "hostname": "hidden",
    "architecture": "x86_64",
    "os": {
      "family": "redhat",
      "name": "CentOS",
      "kernel": "2.6.32-642.11.1.el6.x86_64",
      "codename": "Final",
      "platform": "centos",
      "version": "6.8 (Final)"
    },
    "containerized": false
  },
  "agent": {
    "name": "hidden",
    "type": "filebeat",
    "version": "7.10.0",
    "hostname": "hidden",
    "ephemeral_id": "51517117-140f-4eb0-9ced-060559e4a054",
    "id": "1f17d24e-aa49-4461-a1c4-1410ab10451e"
  }
}
2020-11-24T13:20:33.941-0500	DEBUG	[logstash]	logstash/async.go:172	1 events out of 1 events sent to logstash host ne1-sc-logstash01:5044. Continue sending
2020-11-24T13:20:33.943-0500	DEBUG	[publisher]	memqueue/ackloop.go:160	ackloop: receive ack [1: 0, 1]
2020-11-24T13:20:33.943-0500	DEBUG	[publisher]	memqueue/eventloop.go:535	broker ACK events: count=1, start-seq=10, end-seq=10

2020-11-24T13:20:33.943-0500	DEBUG	[acker]	beater/acker.go:59	stateful ack	{"count": 1}
2020-11-24T13:20:33.943-0500	DEBUG	[publisher]	memqueue/ackloop.go:128	ackloop: return ack to broker loop:1
2020-11-24T13:20:33.943-0500	DEBUG	[publisher]	memqueue/ackloop.go:131	ackloop:  done send ack
2020-11-24T13:20:33.943-0500	DEBUG	[registrar]	registrar/registrar.go:264	Processing 1 events
2020-11-24T13:20:33.943-0500	DEBUG	[registrar]	registrar/registrar.go:231	Registrar state updates processed. Count: 1
2020-11-24T13:20:33.943-0500	DEBUG	[registrar]	registrar/registrar.go:201	Registry file updated. 2 active states.
2020-11-24T13:20:34.947-0500	DEBUG	[harvester]	log/log.go:107	End of file reached: /opt/perforce/logfiles/clelievre/p4logs.log; Backoff now.
2020-11-24T13:20:37.934-0500	DEBUG	[input]	input/input.go:139	Run input
2020-11-24T13:20:37.934-0500	DEBUG	[input]	log/input.go:205	Start next scan
2020-11-24T13:20:37.934-0500	DEBUG	[input]	log/input.go:439	Check file for harvesting: /opt/perforce/logfiles/clelievre/p4logs.log
2020-11-24T13:20:37.934-0500	DEBUG	[input]	log/input.go:530	Update existing file for harvesting: /opt/perforce/logfiles/clelievre/p4logs.log, offset: 4015
2020-11-24T13:20:37.934-0500	DEBUG	[input]	log/input.go:582	Harvester for file is still running: /opt/perforce/logfiles/clelievre/p4logs.log
2020-11-24T13:20:37.935-0500	DEBUG	[input]	log/input.go:226	input states cleaned up. Before: 1, After: 1, Pending: 0
2020-11-24T13:20:42.948-0500	DEBUG	[harvester]	log/log.go:107	End of file reached: /opt/perforce/logfiles/clelievre/p4logs.log; Backoff now.
2020-11-24T13:20:47.935-0500	DEBUG	[input]	input/input.go:139	Run input
2020-11-24T13:20:47.935-0500	DEBUG	[input]	log/input.go:205	Start next scan
2020-11-24T13:20:47.935-0500	DEBUG	[input]	log/input.go:439	Check file for harvesting: /opt/perforce/logfiles/clelievre/p4logs.log
2020-11-24T13:20:47.935-0500	DEBUG	[input]	log/input.go:530	Update existing file for harvesting: /opt/perforce/logfiles/clelievre/p4logs.log, offset: 4015
2020-11-24T13:20:47.935-0500	DEBUG	[input]	log/input.go:582	Harvester for file is still running: /opt/perforce/logfiles/clelievre/p4logs.log
2020-11-24T13:20:47.935-0500	DEBUG	[input]	log/input.go:226	input states cleaned up. Before: 1, After: 1, Pending: 0
2020-11-24T13:20:52.948-0500	DEBUG	[harvester]	log/log.go:107	End of file reached: /opt/perforce/logfiles/clelievre/p4logs.log; Backoff now.
2020-11-24T13:20:57.936-0500	DEBUG	[input]	input/input.go:139	Run input
2020-11-24T13:20:57.936-0500	DEBUG	[input]	log/input.go:205	Start next scan
2020-11-24T13:20:57.936-0500	DEBUG	[input]	log/input.go:439	Check file for harvesting: /opt/perforce/logfiles/clelievre/p4logs.log
2020-11-24T13:20:57.936-0500	DEBUG	[input]	log/input.go:530	Update existing file for harvesting: /opt/perforce/logfiles/clelievre/p4logs.log, offset: 4015
2020-11-24T13:20:57.936-0500	DEBUG	[input]	log/input.go:582	Harvester for file is still running: /opt/perforce/logfiles/clelievre/p4logs.log
2020-11-24T13:20:57.936-0500	DEBUG	[input]	log/input.go:226	input states cleaned up. Before: 1, After: 1, Pending: 0
2020-11-24T13:21:02.948-0500	DEBUG	[harvester]	log/log.go:107	End of file reached: /opt/perforce/logfiles/clelievre/p4logs.log; Backoff now.

OK, so it seems it is harvesting, but I didn't get a single event on my Logstash servers -

[root@ne1-sc-logstash01 logstash]# tail -f logstash-plain.log
[2020-11-23T17:07:37,551][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"7.10.0", "jruby.version"=>"jruby 9.2.13.0 (2.5.7) 2020-08-03 9a89c94bcc OpenJDK 64-Bit Server VM 11.0.8+10 on 11.0.8+10 +indy +jit [linux-x86_64]"}
[2020-11-23T17:07:40,984][INFO ][org.reflections.Reflections] Reflections took 64 ms to scan 1 urls, producing 23 keys and 47 values
[2020-11-23T17:07:42,311][INFO ][logstash.javapipeline    ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, "pipeline.sources"=>["/etc/logstash/conf.d/p4.conf"], :thread=>"#<Thread:0x1d9ee67c run>"}
[2020-11-23T17:07:43,822][INFO ][logstash.javapipeline    ][main] Pipeline Java execution initialization time {"seconds"=>1.49}
[2020-11-23T17:07:43,841][INFO ][logstash.inputs.beats    ][main] Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
[2020-11-23T17:07:44,029][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}
[2020-11-23T17:07:44,052][INFO ][logstash.inputs.tcp      ][main][b342761d3c7a732b59e1e85e074681987a170e4e5f6171a4e6b84cf1dbac0eee] Starting tcp input listener {:address=>"0.0.0.0:5000", :ssl_enable=>"false"}
[2020-11-23T17:07:44,096][INFO ][org.logstash.beats.Server][main][4cad642de50c3ea6b2fa7c14848f0d0af4b748f9bc1bd17fd7dc92e9412a7e40] Starting server on port: 5044
[2020-11-23T17:07:44,134][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2020-11-23T17:07:44,332][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
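To rule out the Beats input silently dropping events, I might also add a temporary stdout output to the pipeline (a sketch against my /etc/logstash/conf.d/p4.conf; the existing outputs stay in place):

```
output {
  stdout { codec => rubydebug }
}
```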

I'll check if I can enable some kind of debug mode on the Logstash side.
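For reference, debug logging can be turned on in logstash.yml (a sketch, assuming the default /etc/logstash/logstash.yml location; the same can be done per run with bin/logstash --log.level=debug):

```yaml
# /etc/logstash/logstash.yml
log.level: debug
```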

I've enabled debug on the logstash side.

Everything starts smoothly except for these Java stack traces -

[2020-11-24T19:02:44,745][DEBUG][io.netty.util.internal.PlatformDependent0][main] direct buffer constructor: unavailable
java.lang.UnsupportedOperationException: Reflective setAccessible(true) disabled
	at io.netty.util.internal.ReflectionUtil.trySetAccessible(ReflectionUtil.java:31) ~[logstash-input-tcp-6.0.6.jar:?]
	at io.netty.util.internal.PlatformDependent0$4.run(PlatformDependent0.java:233) ~[logstash-input-tcp-6.0.6.jar:?]
	at java.security.AccessController.doPrivileged(Native Method) ~[?:?]
	at io.netty.util.internal.PlatformDependent0.<clinit>(PlatformDependent0.java:227) [logstash-input-tcp-6.0.6.jar:?]
	at io.netty.util.internal.PlatformDependent.isAndroid(PlatformDependent.java:289) [logstash-input-tcp-6.0.6.jar:?]
	at io.netty.util.internal.PlatformDependent.<clinit>(PlatformDependent.java:92) [logstash-input-tcp-6.0.6.jar:?]
	at io.netty.channel.nio.NioEventLoop.newTaskQueue0(NioEventLoop.java:279) [logstash-input-tcp-6.0.6.jar:?]
	at io.netty.channel.nio.NioEventLoop.newTaskQueue(NioEventLoop.java:150) [logstash-input-tcp-6.0.6.jar:?]
	at io.netty.channel.nio.NioEventLoop.<init>(NioEventLoop.java:138) [logstash-input-tcp-6.0.6.jar:?]
	at io.netty.channel.nio.NioEventLoopGroup.newChild(NioEventLoopGroup.java:146) [logstash-input-tcp-6.0.6.jar:?]
	at io.netty.channel.nio.NioEventLoopGroup.newChild(NioEventLoopGroup.java:37) [logstash-input-tcp-6.0.6.jar:?]
	at io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:84) [logstash-input-tcp-6.0.6.jar:?]
	at io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:58) [logstash-input-tcp-6.0.6.jar:?]
	at io.netty.channel.MultithreadEventLoopGroup.<init>(MultithreadEventLoopGroup.java:52) [logstash-input-tcp-6.0.6.jar:?]
	at io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:96) [logstash-input-tcp-6.0.6.jar:?]
	at io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:91) [logstash-input-tcp-6.0.6.jar:?]
	at io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:72) [logstash-input-tcp-6.0.6.jar:?]
	at io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:52) [logstash-input-tcp-6.0.6.jar:?]
	at io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:44) [logstash-input-tcp-6.0.6.jar:?]
	at org.logstash.tcp.InputLoop.<init>(InputLoop.java:75) [logstash-input-tcp-6.0.6.jar:?]
	at jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[?:?]
	at jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) [?:?]
	at jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) [?:?]
	at java.lang.reflect.Constructor.newInstance(Constructor.java:490) [?:?]
	at org.jruby.javasupport.JavaConstructor.newInstanceDirect(JavaConstructor.java:253) [jruby-complete-9.2.13.0.jar:?]
	at org.jruby.java.invokers.ConstructorInvoker.call(ConstructorInvoker.java:62) [jruby-complete-9.2.13.0.jar:?]
	at org.jruby.java.invokers.ConstructorInvoker.call(ConstructorInvoker.java:140) [jruby-complete-9.2.13.0.jar:?]
	at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:332) [jruby-complete-9.2.13.0.jar:?]
	at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:86) [jruby-complete-9.2.13.0.jar:?]
	at org.jruby.java.proxies.ConcreteJavaProxy$InitializeMethod.call(ConcreteJavaProxy.java:48) [jruby-complete-9.2.13.0.jar:?]
	at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:332) [jruby-complete-9.2.13.0.jar:?]
	at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:86) [jruby-complete-9.2.13.0.jar:?]
	at org.jruby.RubyClass.newInstance(RubyClass.java:939) [jruby-complete-9.2.13.0.jar:?]
	at org.jruby.RubyClass$INVOKER$i$newInstance.call(RubyClass$INVOKER$i$newInstance.gen) [jruby-complete-9.2.13.0.jar:?]
	at org.jruby.java.proxies.ConcreteJavaProxy$NewMethod.call(ConcreteJavaProxy.java:109) [jruby-complete-9.2.13.0.jar:?]
	at org.jruby.ir.targets.InvokeSite.invoke(InvokeSite.java:207) [jruby-complete-9.2.13.0.jar:?]
	at usr.share.logstash.vendor.bundle.jruby.$2_dot_5_dot_0.gems.logstash_minus_input_minus_tcp_minus_6_dot_0_dot_6_minus_java.lib.logstash.inputs.tcp.RUBY$method$register$0(/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-input-tcp-6.0.6-java/lib/logstash/inputs/tcp.rb:154) [jruby-complete-9.2.13.0.jar:?]
	at usr.share.logstash.vendor.bundle.jruby.$2_dot_5_dot_0.gems.logstash_minus_input_minus_tcp_minus_6_dot_0_dot_6_minus_java.lib.logstash.inputs.tcp.RUBY$method$register$0$__VARARGS__(/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-input-tcp-6.0.6-java/lib/logstash/inputs/tcp.rb) [jruby-complete-9.2.13.0.jar:?]
	at org.jruby.internal.runtime.methods.CompiledIRMethod.call(CompiledIRMethod.java:80) [jruby-complete-9.2.13.0.jar:?]
	at org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:70) [jruby-complete-9.2.13.0.jar:?]
	at org.jruby.ir.targets.InvokeSite.invoke(InvokeSite.java:207) [jruby-complete-9.2.13.0.jar:?]
	at usr.share.logstash.logstash_minus_core.lib.logstash.java_pipeline.RUBY$block$register_plugins$1(/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:228) [jruby-complete-9.2.13.0.jar:?]
	at org.jruby.runtime.CompiledIRBlockBody.yieldDirect(CompiledIRBlockBody.java:148) [jruby-complete-9.2.13.0.jar:?]
	at org.jruby.runtime.BlockBody.yield(BlockBody.java:106) [jruby-complete-9.2.13.0.jar:?]
	at org.jruby.runtime.Block.yield(Block.java:184) [jruby-complete-9.2.13.0.jar:?]
	at org.jruby.RubyArray.each(RubyArray.java:1809) [jruby-complete-9.2.13.0.jar:?]
	at usr.share.logstash.logstash_minus_core.lib.logstash.java_pipeline.RUBY$method$register_plugins$0(/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:227) [jruby-complete-9.2.13.0.jar:?]
	at usr.share.logstash.logstash_minus_core.lib.logstash.java_pipeline.RUBY$method$register_plugins$0$__VARARGS__(/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb) [jruby-complete-9.2.13.0.jar:?]
	at org.jruby.internal.runtime.methods.CompiledIRMethod.call(CompiledIRMethod.java:80) [jruby-complete-9.2.13.0.jar:?]
	at org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:70) [jruby-complete-9.2.13.0.jar:?]
	at org.jruby.ir.targets.InvokeSite.invoke(InvokeSite.java:207) [jruby-complete-9.2.13.0.jar:?]
	at usr.share.logstash.logstash_minus_core.lib.logstash.java_pipeline.RUBY$method$start_inputs$0(/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:386) [jruby-complete-9.2.13.0.jar:?]
	at usr.share.logstash.logstash_minus_core.lib.logstash.java_pipeline.RUBY$method$start_inputs$0$__VARARGS__(/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb) [jruby-complete-9.2.13.0.jar:?]
	at org.jruby.internal.runtime.methods.CompiledIRMethod.call(CompiledIRMethod.java:80) [jruby-complete-9.2.13.0.jar:?]
	at org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:70) [jruby-complete-9.2.13.0.jar:?]
	at org.jruby.ir.targets.InvokeSite.invoke(InvokeSite.java:207) [jruby-complete-9.2.13.0.jar:?]
	at usr.share.logstash.logstash_minus_core.lib.logstash.java_pipeline.RUBY$method$start_workers$0(/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:311) [jruby-complete-9.2.13.0.jar:?]
	at usr.share.logstash.logstash_minus_core.lib.logstash.java_pipeline.RUBY$method$start_workers$0$__VARARGS__(/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb) [jruby-complete-9.2.13.0.jar:?]
	at org.jruby.internal.runtime.methods.CompiledIRMethod.call(CompiledIRMethod.java:80) [jruby-complete-9.2.13.0.jar:?]
	at org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:70) [jruby-complete-9.2.13.0.jar:?]
	at org.jruby.ir.targets.InvokeSite.invoke(InvokeSite.java:207) [jruby-complete-9.2.13.0.jar:?]
	at usr.share.logstash.logstash_minus_core.lib.logstash.java_pipeline.RUBY$method$run$0(/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:185) [jruby-complete-9.2.13.0.jar:?]
	at usr.share.logstash.logstash_minus_core.lib.logstash.java_pipeline.RUBY$method$run$0$__VARARGS__(/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb) [jruby-complete-9.2.13.0.jar:?]
	at org.jruby.internal.runtime.methods.CompiledIRMethod.call(CompiledIRMethod.java:80) [jruby-complete-9.2.13.0.jar:?]
	at org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:70) [jruby-complete-9.2.13.0.jar:?]
	at org.jruby.ir.targets.InvokeSite.invoke(InvokeSite.java:207) [jruby-complete-9.2.13.0.jar:?]
	at usr.share.logstash.logstash_minus_core.lib.logstash.java_pipeline.RUBY$block$start$1(/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:137) [jruby-complete-9.2.13.0.jar:?]
	at org.jruby.runtime.CompiledIRBlockBody.callDirect(CompiledIRBlockBody.java:138) [jruby-complete-9.2.13.0.jar:?]
	at org.jruby.runtime.IRBlockBody.call(IRBlockBody.java:58) [jruby-complete-9.2.13.0.jar:?]
	at org.jruby.runtime.IRBlockBody.call(IRBlockBody.java:52) [jruby-complete-9.2.13.0.jar:?]
	at org.jruby.runtime.Block.call(Block.java:139) [jruby-complete-9.2.13.0.jar:?]
	at org.jruby.RubyProc.call(RubyProc.java:318) [jruby-complete-9.2.13.0.jar:?]
	at org.jruby.internal.runtime.RubyRunnable.run(RubyRunnable.java:105) [jruby-complete-9.2.13.0.jar:?]
	at java.lang.Thread.run(Thread.java:834) [?:?]
[2020-11-24T19:02:44,750][DEBUG][io.netty.util.internal.PlatformDependent0][main] java.nio.Bits.unaligned: available, true
[2020-11-24T19:02:44,751][DEBUG][io.netty.util.internal.PlatformDependent0][main] jdk.internal.misc.Unsafe.allocateUninitializedArray(int): unavailable
java.lang.IllegalAccessException: class io.netty.util.internal.PlatformDependent0$6 cannot access class jdk.internal.misc.Unsafe (in module java.base) because module java.base does not export jdk.internal.misc to unnamed module @547930a
	at jdk.internal.reflect.Reflection.newIllegalAccessException(Reflection.java:361) ~[?:?]
	at java.lang.reflect.AccessibleObject.checkAccess(AccessibleObject.java:591) ~[?:?]
	at java.lang.reflect.Method.invoke(Method.java:558) ~[?:?]
	at io.netty.util.internal.PlatformDependent0$6.run(PlatformDependent0.java:347) ~[logstash-input-tcp-6.0.6.jar:?]
	at java.security.AccessController.doPrivileged(Native Method) ~[?:?]
	at io.netty.util.internal.PlatformDependent0.<clinit>(PlatformDependent0.java:338) [logstash-input-tcp-6.0.6.jar:?]
	at io.netty.util.internal.PlatformDependent.isAndroid(PlatformDependent.java:289) [logstash-input-tcp-6.0.6.jar:?]
	at io.netty.util.internal.PlatformDependent.<clinit>(PlatformDependent.java:92) [logstash-input-tcp-6.0.6.jar:?]
	at io.netty.channel.nio.NioEventLoop.newTaskQueue0(NioEventLoop.java:279) [logstash-input-tcp-6.0.6.jar:?]
	at io.netty.channel.nio.NioEventLoop.newTaskQueue(NioEventLoop.java:150) [logstash-input-tcp-6.0.6.jar:?]
	at io.netty.channel.nio.NioEventLoop.<init>(NioEventLoop.java:138) [logstash-input-tcp-6.0.6.jar:?]
	at io.netty.channel.nio.NioEventLoopGroup.newChild(NioEventLoopGroup.java:146) [logstash-input-tcp-6.0.6.jar:?]
	at io.netty.channel.nio.NioEventLoopGroup.newChild(NioEventLoopGroup.java:37) [logstash-input-tcp-6.0.6.jar:?]
	at io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:84) [logstash-input-tcp-6.0.6.jar:?]
	at io.netty.util.concurrent.MultithreadEventExecutorGroup.<init>(MultithreadEventExecutorGroup.java:58) [logstash-input-tcp-6.0.6.jar:?]
	at io.netty.channel.MultithreadEventLoopGroup.<init>(MultithreadEventLoopGroup.java:52) [logstash-input-tcp-6.0.6.jar:?]
	at io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:96) [logstash-input-tcp-6.0.6.jar:?]
	at io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:91) [logstash-input-tcp-6.0.6.jar:?]
	at io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:72) [logstash-input-tcp-6.0.6.jar:?]
	at io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:52) [logstash-input-tcp-6.0.6.jar:?]
	at io.netty.channel.nio.NioEventLoopGroup.<init>(NioEventLoopGroup.java:44) [logstash-input-tcp-6.0.6.jar:?]
	at org.logstash.tcp.InputLoop.<init>(InputLoop.java:75) [logstash-input-tcp-6.0.6.jar:?]
	at jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[?:?]
	at jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) [?:?]
	at jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) [?:?]
	at java.lang.reflect.Constructor.newInstance(Constructor.java:490) [?:?]
	at org.jruby.javasupport.JavaConstructor.newInstanceDirect(JavaConstructor.java:253) [jruby-complete-9.2.13.0.jar:?]
	at org.jruby.java.invokers.ConstructorInvoker.call(ConstructorInvoker.java:62) [jruby-complete-9.2.13.0.jar:?]
	at org.jruby.java.invokers.ConstructorInvoker.call(ConstructorInvoker.java:140) [jruby-complete-9.2.13.0.jar:?]
	at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:332) [jruby-complete-9.2.13.0.jar:?]
	at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:86) [jruby-complete-9.2.13.0.jar:?]
	at org.jruby.java.proxies.ConcreteJavaProxy$InitializeMethod.call(ConcreteJavaProxy.java:48) [jruby-complete-9.2.13.0.jar:?]
	at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:332) [jruby-complete-9.2.13.0.jar:?]
	at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:86) [jruby-complete-9.2.13.0.jar:?]
	at org.jruby.RubyClass.newInstance(RubyClass.java:939) [jruby-complete-9.2.13.0.jar:?]
	at org.jruby.RubyClass$INVOKER$i$newInstance.call(RubyClass$INVOKER$i$newInstance.gen) [jruby-complete-9.2.13.0.jar:?]
	at org.jruby.java.proxies.ConcreteJavaProxy$NewMethod.call(ConcreteJavaProxy.java:109) [jruby-complete-9.2.13.0.jar:?]
	at org.jruby.ir.targets.InvokeSite.invoke(InvokeSite.java:207) [jruby-complete-9.2.13.0.jar:?]
	at usr.share.logstash.vendor.bundle.jruby.$2_dot_5_dot_0.gems.logstash_minus_input_minus_tcp_minus_6_dot_0_dot_6_minus_java.lib.logstash.inputs.tcp.RUBY$method$register$0(/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-input-tcp-6.0.6-java/lib/logstash/inputs/tcp.rb:154) [jruby-complete-9.2.13.0.jar:?]
	at usr.share.logstash.vendor.bundle.jruby.$2_dot_5_dot_0.gems.logstash_minus_input_minus_tcp_minus_6_dot_0_dot_6_minus_java.lib.logstash.inputs.tcp.RUBY$method$register$0$__VARARGS__(/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-input-tcp-6.0.6-java/lib/logstash/inputs/tcp.rb) [jruby-complete-9.2.13.0.jar:?]
	at org.jruby.internal.runtime.methods.CompiledIRMethod.call(CompiledIRMethod.java:80) [jruby-complete-9.2.13.0.jar:?]
	at org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:70) [jruby-complete-9.2.13.0.jar:?]
	at org.jruby.ir.targets.InvokeSite.invoke(InvokeSite.java:207) [jruby-complete-9.2.13.0.jar:?]
	at usr.share.logstash.logstash_minus_core.lib.logstash.java_pipeline.RUBY$block$register_plugins$1(/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:228) [jruby-complete-9.2.13.0.jar:?]
	at org.jruby.runtime.CompiledIRBlockBody.yieldDirect(CompiledIRBlockBody.java:148) [jruby-complete-9.2.13.0.jar:?]
	at org.jruby.runtime.BlockBody.yield(BlockBody.java:106) [jruby-complete-9.2.13.0.jar:?]
	at org.jruby.runtime.Block.yield(Block.java:184) [jruby-complete-9.2.13.0.jar:?]
	at org.jruby.RubyArray.each(RubyArray.java:1809) [jruby-complete-9.2.13.0.jar:?]
	at usr.share.logstash.logstash_minus_core.lib.logstash.java_pipeline.RUBY$method$register_plugins$0(/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:227) [jruby-complete-9.2.13.0.jar:?]
	at usr.share.logstash.logstash_minus_core.lib.logstash.java_pipeline.RUBY$method$register_plugins$0$__VARARGS__(/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb) [jruby-complete-9.2.13.0.jar:?]
	at org.jruby.internal.runtime.methods.CompiledIRMethod.call(CompiledIRMethod.java:80) [jruby-complete-9.2.13.0.jar:?]
	at org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:70) [jruby-complete-9.2.13.0.jar:?]
	at org.jruby.ir.targets.InvokeSite.invoke(InvokeSite.java:207) [jruby-complete-9.2.13.0.jar:?]
	at usr.share.logstash.logstash_minus_core.lib.logstash.java_pipeline.RUBY$method$start_inputs$0(/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:386) [jruby-complete-9.2.13.0.jar:?]
	at usr.share.logstash.logstash_minus_core.lib.logstash.java_pipeline.RUBY$method$start_inputs$0$__VARARGS__(/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb) [jruby-complete-9.2.13.0.jar:?]
	at org.jruby.internal.runtime.methods.CompiledIRMethod.call(CompiledIRMethod.java:80) [jruby-complete-9.2.13.0.jar:?]
	at org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:70) [jruby-complete-9.2.13.0.jar:?]
	at org.jruby.ir.targets.InvokeSite.invoke(InvokeSite.java:207) [jruby-complete-9.2.13.0.jar:?]
	at usr.share.logstash.logstash_minus_core.lib.logstash.java_pipeline.RUBY$method$start_workers$0(/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:311) [jruby-complete-9.2.13.0.jar:?]
	at usr.share.logstash.logstash_minus_core.lib.logstash.java_pipeline.RUBY$method$start_workers$0$__VARARGS__(/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb) [jruby-complete-9.2.13.0.jar:?]
	at org.jruby.internal.runtime.methods.CompiledIRMethod.call(CompiledIRMethod.java:80) [jruby-complete-9.2.13.0.jar:?]
	at org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:70) [jruby-complete-9.2.13.0.jar:?]
	at org.jruby.ir.targets.InvokeSite.invoke(InvokeSite.java:207) [jruby-complete-9.2.13.0.jar:?]
	at usr.share.logstash.logstash_minus_core.lib.logstash.java_pipeline.RUBY$method$run$0(/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:185) [jruby-complete-9.2.13.0.jar:?]
	at usr.share.logstash.logstash_minus_core.lib.logstash.java_pipeline.RUBY$method$run$0$__VARARGS__(/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb) [jruby-complete-9.2.13.0.jar:?]
	at org.jruby.internal.runtime.methods.CompiledIRMethod.call(CompiledIRMethod.java:80) [jruby-complete-9.2.13.0.jar:?]
	at org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:70) [jruby-complete-9.2.13.0.jar:?]
	at org.jruby.ir.targets.InvokeSite.invoke(InvokeSite.java:207) [jruby-complete-9.2.13.0.jar:?]
	at usr.share.logstash.logstash_minus_core.lib.logstash.java_pipeline.RUBY$block$start$1(/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:137) [jruby-complete-9.2.13.0.jar:?]
	at org.jruby.runtime.CompiledIRBlockBody.callDirect(CompiledIRBlockBody.java:138) [jruby-complete-9.2.13.0.jar:?]
	at org.jruby.runtime.IRBlockBody.call(IRBlockBody.java:58) [jruby-complete-9.2.13.0.jar:?]
	at org.jruby.runtime.IRBlockBody.call(IRBlockBody.java:52) [jruby-complete-9.2.13.0.jar:?]
	at org.jruby.runtime.Block.call(Block.java:139) [jruby-complete-9.2.13.0.jar:?]
	at org.jruby.RubyProc.call(RubyProc.java:318) [jruby-complete-9.2.13.0.jar:?]
	at org.jruby.internal.runtime.RubyRunnable.run(RubyRunnable.java:105) [jruby-complete-9.2.13.0.jar:?]
	at java.lang.Thread.run(Thread.java:834) [?:?]

Then the service comes up normally.

[2020-11-24T19:02:44,753][DEBUG][io.netty.util.internal.PlatformDependent0][main] java.nio.DirectByteBuffer.<init>(long, int): unavailable
[2020-11-24T19:02:44,753][DEBUG][io.netty.util.internal.PlatformDependent][main] sun.misc.Unsafe: available
[2020-11-24T19:02:44,754][DEBUG][io.netty.util.internal.PlatformDependent][main] maxDirectMemory: 1038876672 bytes (maybe)
[2020-11-24T19:02:44,754][DEBUG][io.netty.util.internal.PlatformDependent][main] -Dio.netty.tmpdir: /tmp (java.io.tmpdir)
[2020-11-24T19:02:44,754][DEBUG][io.netty.util.internal.PlatformDependent][main] -Dio.netty.bitMode: 64 (sun.arch.data.model)
[2020-11-24T19:02:44,755][DEBUG][io.netty.util.internal.PlatformDependent][main] -Dio.netty.maxDirectMemory: -1 bytes
[2020-11-24T19:02:44,756][DEBUG][io.netty.util.internal.PlatformDependent][main] -Dio.netty.uninitializedArrayAllocationThreshold: -1
[2020-11-24T19:02:44,757][DEBUG][io.netty.util.internal.CleanerJava9][main] java.nio.ByteBuffer.cleaner(): available
[2020-11-24T19:02:44,757][DEBUG][io.netty.util.internal.PlatformDependent][main] -Dio.netty.noPreferDirect: false
[2020-11-24T19:02:44,766][DEBUG][io.netty.util.internal.PlatformDependent][main] org.jctools-core.MpscChunkedArrayQueue: available
[2020-11-24T19:02:44,787][INFO ][logstash.inputs.beats    ][main] Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
[2020-11-24T19:02:44,803][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}
[2020-11-24T19:02:44,820][DEBUG][logstash.javapipeline    ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x1d731c0 run>"}
[2020-11-24T19:02:44,828][DEBUG][org.logstash.execution.PeriodicFlush][main] Pushing flush onto pipeline.
[2020-11-24T19:02:44,824][INFO ][logstash.inputs.tcp      ][main][fb8afc842ddea2ff3e44cc18200c019d4fcf45e47105ea6af2f072fcc457d366] Starting tcp input listener {:address=>"0.0.0.0:5000", :ssl_enable=>"false"}
[2020-11-24T19:02:44,840][INFO ][org.logstash.beats.Server][main][35c42130c99c31aff73fa51120a095dfd060783a7372a0e2963fffd747b0a77e] Starting server on port: 5044
[2020-11-24T19:02:44,860][DEBUG][io.netty.channel.DefaultChannelId][main][fb8afc842ddea2ff3e44cc18200c019d4fcf45e47105ea6af2f072fcc457d366] -Dio.netty.processId: 32688 (auto-detected)
[2020-11-24T19:02:44,862][DEBUG][io.netty.util.NetUtil    ][main][fb8afc842ddea2ff3e44cc18200c019d4fcf45e47105ea6af2f072fcc457d366] -Djava.net.preferIPv4Stack: false
[2020-11-24T19:02:44,863][DEBUG][io.netty.util.NetUtil    ][main][fb8afc842ddea2ff3e44cc18200c019d4fcf45e47105ea6af2f072fcc457d366] -Djava.net.preferIPv6Addresses: false
[2020-11-24T19:02:44,865][DEBUG][io.netty.util.NetUtil    ][main][fb8afc842ddea2ff3e44cc18200c019d4fcf45e47105ea6af2f072fcc457d366] Loopback interface: lo (lo, 127.0.0.1)
[2020-11-24T19:02:44,866][DEBUG][io.netty.util.NetUtil    ][main][fb8afc842ddea2ff3e44cc18200c019d4fcf45e47105ea6af2f072fcc457d366] /proc/sys/net/core/somaxconn: 128
[2020-11-24T19:02:44,867][DEBUG][io.netty.channel.DefaultChannelId][main][fb8afc842ddea2ff3e44cc18200c019d4fcf45e47105ea6af2f072fcc457d366] -Dio.netty.machineId: 00:50:56:ff:fe:b9:82:85 (auto-detected)
[2020-11-24T19:02:44,880][DEBUG][io.netty.util.ResourceLeakDetector][main][fb8afc842ddea2ff3e44cc18200c019d4fcf45e47105ea6af2f072fcc457d366] -Dio.netty.leakDetection.level: simple
[2020-11-24T19:02:44,881][DEBUG][io.netty.util.ResourceLeakDetector][main][fb8afc842ddea2ff3e44cc18200c019d4fcf45e47105ea6af2f072fcc457d366] -Dio.netty.leakDetection.targetRecords: 4
[2020-11-24T19:02:44,889][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2020-11-24T19:02:44,911][DEBUG][io.netty.buffer.PooledByteBufAllocator][main][fb8afc842ddea2ff3e44cc18200c019d4fcf45e47105ea6af2f072fcc457d366] -Dio.netty.allocator.numHeapArenas: 8
[2020-11-24T19:02:44,911][DEBUG][io.netty.buffer.PooledByteBufAllocator][main][fb8afc842ddea2ff3e44cc18200c019d4fcf45e47105ea6af2f072fcc457d366] -Dio.netty.allocator.numDirectArenas: 8
[2020-11-24T19:02:44,912][DEBUG][io.netty.buffer.PooledByteBufAllocator][main][fb8afc842ddea2ff3e44cc18200c019d4fcf45e47105ea6af2f072fcc457d366] -Dio.netty.allocator.pageSize: 8192
[2020-11-24T19:02:44,912][DEBUG][io.netty.buffer.PooledByteBufAllocator][main][fb8afc842ddea2ff3e44cc18200c019d4fcf45e47105ea6af2f072fcc457d366] -Dio.netty.allocator.maxOrder: 11
[2020-11-24T19:02:44,912][DEBUG][io.netty.buffer.PooledByteBufAllocator][main][fb8afc842ddea2ff3e44cc18200c019d4fcf45e47105ea6af2f072fcc457d366] -Dio.netty.allocator.chunkSize: 16777216
[2020-11-24T19:02:44,914][DEBUG][io.netty.buffer.PooledByteBufAllocator][main][fb8afc842ddea2ff3e44cc18200c019d4fcf45e47105ea6af2f072fcc457d366] -Dio.netty.allocator.tinyCacheSize: 512
[2020-11-24T19:02:44,914][DEBUG][io.netty.buffer.PooledByteBufAllocator][main][fb8afc842ddea2ff3e44cc18200c019d4fcf45e47105ea6af2f072fcc457d366] -Dio.netty.allocator.smallCacheSize: 256
[2020-11-24T19:02:44,914][DEBUG][io.netty.buffer.PooledByteBufAllocator][main][fb8afc842ddea2ff3e44cc18200c019d4fcf45e47105ea6af2f072fcc457d366] -Dio.netty.allocator.normalCacheSize: 64
[2020-11-24T19:02:44,915][DEBUG][io.netty.buffer.PooledByteBufAllocator][main][fb8afc842ddea2ff3e44cc18200c019d4fcf45e47105ea6af2f072fcc457d366] -Dio.netty.allocator.maxCachedBufferCapacity: 32768
[2020-11-24T19:02:44,915][DEBUG][io.netty.buffer.PooledByteBufAllocator][main][fb8afc842ddea2ff3e44cc18200c019d4fcf45e47105ea6af2f072fcc457d366] -Dio.netty.allocator.cacheTrimInterval: 8192
[2020-11-24T19:02:44,915][DEBUG][io.netty.buffer.PooledByteBufAllocator][main][fb8afc842ddea2ff3e44cc18200c019d4fcf45e47105ea6af2f072fcc457d366] -Dio.netty.allocator.cacheTrimIntervalMillis: 0
[2020-11-24T19:02:44,916][DEBUG][io.netty.buffer.PooledByteBufAllocator][main][fb8afc842ddea2ff3e44cc18200c019d4fcf45e47105ea6af2f072fcc457d366] -Dio.netty.allocator.useCacheForAllThreads: true
[2020-11-24T19:02:44,919][DEBUG][io.netty.buffer.PooledByteBufAllocator][main][fb8afc842ddea2ff3e44cc18200c019d4fcf45e47105ea6af2f072fcc457d366] -Dio.netty.allocator.maxCachedByteBuffersPerChunk: 1023
[2020-11-24T19:02:44,926][DEBUG][io.netty.buffer.ByteBufUtil][main][fb8afc842ddea2ff3e44cc18200c019d4fcf45e47105ea6af2f072fcc457d366] -Dio.netty.allocator.type: pooled
[2020-11-24T19:02:44,927][DEBUG][io.netty.buffer.ByteBufUtil][main][fb8afc842ddea2ff3e44cc18200c019d4fcf45e47105ea6af2f072fcc457d366] -Dio.netty.threadLocalDirectBufferSize: 0
[2020-11-24T19:02:44,927][DEBUG][io.netty.buffer.ByteBufUtil][main][fb8afc842ddea2ff3e44cc18200c019d4fcf45e47105ea6af2f072fcc457d366] -Dio.netty.maxThreadLocalCharBufferSize: 16384
[2020-11-24T19:02:44,924][DEBUG][logstash.agent           ] Starting puma
[2020-11-24T19:02:44,942][DEBUG][logstash.agent           ] Trying to start WebServer {:port=>9600}
[2020-11-24T19:02:44,968][DEBUG][logstash.api.service     ] [api-service] start
[2020-11-24T19:02:45,059][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2020-11-24T19:02:48,008][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2020-11-24T19:02:48,010][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[2020-11-24T19:02:49,823][DEBUG][org.logstash.execution.PeriodicFlush][main] Pushing flush onto pipeline.
[2020-11-24T19:02:53,020][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2020-11-24T19:02:53,022][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[2020-11-24T19:02:54,823][DEBUG][org.logstash.execution.PeriodicFlush][main] Pushing flush onto pipeline.
[2020-11-24T19:02:58,031][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2020-11-24T19:02:58,042][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[2020-11-24T19:02:59,823][DEBUG][org.logstash.execution.PeriodicFlush][main] Pushing flush onto pipeline.
[2020-11-24T19:03:03,051][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2020-11-24T19:03:03,053][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[2020-11-24T19:03:04,823][DEBUG][org.logstash.execution.PeriodicFlush][main] Pushing flush onto pipeline.
[2020-11-24T19:03:08,060][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2020-11-24T19:03:08,061][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[2020-11-24T19:03:09,823][DEBUG][org.logstash.execution.PeriodicFlush][main] Pushing flush onto pipeline.
[2020-11-24T19:03:13,069][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2020-11-24T19:03:13,070][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[2020-11-24T19:03:14,823][DEBUG][org.logstash.execution.PeriodicFlush][main] Pushing flush onto pipeline.
[2020-11-24T19:03:18,079][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2020-11-24T19:03:18,080][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[2020-11-24T19:03:19,823][DEBUG][org.logstash.execution.PeriodicFlush][main] Pushing flush onto pipeline.
[2020-11-24T19:03:23,091][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2020-11-24T19:03:23,093][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[2020-11-24T19:03:24,644][DEBUG][logstash.codecs.plain    ][main][35c42130c99c31aff73fa51120a095dfd060783a7372a0e2963fffd747b0a77e] config LogStash::Codecs::Plain/@id = "plain_1b78fc11-3148-4a33-8132-6cc19edfc5f4"
[2020-11-24T19:03:24,652][DEBUG][logstash.codecs.plain    ][main][35c42130c99c31aff73fa51120a095dfd060783a7372a0e2963fffd747b0a77e] config LogStash::Codecs::Plain/@enable_metric = true
[2020-11-24T19:03:24,653][DEBUG][logstash.codecs.plain    ][main][35c42130c99c31aff73fa51120a095dfd060783a7372a0e2963fffd747b0a77e] config LogStash::Codecs::Plain/@charset = "UTF-8"
[2020-11-24T19:03:24,823][DEBUG][org.logstash.execution.PeriodicFlush][main] Pushing flush onto pipeline.
[2020-11-24T19:03:28,100][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2020-11-24T19:03:28,101][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[2020-11-24T19:03:29,600][DEBUG][io.netty.util.Recycler   ][main][35c42130c99c31aff73fa51120a095dfd060783a7372a0e2963fffd747b0a77e] -Dio.netty.recycler.maxCapacityPerThread: 4096
[2020-11-24T19:03:29,610][DEBUG][io.netty.util.Recycler   ][main][35c42130c99c31aff73fa51120a095dfd060783a7372a0e2963fffd747b0a77e] -Dio.netty.recycler.maxSharedCapacityFactor: 2
[2020-11-24T19:03:29,610][DEBUG][io.netty.util.Recycler   ][main][35c42130c99c31aff73fa51120a095dfd060783a7372a0e2963fffd747b0a77e] -Dio.netty.recycler.linkCapacity: 16
[2020-11-24T19:03:29,611][DEBUG][io.netty.util.Recycler   ][main][35c42130c99c31aff73fa51120a095dfd060783a7372a0e2963fffd747b0a77e] -Dio.netty.recycler.ratio: 8
[2020-11-24T19:03:29,622][DEBUG][io.netty.buffer.AbstractByteBuf][main][35c42130c99c31aff73fa51120a095dfd060783a7372a0e2963fffd747b0a77e] -Dio.netty.buffer.checkAccessible: true
[2020-11-24T19:03:29,623][DEBUG][io.netty.buffer.AbstractByteBuf][main][35c42130c99c31aff73fa51120a095dfd060783a7372a0e2963fffd747b0a77e] -Dio.netty.buffer.checkBounds: true

Here, we can see one event coming from my Filebeat -

[2020-11-24T19:03:29,624][DEBUG][io.netty.util.ResourceLeakDetectorFactory][main][35c42130c99c31aff73fa51120a095dfd060783a7372a0e2963fffd747b0a77e] Loaded default ResourceLeakDetector: io.netty.util.ResourceLeakDetector@4d233c1a
[2020-11-24T19:03:29,638][DEBUG][org.logstash.beats.ConnectionHandler][main][35c42130c99c31aff73fa51120a095dfd060783a7372a0e2963fffd747b0a77e] 109e7925: batches pending: true
[2020-11-24T19:03:29,662][DEBUG][org.logstash.beats.BeatsHandler][main][35c42130c99c31aff73fa51120a095dfd060783a7372a0e2963fffd747b0a77e] [local: 10.136.67.41:5044, remote: 10.129.19.111:40136] Received a new payload
[2020-11-24T19:03:29,668][DEBUG][org.logstash.beats.BeatsHandler][main][35c42130c99c31aff73fa51120a095dfd060783a7372a0e2963fffd747b0a77e] [local: 10.136.67.41:5044, remote: 10.129.19.111:40136] Sending a new message for the listener, sequence: 1
[2020-11-24T19:03:29,775][DEBUG][org.logstash.beats.BeatsHandler][main][35c42130c99c31aff73fa51120a095dfd060783a7372a0e2963fffd747b0a77e] 109e7925: batches pending: false

So the data is being sent...

I don't have access to the Elasticsearch cluster myself; I'll check with the team.

Actually, the events coming from Filebeat don't show whether they are matched or not.

At this point, I'm starting to think that none of my messages are being matched in the filter {} section of the Logstash parsing config, so at the end of the day, nothing is sent to Elasticsearch.

Hi @elkwhat, thanks for posting all the details — really appreciated! I looked through them and it's pretty clear to me as well that the events are being sent from Filebeat to Logstash, so now it's something on the Logstash side that needs checking.

One quick test you could do is to replace your current Logstash pipeline with a really simple one that has the beats input plugin and the stdout output plugin. Run Logstash with this pipeline and check if your events are showing up on STDOUT on the Logstash server. If they are, then gradually add in more sections to your Logstash pipeline and repeat this test until events stop showing. That will help you identify the problematic section.
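A minimal debug pipeline along those lines might look like the sketch below. This is an illustration, not your actual config: the port matches the 5044 shown in your logs, and the rubydebug codec is just a convenient choice for printing full events.

```conf
# minimal-debug.conf — strip the pipeline down to input and output only,
# so every event Filebeat sends is printed to STDOUT unfiltered
input {
  beats {
    port => 5044
  }
}

output {
  stdout {
    codec => rubydebug   # print each event with all of its fields
  }
}
```

If events appear here but vanish once you re-add your filter {} section, the problem is in the filter logic rather than the transport.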

Hope that helps,

Shaunak

Hi,

We were finally able to pinpoint what was happening.

I did as you suggested, breaking the Logstash config down into pieces and using stdout to troubleshoot.

It turns out the very first if statement of my Logstash config was never matched.

In our dev env, we used to use this syntax -

if [fields][log_type] == "p4d.log" {
  # blabla
}

But it seems that now we need to use it like this -

if [log_type] == "p4d.log" {
  # blabla
}

So at the end of the day, everything was dropped because of my last section -

else {
    drop {}
  }
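For context, this is consistent with the `fields_under_root: true` setting in the Filebeat input shown earlier: with it enabled, custom fields are placed at the root of the event instead of under `[fields]`, so the old `[fields][log_type]` condition never matches and everything falls through to the drop. A sketch of the corrected filter section might look like this (the parsing inside the conditional is a placeholder, not the real config):

```conf
filter {
  # fields_under_root: true puts log_type at the event root,
  # so reference it as [log_type], not [fields][log_type]
  if [log_type] == "p4d.log" {
    # ... p4d-specific parsing here ...
  } else {
    drop {}   # anything that doesn't match is discarded
  }
}
```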

My troubleshooting was a bit misleading, but I've learned a lot.

Thanks for the support, we can consider this case solved.

Regards,