Hello experts,
To parse OpenBSD events I chose to use pipeline-to-pipeline communication. I have three pipelines: input, which points to the openbsd_2_ecs pipeline, which in turn points to the output pipeline.
Please find below my pipelines.yml configuration:
- pipeline.id: input
  config.string: |
    file {
      path => "/etc/logstash/logsamples/PF.log"
      start_position => "beginning"
      sincedb_path => "/dev/null"
    }
  path.config: "/etc/logstash/conf.d/input-FP.conf"
- pipeline.id: openbsd_2_ecs
  config.string: |
    output { pipeline { send_to => output } }
  path.config: "/etc/logstash/conf.d/openbsd_2_ecs.conf"
- pipeline.id: output
  path.config: "/etc/logstash/conf.d/output.conf"
When I run Logstash with the command /usr/share/logstash/bin/logstash --path.settings /etc/logstash -f /etc/logstash/conf.d/input-FP.conf --debug, the events are not sent to the output pipeline. You will find the debug output below:
[2021-06-02T14:53:08,373][INFO ][logstash.javapipeline ][openbsd_2_ecs] Pipeline Java execution initialization time {"seconds"=>2.39}
[2021-06-02T14:53:08,441][INFO ][logstash.javapipeline ][openbsd_2_ecs] Pipeline started {"pipeline.id"=>"openbsd_2_ecs"}
[2021-06-02T14:53:08,472][DEBUG][logstash.javapipeline ] Pipeline started successfully {:pipeline_id=>"openbsd_2_ecs", :thread=>"#<Thread:0x1e954d1c run>"}
[2021-06-02T14:53:08,555][DEBUG][org.logstash.execution.PeriodicFlush][openbsd_2_ecs] Pushing flush onto pipeline.
[2021-06-02T14:53:08,736][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:openbsd_2_ecs], :non_running_pipelines=>[]}
[2021-06-02T14:53:08,813][DEBUG][logstash.agent ] Starting puma
[2021-06-02T14:53:08,886][DEBUG][logstash.agent ] Trying to start WebServer {:port=>9600}
[2021-06-02T14:53:08,995][DEBUG][logstash.api.service ] [api-service] start
[2021-06-02T14:53:09,230][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2021-06-02T14:53:13,307][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2021-06-02T14:53:13,314][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[2021-06-02T14:53:13,537][DEBUG][org.logstash.execution.PeriodicFlush][openbsd_2_ecs] Pushing flush onto pipeline.
[2021-06-02T14:53:18,324][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2021-06-02T14:53:18,325][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[2021-06-02T14:53:18,537][DEBUG][org.logstash.execution.PeriodicFlush][openbsd_2_ecs] Pushing flush onto pipeline.
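One thing I am not sure about: as far as I understand, starting Logstash with -f makes it ignore pipelines.yml and run only the given file as a single pipeline, so to test the multi-pipeline setup I should perhaps start it with only the settings path, for example:

/usr/share/logstash/bin/logstash --path.settings /etc/logstash --debug

Please correct me if that assumption is wrong.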
For more details, please find below the logstash.yml configuration:
# ------------ Data path ------------------
# Which directory should be used by logstash and its plugins
# for any persistent needs. Defaults to LOGSTASH_HOME/data
path.data: /var/lib/logstash
# ------------ Pipeline Settings --------------
# The ID of the pipeline.
#pipeline.id: openbsd_2_ecs
# Set the number of workers that will, in parallel, execute the filters+outputs
# stage of the pipeline.
# This defaults to the number of the host's CPU cores.
#pipeline.workers: 2
# How many events to retrieve from inputs before sending to filters+workers
#pipeline.batch.size: 125
# How long to wait in milliseconds while polling for the next event
# before dispatching an undersized batch to filters+outputs
#pipeline.batch.delay: 50
# Force Logstash to exit during shutdown even if there are still inflight
# events in memory. By default, logstash will refuse to quit until all
# received events have been pushed to the outputs.
# WARNING: enabling this can lead to data loss during shutdown
# pipeline.unsafe_shutdown: false
# Set the pipeline event ordering. Options are "auto" (the default), "true" or "false".
# "auto" will automatically enable ordering if the 'pipeline.workers' setting
# is also set to '1'.
# "true" will enforce ordering on the pipeline and prevent logstash from starting
# if there are multiple workers.
# "false" will disable any extra processing necessary for preserving ordering.
# pipeline.ordered: auto
# ------------ Pipeline Configuration Settings --------------
# Where to fetch the pipeline configuration for the main pipeline
# path.config:
# Pipeline configuration string for the main pipeline
# config.string:
# At startup, test if the configuration is valid and exit (dry run)
# config.test_and_exit: false
# Periodically check if the configuration has changed and reload the pipeline
# This can also be triggered manually through the SIGHUP signal
# config.reload.automatic: false
# How often to check if the pipeline configuration has changed (in seconds)
# Note that the unit value (s) is required. Values without a qualifier (e.g. 60)
# are treated as nanoseconds.
# Setting the interval this way is not recommended and might change in later versions.
# config.reload.interval: 3s
# Show fully compiled configuration as debug log message
# NOTE: --log.level must be 'debug'
# config.debug: false
# When enabled, process escaped characters such as \n and \" in strings in the
# pipeline configuration files.
# config.support_escapes: false
# ------------ HTTP API Settings -------------
# Define settings related to the HTTP API here.
# The HTTP API is enabled by default. It can be disabled, but features that rely
# on it will not work as intended.
# http.enabled: true
# By default, the HTTP API is bound to only the host's local loopback interface,
# ensuring that it is not accessible to the rest of the network. Because the API
# includes neither authentication nor authorization and has not been hardened or
# tested for use as a publicly-reachable API, binding to publicly accessible IPs
# should be avoided where possible.
# http.host: 127.0.0.1
# The HTTP API web server will listen on an available port from the given range.
# Values can be specified as a single port (e.g., 9600), or an inclusive range
# of ports (e.g., 9600-9700).
# http.port: 9600-9700
# ------------ Module Settings ---------------
# Define modules here. Modules definitions must be defined as an array.
# The simple way to see this is to prepend each name with a -, and keep
# all associated variables under the name they are associated with, and
# above the next, like this:
#
# modules:
#   - name: MODULE_NAME
#     var.PLUGINTYPE1.PLUGINNAME1.KEY1: VALUE
#     var.PLUGINTYPE1.PLUGINNAME1.KEY2: VALUE
#     var.PLUGINTYPE2.PLUGINNAME1.KEY1: VALUE
#     var.PLUGINTYPE3.PLUGINNAME3.KEY1: VALUE
#
# Module variable names must be in the format of
# var.PLUGIN_TYPE.PLUGIN_NAME.KEY
#
# modules:
# ------------ Cloud Settings ---------------
# Define Elastic Cloud settings here.
# Format of cloud.id is a base64 value e.g. dXMtZWFzdC0xLmF3cy5mb3VuZC5pbyRub3RhcmVhbCRpZGVudGlmaWVy
# and it may have a label prefix e.g. staging:dXMtZ...
# This will overwrite 'var.elasticsearch.hosts' and 'var.kibana.host'
# cloud.id: <identifier>
# Format of cloud.auth is: <user>:<pass>
# This is optional
# If supplied this will overwrite 'var.elasticsearch.username' and 'var.elasticsearch.password'
# If supplied this will overwrite 'var.kibana.username' and 'var.kibana.password'
# cloud.auth: elastic:<password>
# ------------ Queuing Settings --------------
# Internal queuing model, "memory" for legacy in-memory based queuing and
# "persisted" for disk-based acked queueing. Default is memory
# queue.type: memory
# If using queue.type: persisted, the directory path where the data files will be stored.
# Default is path.data/queue
Please help me resolve this issue.
Best regards