I am trying to integrate CloudWatch Logs with the ELK stack. I have successfully installed all three components of the stack, but I am still unable to monitor and visualize log streams from CloudWatch. The ELK stack is set up on an AWS t2.large EC2 instance in the us-east-1 region.
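What I am ultimately trying to get working is roughly the following Logstash pipeline (a minimal sketch, assuming the third-party logstash-input-cloudwatch_logs plugin is installed; the log group name, region, Elasticsearch host, and index name are placeholders, not my real values):

input {
  cloudwatch_logs {
    # placeholder log group and region - replace with real values
    log_group => [ "/aws/my-application" ]
    region    => "us-east-1"
  }
}

output {
  elasticsearch {
    # Elasticsearch running locally on the same EC2 instance
    hosts => [ "localhost:9200" ]
    index => "cloudwatch-logs-%{+YYYY.MM.dd}"
  }
}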
Following are my config files for Logstash, Elasticsearch, and Kibana, respectively. Please help:
# Settings file in YAML
# Settings can be specified either in hierarchical form, e.g.:
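#   pipeline:
#     batch:
#       size: 125
#       delay: 5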
# Or as flat keys:
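#   pipeline.batch.size: 125
#   pipeline.batch.delay: 5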
# ------------ Node identity ------------
# Use a descriptive name for the node:
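# node.name: test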
# If omitted the node name will default to the machine's host name
# ------------ Data path ------------------
# Which directory should be used by logstash and its plugins
# for any persistent needs. Defaults to LOGSTASH_HOME/data
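# path.data: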
# ------------ Pipeline Settings --------------
# The ID of the pipeline.
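# pipeline.id: main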
# Set the number of workers that will, in parallel, execute the filters+outputs
# stage of the pipeline.
# This defaults to the number of the host's CPU cores.
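# pipeline.workers: 2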
# How many events to retrieve from inputs before sending to filters+workers
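# pipeline.batch.size: 125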
# How long to wait in milliseconds while polling for the next event
# before dispatching an undersized batch to filters+outputs
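# pipeline.batch.delay: 50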
# Force Logstash to exit during shutdown even if there are still inflight
# events in memory. By default, logstash will refuse to quit until all
# received events have been pushed to the outputs.
# WARNING: enabling this can lead to data loss during shutdown
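# pipeline.unsafe_shutdown: false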
# ------------ Pipeline Configuration Settings --------------
# Where to fetch the pipeline configuration for the main pipeline
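# path.config: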
# Pipeline configuration string for the main pipeline
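# config.string: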
# At startup, test if the configuration is valid and exit (dry run)
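# config.test_and_exit: false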
# Periodically check if the configuration has changed and reload the pipeline
# This can also be triggered manually through the SIGHUP signal
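# config.reload.automatic: false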
# How often to check if the pipeline configuration has changed (in seconds)
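# config.reload.interval: 3s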
# Show fully compiled configuration as debug log message
# NOTE: --log.level must be 'debug'
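# config.debug: false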
# When enabled, process escaped characters such as \n and \" in strings in the
# pipeline configuration files.
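# config.support_escapes: false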
# ------------ Module Settings ---------------
# Define modules here. Module definitions must be defined as an array.
# The simple way to see this is to prepend each `name` with a `-`, and keep
# all associated variables under the `name` they are associated with, and
# above the next, like this:
#
# modules:
#   - name: MODULE_NAME
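#     var.PLUGIN_TYPE.PLUGIN_NAME.KEY: VALUE
#     var.PLUGIN_TYPE.PLUGIN_NAME.KEY: VALUE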
# Module variable names must be in the format of
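#   var.PLUGIN_TYPE.PLUGIN_NAME.KEY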
# ------------ Cloud Settings ---------------
# Define Elastic Cloud settings here.
# Format of cloud.id is a base64 value e.g. dXMtZWFzdC0xLmF3cy5mb3VuZC5pbyRub3RhcmVhbCRpZGVudGlmaWVy
# and it may have a label prefix e.g. staging:dXMtZ...
# This will overwrite 'var.elasticsearch.hosts' and 'var.kibana.host'
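# cloud.id: <identifier>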
# Format of cloud.auth is: <user>:<pass>
# This is optional
# If supplied this will overwrite 'var.elasticsearch.username' and 'var.elasticsearch.password'
# If supplied this will overwrite 'var.kibana.username' and 'var.kibana.password'
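# cloud.auth: elastic:<password>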
# ------------ Queuing Settings --------------
# Internal queuing model, "memory" for legacy in-memory based queuing and
# "persisted" for disk-based acked queueing. Default is memory
# If using queue.type: persisted, the directory path where the data files will be stored.
# Default is path.data/queue
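# path.queue: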
# If using queue.type: persisted, the page data files size. The queue data consists of
# append-only data files separated into pages. Default is 64mb
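# queue.page_capacity: 64mb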
# If using queue.type: persisted, the maximum number of unread events in the queue.
# Default is 0 (unlimited)
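# queue.max_events: 0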
# If using queue.type: persisted, the total capacity of the queue in number of bytes.
# If you would like more unacked events to be buffered in Logstash, you can increase the
# capacity using this setting. Please make sure your disk drive has capacity greater than
# the size specified here. If both max_bytes and max_events are specified, Logstash will pick
# whichever criterion is reached first.
# Default is 1024mb or 1gb
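# queue.max_bytes: 1024mb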
# If using queue.type: persisted, the maximum number of acked events before forcing a checkpoint
# Default is 1024, 0 for unlimited
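# queue.checkpoint.acks: 1024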
# If using queue.type: persisted, the maximum number of written events before forcing a checkpoint
# Default is 1024, 0 for unlimited
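# queue.checkpoint.writes: 1024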
# If using queue.type: persisted, the interval in milliseconds when a checkpoint is forced on the head page
# Default is 1000, 0 for no periodic checkpoint.
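# queue.checkpoint.interval: 1000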
# ------------ Dead-Letter Queue Settings --------------
# Flag to turn on dead-letter queue.
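# dead_letter_queue.enable: false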
# If using dead_letter_queue.enable: true, the maximum size of each dead letter queue. Entries
# will be dropped if they would increase the size of the dead letter queue beyond this setting.
# Default is 1024mb
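# dead_letter_queue.max_bytes: 1024mb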
# If using dead_letter_queue.enable: true, the directory path where the data files will be stored.
# Default is path.data/dead_letter_queue
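# path.dead_letter_queue: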
# ------------ Metrics Settings --------------
# Bind address for the metrics REST endpoint
# http.host: "127.0.0.1"
# Bind port for the metrics REST endpoint, this option also accepts a range
# (9600-9700), and logstash will pick up the first available port.
# http.port: 9600-9700
# ------------ Debugging Settings --------------
# Options for log.level:
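#   * fatal
#   * error
#   * warn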
#   * info (default)