What is my "index patter"?

Hello.
I installed "Elasticsrach","Kibana" and "Logstash" for Windows log management and I configured "syslog-NG" as below :

options {
        flush_lines (0);
        time_reopen (10);
        log_fifo_size (1000);
        long_hostnames (off);
        use_dns (no);
        use_fqdn (no);
        create_dirs (no);
        keep_hostname (yes);
        ts_format(iso);
        encoding("UTF-8");
};

source s_netsyslog {
   # system();
   # internal();
   udp(ip(0.0.0.0) port(514) flags(no-hostname));
   tcp(ip(0.0.0.0) port(514) flags(no-hostname));
};

destination d_netsyslog { file("/var/log/network.log" owner("root") group("root") perm(0644)); };
destination d_separatedbyhosts { file("/var/log/$HOST/messages" owner("root") group("root") perm(0655) dir_perm(0755) create_dirs(yes)); };
log { source(s_netsyslog); destination(d_separatedbyhosts); };

My Logstash configuration is:

input {
  file {
    path => ["/var/log/myserverlog/messages"]
    sincedb_path => "/var/log/logstash"
    start_position => "beginning"
    type => "server 1"
    tags => [ "netsyslog" ]
  }
  generator {
  }
}

output {
  elasticsearch {
    protocol => "node"
    host => "172.30.9.20"
    cluster => "elasticsearch"
  }
}

What is my "Index Pattern" ?

Thank you.

Hi,
You need to create an index pattern on the Management > Index Patterns page in Kibana.
If the index pattern can be created, and your selections are correct for the data you have coming in, then data should show up on the Discover page.

(This assumes you are using Kibana 5; for Kibana 4, go to Settings > Indices.)
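If you are not sure which indices exist yet, you can also ask Elasticsearch directly before creating the pattern. A quick check, assuming Elasticsearch is listening on 172.30.9.20:9200 (the default HTTP port):

curl 'http://172.30.9.20:9200/_cat/indices?v'

Whatever indices that lists are what your pattern needs to match.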

Regards,
Daragh

https://www.elastic.co/guide/en/kibana/current/index-patterns.html#settings-create-pattern

Excuse me if I already asked this, but given my Logstash config, what is the best index pattern? Can you offer any advice?

Hi,
I have included a snip of the default index pattern creation for Kibana 5.2.2.
If you create it like that, it will get you started.

Subsequently, assuming the following:

  1. there is data coming in
  2. your Logstash config can parse your incoming data

then you should see data flowing into your Discover page.
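For reference: your elasticsearch output does not set an index, so Logstash writes to its default daily indices, and the values on the Kibana create screen would be something like the following (logstash-* is the stock Logstash default, not anything specific to your setup):

  Index name or pattern: logstash-*
  Time-field name:       @timestamp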

I am not so familiar with using ELK on Windows, so I can't really comment on your Windows config.

Regards,
Daragh

It is not Windows. I installed ELK on a Linux box, and I forward the Windows event logs to that Linux box.

Hi,

OK, I see.
So if the ELK processes are running and you have created the default index pattern (and data is coming in), then you should see data flowing into the Discover page.

If things are not working, meaning no index pattern was created or no data is coming in, then you will see the error page shown in the attached snip.

If you see _grokparsefailure in the logs, that means data is reaching the Discover page but your Logstash config cannot parse it properly.
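In that case you would add a grok filter that actually matches your message format. A minimal sketch for classic syslog lines, using Logstash's bundled grok patterns (the field names after the colons are just illustrative; adjust the pattern to whatever syslog-ng really writes):

filter {
  grok {
    # split a syslog line into timestamp, host, program, optional pid, and message
    match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
  }
}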

Regards,
Daragh

I can't select "Time-filed name" !!!


When I click on "Discover" then:


What is my problem? syslog-ng forwards my Windows events to my Linux box correctly, and my Logstash config is:

# cat /etc/logstash/conf.d/logstash.conf
input {
  file {
    path => ["/var/log/myserverlog/messages"]
    sincedb_path => "/var/log/logstash"
    start_position => "beginning"
    type => "server 1"
    tags => [ "netsyslog" ]
  }
  generator {
  }
}

filter {
}

output {
  elasticsearch {
    protocol => "node"
    host => "172.30.9.20"   
    cluster => "elasticsearch"
  }
}

What is my problem?

Thank you.

Any idea? I can't understand what my problem is :(. Is my configuration wrong, or...?

Check here regarding indexing.

https://www.elastic.co/guide/en/logstash/5.4/plugins-outputs-elasticsearch.html#plugins-outputs-elasticsearch-index
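For example, to write to an explicit index instead of the default, something like this (the netsyslog name is just an illustration; the %{+YYYY.MM.dd} sprintf reference gives you one index per day):

output {
  elasticsearch {
    hosts => ["172.30.9.20"]
    # hypothetical index name; pick whatever naming scheme suits you
    index => "netsyslog-%{+YYYY.MM.dd}"
  }
}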

Set the logging in /etc/logstash/logstash.yml to debug and actually tail the logs to see what is happening.
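Something like this, assuming a standard package install (your settings file already has path.logs: /var/log/logstash, and logstash-plain.log is the default file name in 5.x):

# in /etc/logstash/logstash.yml:
#   log.level: debug
# then follow the log while Logstash starts:
tail -f /var/log/logstash/logstash-plain.log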

This configuration seems to be for a very old version of Logstash, as the node protocol is no longer supported. Make sure you check the documentation for the correct configuration parameters for the version of Logstash you are using, as these have changed over time.
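Roughly, the old 1.x-era settings in your output map to the current ones like this (my reading of the 5.4 docs, so double-check against them):

# protocol => "node"     removed entirely; the output now always talks HTTP
# host     => "..."      replaced by hosts => ["..."]
# cluster  => "..."      removed; not needed over HTTP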

"Logging" ? I can't see this option!!!

# Settings file in YAML
#
# Settings can be specified either in hierarchical form, e.g.:
#
#   pipeline:
#     batch:
#       size: 125
#       delay: 5
#
# Or as flat keys:
#
#   pipeline.batch.size: 125
#   pipeline.batch.delay: 5
#
# ------------  Node identity ------------
#
# Use a descriptive name for the node:
#
# node.name: test
#
# If omitted the node name will default to the machine's host name
#
# ------------ Data path ------------------
#
# Which directory should be used by logstash and its plugins
# for any persistent needs. Defaults to LOGSTASH_HOME/data
#
path.data: /var/lib/logstash
#
# ------------ Pipeline Settings --------------
#
# Set the number of workers that will, in parallel, execute the filters+outputs
# stage of the pipeline.
#
# This defaults to the number of the host's CPU cores.
#
# pipeline.workers: 2
#
# How many workers should be used per output plugin instance
#
# pipeline.output.workers: 1
#
# How many events to retrieve from inputs before sending to filters+workers
#
# pipeline.batch.size: 125
#
# How long to wait before dispatching an undersized batch to filters+workers
# Value is in milliseconds.
#
# pipeline.batch.delay: 5
#
# Force Logstash to exit during shutdown even if there are still inflight
# events in memory. By default, logstash will refuse to quit until all
# received events have been pushed to the outputs.
#
# WARNING: enabling this can lead to data loss during shutdown
#
# pipeline.unsafe_shutdown: false
#
# ------------ Pipeline Configuration Settings --------------
#
# Where to fetch the pipeline configuration for the main pipeline
#
path.config: /etc/logstash/conf.d
#
# Pipeline configuration string for the main pipeline
#
# config.string:
#
# At startup, test if the configuration is valid and exit (dry run)
#
# config.test_and_exit: false
#
# Periodically check if the configuration has changed and reload the pipeline
# This can also be triggered manually through the SIGHUP signal
#
# config.reload.automatic: false
#
# How often to check if the pipeline configuration has changed (in seconds)
#
# config.reload.interval: 3
#
# Show fully compiled configuration as debug log message
# NOTE: --log.level must be 'debug'
#
# config.debug: false
#
# ------------ Queuing Settings --------------
#
# Internal queuing model, "memory" for legacy in-memory based queuing and
# "persisted" for disk-based acked queueing. Defaults is memory
#
# queue.type: memory
#
# If using queue.type: persisted, the directory path where the data files will be stored.
# Default is path.data/queue
#
# path.queue:
#
# If using queue.type: persisted, the page data files size. The queue data consists of
# append-only data files separated into pages. Default is 250mb
#
# queue.page_capacity: 250mb
#
# If using queue.type: persisted, the maximum number of unread events in the queue.
# Default is 0 (unlimited)
#
# queue.max_events: 0
#
# If using queue.type: persisted, the total capacity of the queue in number of bytes.
# If you would like more unacked events to be buffered in Logstash, you can increase the
# capacity using this setting. Please make sure your disk drive has capacity greater than
# the size specified here. If both max_bytes and max_events are specified, Logstash will pick
# whichever criteria is reached first
# Default is 1024mb or 1gb
#
# queue.max_bytes: 1024mb
#
# If using queue.type: persisted, the maximum number of acked events before forcing a checkpoint
# Default is 1024, 0 for unlimited
#
# queue.checkpoint.acks: 1024
#
# If using queue.type: persisted, the maximum number of written events before forcing a checkpoint
# Default is 1024, 0 for unlimited
#
# queue.checkpoint.writes: 1024
#
# If using queue.type: persisted, the interval in milliseconds when a checkpoint is forced on the head page
# Default is 1000, 0 for no periodic checkpoint.
#
# queue.checkpoint.interval: 1000
#
# ------------ Metrics Settings --------------
#
# Bind address for the metrics REST endpoint
#
# http.host: "127.0.0.1"
#
# Bind port for the metrics REST endpoint, this option also accept a range
# (9600-9700) and logstash will pick up the first available ports.
#
# http.port: 9600-9700
#
# ------------ Debugging Settings --------------
#
# Options for log.level:
#   * fatal
#   * error
#   * warn
#   * info (default)
#   * debug
#   * trace
#
# log.level: info
path.logs: /var/log/logstash
#
# ------------ Other Settings --------------
#
# Where to find custom plugins
# path.plugins: []

I looked at the Logstash log and it tells me:

[2017-06-27T08:17:43,322][ERROR][logstash.agent           ] Cannot create pipeline {:reason=>"Something is wrong with your configuration."}
[2017-06-27T08:18:10,306][ERROR][logstash.outputs.elasticsearch] Unknown setting 'protocol' for elasticsearch
[2017-06-27T08:18:10,308][ERROR][logstash.outputs.elasticsearch] Unknown setting 'host' for elasticsearch
[2017-06-27T08:18:10,308][ERROR][logstash.outputs.elasticsearch] Unknown setting 'cluster' for elasticsearch
[2017-06-27T08:18:10,318][ERROR][logstash.agent           ] Cannot create pipeline {:reason=>"Something is wrong with your configuration."}
[2017-06-27T08:19:38,206][ERROR][logstash.outputs.elasticsearch] Unknown setting 'protocol' for elasticsearch
[2017-06-27T08:19:38,208][ERROR][logstash.outputs.elasticsearch] Unknown setting 'host' for elasticsearch
[2017-06-27T08:19:38,208][ERROR][logstash.outputs.elasticsearch] Unknown setting 'cluster' for elasticsearch

Any idea?

Ah, can you offer any help?

Have you looked at the documentation?

I looked at https://www.elastic.co/guide/en/logstash/5.4/configuration-file-structure.html and I want to know: in my configuration, must type => "server 1" be changed to type => "syslog"?
How about the "output"? Is the configuration below OK?

output {
  elasticsearch {
    action => "%{[@metadata][action]}"
    document_id => "%{[@metadata][_id]}"
    hosts => ["172.30.9.20"]
    index => "index_name"
    protocol => "http"
  }
}

Any idea?

Thank you.

Usually you do not need to specify action and document_id (unless you are explicitly setting these in your filter section), and protocol is no longer a valid parameter according to the documentation.

I copied and pasted the example from https://www.elastic.co/guide/en/logstash/5.4/event-dependent-configuration.html
Must I comment out the "protocol" section?
Can you offer a simple configuration for me?

Thank you.

That example relates to the CouchDB Changes input plugin. Remove the lines I highlighted and test it.
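With those three lines removed, the output would look something like this (hosts and index are taken from your own snippet; the index line is optional, and if you drop it Logstash uses the logstash-%{+YYYY.MM.dd} default, which the logstash-* index pattern in Kibana matches):

output {
  elasticsearch {
    hosts => ["172.30.9.20"]
    index => "index_name"
  }
}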

I removed the three lines, and the Logstash log is as below:

[2017-07-04T04:16:55,186][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://172.30.9.20:9200/]}}
[2017-07-04T04:16:55,190][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://172.30.9.20:9200/, :path=>"/"}
[2017-07-04T04:16:55,274][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>#<URI::HTTP:0x2a880895 URL:http://172.30.9.20:9200/>, :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://172.30.9.20:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
[2017-07-04T04:16:55,275][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2017-07-04T04:16:55,281][WARN ][logstash.outputs.elasticsearch] Marking url as dead. Last error: [LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError] Elasticsearch Unreachable: [http://172.30.9.20:9200/][Manticore::SocketException] Connection refused (Connection refused) {:url=>http://172.30.9.20:9200/, :error_message=>"Elasticsearch Unreachable: [http://172.30.9.20:9200/][Manticore::SocketException] Connection refused (Connection refused)", :error_class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError"}
[2017-07-04T04:16:55,282][ERROR][logstash.outputs.elasticsearch] Failed to install template. {:message=>"Elasticsearch Unreachable: [http://172.30.9.20:9200/][Manticore::SocketException] Connection refused (Connection refused)", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError", :backtrace=>["/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.3.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:271:in `perform_request_to_url'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.3.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:257:in `perform_request'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.3.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:347:in `with_connection'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.3.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:256:in `perform_request'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.3.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:264:in `get'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.3.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:86:in `get_version'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.3.1-java/lib/logstash/outputs/elasticsearch/template_manager.rb:16:in `get_es_version'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.3.1-java/lib/logstash/outputs/elasticsearch/template_manager.rb:20:in `get_es_major_version'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.3.1-java/lib/logstash/outputs/elasticsearch/template_manager.rb:7:in `install_template'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.3.1-java/lib/logstash/outputs/elasticsearch/common.rb:57:in `install_template'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-7.3.1-java/lib/logstash/outputs/elasticsearch/common.rb:24:in `register'", "/usr/share/logstash/logstash-core/lib/logstash/output_delegator_strategies/shared.rb:9:in `register'", "/usr/share/logstash/logstash-core/lib/logstash/output_delegator.rb:41:in `register'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:268:in `register_plugin'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:279:in `register_plugins'", "org/jruby/RubyArray.java:1613:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:279:in `register_plugins'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:288:in `start_workers'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:214:in `run'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:398:in `start_pipeline'"]}
[2017-07-04T04:16:55,283][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>[#<URI::Generic:0x573db7c0 URL://172.30.9.20>]}
[2017-07-04T04:16:55,287][INFO ][logstash.pipeline        ] Starting pipeline {"id"=>"main", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>1000}
[2017-07-04T04:16:55,490][ERROR][logstash.pipeline        ] Error registering plugin {:plugin=>"<LogStash::Inputs::File path=>[\"/var/log/172.30.10.17/messages\"], sincedb_path=>\"/var/log/logstash\", start_position=>\"beginning\", type=>\"syslog\", tags=>[\"netsyslog\"], id=>\"fa08036ca31fc35ee5eb95f0190efd3fd04f0f29-1\", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>\"plain_ca00dbf9-ea9a-4c06-b075-a20966b22b26\", enable_metric=>true, charset=>\"UTF-8\">, stat_interval=>1, discover_interval=>15, sincedb_write_i