Understanding how to send data to ELK

Hello All,

I am kind of a newbie to this whole ELK field and am looking for pointers on my problem.

The idea is to generate meaningful reports from logs. The logs include Windows DC authentication logs, firewall logs, and application logs such as Apache or Zabbix.

1) I would like to send logs from servers (a mix of Windows, Linux, etc.) to the ELK stack machine:

  • basically syslog files such as messages and secure (see the Filebeat sketch after this list)
  • general monitoring logs such as zabbix_server and zabbix_proxy logs
  • maybe logs from firewalls/routers as well
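For the Linux side, a minimal sketch of a Filebeat-style filebeat.yml (the hostname elk.domain.com and port 5044 are my assumptions, and the exact keys depend on the Filebeat version; document_type: syslog is what the [type] == "syslog" conditional in the filter further down matches on):

filebeat:
  prospectors:
    -
      paths:
        - /var/log/messages    # the syslog files mentioned above
        - /var/log/secure
      document_type: syslog    # becomes the "type" field used in the Logstash filter

output:
  logstash:
    hosts: ["elk.domain.com:5044"]    # assumption: Logstash beats input on 5044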

The idea is to present the logs in a nice visual way so that useful data can be generated. From Windows logs I would like to understand system events, and from Domain Controller logs I would like to understand how many users authenticated, at what time, and so on.

2) What I have is one Windows machine with Winlogbeat installed, sending logs to the ELK machine.
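For reference, a minimal winlogbeat.yml sketch along the same lines (the host and port are my assumptions; the output could just as well point straight at Elasticsearch on 9200):

winlogbeat:
  event_logs:
    - name: Application
    - name: Security    # where DC authentication events land
    - name: System

output:
  logstash:
    hosts: ["elk.domain.com:5044"]    # assumption: same Logstash beats input as above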

3) I also have ELK installed on an instance, where it is ingesting data. Below are the steps involved.

rpm --import http://packages.elastic.co/GPG-KEY-elasticsearch
yum -y install elasticsearch
chkconfig --add elasticsearch
service elasticsearch start
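A quick sanity check that Elasticsearch came up and is answering on the loopback address:

curl http://localhost:9200    # should return a small JSON blob with version and cluster name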

Contents of /etc/elasticsearch/elasticsearch.yml
cluster.name: elk-stage
node.name: elk-stage-node-1
path.logs: /logs/elasticsearch/logs/
bootstrap.memory_lock: true
network.host: 127.0.0.1
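One note on bootstrap.memory_lock: for the init script to actually be allowed to lock memory on an RPM install, the limit usually also has to be raised in /etc/sysconfig/elasticsearch (my assumption for this package version):

MAX_LOCKED_MEMORY=unlimited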

For Kibana,

yum -y install kibana
chkconfig kibana on
service kibana start

Contents of /opt/kibana/config/kibana.yml
server.host: "localhost"
elasticsearch.url: "http://localhost:9200"
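A quick check that Kibana itself is listening before putting nginx in front of it:

curl -I http://localhost:5601    # should return an HTTP response from Kibana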
Installed nginx:

yum -y install nginx httpd-tools
htpasswd -c /etc/nginx/htpasswd.users elkadmin
Contents of /etc/nginx/nginx.conf
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;
include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    include /etc/nginx/conf.d/*.conf;
}

Contents of /etc/nginx/conf.d/kibana.conf
server {
    listen 80;
    server_name elk.domain.com;
    rewrite ^ https://$server_name$request_uri? permanent;
}

server {
    listen 443 ssl;
    server_name elk.domain.com;

    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/htpasswd.users;

    access_log /var/log/nginx/access_log main;
    error_log /var/log/nginx/error_log error;

    ssl_certificate /etc/pki/tls/certs/elk.chained.crt;
    ssl_certificate_key /etc/pki/tls/certs/elk.key;
    ssl_session_timeout 10m;

    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
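After editing the nginx files, validating the syntax and restarting before testing in a browser:

nginx -t                 # checks the configuration for syntax errors
service nginx restart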

Installed Logstash:

yum -y install logstash
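As far as I understand, Logstash also needs an input and an output beside the filter shown further down; a minimal sketch of the two extra files in /etc/logstash/conf.d/ (the filenames and port 5044 are my own choices):

# /etc/logstash/conf.d/02-beats-input.conf
input {
  beats {
    port => 5044          # Filebeat/Winlogbeat ship here
  }
}

# /etc/logstash/conf.d/30-elasticsearch-output.conf
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"    # one index per beat per day
  }
}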

Running curl -XGET http://localhost:9200/_cluster/health?pretty; echo gives me:
{ "cluster_name" : "elk-stage", "status" : "yellow", "timed_out" : false, "number_of_nodes" : 1, "number_of_data_nodes" : 1, "active_primary_shards" : 1265, "active_shards" : 1265, "relocating_shards" : 0, "initializing_shards" : 0, "unassigned_shards" : 1265, "delayed_unassigned_shards" : 0, "number_of_pending_tasks" : 0, "number_of_in_flight_fetch" : 0, "task_max_waiting_in_queue_millis" : 0, "active_shards_percent_as_number" : 50.0 }

So it seems my setup is correct (the yellow status just means the replica shards cannot be assigned on a single-node cluster).
This is how my filter looks:

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
  if [type] == "log" {
    grok {
      # assumes Zabbix log lines like " 12345:20160929:153512.345 message"
      match => { "message" => "%{NUMBER:zbx_pid}:(?<zbx_timestamp>\d{8}:\d{6}\.\d{3}) %{GREEDYDATA:zbx_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    date {
      match => [ "zbx_timestamp", "yyyyMMdd:HHmmss.SSS" ]
    }
  }
}
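To iterate on grok patterns, running Logstash with an inline config and pasting sample lines on stdin gives quick feedback; a sketch (the path assumes an RPM install under /opt/logstash):

cd /opt/logstash
bin/logstash -e '
input { stdin { } }
filter {
  grok {
    # same Zabbix pattern as above; paste a sample log line to test it
    match => { "message" => "%{NUMBER:zbx_pid}:(?<zbx_timestamp>\d{8}:\d{6}\.\d{3}) %{GREEDYDATA:zbx_message}" }
  }
}
output { stdout { codec => rubydebug } }'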

Problem: I don't know how to take it forward. I mean, I have installed Winlogbeat on Windows and it is sending logs to the ELK machine, but that is not all I want.
I am kind of stuck in my overall understanding of ELK. I did check out the webinars, but they seem too old, talking about Marvel and such, which I don't think I need as of now.
Can somebody point me to a step-by-step walkthrough of the whole ELK stack, please?
I am also stuck on writing custom grok filters. Somewhere on this forum I read that you can write custom patterns, but I just don't know how; any tutorials would be greatly helpful.
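From what I have read, custom grok patterns go into a plain text file (one NAME REGEX pair per line) that the grok filter loads via patterns_dir; a sketch, with a file path and pattern name I made up:

# /etc/logstash/patterns/zabbix  (hypothetical file)
ZBXTIMESTAMP \d{8}:\d{6}\.\d{3}

# then referenced in the filter:
grok {
  patterns_dir => ["/etc/logstash/patterns"]
  match => { "message" => "%{NUMBER:zbx_pid}:%{ZBXTIMESTAMP:zbx_timestamp} %{GREEDYDATA:zbx_message}" }
}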

PS: For debugging I did look at all the logs available, including /var/log/logstash/logstash.log, etc.

Thanks

