Normalize the logs with ELK

Dear ELK Team,

It is my first time with ELK. At my workplace I would like to set up collection, standardization, analysis, correlation, and reporting for the logs of our Palo Alto firewall and Cisco switches. I am in a hurry; I have worked with many management tools, but I can't normalize the logs :slightly_frowning_face:

please help and many thanks to everyone

Hi,

You can use a Logstash server to parse the logs with grok, send them to an Elasticsearch server, and view the data in Kibana dashboards.


Hi @Labidi_Ayoub,

I am not very familiar with configuring Cisco devices, but I would expect it to be similar to what we use for our Juniper syslogs. My assumption is based on this.

Add a syslog input to Logstash. Something like this (the plugin block goes inside an input section):

  input {
    # Juniper log JSON input
    syslog {
      port => 5518
      grok_pattern => "%{SYSLOG5424PRI}%{NUMBER:syslog_severity} %{SYSLOGLINE}"
      type => "juniper"
      add_field => {
        "[@metadata][index]" => "juniper"
        "[@metadata][log_prefix]" => "dc"
      }
    }
  }

Only the port number is really required. Change juniper to cisco (or whatever you want to call it). The rest are optional configurations for my environment and log formats. That grok pattern will probably not work for you, so start without it.

Configure your Cisco devices to use the IP of your Logstash machine and the port for the syslog input.
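On the switch side this is usually a one-line change. A sketch, assuming an IOS-style switch, a hypothetical Logstash address of 192.0.2.10, and the syslog input port from the example above (exact syntax varies by platform and IOS version):

```
! 192.0.2.10 and 5518 are placeholders -- use your Logstash IP and input port
logging host 192.0.2.10 transport udp port 5518
! send messages of severity informational (6) and above
logging trap informational
```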

Then you need at least an output configuration. This is what I use:

output {
  elasticsearch {
    hosts => ["my_elasticsearch_host:9200"]
    index => "%{[@metadata][log_prefix]}-%{[@metadata][index]}-%{+YYYY.MM.dd}"
  }
}

You can see how I use the @metadata fields to name the Elasticsearch indices.

For more parsing of the logs you need to add some filters which depend on the log format.
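For example, a Cisco IOS syslog line typically looks like `<189>123: *Mar  1 18:46:11.001: %SYS-5-CONFIG_I: Configured from console`, and could be parsed with a grok filter along these lines (a sketch only; the field names are my own choice and the pattern must be adapted to your actual log format):

```
filter {
  grok {
    match => {
      "message" => "<%{NONNEGINT:syslog_pri}>%{NONNEGINT:sequence}: \*?%{CISCOTIMESTAMP:timestamp}: %%{DATA:facility}-%{INT:severity}-%{DATA:mnemonic}: %{GREEDYDATA:cisco_message}"
    }
  }
}
```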

Hope that helps.


Hi @A_B, thanks a lot for your reply. Honestly, it's not very clear to me :slightly_frowning_face: I already use Graylog 2.5 for log collection; I would like to use ELK for normalization and log analysis. In my case I work with Cisco switches and a Palo Alto firewall. I will keep searching.

Hi @Labidi_Ayoub,

I know nothing about graylog so I'm making some assumptions :slight_smile:

Sounds like you have log collection taken care of. That is great.

To get started with Logstash, at least read this.

In the section Parsing Logs with Logstash there are a few sample Logstash configurations e.g.

input {
  beats {
    port => "5044"
  }
}
filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}
output {
  stdout { codec => rubydebug }
}

The example is for a Beats input, but you can see the basic structure of a complete Logstash config, which consists of input, filter, and output sections.

To use Logstash with Graylog I guess you would probably use a tcp input. Personally I prefer JSON logs if they are available.

I would start with a Logstash config something like

input {
  tcp {
    port => 12345
    codec => json
  }
}
filter {}
output {
    stdout { codec => rubydebug }
}

With this you would have Logstash listening on TCP port 12345 and expecting JSON data. The output is sent to STDOUT, so your console. Send one test message to Logstash to make sure everything works.
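One way to build and send such a test message from any machine with netcat installed (a sketch; LOGSTASH_HOST is a placeholder for your Logstash machine):

```shell
#!/bin/bash
# Build a one-line JSON test message; the timestamp makes each message unique.
MSG="{ \"log\": \"logstash connectivity test $(date)\" }"
echo "$MSG"
# Uncomment and replace LOGSTASH_HOST to actually send it:
# echo "$MSG" | nc -w2 LOGSTASH_HOST 12345
```

If everything works, the message should appear on the Logstash console via the rubydebug output.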

Then you can add an elasticsearch output and start to add filters that will let you manipulate the logs.

Please @Labidi_Ayoub, no PMs...

What is the current status? In what format can you send logs from your graylog setup to Logstash?

Did you try the above minimal Logstash config?

I am stuck; it is so difficult for me to combine Graylog + ELK.
1/ ELK (Cisco switch logs):
The logging service is activated on the Cisco switch; it sends its logs over UDP on port 1514.

/etc/logstash/conf.d/s-input.conf

input {
  udp {
    port => 1514
    type => "syslog-cisco"
  }
  tcp {
    port => 1514
    type => "syslog-cisco"
  }
}


Please, are there other configurations needed to get the switch logs?

Any errors in the Logstash logs? (Using the same port number for both UDP and TCP is fine; they are separate protocols, so the two listeners do not conflict.)

Anyway, as I mentioned before, I do not know anything about graylog. I'm picturing the logs to go like

Cisco > UDP port 1514 > graylog input

Is that correct?

Your desired outcome would be

Cisco > UDP port 1514 > graylog input > graylog output in JSON to Logstash host > UDP port 1524 > Logstash UDP input > Logstash filter (for normalizing) > Logstash Elasticsearch output > Elasticsearch

Hope that makes some sense... Is that what you want to do?

Please forget Graylog; now I am just focused on ELK. How do I collect Cisco switch logs with ELK? (The switch sends UDP to port 1514.)

input {
  udp {
    port => 1514
    codec => json # add codec if you can set Cisco to output JSON, otherwise remove this line for now
  }
}
filter {}
output {
    stdout { codec => rubydebug }
}

If you start Logstash with that config, you should start seeing logs coming in on your console.

Start Logstash with (if you are on Linux)

/path/to/logstash-version/bin/logstash -f /path/to/config/logstash.conf

Here is what it looks like on one of my machines when it starts up

# cat logstash.conf
input {
  udp {
    port => 1514
  }
}
filter {}
output {
    stdout { codec => rubydebug }
}

# /usr/share/logstash/bin/logstash -f /tmp/logstash.conf
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[INFO ] 2019-05-07 13:55:25.132 [main] writabledirectory - Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
[INFO ] 2019-05-07 13:55:25.142 [main] writabledirectory - Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
[WARN ] 2019-05-07 13:55:25.403 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
[INFO ] 2019-05-07 13:55:25.409 [LogStash::Runner] runner - Starting Logstash {"logstash.version"=>"6.6.2"}
[INFO ] 2019-05-07 13:55:25.414 [LogStash::Runner] agent - No persistent UUID file found. Generating new UUID {:uuid=>"5da5f75f-d3a1-4778-a720-f587073b16b3", :path=>"/usr/share/logstash/data/uuid"}
[INFO ] 2019-05-07 13:55:31.880 [Converge PipelineAction::Create<main>] pipeline - Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[INFO ] 2019-05-07 13:55:32.007 [Converge PipelineAction::Create<main>] pipeline - Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x714946bc@/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:173 sleep>"}
[INFO ] 2019-05-07 13:55:32.022 [Ruby-0-Thread-1: /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/task.rb:22] agent - Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[INFO ] 2019-05-07 13:55:32.056 [[main]<udp] udp - Starting UDP listener {:address=>"0.0.0.0:1514"}
[INFO ] 2019-05-07 13:55:32.093 [[main]<udp] udp - UDP listener started {:address=>"0.0.0.0:1514", :receive_buffer_bytes=>"106496", :queue_size=>"2000"}
[INFO ] 2019-05-07 13:55:32.106 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9601}

Once you are that far you can change the output to elasticsearch
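For example (the host and index names here are assumptions; adjust them to your setup):

```
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "cisco-%{+YYYY.MM.dd}"
  }
}
```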


@A_B thank you for your efforts; I will follow your example. I want to know precisely which component is responsible for log normalization in ELK: Logstash or Elasticsearch?

Logstash, at least in the sense I think you mean. That is what the filter section in the config is for.

These days Elasticsearch ingest nodes can do some things but I personally think of Elasticsearch as only the storage part.
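For completeness, a minimal ingest pipeline looks like this (run in Kibana Dev Tools; the pipeline name and grok pattern are made up for illustration):

```
PUT _ingest/pipeline/cisco-logs
{
  "processors": [
    {
      "grok": {
        "field": "message",
        "patterns": ["%{SYSLOGTIMESTAMP:timestamp} %{GREEDYDATA:msg}"]
      }
    }
  ]
}
```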


@A_B , my cisco switch is configured to send logs on port 514, udp
vi /etc/logstash/conf.d/switch-cisco.conf
(screenshot: switch cisco logs config)

I don't receive the logs yet. Are there any other configurations to do?
Please help.

Is Logstash listening on an IP the switches can connect to?

You can test the UDP input from anywhere.

I have this small script to do that

#!/bin/bash

BLAH=$1
DATE=$(date)
echo "{ \"log\": \"Testing JSON logs $BLAH - $DATE\"}" | nc -u -w2 LOGSTASH_IP 1514
exit

That is sending JSON data, so I also have the json codec set on the Logstash input. The data should be received no matter what; it will just not be parsed correctly. Change LOGSTASH_IP to the IP or hostname you are using. The script expects one command line argument, like:

$ ./script_name.sh test

netcat needs to be installed for it to work as well.


vi /usr/local/bin/test-switch.sh

my cisco switch is configured to send logs on port 514, udp


I can't run the script (OS: CentOS 7 server): "permission not granted"

You are root, so you definitely have the permissions to run it. That script does not need any elevated privileges.

Is the script executable? Also, as I mentioned above, it expects a command line argument (the `$1`). Like

$ ./test-switch.sh hello

Your script name is slightly misleading (it doesn't really matter), since you are testing Logstash on UDP port 1514.
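If the script is not executable, `chmod +x` fixes exactly that kind of "permission denied" error. A throwaway demonstration (the path and script body here are made up for illustration):

```shell
#!/bin/bash
# Create a small script, mark it executable, then run it with one argument.
cat > /tmp/demo-switch.sh <<'EOF'
#!/bin/bash
echo "got argument: $1"
EOF
chmod +x /tmp/demo-switch.sh
/tmp/demo-switch.sh hello   # without chmod +x this fails with "Permission denied"
```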

Even before you try to send any data, you should make sure Logstash is listening on the correct IP and port. Here is an example using netcat; my Logstash is listening on UDP port 5515:

$ nc -v -z -u logstash.example.com 5515
logstash1.log1.hay0.bwcom.net [10.3.255.71] 5515 (?) open


What does ls -la test-switch.sh show in /usr/local/bin/?