Need help configuring ELK to receive logs from log4net/udpappender

I have set up a UDP appender on a Windows 7 machine via the VSTS EnvLog.Config file. An ELK stack is installed on a CentOS 6.5 instance.
Here's my /etc/logstash/conf.d/logstash.conf:

Input section:

    input {
      udp {
        port => 8081
        type => "log4net"
      }
    }

Filter section:

    filter {
      if [type] == "iis" {
        grok {
          match => { "message" => "%{TIMESTAMP_ISO8601:log_timestamp} %{WORD:S-SiteName} %{NOTSPACE:S-ComputerName} %{IPORHOST:S-IP} %{WORD:CS-Method} %{URIPATH:CS-URI-Stem} (?:-|\"%{URIPATH:CS-URI-Query}\") %{NUMBER:S-Port} %{NOTSPACE:CS-Username} %{IPORHOST:C-IP} %{NOTSPACE:CS-Version} %{NOTSPACE:CS-UserAgent} %{NOTSPACE:CS-Cookie} %{NOTSPACE:CS-Referer} %{NOTSPACE:CS-Host} %{NUMBER:SC-Status} %{NUMBER:SC-SubStatus} %{NUMBER:SC-Win32-Status} %{NUMBER:SC-Bytes} %{NUMBER:CS-Bytes} %{NUMBER:Time-Taken}" }
        }
      }
    }

Output section:

    output {
      elasticsearch {
        host     => "localhost"
        port     => "9200"
        protocol => "http"
      }
      stdout {
        codec => rubydebug
      }
    }

And here's my appender in EnvLog.Config

<appender name="UdpAppender" type="log4net.Appender.UdpAppender">
      <param name="RemoteAddress" value="ELK_SERVER_IP" />
      <param name="RemotePort" value="8081" />
      <layout type="log4net.Layout.PatternLayout, log4net">
        <conversionPattern value="%date [%thread] %-5level - %property{log4net:HostName} - ApplicationName - %logger - %message%newline" />
      </layout>
    </appender>

A tcpdump of UDP packets from VSTS to the ELK server port (8081) does show packets carrying INFO, DEBUG, etc. messages. But I'm not able to figure out whether Logstash is reading them, whether it is forwarding them to ES, or how to get Kibana to display them from ES.

I am also not sure how to set up an index pattern in Kibana, or whether the index has to be specified in the appender and Kibana then pointed at it. Kibana has the default logstash-* index pattern, which doesn't show any data; using * as the index pattern doesn't show anything either. Please advise how to get Kibana to see what the UdpAppender is sending (through Logstash and ES). Is anything wrong with my Logstash conf file?

You can first check whether any indices were created in ES by running a cat indices request on the ES server: https://www.elastic.co/guide/en/elasticsearch/reference/current/cat-indices.html
That way we can pinpoint where in your data stream the problem lies: between your logs and ES, or between ES and Kibana.

Here's the output:

# curl -XGET 'localhost:9200/_cat/indices/*?v&s=index&pretty'
health status index               pri rep docs.count docs.deleted store.size pri.store.size
yellow open   logstash-2017.06.09   5   1          3            0     14.7kb         14.7kb
yellow open   .kibana               1   1          2            0      8.3kb          8.3kb
yellow open   logstash-2017.06.08   5   1        419            0      150kb          150kb

When I started this post I was only getting the .kibana index. I left it running overnight and now I see those two other indices as well. The @timestamp option now shows up under the Indices tab in Kibana, and I can see the heartbeat logs I configured in /etc/logstash/conf.d/logstash.conf. These heartbeat logs arrive every 10 seconds, but Kibana only shows them after 4 hours.
If I select
Last 15 minutes / Last 30 minutes / Last 1 hour
under the Discover tab, nothing shows up. Is there some setting to change that, so the latest logs show up every second?

Also, the timestamp in Kibana is 5 hours ahead of the Central time we follow.


{
       "message" => "2017-06-09 05:02:16,751 [30] DEBUG - WS206 - ApplicationName - someapp.Logging.someappLogger - REQ_END POST: /api/client/reportbuilder/getReportFilterValueData  [200, 49 ms]\r\n",
      "@version" => "1",
    "@timestamp" => "2017-06-09T10:00:39.902Z",
          "type" => "log4net",
          "host" => "192.168.x.xxx",
          "tags" => [
        [0] "_grokparsefailure"
    ]
}
{
       "message" => "2017-06-09 05:02:19,270 [48] DEBUG - WS206 - ApplicationName - someapp.Logging.someappLogger - REQ_START POST: /api/client/reportbuilder/getReportFilterValueData \r\n",
      "@version" => "1",
    "@timestamp" => "2017-06-09T10:00:42.422Z",
          "type" => "log4net",
          "host" => "192.168.x.xxx",
          "tags" => [
        [0] "_grokparsefailure"
    ]
}

I have started getting the logs from the Visual Studio UdpAppender in Logstash and Kibana. The logs look like the above; my filter is shown in the main post at the top of this page.

A couple of issues. First, the logging time and the actual time the log was generated are different (I guess ELK uses UTC while our application logs use Central time). How do I fix that?

Second, most of our logs will be exceptions from Visual Studio, IIS servers, ASP.NET applications, and web apps. What is the best filter for these logs?

And what should I do about the [0] "_grokparsefailure" tag?

Kibana by default uses your browser's timezone. You can change that on the Management -> Advanced Settings page; the option is named dateFormat:tz. This will probably fix your heartbeat logs issue as well.
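Alternatively, you could normalize the timestamp on the Logstash side with a date filter, so @timestamp reflects when the event was actually logged rather than when Logstash received it. This is only a sketch: it assumes your grok pattern has already extracted the log4net timestamp into a field named log_timestamp, and that your servers log in US Central time; adjust both to your setup.

    filter {
      if [type] == "log4net" {
        date {
          # Parses a timestamp like "2017-06-09 05:02:16,751" (log4net %date default)
          match    => [ "log_timestamp", "yyyy-MM-dd HH:mm:ss,SSS" ]
          # Assumed zone for your Central-time servers; change if needed
          timezone => "America/Chicago"
          target   => "@timestamp"
        }
      }
    }

With this in place, Kibana's time picker should line up with the times inside the log messages.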

As for the grok parse error, I assume something is failing in the Logstash filter, but I'm not good enough with grok to debug it. Maybe somebody on the Logstash forum can help with that.
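That said, based on the conversionPattern in your appender (%date [%thread] %-5level - %property{log4net:HostName} - ApplicationName - %logger - %message%newline), a grok pattern along these lines might match your messages. The field names here are my own choice, not anything standard:

    filter {
      if [type] == "log4net" {
        grok {
          match => { "message" => "%{TIMESTAMP_ISO8601:log_timestamp} \[%{NUMBER:thread}\] %{LOGLEVEL:level}\s+- %{HOSTNAME:hostname} - %{DATA:application} - %{NOTSPACE:logger} - %{GREEDYDATA:log_message}" }
        }
      }
    }

Note the \s+ after the level: %-5level pads short levels like INFO with trailing spaces, so a single literal space there would fail for some levels. It's worth iterating on the pattern against a few sample lines in a grok debugger before deploying it.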
