Weird Log Output


(Marcos Felix) #1

I am getting weird log output, and it seems to be an issue with grok.

Here is a sample of the logs:

July 10th 2018, 09:31:00.427 type:random_logs @timestamp:July 10th 2018, 09:31:00.427 tags:grokparsefailure host:10.130.33.29 @version:1 message:V4\u0010\x8C\x88e\xAB\xF4j\x86 \u001D\u0001B\xA5)Ƿ\xF5\xD1!\u0010\u0015\x82\r\xF3e\x82&\xD3\t\x90\xAE\xA7nm\xF7\u000EX\x85\xC3\u0004OxF\xD1\xD5$ц\a3\xD3\t\xDE\u0019\u0010̨\u001D\xE2-2\xB8\u0003\x89\xAE1\x92Am\x92\u001C\xA41D2\u001E\x98\xEA-\xEE\xD1\xD9\u001Eԏ\xC6\u0003St\xA4\x89J\xDCPA\u0012ˊ\xC9,\xFA\x95\u0015\x8FPe]a\x8F\xC9\xE5?Ir\xF4\xC3Nr9{2;\xDEϿ\xB4\u000Fr\xBC0\xE3\xEC\xE4\xD9\u001E\x87\u001A\xCDAyN4\xA0s\xE9\u0002\xF3\u0002\xE6'\rRc1\u0014C\x9D\u0006\u0006F\xF3rc\xD2\xC9=\xF3\xD3\xCAYf\b\u0002\xADAX\xE4\x89G\f\xEEdFơ\xA1\xB9c=\xBE\x8B\xF8\x8Cc\x9624\x9F.\xE9\u0018\xEC\xEB`\xF2\xB87\xCD\xE1p\u000E\xBFY>\xC2\xFC-\x8E~|:\xE9\x8Ah"u{\u0010u\x9B\rԡk784w\u0013R\u0019U\xE8x\xD8\xDD:.\u001F\xB9\e\xCC\xC1;\xD2\u001D\u000F\xBB\xD7A\x8E73{8\xEC\xBEg!\xAA\xA7jm\xCA\u0002Â\x85/\xC7\xD9Ń\xC3U\xB8\xDF\xFE\v\u0000\u0000\xFF\xFF\x8A\u0013\xA3& port:52,824 _id:xphQg2QB_Ewnis7Rrwiy _type:doc _index:logstash-2018.07.10 _score: -

Any clue?


(Marcos Felix) #2

hello?


(Magnus Bäck) #3

What's the origin of this message? Looks like something from a binary protocol that hasn't been decoded.


(Magnus Bäck) #4

hello?

Yes? If you're trying to annoy people you're on the right track.


(Marcos Felix) #5

Going to ignore your last message because it was uncalled for.
This is from a log output in Kibana:


(Magnus Bäck) #6

Going to ignore your last message because it was uncalled for.

It was snarky, yes, but pinging threads after just a couple of hours is also uncalled for.

This is from a log output in Kibana:

Please answer my previous question. How does this piece of data end up in Logstash? Via the network it seems, but what's on the other end?


(Marcos Felix) #7

It's my Linux server. I have the ELK stack and Beats installed on it and am accessing Kibana via the web.


(Magnus Bäck) #8

Okay. And what does your Logstash pipeline configuration look like?


(Marcos Felix) #9

Quite simple:

- pipeline.id: mypipeline_1
  path.config: "/etc/logstash/conf.d/*.conf"

(Magnus Bäck) #10

But what's in /etc/logstash/conf.d/*.conf?


(Marcos Felix) #11

These configs:
Logstash-Beats.conf:

input {
  beats {
    port => 5044
  }
}
output {
  elasticsearch {
    hosts => "localhost:9200"
    manage_template => false
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
}

Logstash-Apache.conf:

input {
  file {
    path => "/var/log/logstash/*_log"
  }
}
filter {
  if [path] =~ "access" {
    mutate { replace => { type => "apache_access" } }
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
    date {
      match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
    }
  } else if [path] =~ "error" {
    mutate { replace => { type => "apache_error" } }
  } else {
    mutate { replace => { type => "random_logs" } }
  }
}
output {
  elasticsearch { hosts => ["localhost:9200"] }
  stdout { codec => rubydebug }
}

Logstash-syslog.conf:

input {
  tcp {
    port => 5000
    type => syslog
  }
}
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
output {
  elasticsearch { hosts => ["localhost:9200"] }
  stdout { codec => rubydebug }
}

demo-metrics-pipeline.conf:

input {
  beats {
    port => 5044
  }
}
output {
  elasticsearch {
    hosts => "localhost:9200"
    manage_template => false
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
}

Logstash-simple.conf:

input { stdin { } }
filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  date {
    match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}
output {
  elasticsearch { hosts => ["localhost:9200"] }
  stdout { codec => rubydebug }
}

(Magnus Bäck) #12

So which of these inputs is setting type to "random_logs" and port to 33378? Are you sure there isn't an extra tcp or udp input somewhere? Or is it the beats input, receiving data from Filebeat that's reading a .gz file or something? You'll have to do some debugging on your end.

Side note: These configuration files are almost certainly incorrect and are missing lots of conditionals to prevent events from one file from "leaking" into the outputs defined in another file. None of that is responsible for the problem you're asking about, though.
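A common pattern for that (a sketch only, not tested against your setup) is to set a type on each input and guard the filters and outputs with a conditional on it, as your syslog file already does for its filter:

input {
  tcp {
    port => 5000
    type => "syslog"
  }
}
output {
  # Only events from the tcp input above reach this output;
  # events from other files' inputs are ignored here.
  if [type] == "syslog" {
    elasticsearch { hosts => ["localhost:9200"] }
  }
}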


(Marcos Felix) #13

Yeah, I got the file configurations from the documentation; it does state they're simple/standard configurations.
The syslog config has a TCP input, and there was also a UDP input, but I deleted it.

"Or is it the beats input, receiving data from Filebeat that's reading a .gz file or something?"

How can I check this? I'm fairly new to this.

Funny thing is, with Winlogbeat I get logs just fine; they're readable.


#14

I don't think you understand what this does. It concatenates all those configuration files and creates one pipeline to run the combined configuration. That means events from your beats input go through the filter in Logstash-Apache.conf, where they get type set to random_logs, and then through Logstash-simple.conf, which adds a _grokparsefailure tag. It also means you should have three copies of each event in your logstash indexes and another two in your beats indexes. As Magnus implied, you need to make all this processing conditional, as you have done in Logstash-syslog.conf. Alternatively, run each configuration in its own pipeline.
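If you go the separate-pipelines route, pipelines.yml would look roughly like this (pipeline ids are illustrative, and the paths assume your files live in /etc/logstash/conf.d):

# One pipeline per configuration file, so events from each input
# only pass through that file's own filters and outputs.
- pipeline.id: beats
  path.config: "/etc/logstash/conf.d/Logstash-Beats.conf"
- pipeline.id: apache
  path.config: "/etc/logstash/conf.d/Logstash-Apache.conf"
- pipeline.id: syslog
  path.config: "/etc/logstash/conf.d/Logstash-syslog.conf"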


(Magnus Bäck) #15

How can I check this? I'm fairly new to this.

Start by commenting out pieces of the configuration to narrow down which input is producing these messages. If it turns out to be the beats input, look into which Beats clients are sending messages and narrow it down to a single client.
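As a complement to commenting things out, one quick way to trace events back to their input (a sketch; tags is a common option supported by all Logstash inputs) is to give each input a distinct tag, which then shows up in the event's tags field in Kibana:

input {
  # Each input gets its own tag so the originating input
  # is visible on every event it produces.
  beats { port => 5044 tags => ["from_beats"] }
  tcp   { port => 5000 tags => ["from_tcp"] }
}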


(system) #16

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.