Couple of issues with Logstash input file :(

Hello.
Got a couple of questions relating to how the input file really works.
I would like to brand (change the type of) certain inputs, but the system doesn't seem to be picking up my 'type' option. I'm also having trouble with my tweets: the system just puts them all in one file (messages).
Below is my config. Is there anything wrong with the config that you guys can see?

Sorry, I didn't attach the config file.

root@log01:~# cat /etc/logstash/conf.d/02-logstash-input.conf
input {
twitter {
consumer_key => "key"
consumer_secret => "secret"
oauth_token => "token"
oauth_token_secret => "token_secret"
keywords => ["elk","logstash","elasticsearch","kibana"]
type => tweet
}
beats {
port => 5044
ssl => true
ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder-log01.crt"
ssl_key => "/etc/pki/tls/private/logstash-forwarder-log01.key"
type => beats
}
tcp {
port => 5514
type => syslog5514
}
tcp {
port => 5515
type => syslog5515
}
}

It's supposed to be a string - https://www.elastic.co/guide/en/logstash/current/plugins-inputs-file.html#plugins-inputs-file-type - you should wrap it in quotes.
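For reference, the quoted form would look something like this (keys redacted as in your original config):

```
twitter {
  consumer_key => "key"
  consumer_secret => "secret"
  oauth_token => "token"
  oauth_token_secret => "token_secret"
  keywords => ["elk","logstash","elasticsearch","kibana"]
  type => "tweet"   # quoted string, per the docs linked above
}
```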

Hi, thanks.
Did that, but still in my Kibana screen, it shows type as:

%{[@metadata][type]}

It was doing the same before.

The [@metadata][type] field is set by Beats. The type option on Logstash inputs ends up in the type field.
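In other words (a minimal sketch, using the port from your config), the two fields come from different places:

```
beats {
  port => 5044
  # This value lands in the top-level [type] field of each event;
  # [@metadata][type] is populated separately by the Beats shipper itself.
  type => "beats"
}
```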

Ahh yes I found that in my LS Output file:

30-elasticsearch-output.conf
output {
elasticsearch {
hosts => ["els-node2:9200","els-node3:9200","els-node4:9200"]
sniffing => true
manage_template => false
index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
document_type => "%{[@metadata][type]}"
user => "user"
password => "password"
}
}

Do you know what it should be?
This is a direct copy from this site, from the Logstash configuration part, from when I built the system up.

Cheers.

There is no definite right or wrong here. It's up to you how you want to organize and set the document types. If you're setting a good type value in all your Beats shippers you can use that, but if you have other inputs you'll have to set the type in the Logstash configuration. Something like

filter {
  if [@metadata][type] {
    mutate {
      replace => {
        "type" => "%{[@metadata][type]}"
      }
    }
  }
}

might be useful to copy the Beats type into the type field when it's set.

Hi, thanks for this. I have managed to split my logs by type and index, so when I look at my indexes via 'elasticsearch _head' I can see separate indexes for twitter, syslog5514 and syslog5515.

For some reason most of my records are now being written to the correct newly created type/index, but others are still coming through with the old format (which I do not want).

Below is an extract of my input/filter/output config, along with a couple of examples of what I get when I search Kibana, showing a good format and a bad format.

Any ideas why some logs are correctly re-written, but others are not?

Thanks in advance.

------ Input file --------

input {
twitter {
consumer_key => "key"
consumer_secret => "secret"
oauth_token => "token"
oauth_token_secret => "token-secret"
keywords => ["elk","logstash","elasticsearch","kibana"]
type => "twitter-log02"
}
beats {
port => 5044
type => "beats-log02"
}
tcp {
port => 5514
type => "syslog5514-log02"
}
tcp {
port => 5515
type => "syslog5515-log02"
}
}
## ------ Filter file --------

filter {
if [type] == "syslog" {
grok {
match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
add_field => [ "received_at", "%{@timestamp}" ]
add_field => [ "received_from", "%{host}" ]
}
syslog_pri { }
date {
match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
}
}
}

## ------ Output file --------

output {
if [type] == "twitter-log02" {
elasticsearch {
hosts => ["els03:9200","els02:9200"]
sniffing => true
manage_template => false
index => "twitter-%{+YYYY.MM.dd}"
document_type => "twitter-log02"
}
} if [type] == "syslog5514-log02" {
elasticsearch {
hosts => ["els03:9200","els02:9200"]
sniffing => true
manage_template => false
index => "syslog5514-%{+YYYY.MM.dd}"
document_type => "syslog5514-log02"
}
} if [type] == "syslog5515-log02" {
elasticsearch {
hosts => ["els03:9200","els02:9200"]
sniffing => true
manage_template => false
index => "syslog5515-%{+YYYY.MM.dd}"
document_type => "syslog5515-log02"
}
} else {
elasticsearch {
hosts => ["els03:9200","els02:9200"]
sniffing => true
manage_template => false
index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
document_type => "%{[@metadata][type]}"
}
}
}

----------------- end of output file.

Right format (_index and _type are looking good).

{
"_index": "syslog5514-2016.06.04",
"_type": "syslog5514-log01",
"_id": "AVUbMu6b4mUVQoJOugLT",
"_score": null,
"_source": {
"message": "<191>2289: 002129: Jun 4 21:36:09.475 AEST: IP ARP: rcvd req src 192.168.10.65 9410.3e4e.f6d1, dst 192.168.10.14 Vlan10",
"@version": "1",
"@timestamp": "2016-06-04T11:36:10.494Z",
"host": "192.168.10.252",
"port": 60624,
"type": "syslog5514"
},
"fields": {
"@timestamp": [
1465040170494
]
},
"highlight": {
"host": [
"@kibana-highlighted-field@192.168.10.252@/kibana-highlighted-field@"
],
"type": [
"@kibana-highlighted-field@syslog5514@/kibana-highlighted-field@"
]
},
"sort": [
1465040170494
]
}

And in the wrong format

{
"_index": "%{[@metadata][beat]}-2016.06.04",
"_type": "%{[@metadata][type]}",
"_id": "AVUbMu9Do0XXlD820rpF",
"_score": null,
"_source": {
"message": "<191>2290: 002130: Jun 4 21:36:09.475 AEST: IP ARP: ignored gratuitous arp src 192.168.10.65 9410.3e4e.f6d1, dst 192.168.10.14 e8ba.7099.f541, interface Vlan10",
"@version": "1",
"@timestamp": "2016-06-04T11:36:10.494Z",
"host": "192.168.10.252",
"port": 60624,
"type": "syslog5514"
},
"fields": {
"@timestamp": [
1465040170494
]
},
"highlight": {
"host": [
"@kibana-highlighted-field@192.168.10.252@/kibana-highlighted-field@"
],
"type": [
"@kibana-highlighted-field@syslog5514@/kibana-highlighted-field@"
]
},
"sort": [
1465040170494
]
}

A screen print may help to show the problem I'm having.



Hi. This is fixed!
Had to modify my output conditionals a bit; I was missing the 'else if' ...

e.g.

} else if [type] == "syslog5514-log01" {
elasticsearch {
hosts => ["els02:9200","els03:9200"]
sniffing => true
manage_template => false
index => "syslog5514-%{+YYYY.MM.dd}"
document_type => "syslog5514-log01"
}
} else if [type] == "syslog5515-log01" {
elasticsearch {
hosts => ["els02:9200","els03:9200"]
sniffing => true
manage_template => false
index => "syslog5515-%{+YYYY.MM.dd}"
document_type => "syslog5515-log01"
}
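For anyone hitting the same thing, the overall shape of the output ends up roughly like this (a sketch with the elasticsearch settings abbreviated; host names and type values taken from the excerpts above):

```
output {
  # Each branch after the first must be "else if". With separate plain
  # "if" statements, the trailing "else" belongs only to the last "if",
  # so an event that matched an earlier branch also falls into the final
  # "else" and gets indexed a second time with the unresolved
  # %{[@metadata][beat]} / %{[@metadata][type]} literals.
  if [type] == "twitter-log02" {
    elasticsearch { hosts => ["els02:9200","els03:9200"] index => "twitter-%{+YYYY.MM.dd}" }
  } else if [type] == "syslog5514-log02" {
    elasticsearch { hosts => ["els02:9200","els03:9200"] index => "syslog5514-%{+YYYY.MM.dd}" }
  } else if [type] == "syslog5515-log02" {
    elasticsearch { hosts => ["els02:9200","els03:9200"] index => "syslog5515-%{+YYYY.MM.dd}" }
  } else {
    elasticsearch { hosts => ["els02:9200","els03:9200"] index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}" }
  }
}
```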