How can I set the index & type in Logstash?

Hi All.

I'm switching my current river over to Logstash and want to keep the same mapping that I used with the river.

I tested setting the index in the Logstash config and it works fine. But how can I set the type?
output {
  elasticsearch {
    host => "127.0.0.1"
    index => "INDEX_NAME"
    protocol => "http"
    port => 9200
  }
  stdout { codec => rubydebug }
}

In real production, I must be able to set the index & type from the log data itself.
How can I set that up?
I tried to set the type using the message below, but the type is always set to "log".
{"_type":"TYPE1","_id":"1521","Id":"TEST","NAME":"DUCHEOL"}

Thanks


Hi.
I believe the type can be set in the input section of the Logstash config.
For example, for the file input I have something like this:
input {
  file {
    path => ["/home/constantin/work/rts/RTSErrors*"]
    start_position => "beginning"
    add_field => {
      "host" => "RTS1"
    }
    codec => multiline {
      pattern => "^%{DATIME}"
      negate => "true"
      what => "previous"
    }
    type => "rtserrors" # this is where you set the type
  }
}

As for the index, you can set it in the output section; here is mine:
elasticsearch {
  host => "127.0.0.1"
  cluster => "elk"
  protocol => "http"
  index => "logstash-%{+YYYY.MM.dd}"
}
If you create a different index, other than logstash, then you might want to look into elasticsearch-template.json (search for it under your Logstash installation). This template tells Elasticsearch which fields need to be analyzed, among other things.
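As a rough sketch, a minimal custom template could look something like this (the customer-* pattern and the NAME field are just placeholder examples, not from your setup):

{
  "template": "customer-*",
  "mappings": {
    "_default_": {
      "properties": {
        "NAME": { "type": "string", "index": "not_analyzed" }
      }
    }
  }
}

The elasticsearch output also has a template option that can point at a file like this, if you don't want to register the template by hand.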

Good luck and I hope that this is what you were asking for.


Thanks, Constantin.

Your answer will work if we only use one index & type.
But my current river uses multiple indices & types. (The index is used for the customer identity and the type is used for the element name.)

So we have to find a way to specify the index & type in the input data instead of setting them in the Logstash config.

I'm also curious whether we can update data using Logstash.
I include an _id field in the input message, but an auto-generated _id is set on the document instead.
So the document isn't updated in my test.

You can use something like this:

index => "%{index}-%{+YYYY.MM.dd}"
type => "%{type}"

Where you would be adding a type and an index value during input or filtering.
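For example, something like this in the filter section would attach those fields so the sprintf references above can resolve them (the values here are just placeholders):

filter {
  mutate {
    add_field => {
      "index" => "customer1"   # placeholder for the customer identity
      "type"  => "element_a"   # placeholder for the element name
    }
  }
}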

Thanks Constantin & warkolm.

Now I can set the index, type, and id using the config below.

input {
  stdin {
    codec => "json"
    type => "%{type}"
  }
}
output {
  elasticsearch {
    host => "127.0.0.1"
    index => "%{index}"
    document_id => "%{_id}"
    protocol => "http"
    port => 9200
  }
  stdout { codec => rubydebug }
}
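To test a config like this, a JSON document can be piped straight into stdin, e.g. (the config filename is just an example):

echo '{"_id":"1521","index":"TEST_INDEX","type":"TYPE1","NAME":"DUCHEOL"}' | bin/logstash -f logstash-es.conf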

My main problem was that I didn't set codec => "json" in the input clause, so my whole JSON document was treated as one string instead of individual fields.
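If the codec can't be set on the input for some reason, the json filter is an alternative way to parse the raw message into individual fields; a minimal sketch:

filter {
  # Parse the raw "message" string into separate event fields
  json {
    source => "message"
  }
}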

I have one more question.
It's the same issue that I noticed with the river.

Is there any way to update just the fields which are included in the input?

Below is my test.
1st input: {"_id":"TEST","index":"TEST_INDEX","type":"TEST2","name":"UPDATE"}
2nd input: {"_id":"TEST","index":"TEST_INDEX","type":"TEST2","Description":"TEST123"}

I expected the result below:
"_source":{"_id":"TEST","index":"TEST_INDEX","type":"TEST2","name":"UPDATE","Description":"TEST123","@version":"1","@timestamp":"2015-05-21T14:16:58.890Z","host":"dkim.local"}

But the actual response is below:
{"_id":"TEST","index":"TEST_INDEX","type":"TEST2","Description":"TEST123","@version":"1","@timestamp":"2015-05-21T14:16:58.890Z","host":"dkim.local"}

As you can see, the whole document is replaced by the next input, and I want to keep the fields that aren't included in it.

Is there any way to keep the old fields and only update the fields which are included in the input?
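For reference, newer versions of the elasticsearch output plugin support an update action with doc_as_upsert, which merges the incoming fields into the existing document instead of replacing it; a sketch, assuming a plugin version that has these options:

output {
  elasticsearch {
    index => "%{index}"
    document_id => "%{_id}"
    action => "update"    # partial update instead of a full re-index
    doc_as_upsert => true # create the document if it doesn't exist yet
  }
}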

Regards