I can't transform any field in Logstash that is ingested via the JDBC input. All fields are inserted into Elasticsearch correctly, with the proper names and data, but none of my filter transformations have any effect on them. Here is the configuration:
input {
  jdbc {
    jdbc_driver_library => "/usr/share/logstash/config/mssql-jdbc-12.10.0.jre11.jar"
    jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
    jdbc_connection_string => "jdbc:sqlserver://******:1433;databaseName=******;encrypt=true;trustServerCertificate=true;integratedSecurity=true;user=******;password=******;authenticationScheme=NTLM;domain=xy;authentication=NotSpecified"
    jdbc_user => "*****"
    jdbc_password => "******"
    statement => "select top 1000 x, z, y, Created, code, w, status, z, f, d, a, b from ******* where Created > :sql_last_value ORDER by Created ASC"
    tracking_column => "created"
    tracking_column_type => "timestamp"
    use_column_value => true
    schedule => "*/15 * * * * *"
    last_run_metadata_path => "/usr/share/logstash/data/logstash_jdbc_ms_last_run"
  }
}
filter {
  if [status] == "60000" {
    mutate {
      add_field => { "wa_status_word" => "OK" }
    }
  } else if [status] == "60001" {
    mutate {
      add_field => { "wa_status_word" => "Nový" }
    }
  } else if [status] == "60002" {
    mutate {
      add_field => { "wa_status_word" => "Probíhá" }
    }
  } else if [status] == "60003" {
    mutate {
      add_field => { "wa_status_word" => "Hotovo" }
    }
  } else if [status] == "60004" {
    mutate {
      add_field => { "wa_status_word" => "Chyba" }
    }
  } else if [status] == "60005" {
    mutate {
      add_field => { "wa_status_word" => "Upozornění" }
    }
  }
if "0" == [code] {
mutate {
add_field => { "code_word" => "aktivní" }
}
} else if "1" == [code] {
mutate {
add_field => { "code_word" => "neaktivní" }
}
}
}
output {
  stdout { codec => rubydebug }
  elasticsearch {
    xyz...
  }
}
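As an aside, to test the filter logic in isolation from the database, I can swap the JDBC input for a synthetic event like this (a minimal sketch using the generator input; the hard-coded status and code values are made up to mimic one row, and the filter and output blocks stay the same as above):

input {
  generator {
    count => 1
    # synthetic stand-in for one JDBC row; these literal values are invented for the test
    add_field => { "status" => "60003" "code" => "0" }
  }
}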
And here is a sample document in Elasticsearch (given status 60003, I would expect the filter to add wa_status_word => "Hotovo", but it never appears):
{
  "_index": "index-2025",
  "_id": "******MILHAqJKaal",
  "_version": 1,
  "_score": 0,
  "_source": {
    "b": null,
    "a": 0,
    "c": "somedata",
    "status": 60003,
    "appname": "db-crm",
    "x": 0,
    "y": "somedata",
    "created": "2025-04-26T09:27:06.000Z",
    "@timestamp": "2025-04-26T09:27:16.017386975Z",
    "@version": "1",
    "code": 0,
    "z": "somedata",
    "d": "somedata",
    "e": null,
    "f": 0
  },
  "fields": {
    "a": [0],
    "y": ["somedata"],
    "x.keyword": ["somedata"],
    "@version.keyword": ["1"],
    "appname.keyword": ["appnamexy"],
    "code": [0],
    "otherdata.keyword": ["data"],
    "x": [0],
    "created": ["2025-04-26T09:27:06.000Z"],
    "e": ["somedata"],
    "f": [0],
    "w.keyword": ["data"],
    "y.keyword": ["data"],
    "@timestamp": ["2025-04-26T09:27:16.017Z"],
    "appname": ["appnamexy"],
    "@version": ["1"],
    "status": [60003],
    "d": ["data"],
    "i": ["data"]
  }
}
Please don't be confused by the inconsistency between the fields in the select statement and the fields in Elasticsearch; I had to anonymize the names and data by hand. The only fields that matter here are "status" and "appname".
The problem is that Logstash does not seem to see the "status" field; it is somehow ignored, so the new field "wa_status_word" is never created. The filter itself must be fine syntactically, since there are no errors in the logs, and it does create the "appname" field according to its condition (that part of the filter is not shown here). Any ideas what is wrong?
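One thing I still plan to try is logging the runtime type of the field with a throwaway ruby filter, in case the value is not the string my conditionals compare against ("status_debug_type" is just an arbitrary name I picked for the debug field):

filter {
  ruby {
    # temporary debugging: expose the Ruby class of [status] in the rubydebug output
    code => 'event.set("status_debug_type", event.get("status").class.to_s)'
  }
}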