How to map fields from an Oracle table to index fields in Logstash?

Hello, it is not clear to me how to tell Logstash which field from my Oracle table corresponds to a specific field of my predefined Elasticsearch index.

For example, I have the following SQL configured in the .conf file using the jdbc input in Logstash:

SELECT id_log, nombre_procedimiento, mensaje, detalle, fecha, TO_CHAR(TRUNC(fecha), 'YYYY-MM-DD') fechadia FROM STGCBPRD.STG_FT_LOG WHERE fecha BETWEEN TRUNC(SYSDATE) - 1 AND TRUNC(SYSDATE)

On the other side, I have an index (named: logbim) with the following mapping structure:
{
  "settings": {
    "index": {
      "number_of_shards": 1,
      "number_of_replicas": 1
    }
  },
  "mappings": {
    "properties": {
      "@timestamp": {
        "type": "date"
      },
      "@version": {
        "type": "text",
        "fields": {
          "keyword": {
            "type": "keyword",
            "ignore_above": 256
          }
        }
      },
      "detalle": {
        "type": "text",
        "fields": {
          "keyword": {
            "type": "keyword",
            "ignore_above": 256
          }
        }
      },
      "fecha": {
        "type": "date"
      },
      "fechadia": {
        "type": "keyword"
      },
      "id_log": {
        "type": "long"
      },
      "mensaje": {
        "type": "text",
        "fields": {
          "keyword": {
            "type": "keyword",
            "ignore_above": 256
          }
        }
      },
      "nombre_procedimiento": {
        "type": "text",
        "fields": {
          "keyword": {
            "type": "keyword",
            "ignore_above": 256
          }
        }
      }
    }
  }
}

The question is: how do I specify in Logstash that, for example, the field "id_log" from the Oracle SQL corresponds to the field "id_log" in the index predefined in Elasticsearch? Does Logstash match by name?

Below is my .conf file for Logstash. Thank you!

input {
  jdbc {
    jdbc_validate_connection => true
    jdbc_connection_string => "jdbc:oracle:thin:@172.26.150.124:1525/BIPRD"
    jdbc_user => "DWHCBPRD"
    jdbc_password => "xxxxxxxx"
    jdbc_driver_library => "D:\Temp\SW\ELK\OJDBC-Full\ojdbc7.jar"
    jdbc_driver_class => "Java::oracle.jdbc.driver.OracleDriver"
    statement => "SELECT id_log, nombre_procedimiento, mensaje, detalle, fecha, TO_CHAR(TRUNC(fecha), 'YYYY-MM-DD') fechadia FROM STGCBPRD.STG_FT_LOG WHERE fecha BETWEEN TRUNC(SYSDATE) - 1 AND TRUNC(SYSDATE)"
  }
}

output {
  elasticsearch {
    hosts => ["http://127.0.0.1:9200"]
    index => "logbim"
    #user => "elastic"
    #password => "changeme"
  }
}

If you do "SELECT id_log, nombre_procedimiento, ..." then your events will have fields called id_log, nombre_procedimiento, etc., and the documents written to Elasticsearch will also have fields with those names.
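
For example, given that SELECT, a single row might come through as an event like this (hypothetical values, shown in rubydebug-style output; note that the jdbc input lowercases column names by default via lowercase_column_names => true, which is why the Oracle columns arrive as id_log rather than ID_LOG):

{
            "id_log" => 12345,
  "nombre_procedimiento" => "CARGA_DIARIA",
           "mensaje" => "Proceso finalizado",
           "detalle" => "Detalle del proceso",
             "fecha" => 2024-05-02T10:15:00.000Z,
          "fechadia" => "2024-05-02",
          "@version" => "1",
        "@timestamp" => 2024-05-02T11:00:00.000Z
}

Since these event field names already match the field names in the logbim mapping, no extra configuration is needed: the elasticsearch output writes each event's fields to the document as-is.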

I don't have events; Logstash is working with the index (in Elasticsearch) and the .conf file (for Logstash) defined as I described previously.
What I mean is: how can I tell Logstash to take the value from the field "fecha" in Oracle and put it into the field "fechadia" in the Elasticsearch index? Can you point me to any reference on this?

You do have events. Each row fetched by the jdbc input is an event.

If you want to copy a field to another field, or rename it, you can use a mutate filter.
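
A minimal sketch, assuming the field names from the query above (choose copy or rename depending on whether the source field should be kept):

filter {
  mutate {
    # copy the value of "fecha" into "fechadia"; "fecha" is kept
    copy => { "fecha" => "fechadia" }

    # or, to move it instead ("fecha" is removed):
    # rename => { "fecha" => "fechadia" }
  }
}

Note that copy overwrites any existing fechadia value on the event (here, the one produced by the SQL alias).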

Thank you for your answer. I'm sorry, but I'm new and didn't know that each row produced by the input section of the .conf file is called an event!
Anyway, if you look at the SQL statement, all the Oracle fields are in one line.
What is the syntax to tell Logstash to take the field fecha from the source and put it into the fechadia target field? I'm assuming that Logstash by default reads the fields from the source and matches them to target fields with the same names. If that is true, how can it be changed?

As I said, you can use a mutate filter.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.