What are the default data types when Logstash reads from SQL DB?


(Amruth) #1

Hi,

Can someone please tell me what the default data types are when Logstash reads data from a SQL database? Will they be the same as declared in the SQL Server database? And when I want to send the data to Elasticsearch, do I need to use the mutate filter's convert option?

Thanks


(Mark Walkom) #2

No, Logstash has no concept of the types in the database.

Yes, or use a template or mapping.
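A minimal sketch of the mutate/convert approach. The field names ("ID", "price") are hypothetical placeholders, not from the original question:

```
filter {
  mutate {
    # Cast fields explicitly so Elasticsearch's dynamic mapping
    # doesn't have to guess the type from the first document it sees.
    convert => {
      "ID"    => "integer"
      "price" => "float"
    }
  }
}
```

This only influences what Logstash sends; the authoritative fix is still an explicit mapping on the Elasticsearch side.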


(Amruth) #3

If a field is a float in SQL Server, what will it be when it's pushed to Elasticsearch: float or string?

Also, I am pushing from SQL Server via Logstash into monthly Elasticsearch indices, i.e. index-%{+YYYY.MM}. For the first month the field "ID" was of type float, and for the next month it is integer, which causes a field conflict in Kibana.


(Mark Walkom) #4

Logstash doesn't care, it's just data to it. Elasticsearch will try to detect what it is though.


(Magnus Bäck) #5

> Logstash doesn't care, it's just data to it. Elasticsearch will try to detect what it is though.

Well, Logstash does have a data type concept, and that data type can affect how ES detects the type. I'd assume the SQL type is translated to roughly the same data type in Logstash, i.e. floats and booleans in the database should become floats and booleans in Logstash (though whether those values end up as float and boolean fields in ES is another matter).


(Mark Walkom) #6

Yeah, but unless you explicitly cast them, Logstash just treats each one as a value with no type.
And yes, you can cast the value so its type is set, and that will influence Elasticsearch, but there is still room for these sorts of conflicts.

Best option is to always have a template/mapping in place.
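To illustrate the template approach: something like the following would pin "ID" to a fixed type across all the monthly indices, avoiding the float-vs-integer conflict described above. The template name is made up, and the exact API differs by Elasticsearch version (newer versions use `_index_template` instead of the legacy `_template` endpoint shown here), so treat this as a sketch:

```
PUT _template/monthly-sql
{
  "template": "index-*",
  "mappings": {
    "properties": {
      "ID": { "type": "long" }
    }
  }
}
```

With this in place, dynamic mapping never gets a chance to guess differently from one month's index to the next.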

(I know you know this, just wanting to clarify for the thread :slight_smile:)


(system) #7

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.