Converting more than one column destroys data

Converting more than one column from string to integer completely invalidates the Logstash output.

The values are somehow interpreted as keys.

What am I missing when converting?

This happens regardless of whether I use:

  csv { convert => {} }
  mutate { convert => {} }
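For reference, a minimal sketch of both forms; the field names (`duration`, `count`) are hypothetical:

```
filter {
  csv {
    separator => ";"
    # convert at parse time, inside the csv filter
    convert => {
      "duration" => "float"
      "count"    => "integer"
    }
  }
  # or, equivalently, convert after parsing with mutate
  mutate {
    convert => {
      "count" => "integer"
    }
  }
}
```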

What do you mean by this?

Thank you for your reply.

I am quite new to the logstash pipeline, which means I am struggling to identify causes and problems:

my scenario is as follows:

I have multiple .csv files that are read via Filebeat and parsed via the csv filter plugin:

  csv {
    autodetect_column_names => true
    skip_header => true
    separator => ";"
    convert => {
      # the column-to-type mappings were elided in the original post
    }
  }
When converting any one column, or a number of columns, the output gets strange:

the original strings


that should look like


may become

I have a single worker, and (maybe depending on the amount of data) sometimes the conversion works without problems, and sometimes it adds a random "column28".

Is there some additional synchronizing to be done when handling many large files?

autogenerate_column_names is true by default. If the header row has 27 entries and a later row has 28, then that is what the extra field will be called.
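To illustrate with a hypothetical 3-column file: if a data row carries one extra value, the csv filter names the extra field after its position:

```
# header row -> 3 detected column names
a;b;c
# data row with a trailing separator -> 4 values
1;2;3;
# resulting fields (sketch): a => "1", b => "2", c => "3", column4 => ""
```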

Which version of logstash are you running?

Thank you for your reply.
I am aware of the autogenerate_column_names feature.
I am also using autodetect_column_names.
The columns match, and the pipeline sometimes succeeds, hence my confusion.

I am using the latest version

Kind regards

Dr.-Ing. Christoph Weber


The only thing I can think of is that pipeline.ordered is set to false, so that sometimes it picks up a data row as a header.

Unfortunately, pipeline.workers is already set to 1.

Ordering is then enabled by default.
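For reference, the relevant settings in logstash.yml (a sketch; pipeline.ordered defaults to auto, which enforces event ordering when there is a single worker):

```
# logstash.yml
pipeline.workers: 1
pipeline.ordered: auto   # default; behaves as "true" when pipeline.workers is 1
```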


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.