I cannot see every column, and some existing columns show wrong output

Hi everyone,

I have been trying to import my CSV file with Logstash, but the output is not what I expected.
First, not every column appears in the output; second, some of the columns that do appear have wrong values.

My config file is:

> input {
>     file {
>         path => "/home/burak/Downloads/QA2/*"
>         start_position => "beginning"
>         sincedb_path => "/dev/null"
>     }
> }
> filter {
>     csv {
>         separator => ","
>         columns => ["Epic","Total User Stories","Closed User Stories","Not Closed User Stories","User Stories with Test Case Creation in Progress","Total Test Cases Forecast","Total Created Test Cases","Total Executed","Pass","Fail","Blocked","NA"]
>     }
> }
> output {
>     elasticsearch {
>         hosts => "http://localhost:9200"
>         index => "qadata"
>     }
>     stdout {}
> }

This is a sample of the output:

{
                                      "Total User Stories" => "83;318;231;194;33;4;0;87;87;54",
                                                    "path" => "/home/burak/Downloads/QA2/qadata.csv",
                                              "@timestamp" => 2020-04-29T08:51:11.946Z,
                         "User Stories Without Test Cases" => "97;1",
                                                    "Epic" => ";236;129;107;29;5;0;372",
        "User Stories with Test Case Creation in Progress" => "25;2020-04-17",
                                                    "host" => "burak-VirtualBox",
                                                 "message" => ";236;129;107;29;5;0;372,83;318;231;194;33;4;0;87;87;54,83;178,83;161;137;24;4;17;3;0,7;10,97;1,25;2020-04-17\r",
                                 "Not Closed User Stories" => "83;161;137;24;4;17;3;0",
                                     "Closed User Stories" => "83;178",
                      "User Stories with Test Cases Ready" => "7;10",
                                                "@version" => "1"
    }

There are more fields that I did not show here, but basically it looks like this.
I'm new to ELK, so I may be making simple mistakes.

OS: Ubuntu 18.04.4 LTS
Elasticsearch: 7.6.2
Logstash: 7.6.2

What is your question?

Sorry, I should have been clearer. My question is why I cannot see all of the fields from my .csv file when I use Logstash to import it.

Your [message] has six commas in it and the csv filter parsed it into seven fields. I have no idea how you could expect anything more.

If you check here:

> filter {
>     csv {
>         separator => ","
>         columns => ["Epic","Total User Stories","Closed User Stories","Not Closed User Stories","User Stories with Test Case Creation in Progress","Total Test Cases Forecast","Total Created Test Cases","Total Executed","Pass","Fail","Blocked","NA"]
>     }
> }

I have 12 columns, but I can't see fields such as Pass, Fail, etc. when I use Logstash.

As I said, your message field only has 6 commas in it, so you will only get 7 fields.
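
To spell out the mechanics: the csv filter splits [message] on the separator and assigns the resulting tokens to the names in columns positionally; any leftover names are simply never created. A minimal sketch of this behavior, using a hypothetical test.conf read from stdin:

> input { stdin {} }
> filter {
>     csv {
>         # Three names are configured, but the input line below only
>         # yields two tokens, so "c" never appears on the event.
>         separator => ","
>         columns => ["a","b","c"]
>     }
> }
> output { stdout { codec => rubydebug } }

Running echo "1,2" | bin/logstash -f test.conf produces an event with "a" => "1" and "b" => "2" but no "c" field.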

OK. So why do I only have 6 commas in message?
Sorry if this is a silly question; I'm quite new. From my searches, I found that I should use the above config to import my CSV file, so I would expect to see all 12 columns as fields in the output.

The csv filter will not create fields that do not exist in the file that you are processing.
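
Looking at the values in [message], the actual column values appear to be separated by semicolons, and the commas look like decimal marks (e.g. 372,83), which is common for locales that write decimals with a comma. If that is the case, a sketch of the filter with the separator changed:

> filter {
>     csv {
>         # Assumption: ";" separates the columns and "," is a decimal mark
>         separator => ";"
>         columns => ["Epic","Total User Stories","Closed User Stories","Not Closed User Stories","User Stories with Test Case Creation in Progress","Total Test Cases Forecast","Total Created Test Cases","Total Executed","Pass","Fail","Blocked","NA"]
>     }
> }

If the file has a header row, setting autodetect_column_names => true instead of a hard-coded columns list would take the names directly from the file.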

I do have these fields in the file; that is the confusing part for me. I did not add anything that doesn't exist in the file I've been using.

Can anyone help me with this issue?
