Error parsing CSV: invalid file format

Hello all, below are the log format and the Logstash configuration I am using, with ";" as the separator, and I am getting a parsing error.

Kindly suggest what can be done. Thanks in advance.

csv {
  separator => ";"
  columns => ["Date","transactionID","TransactionType","Address","EndPointName","Status","FailureStatusCode","TransactionDuration","x-device-auth-mode","x-user-auth-mode","x-device-type","x-nsds-version","timestamp","conditiontype","reason","event-body"]
}

Sample log line:

2018-05-15 03:00:00,321 ;2607:fb90:a2e9:8275:6cf:8670:2dbe:81a3:VG1:1526378339989:MnBgMn;2;OUT:;DASH;1;500;60110;IP;null;Handheld with SIM;2.0;1526378340;device;activated;{"timeStamp":1526378340,"condition-type":"device","reason":"activated","event-body":{"devices-info":[{"device-id":"urn:gsma:imei:35826907-082230-0","imsi":"310260626967007","msisdn":"12246196061","user-ids":[]}]}}
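For what it's worth, the failure is reproducible outside Logstash: the csv filter delegates to Ruby's CSV library, which rejects a bare double quote inside an unquoted field. A minimal sketch (the sample line is shortened here for readability):

```ruby
require 'csv'

# Shortened version of the sample line: the last field is a JSON blob
# whose embedded double quotes are not valid CSV quoting.
line = 'device;activated;{"reason":"activated","timeStamp":1526378340}'

parse_error = nil
begin
  CSV.parse_line(line, col_sep: ';')
rescue CSV::MalformedCSVError => e
  parse_error = e
end

puts parse_error ? "rejected: #{parse_error.message}" : 'parsed'
```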

Is there anything in the Logstash log that might give clues about what's wrong?

@magnusbaeck thanks for getting back to me. Here is the error:

message", :source=>"activated;{"timeStamp":1526378340,"condition-type":"device","reason":"activated","event-body":{"devices-info":[{"device-id":"urn:gsma:imei:35826907-082230-0","imsi":"310260626967007","msisdn":"12246196061","user-ids":[]}]}}", :exception=>#<CSV::MalformedCSVError: Illegal quoting in line 1

Aha. It probably doesn't like the double quotes in the last column. Use a dissect or grok filter for the parsing.
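For this log format, a dissect filter along these lines could replace the csv filter (a sketch only; the field names are copied from the csv columns above, and the mapping assumes every event carries all sixteen semicolon-delimited fields):

```
dissect {
  mapping => {
    "message" => "%{Date};%{transactionID};%{TransactionType};%{Address};%{EndPointName};%{Status};%{FailureStatusCode};%{TransactionDuration};%{x-device-auth-mode};%{x-user-auth-mode};%{x-device-type};%{x-nsds-version};%{timestamp};%{conditiontype};%{reason};%{event-body}"
  }
}
```

Unlike csv, dissect splits purely on the literal delimiters, so the double quotes inside the JSON at the end of the line are passed through untouched.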

Thanks Magnus. One quick question: when using grok, can I map the timestamp to a DATA field?

Hi Magnus,

I tried the following grok filter on the same input, still no luck:

    match => { "message" => "%{DATA:timestamp1} %{DATA:TID} %{DATA:Ttype} %{DATA:Add} %{DATA:stat} %{DATA:FailedScode} %{DATA:deviceautmode} %{DATA:userauthmode} %{DATA:devicetype} %{DATA:nsdsvers} %{DATA:timest1} %{DATA:conditionty}  %{GREEDYDATA:eventbod}" }


Never use more than one DATA or GREEDYDATA pattern in the same grok expression; use something more exact. For example, [^;]+ matches one or more characters of any kind except a semicolon, which is what you need to parse a string of semicolon-delimited values. Secondly, you have spaces between the tokens you want to match. That is obviously incorrect, since the columns are semicolon-delimited.
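To illustrate, the same character class works in a plain Ruby regex (grok patterns compile down to regular expressions); a sketch with three fields, using made-up capture names:

```ruby
line = 'device;activated;{"reason":"activated"}'

# [^;]+ matches one or more characters that are not a semicolon, so
# each group captures exactly one field of the ;-delimited record.
m = /^(?<conditiontype>[^;]+);(?<reason>[^;]+);(?<eventbody>.+)$/.match(line)

puts m[:conditiontype]  # prints "device"
puts m[:reason]         # prints "activated"
puts m[:eventbody]      # prints the JSON blob, quotes and all
```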

Thanks @magnusbaeck

input {
  file {
    path => "/opt/LogStashOutputFormatted/ENS/*"
    start_position => "beginning"
    sincedb_path => "/dev/null"
    type => "esn"
  }
}

filter {
  if [type] == "esn" {
    dissect {
      mapping => {
        "message" => "%{stat};%{{messagebody}}
      }
    }
  }
}

output {
  if [type] == "esn" {
    elasticsearch {
      action => "index"
      hosts => "http://localhost:9200"
      index => "esn"
    }
  }
}
sample input:

The following is the error I get when using the config above:
[ERROR] 2018-05-21 11:49:45.195 [Ruby-0-Thread-1: /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/task.rb:22] agent - Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::ConfigurationError", :message=>"Expected one of #, {, }

I tried debugging by commenting out lines; the error seems random.

I want the first field, which is ";" separated, and then the data between the "{}", but I am unable to extract them.

Thanks in advance for the support

You are missing a closing double quote. And you might mean {%{ rather than %{{, although I do not think either will actually do what you want.
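For reference, a version of the config with every brace balanced and the quote closed might look like this (a sketch only, reusing the paths and index name from the post above, and using the {%{ form mentioned in the reply):

```
input {
  file {
    path => "/opt/LogStashOutputFormatted/ENS/*"
    start_position => "beginning"
    sincedb_path => "/dev/null"
    type => "esn"
  }
}

filter {
  if [type] == "esn" {
    dissect {
      mapping => {
        "message" => "%{stat};{%{messagebody}}"
      }
    }
  }
}

output {
  if [type] == "esn" {
    elasticsearch {
      action => "index"
      hosts => "http://localhost:9200"
      index => "esn"
    }
  }
}
```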

Thanks Badger, I changed the config to the following after your suggestion:

"message" => "%{stat};{%{messagebody}}"

I don't see any errors now, but the index is not getting created in Kibana.

All I want is to get the entire message.

I want to filter the data after the ";" and assign it to messagebody.

282123K->2101K(314560K), 0.0032156 secs] 777032K->497010K(1013632K), 0.0033993 secs] [Times: user=0.03 sys=0.00, real=0.01 secs]
2018-05-21T12:21:05.488-0700: 2419872.268: Total time for which application threads were stopped: 0.0050471 seconds, Stopping threads took: 0.0001995 seconds
2018-05-21T12:25:58.458-0700: 2420165.238: [GC (Allocation Failure) 2018-05-21T12:25:58.458-0700: 2420165.238: [ParNew
Desired survivor size 17891328 bytes, new threshold 6 (max 6)

  • age 1: 390272 bytes, 390272 total

I do see the above error in Elasticsearch.

That is not an error; it is just very verbose garbage collection logging.

Badger, can you provide any input on how to filter the following? I have tried everything I can think of.


I want to extract "activated" into status, and the rest of the message into messagebody.


You should get that from

dissect { mapping => { "message" => "%{stat};%{messagebody}" } }
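That mapping splits on the first semicolon only; in plain Ruby terms it is roughly equivalent to this (sample values shortened from the thread, for illustration):

```ruby
line = 'activated;{"timeStamp":1526378340,"reason":"activated"}'

# dissect's "%{stat};%{messagebody}" splits on the first ';' and keeps
# everything after it, JSON braces and all, in messagebody.
stat, messagebody = line.split(';', 2)

puts stat         # prints "activated"
puts messagebody  # prints the JSON blob
```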


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.