Import error with Logstash on Linux

Hello,
I import my logs into Elasticsearch with Logstash, and it works fine locally (on Windows).
But not on Linux: I get lots of errors and I don't know why!
I noticed that file.txt is reported as "converted to DOS", so I suspect the format may have changed.
I also noticed that one line imports successfully and the next one does not, and that \\t appears in place of \t.
Could you help me?
Thanks.

Exemple du log.txt

20191101  00:00:00.390  MPC CarteAbsente A:3,I:1,P:0,R:0,E:1
20191101  00:00:00.500  MPC GereRecepMPC10 | EOT1 - Msg Reçu :Aa0040000a A:6,I:1,P:0,R:1,E:1

Exemple du log.conf

grok {
	match => ["message", "(?<timestamp>%{YEAR:YYYY}%{MONTHNUM:MM}%{MONTHDAY:dd}%{SPACE}%{HOUR}:%{MINUTE}:%{SECOND}.[0-9]{3})%{SPACE}%{GREEDYDATA:data}"]
}
date {
	match => ["timestamp", "YYYYMMdd        HH:mm:ss.SSS", "YYYYMMdd HH:mm:ss.SSS"]
	target => "@timestamp"
	timezone => "GMT"
	locale => "fr"
}

If you want to ask a question in French, you may get a better answer in another forum. I did a Google Translate and did not really understand the question about tab conversion; a native speaker may well understand it better.

Can I delete this post and ask the same question in English?

Hello,
I import my logs into Elasticsearch with Logstash, and it works fine locally (on Windows).
But not on Linux: I get lots of errors and I don't know why!
I noticed that file.txt is reported as "converted to DOS", so I suspect the format may have changed.
I also noticed that one line imports successfully and the next one does not, and that \\t appears in place of \t.
Can you help me?
Thank you.

Can you post the errors you are getting? If you output the log file in a terminal on Linux (cat file.txt), does it correctly print multiple lines?
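You can also check the line endings directly. Here is a quick way to see whether the file has DOS (CRLF) line endings and where the tabs are, using a small sample file for illustration (the sample content mimics the log format from the question; substitute your real file.txt):

```shell
# Create a small sample with tab separators and DOS (CRLF) line endings,
# mimicking one line of the log from the question
printf '20191101\t00:00:00.390\tMPC CarteAbsente A:3,I:1,P:0,R:0,E:1\r\n' > sample.txt

# 'file' reports the line-ending style, e.g. "ASCII text, with CRLF line terminators"
file sample.txt

# 'cat -A' makes invisible characters visible: tabs print as ^I,
# carriage returns as ^M, and end-of-line as $
cat -A sample.txt
```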

There is a difference in the message field

{
          "data" => "MPC CarteAbsente A:3,I:1,P:0,R:0,E:1",
     "timestamp" => "20191101\t04:24:38.484",
       "message" => "20191101\t04:24:38.484\tMPC CarteAbsente A:3,I:1,P:0,R:0,E:1",
    "@timestamp" => 2019-11-01T04:24:38.484Z,
           "seq" => 58175,
          "host" => "ns510529"
}
{
       "message" => "20191101\\t04:24:38.593\\tMPC GereRecepMPC10 | EOT1 - Msg Re\\xE7u :\\u0002Aa0040000\\u0010\\u0003\\a A:6,I:1,P:0,R:1,E:1",
    "@timestamp" => 2020-05-20T08:18:53.952Z,
          "tags" => [
        [0] "_grokparsefailure"
    ],
           "seq" => 58176,
          "host" => "ns510529"
}

I am not 100% sure, but I think the issue is with parsing the Unicode characters in that second line.

I think the \\t in the message is misleading: on a failure, grok puts the entire line into the message field, and the \t in the log line is probably being re-escaped.

I think you need to specify the charset as UTF-8, but I'm not sure how to do that. Something like:
https://www.elastic.co/guide/en/logstash/current/plugins-codecs-plain.html
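A minimal sketch of what that might look like on the file input (the path and the UTF-8 charset are assumptions here; the file's actual encoding may be something else, such as ISO-8859-1):

```
input {
	file {
		path => "/path/to/log.txt"
		codec => plain {
			charset => "UTF-8"
		}
	}
}
```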


I found this :

codec => plain {
	charset => "name of charset here"
}

but having already used

codec => multiline {
}

so I don't know how to use both?

You cannot use both, pick one or the other.

I tried

codec => multiline {
	charset => "UTF-8",
}

But in this case, Logstash doesn't run!

Remove the trailing comma.
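For reference, the working codec might look something like this. The pattern, negate, and what settings are assumptions here (the multiline codec requires a pattern; joining any line that does not start with a date onto the preceding timestamped line matches the log format shown above), and the charset is the UTF-8 guess from earlier in the thread:

```
codec => multiline {
	charset => "UTF-8"
	pattern => "^[0-9]{8}"
	negate => true
	what => "previous"
}
```

Note there is no comma after any setting; Logstash config settings are separated by whitespace, not commas, which is why the trailing comma stopped the pipeline from starting.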