I'm having some issues parsing data in Logstash. My data has \r's and \t's floating around that I need to get rid of, e.g.:
\tLogon ID:\t\t(0x2,0x50722C31)\r
I started by trying to remove the \t's, so I made a kv filter:
kv {
  source => "message"
  trim => "\t"
}
but I found out that the trim value is treated as some sort of regular expression, and I had to use another \ to escape the first one. I tried \\t instead, but then it removed all the lowercase t's and the \'s instead of just the \t's :sad: I also tried \\\t, but that only took out the \'s and left the t's. Lastly, I tried \\\\t, but that took me back to the \\t results.
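If it helps anyone reading along, my best guess at what's happening (not confirmed, just my reading of the behavior): kv seems to wrap the trim value in a regex character class, which would explain each attempt above:

```
trim => "\t"      # becomes [\t]*   -> strips tab characters
trim => "\\t"     # becomes [\\t]*  -> strips "\" and "t" individually
trim => "\\\t"    # becomes [\\\t]* -> strips "\" and tabs, leaving the t's
trim => "\\\\t"   # becomes [\\\\t]*-> strips "\" and "t" again
```

That would mean a character class can never match the two-character sequence \t as a unit, which is what I actually need to remove.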
If anyone could offer help, that would be appreciated.
There are literal tabs in the data. I think it's being converted into JSON and shipped as plain text, which is what's making this complicated. I have NxLog shipping the logs to Logstash.
I can try. The data doesn't show up in Elasticsearch with tabs, though, even when I use a blank filter just to see how the data comes in. It shows up as
\tUser Name:\t\taccount\r
so those aren't literal tab characters.
If I click on the JSON view of the document, there is an extra \ in front of each one.
Just tried your suggestion and it didn't seem to do any parsing. Assuming you mean pressing Ctrl+V and then Ctrl+L, it showed up as ^V^L in the config file. I'm using CentOS, btw.
Yeah, I guess that's what I was trying to say. In the original CSV there are newlines, but NxLog seems to have replaced them with plain-text JSON escapes, so the tabs became \t and the carriage returns became \r as literal text rather than control characters.
That worked like a charm @Badger. You are the boss!
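For anyone landing on this thread later, the kind of config that handles this looks roughly like the following (my sketch, not necessarily Badger's exact suggestion). It assumes the message field contains the literal two-character sequences \t and \r, and that config escape processing is left off (which I believe is the default), so "\\t" reaches the regex engine as \\t, i.e. a backslash followed by t:

```
mutate {
  # gsub patterns are full regular expressions, not character
  # classes. With escape processing off, "\\t" is the regex \\t,
  # which matches a literal backslash followed by "t" -- the text
  # NxLog wrote into the message -- rather than a tab character.
  gsub => [
    "message", "\\t", " ",
    "message", "\\r", ""
  ]
}
```

After that, kv has clean whitespace-separated text to work with.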