Not able to load text files into Elasticsearch


(RUBAN SOUNDRAPANDYAN) #1

I am new to Elasticsearch. I have code to load text files into Elasticsearch. The text file has two columns and it is not working, but the same code works fine for a single column. Please, can someone help me fix this issue?

Here is my code:

input {
  file {
    path => "/ruban/data/logs/sample"
    start_position => "beginning"
    type => "md5"
  }
}
filter {
  csv {
    columns => ["name","age"]
    separator => " ,"
    remove_field => [ "host", "message", "path" ]
  }
}

output {
  elasticsearch {
    hosts => "http://localhost:9200"
    index => "employee"
  }
  stdout {
    codec => rubydebug
  }
}


(Rijin) #2

Can you please share a single sample line of the text file?

What output is Logstash producing now?


(RUBAN SOUNDRAPANDYAN) #3

Here is the sample

input {
  file {
    path => "/ruban/data/logs/sample"
    start_position => "beginning"
    type => "md5"
  }
}
filter {
  csv {
    columns => ["name","age"]
    separator => " ,"
    remove_field => [ "host", "message", "path" ]
  }
}

output {
  elasticsearch {
    hosts => "http://localhost:9200"
    user => "logfileload"
    password => "digital"
    index => "employee"
  }
  stdout {
    codec => rubydebug
  }
}
Logstash output:

[2018-10-19T14:42:45,678][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"default"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2018-10-19T14:42:46,420][INFO ][logstash.inputs.file ] No sincedb_path set, generating one based on the "path" setting {:sincedb_path=>"C:/Ruban/ELK/logstash-6.4.1/data/plugins/inputs/file/.sincedb_dad80e46eb98251c2e14f6bb22f93c17", :path=>["/ruban/data/logs/sample"]}
[2018-10-19T14:42:46,486][INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x69eca271 sleep>"}
[2018-10-19T14:42:46,616][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2018-10-19T14:42:46,619][INFO ][filewatch.observingtail ] START, creating Discoverer, Watch with file and sincedb collections
[2018-10-19T14:42:47,375][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}


(Rijin) #4

Hi, I am asking about the sample log.

Can you provide the content of the "/ruban/data/logs/sample" file?

Just a few sample lines.


(Rijin) #5

You are applying a CSV filter to a log/text file; I am not sure that will work.

Normally the CSV filter is applied to CSV files.

One example:

input {
  file {
    path => "/home/elastic/elk/samplelog/cpu.csv"
    start_position => "beginning"
  }
}

filter {
  csv {
    separator => ","
    columns => [
      "table_id",
      "sampletime",
      "CPU_Utilization",
      "samplestdev",
      "samplerate",
      "samplemax",
      "tz_offset"
    ]
  }
}
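
For illustration, a line of cpu.csv matching those columns might look like this (the values here are purely hypothetical):

```
1,2018-10-19 14:00:00,73.5,2.1,60,98.2,+05:30
```

The csv filter splits each line on the separator and assigns the pieces, in order, to the names in `columns`.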


(Rijin) #6

For a text/log file, a grok filter is more suitable. Please check it:

https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html
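
Since the sample line has not been shared yet, here is only a minimal sketch. It assumes each line of the file looks like `John ,25` (a word, a space, a comma, then a number); the field names `name` and `age` are arbitrary:

```
filter {
  grok {
    # %{WORD} captures the first column, %{NUMBER:age:int} stores the second as an integer
    match => { "message" => "%{WORD:name} ?,%{NUMBER:age:int}" }
  }
}
```

If the real lines are formatted differently, the pattern will need to be adjusted to match.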


(RUBAN SOUNDRAPANDYAN) #7

Thanks Rijin

The above code works fine for a CSV file; I need the same for a .txt file.


(Rijin) #8

If you give me a sample log line, I will try to create a grok filter for you.


(Mustapha Mj) #9

Hi Mr @rijinmp

I have the same question. I need to know the grok filter for the MS SQL Server ErrorLog file.
It is a text file; the following is an excerpt from it:

2018-10-23 12:27:47.93 spid54 Using 'xpstar.dll' version '2014.120.2000' to execute extended stored procedure 'xp_instance_regread'. This is an informational message only; no user action is required.
2018-10-23 12:29:32.49 spid54 Attempting to load library 'xplog70.dll' into memory. This is an informational message only. No user action is required.
2018-10-23 12:29:32.52 spid54 Using 'xplog70.dll' version '2014.120.2000' to execute extended stored procedure 'xp_msver'. This is an informational message only; no user action is required.
2018-10-23 13:45:21.71 Logon Error: 18456, Severity: 14, State: 7.
2018-10-23 13:45:21.71 Logon Login failed for user 'sa'. Reason: An error occurred while evaluating the password. [CLIENT: ]
2018-10-23 13:46:54.70 Logon Error: 18470, Severity: 14, State: 1.
2018-10-23 13:46:54.70 Logon Login failed for user 'sa'. Reason: The account is disabled. [CLIENT: ]
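
A possible starting point for these lines is the grok pattern below. This is only a sketch based on the excerpt above (the field names `log_timestamp`, `source`, and `log_message` are arbitrary choices) and has not been tested against a full ErrorLog:

```
filter {
  grok {
    # TIMESTAMP_ISO8601 matches "2018-10-23 12:27:47.93";
    # NOTSPACE captures the source column ("spid54", "Logon", ...);
    # GREEDYDATA takes the rest of the line
    match => { "message" => "%{TIMESTAMP_ISO8601:log_timestamp}%{SPACE}%{NOTSPACE:source}%{SPACE}%{GREEDYDATA:log_message}" }
  }
}
```

Lines that span multiple rows (stack traces etc.) would additionally need the multiline codec on the input.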


(system) #10

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.