Logstash - No events written to Elasticsearch

Hi

I am running a Logstash pipeline to read a log file into Elasticsearch. Below is a sample of the file:

[2018-05-29 08:04:23,687] DEBUG - http-outgoing-2845 HTTP/1.1 500 Internal Server Error {org.apache.synapse.transport.http.headers}
[2018-05-29 08:04:11,797] DEBUG - >> "OPTIONS /intrCaco2.1.1/2.1.1/api/service/predict HTTP/1.1[\r][\n]" {org.apache.synapse.transport.http.wire}
[2018-05-29 08:04:12,109] DEBUG - http-outgoing-2842 >> POST /intrcaco2_2.1.1/api/service/predict HTTP/1.1 {org.apache.synapse.transport.http.headers}
[2018-05-29 08:04:18,037] DEBUG - http-outgoing-2842 << HTTP/1.1 200 OK {org.apache.synapse.transport.http.headers}
[2018-05-29 08:04:23,687] DEBUG - http-outgoing-2845 << HTTP/1.1 500 Internal Server Error {org.apache.synapse.transport.http.headers}

Code:

input {
  file {
    path => "/home/abhi/filesmall.log"
  }
}

filter {
  grok {
    match => {
      'message' => '%{SYSLOG5424SD:date1} %{WORD:level} - %{GREEDYDATA:extra1}'
    }
  }
}

output {
  elasticsearch {
    hosts => [ "SCPUBU:9200" ]
  }
}
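For reference, against the first sample line the grok pattern above should produce roughly these fields (SYSLOG5424SD captures the bracketed timestamp, brackets included):

date1  => "[2018-05-29 08:04:23,687]"
level  => "DEBUG"
extra1 => "http-outgoing-2845 HTTP/1.1 500 Internal Server Error {org.apache.synapse.transport.http.headers}"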

If I use stdout instead of elasticsearch in the output plugin, I get the output. If I use elasticsearch there is no result, and below is the log:

[2018-06-19T18:19:55,111][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"6.2.4"}
[2018-06-19T18:19:55,206][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2018-06-19T18:19:55,442][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2018-06-19T18:19:55,586][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://SCPUBU:9200/]}}
[2018-06-19T18:19:55,588][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://SCPUBU:9200/, :path=>"/"}
[2018-06-19T18:19:55,643][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://SCPUBU:9200/"}
[2018-06-19T18:19:55,667][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2018-06-19T18:19:55,667][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the type event field won't be used to determine the document _type {:es_version=>6}
[2018-06-19T18:19:55,669][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2018-06-19T18:19:55,672][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"default"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2018-06-19T18:19:55,677][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//SCPUBU:9200"]}
[2018-06-19T18:19:55,832][INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x5dc58d8c@/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:247 sleep>"}
[2018-06-19T18:19:55,842][INFO ][logstash.agent ] Pipelines running {:count=>1, :pipelines=>["main"]}
^C[2018-06-19T18:20:13,761][WARN ][logstash.runner ] SIGINT received. Shutting down.
[2018-06-19T18:20:14,682][INFO ][logstash.pipeline ] Pipeline has terminated {:pipeline_id=>"main", :thread=>"#<Thread:0x5dc58d8c@/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:247 run>"}

Can someone help me?

If you change hosts => [ "SCPUBU:9200" ]
to hosts => [ "http://SCPUBU:9200" ]

Does it help?
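That is, spelling out the scheme in the output block:

output {
  elasticsearch {
    hosts => [ "http://SCPUBU:9200" ]
  }
}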

Not working

Can you show the pipelines.yml file?

Regards,

# List of pipelines to be loaded by Logstash
#
# This document must be a list of dictionaries/hashes, where the keys/values are pipeline settings.
# Default values for omitted settings are read from the logstash.yml file.
# When declaring multiple pipelines, each MUST have its own pipeline.id.
#
# Example of two pipelines:
#
# - pipeline.id: test
#   pipeline.workers: 1
#   pipeline.batch.size: 1
#   config.string: "input { generator {} } filter { sleep { time => 1 } } output { stdout { codec => dots } }"
# - pipeline.id: another_test
#   queue.type: persisted
#   path.config: "/tmp/logstash/*.config"
#
# Available options:
#
#   # name of the pipeline
#   pipeline.id: mylogs
#
#   # The configuration string to be used by this pipeline
#   config.string: "input { generator {} } filter { sleep { time => 1 } } output { stdout { codec => dots } }"
#
#   # The path from where to read the configuration text
#   path.config: "/etc/conf.d/logstash/myconfig.cfg"
#
#   # How many worker threads execute the Filters+Outputs stage of the pipeline
#   pipeline.workers: 1 (actually defaults to number of CPUs)
#
#   # How many events to retrieve from inputs before sending to filters+workers
#   pipeline.batch.size: 125
#
#   # How long to wait in milliseconds while polling for the next event
#   # before dispatching an undersized batch to filters+outputs
#   pipeline.batch.delay: 50
#
#   # How many workers should be used per output plugin instance
#   pipeline.output.workers: 1
#
#   # Internal queuing model, "memory" for legacy in-memory based queuing and
#   # "persisted" for disk-based acked queueing. Default is memory
#   queue.type: memory
#
#   # If using queue.type: persisted, the page data files size. The queue data consists of
#   # append-only data files separated into pages. Default is 64mb
#   queue.page_capacity: 64mb
#
#   # If using queue.type: persisted, the maximum number of unread events in the queue.
#   # Default is 0 (unlimited)
#   queue.max_events: 0
#
#   # If using queue.type: persisted, the total capacity of the queue in number of bytes.
#   # Default is 1024mb or 1gb
#   queue.max_bytes: 1024mb
#
#   # If using queue.type: persisted, the maximum number of acked events before forcing a checkpoint
#   # Default is 1024, 0 for unlimited
#   queue.checkpoint.acks: 1024
#
#   # If using queue.type: persisted, the maximum number of written events before forcing a checkpoint
#   # Default is 1024, 0 for unlimited
#   queue.checkpoint.writes: 1024
#
#   # If using queue.type: persisted, the interval in milliseconds when a checkpoint is forced on the head page
#   # Default is 1000, 0 for no periodic checkpoint.
#   queue.checkpoint.interval: 1000
#
#   # Enable Dead Letter Queueing for this pipeline.
#   dead_letter_queue.enable: false
#
#   # If using dead_letter_queue.enable: true, the maximum size of dead letter queue for this pipeline. Entries
#   # will be dropped if they would increase the size of the dead letter queue beyond this setting.
#   # Default is 1024mb
#   dead_letter_queue.max_bytes: 1024mb
#
#   # If using dead_letter_queue.enable: true, the directory path where the data files will be stored.
#   # Default is path.data/dead_letter_queue
#
#   path.dead_letter_queue:

There is no error in the log, so the problem must be with Elasticsearch rather than with Logstash. Maybe you can try putting the IP address instead of the name SCPUBU?
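For example (192.0.2.10 is just a placeholder here; substitute whatever SCPUBU actually resolves to):

output {
  elasticsearch {
    hosts => [ "http://192.0.2.10:9200" ]
  }
}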

Thanks, but it doesn't work either!

In another instance, I have a config file that extracts data from an Oracle DB and loads it into Elasticsearch without any issues.

So I am confused as to why this isn't working.
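One more thing worth ruling out, since stdout printed events on an earlier run: the file input records how far it has read each file in a sincedb file, so re-running the pipeline over an already-read file emits nothing new. A minimal test config that always re-reads the file from the beginning (sincedb_path => "/dev/null" disables position tracking; the index name here is just an example) would look like this:

input {
  file {
    path => "/home/abhi/filesmall.log"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}

output {
  elasticsearch {
    hosts => [ "http://SCPUBU:9200" ]
    index => "filesmall-test"
  }
  stdout { codec => rubydebug }
}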
