HTTP Poller not working - No errors

Hi, I have the configuration below in my logstash.conf file, but it is not working and is not giving any errors. I got a certificate error before, which I solved with a JKS truststore. Any hints on how I can troubleshoot? NOTE: this is a containerized environment.

input {
  stdin {}
  http_poller {
    urls => {
      epowebapi => {
        method => get
        user => "api_user"
        password => "api_Pass"
        url => "https://xx.xx.xx.xx:9443/remote/core.executeQuery?queryId=855"
        headers => {
          Accept => "application/json"
        }
      }
    }
    truststore => "/usr/share/logstash/pipeline/ssl/"
    truststore_password => "Password"
    request_timeout => 160
    schedule => { cron => "* * * * * UTC" }
    codec => "json"
    metadata_target => "http_poller_metadata"
  }
}

output {
  elasticsearch {
    manage_template => false
    hosts => "elasticsearch:9200"
    user => elastic
    password => changeme
  }
  stdout {}
}
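(Worth double-checking: per the http_poller documentation, truststore expects the path to a JKS truststore file, while the configuration above points at a directory. A sketch of what the SSL options would normally look like with a file path — truststore.jks is a hypothetical filename, use whatever you named the keystore you generated:)

```
truststore => "/usr/share/logstash/pipeline/ssl/truststore.jks"
truststore_password => "Password"
```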

Increase the log level to get additional clues about what Logstash is doing. You could also try using Wireshark to see what network connections Logstash is making.
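For reference, in Logstash 6.x you can raise the log level either with the `--log.level=debug` command-line flag or via the settings file (the path shown is the default inside the official container image):

```
# /usr/share/logstash/config/logstash.yml
log.level: debug
```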

This is what I get in the debug log.

[2018-04-16T20:29:00,267][DEBUG][logstash.pipeline ] Pushing flush onto pipeline {:pipeline_id=>"main", :thread=>"#<Thread:0x578503b6 sleep>"}
[2018-04-16T20:29:00,304][DEBUG][logstash.inputs.http_poller] Fetching URL {:name=>"ePoWebapi", :url=>[:get, "https://xx.xx.xx.xx:8443/remote/core.executeQuery?queryId=855", {:headers=>{"Accept"=>"application/json"}, :auth=>{:user=>"api", :pass=>"Pass", :eager=>true}}]}
[2018-04-16T20:29:00,501][DEBUG][logstash.instrument.periodicpoller.cgroup] Error, cannot retrieve cgroups information {:exception=>"Errno::ENOENT", :message=>"No such file or directory - /sys/fs/cgroup/cpuacct/docker/777382b06c8067aa9a580f9103bae5ec1bae4a4542157608010b9552811c6335/cpuacct.usage"}
[2018-04-16T20:29:00,891][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2018-04-16T20:29:00,897][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[2018-04-16T20:29:02,919][DEBUG][logstash.pipeline ] Pushing flush onto pipeline {:pipeline_id=>".monitoring-logstash", :thread=>"#<Thread:0x7583fca2 sleep>"}
[2018-04-16T20:29:05,269][DEBUG][logstash.pipeline ] Pushing flush onto pipeline {:pipeline_id=>"main", :thread=>"#<Thread:0x578503b6 sleep>"}
[2018-04-16T20:29:05,512][DEBUG][logstash.instrument.periodicpoller.cgroup] Error, cannot retrieve cgroups information {:exception=>"Errno::ENOENT", :message=>"No such file or directory - /sys/fs/cgroup/cpuacct/docker/777382b06c8067aa9a580f9103bae5ec1bae4a4542157608010b9552811c6335/cpuacct.usage"}
[2018-04-16T20:29:05,912][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2018-04-16T20:29:05,927][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[2018-04-16T20:29:07,930][DEBUG][logstash.pipeline ] Pushing flush onto pipeline {:pipeline_id=>".monitoring-logstash", :thread=>"#<Thread:0x7583fca2 sleep>"}
[2018-04-16T20:29:08,582][DEBUG][logstash.inputs.metrics ] Metrics input: received a new snapshot {:created_at=>2018-04-16 20:29:08 UTC, :snapshot=>#<LogStash::Instrument::Snapshot:0x4b38f652 @metric_store=#<LogStash::Instrument::MetricStore:0x2154bd22 @store=#<Concurrent::map:0x00000000000fbc entries=3 default_proc=nil>, @structured_lookup_mutex=#Mutex:0x23894583, @fast_lookup=#<Concurrent::map:0x00000000000fc0 entries=84 default_proc=nil>>, @created_at=2018-04-16 20:29:08 UTC>}

Hmm, okay. Comment out your elasticsearch output and use a stdout { codec => rubydebug } output to dump the raw events. Are you getting anything?
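In other words, something like this for the output section while debugging:

```
output {
  # elasticsearch { ... }   # commented out while debugging
  stdout { codec => rubydebug }
}
```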

This is the output. No errors, and stdout is working fine.

docker attach logstashtest
Sending Logstash's logs to /usr/share/logstash/logs which is now configured via log4j2.properties
[2018-04-24T20:13:31,521][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"netflow", :directory=>"/usr/share/logstash/modules/netflow/configuration"}
[2018-04-24T20:13:31,549][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"fb_apache", :directory=>"/usr/share/logstash/modules/fb_apache/configuration"}
[2018-04-24T20:13:33,503][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"arcsight", :directory=>"/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/x-pack-6.2.3-java/modules/arcsight/configuration"}
[2018-04-24T20:13:33,762][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
[2018-04-24T20:13:33,770][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
[2018-04-24T20:13:34,825][INFO ][logstash.agent ] No persistent UUID file found. Generating new UUID {:uuid=>"39605706-a3f3-4a1d-b6a0-cbe04f7148ca", :path=>"/usr/share/logstash/data/uuid"}
[2018-04-24T20:13:36,324][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"6.2.3"}
[2018-04-24T20:13:36,891][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2018-04-24T20:13:39,893][WARN ][logstash.outputs.elasticsearch] You are using a deprecated config setting "document_type" set in elasticsearch. Deprecated settings will continue to work, but are scheduled for removal from logstash in the future. Document types are being deprecated in Elasticsearch 6.0, and removed entirely in 7.0. You should avoid this feature If you have any questions about this, please visit the logstash channel on freenode irc. {:name=>"document_type", :plugin=><LogStash::Outputs::Elasticsearch hosts=>[http://elasticsearch:9200], bulk_path=>"/_xpack/monitoring/_bulk?system_id=logstash&system_api_version=2&interval=1s", manage_template=>false, document_type=>"%{[@metadata][document_type]}", sniffing=>false, user=>"logstash_system", password=>, id=>"a8534760ec12a086fe293ee32232f724b17c660fec5c5bee2bbb376965e5bb43", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_1e95cb1f-a8ce-4e24-b8fb-c9c450974adb", enable_metric=>true, charset=>"UTF-8">, workers=>1, template_name=>"logstash", template_overwrite=>false, doc_as_upsert=>false, script_type=>"inline", script_lang=>"painless", script_var_name=>"event", scripted_upsert=>false, retry_initial_interval=>2, retry_max_interval=>64, retry_on_conflict=>1, action=>"index", ssl_certificate_verification=>true, sniffing_delay=>5, timeout=>60, pool_max=>1000, pool_max_per_route=>100, resurrect_delay=>5, validate_after_inactivity=>10000, http_compression=>false>}
[2018-04-24T20:13:40,038][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>".monitoring-logstash", "pipeline.workers"=>1, "pipeline.batch.size"=>2, "pipeline.batch.delay"=>50}
[2018-04-24T20:13:40,901][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>, :added=>[http://logstash_system:xxxxxx@elasticsearch:9200/]}}
[2018-04-24T20:13:40,931][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://logstash_system:xxxxxx@elasticsearch:9200/, :path=>"/"}
[2018-04-24T20:13:41,329][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://logstash_system:xxxxxx@elasticsearch:9200/"}
[2018-04-24T20:13:41,418][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2018-04-24T20:13:41,423][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the type event field won't be used to determine the document _type {:es_version=>6}
[2018-04-24T20:13:41,451][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::Elasticsearch", :hosts=>["http://elasticsearch:9200"]}
[2018-04-24T20:13:41,640][INFO ][logstash.licensechecker.licensereader] Elasticsearch pool URLs updated {:changes=>{:removed=>, :added=>[http://logstash_system:xxxxxx@elasticsearch:9200/]}}
[2018-04-24T20:13:41,642][INFO ][logstash.licensechecker.licensereader] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://logstash_system:xxxxxx@elasticsearch:9200/, :path=>"/"}
[2018-04-24T20:13:41,651][WARN ][logstash.licensechecker.licensereader] Restored connection to ES instance {:url=>"http://logstash_system:xxxxxx@elasticsearch:9200/"}
[2018-04-24T20:13:41,660][INFO ][logstash.licensechecker.licensereader] ES Output version determined {:es_version=>6}
[2018-04-24T20:13:41,660][WARN ][logstash.licensechecker.licensereader] Detected a 6.x and above cluster: the type event field won't be used to determine the document _type {:es_version=>6}
[2018-04-24T20:13:41,845][INFO ][logstash.pipeline ] Pipeline started succesfully {:pipeline_id=>".monitoring-logstash", :thread=>"#<Thread:0x87cedd2 run>"}
[2018-04-24T20:13:47,518][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2018-04-24T20:13:47,593][INFO ][logstash.inputs.http_poller] Registering http_poller Input {:type=>nil, :schedule=>{"in"=>"10 s"}, :timeout=>nil}
[2018-04-24T20:13:47,650][INFO ][logstash.pipeline ] Pipeline started succesfully {:pipeline_id=>"main", :thread=>"#<Thread:0x1d808cf sleep>"}
The stdin plugin is now waiting for input:
[2018-04-24T20:13:47,766][INFO ][logstash.agent ] Pipelines running {:count=>2, :pipelines=>[".monitoring-logstash", "main"]}
[2018-04-24T20:13:47,804][INFO ][logstash.inputs.metrics ] Monitoring License OK
Test
{
"message" => "Test",
"@version" => "1",
"@timestamp" => 2018-04-24T20:14:29.394Z,
"host" => "a147d8cf00cd"
}
Test2
{
"message" => "Test2",
"@version" => "1",
"@timestamp" => 2018-04-24T20:15:50.982Z,
"host" => "a147d8cf00cd"
}
Test3
{
"message" => "Test3",
"@version" => "1",
"@timestamp" => 2018-04-24T20:22:14.592Z,
"host" => "a147d8cf00cd"
}


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.