POST method http_poller

OK, after correcting the input plugin the log file changed to

[2018-08-01T12:16:24,906][ERROR][logstash.pipeline        ] A plugin had an unrecoverable error. Will restart this plugin.
  Pipeline_id:main
  Plugin: <LogStash::Inputs::Beats port=>5044, ssl=>true, ssl_certificate=>"/etc/pki/tls/certs/logstash-forwarder.crt", ssl_key=>"/etc/pki/tls/private/logstash-forwarder.key", id=>"f6bc4efe737842d9a1535ea27942726278e10e880baefa98f7b5144120a988b1", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_9ae7abfe-57b0-407d-bae2-645c640323f1", enable_metric=>true, charset=>"UTF-8">, host=>"0.0.0.0", ssl_verify_mode=>"none", include_codec_tag=>true, ssl_handshake_timeout=>10000, tls_min_version=>1, tls_max_version=>1.2, cipher_suites=>["TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384", "TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256"], client_inactivity_timeout=>60, executor_threads=>4>
  Error: Address already in use
  Exception: Java::JavaNet::BindException
  Stack: sun.nio.ch.Net.bind0(Native Method)
sun.nio.ch.Net.bind(sun/nio/ch/Net.java:433)
sun.nio.ch.Net.bind(sun/nio/ch/Net.java:425)
sun.nio.ch.ServerSocketChannelImpl.bind(sun/nio/ch/ServerSocketChannelImpl.java:223)
io.netty.channel.socket.nio.NioServerSocketChannel.doBind(io/netty/channel/socket/nio/NioServerSocketChannel.java:128)
io.netty.channel.AbstractChannel$AbstractUnsafe.bind(io/netty/channel/AbstractChannel.java:558)
io.netty.channel.DefaultChannelPipeline$HeadContext.bind(io/netty/channel/DefaultChannelPipeline.java:1283)
io.netty.channel.AbstractChannelHandlerContext.invokeBind(io/netty/channel/AbstractChannelHandlerContext.java:501)
io.netty.channel.AbstractChannelHandlerContext.bind(io/netty/channel/AbstractChannelHandlerContext.java:486)
io.netty.channel.DefaultChannelPipeline.bind(io/netty/channel/DefaultChannelPipeline.java:989)
io.netty.channel.AbstractChannel.bind(io/netty/channel/AbstractChannel.java:254)
io.netty.bootstrap.AbstractBootstrap$2.run(io/netty/bootstrap/AbstractBootstrap.java:364)
io.netty.util.concurrent.AbstractEventExecutor.safeExecute(io/netty/util/concurrent/AbstractEventExecutor.java:163)
io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(io/netty/util/concurrent/SingleThreadEventExecutor.java:403)
io.netty.channel.nio.NioEventLoop.run(io/netty/channel/nio/NioEventLoop.java:463)
io.netty.util.concurrent.SingleThreadEventExecutor$5.run(io/netty/util/concurrent/SingleThreadEventExecutor.java:858)
io.netty.util.concurrent.FastThreadLocalRunnable.run(io/netty/util/concurrent/FastThreadLocalRunnable.java:30)
java.lang.Thread.run(java/lang/Thread.java:748)
[2018-08-01T12:16:25,909][INFO ][org.logstash.beats.Server] Starting server on port: 5044

Any solutions?
Do I need to delete all the other config files?


One more question: is this OK?

Error: Address already in use

This indicates that your configuration contains more than one beats input listening on the same port. You can't have that.
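Since Logstash concatenates every file in its config directory into one pipeline, a second file declaring the same port produces exactly this bind error. A minimal sketch of how to spot the conflict (the file names and config directory here are hypothetical stand-ins for your `/etc/logstash/conf.d/` contents):

```python
import os
import re
import tempfile
from collections import defaultdict

# Two sample pipeline files, both declaring an input on port 5044 --
# stand-ins for what may be sitting in /etc/logstash/conf.d/.
confdir = tempfile.mkdtemp()
for name, text in [
    ("02-beats-input.conf", 'input { beats { port => 5044 ssl => true } }'),
    ("99-extra.conf",       'input { beats { port => 5044 } }'),
]:
    with open(os.path.join(confdir, name), "w") as f:
        f.write(text)

# Logstash merges every file in the directory into one pipeline, so each
# `port => N` below becomes a separate listener fighting over port N.
ports = defaultdict(list)
for fname in sorted(os.listdir(confdir)):
    with open(os.path.join(confdir, fname)) as f:
        for m in re.finditer(r"port\s*=>\s*(\d+)", f.read()):
            ports[int(m.group(1))].append(fname)

for port, files in ports.items():
    if len(files) > 1:
        print(f"port {port} declared in: {', '.join(files)}")
# -> port 5044 declared in: 02-beats-input.conf, 99-extra.conf
```

In practice a `grep -rn "port" /etc/logstash/conf.d/` gives the same answer; the point is that every match on the same port number is a separate listener.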

OK, I have the following log in my /var/log/logstash/logstash-plain.log

[2018-08-01T18:32:56,951][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"6.3.2"}
[2018-08-01T18:33:06,301][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2018-08-01T18:33:07,869][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2018-08-01T18:33:07,887][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
[2018-08-01T18:33:08,157][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2018-08-01T18:33:08,516][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2018-08-01T18:33:08,521][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2018-08-01T18:33:08,558][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost:9200"]}
[2018-08-01T18:33:08,627][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2018-08-01T18:33:08,661][INFO ][logstash.inputs.exec     ] Registering Exec Input {:type=>nil, :command=>"/usr/bin/python /home/maksym/PathFolder/pythonpractice/postsample.py", :interval=>20, :schedule=>nil}
[2018-08-01T18:33:08,684][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2018-08-01T18:33:08,698][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x7ccea73 run>"}
[2018-08-01T18:33:08,897][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2018-08-01T18:33:09,471][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}

Is everything OK here? Still nothing showing in Kibana.

Use a stdout { codec => rubydebug } output and comment out your elasticsearch output. Are you getting anything in the log? If not, raise Logstash's log level to debug (there's a command line option for it, see the docs) and try again.

After the change the log output is

[2018-08-01T19:12:24,940][WARN ][logstash.runner          ] SIGTERM received. Shutting down.
[2018-08-01T19:12:26,358][INFO ][logstash.pipeline        ] Pipeline has terminated {:pipeline_id=>"main", :thread=>"#<Thread:0x7ccea73 run>"}
[2018-08-01T19:13:26,181][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"6.3.2"}
[2018-08-01T19:13:30,638][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2018-08-01T19:13:30,734][INFO ][logstash.inputs.exec     ] Registering Exec Input {:type=>nil, :command=>"/usr/bin/python /home/maksym/PathFolder/pythonpractice/postsample.py", :interval=>20, :schedule=>nil}
[2018-08-01T19:13:30,772][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x783ac53a run>"}
[2018-08-01T19:13:31,029][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2018-08-01T19:13:31,749][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}

Is it OK? Do I need to raise Logstash's log level to debug?
logstash.conf

input {
  exec {
    command => "/usr/bin/python /home/maksym/PathFolder/pythonpractice/postsample.py"
    interval => 20
    codec => "json"
  }
}

output {
  # elasticsearch {
  #   hosts => ["localhost:9200"]
  #   index => "logstash-%{+YYYY.MM.dd}"
  # }
  stdout {
    codec => rubydebug
  }
}

Since you're still not getting any events the next course of action would be to raise the log level.

Well, I ran the following

curl -XPUT 'localhost:9600/_node/logging?pretty' -H 'Content-Type: application/json' -d'
{
    "logger.logstash.outputs.elasticsearch" : "DEBUG"
}
'

It showed me:

{
  "host" : "localhost.localdomain",
  "version" : "6.3.2",
  "http_address" : "127.0.0.1:9600",
  "id" : "51e56ed9-489b-4b04-bb97-9ab40976ffbd",
  "name" : "localhost.localdomain",
  "acknowledged" : true
}

Then I ran curl -XGET 'localhost:9600/_node/logging?pretty' and got

{
  "host" : "localhost.localdomain",
  "version" : "6.3.2",
  "http_address" : "127.0.0.1:9600",
  "id" : "51e56ed9-489b-4b04-bb97-9ab40976ffbd",
  "name" : "localhost.localdomain",
  "loggers" : {
    "logstash.agent" : "INFO",
    "logstash.api.service" : "INFO",
    "logstash.codecs.json" : "INFO",
    "logstash.codecs.rubydebug" : "INFO",
    "logstash.config.source.local.configpathloader" : "INFO",
    "logstash.config.source.multilocal" : "INFO",
    "logstash.config.sourceloader" : "INFO",
    "logstash.configmanagement.extension" : "INFO",
    "logstash.inputs.exec" : "INFO",
    "logstash.instrument.periodicpoller.deadletterqueue" : "INFO",
    "logstash.instrument.periodicpoller.jvm" : "INFO",
    "logstash.instrument.periodicpoller.os" : "INFO",
    "logstash.instrument.periodicpoller.persistentqueue" : "INFO",
    "logstash.modules.scaffold" : "INFO",
    "logstash.modules.xpackscaffold" : "INFO",
    "logstash.monitoringextension" : "INFO",
    "logstash.monitoringextension.pipelineregisterhook" : "INFO",
    "logstash.outputs.stdout" : "INFO",
    "logstash.pipeline" : "INFO",
    "logstash.plugins.registry" : "INFO",
    "logstash.runner" : "INFO",
    "org.logstash.Event" : "INFO",
    "org.logstash.Logstash" : "INFO",
    "org.logstash.common.DeadLetterQueueFactory" : "INFO",
    "org.logstash.common.io.DeadLetterQueueWriter" : "INFO",
    "org.logstash.config.ir.CompiledPipeline" : "INFO",
    "org.logstash.instrument.metrics.gauge.LazyDelegatingGauge" : "INFO",
    "org.logstash.plugins.pipeline.PipelineBus" : "INFO",
    "org.logstash.secret.store.SecretStoreFactory" : "INFO",
    "slowlog.logstash.codecs.json" : "TRACE",
    "slowlog.logstash.codecs.rubydebug" : "TRACE",
    "slowlog.logstash.inputs.exec" : "TRACE",
    "slowlog.logstash.outputs.stdout" : "TRACE"
  }
}

What should I do next? Still no output in Kibana.

Changing the log level of the logger.logstash.outputs.elasticsearch logger is useless when you're not using that plugin. It's the logstash.inputs.exec logger that's interesting. However, I was actually thinking of the global log level (which can be changed via a command-line option), but in this case I'm mostly interested in what the exec plugin logger says.

OK, after changing it to logstash.inputs.exec the output is the following

curl -XPUT 'localhost:9600/_node/logging?pretty' -H 'Content-Type: application/json' -d'
> {
>      "logger.logstash.inputs.exec" : "DEBUG"
> }
> '
{
  "host" : "localhost.localdomain",
  "version" : "6.3.2",
  "http_address" : "127.0.0.1:9600",
  "id" : "51e56ed9-489b-4b04-bb97-9ab40976ffbd",
  "name" : "localhost.localdomain",
  "acknowledged" : true
}

logstash-plain.log

[2018-08-02T14:52:49,679][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"6.3.2"}
[2018-08-02T14:52:55,778][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2018-08-02T14:52:56,042][INFO ][logstash.inputs.exec     ] Registering Exec Input {:type=>nil, :command=>"/usr/bin/python /home/maksym/PathFolder/pythonpractice/postsample.py", :interval=>20, :schedule=>nil}
[2018-08-02T14:52:56,148][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x32a50483 run>"}
[2018-08-02T14:52:56,535][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2018-08-02T14:52:57,482][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}

Still nothing in Kibana...

Are you sure you're not getting any DEBUG entries from logstash.inputs.exec if you wait at least 20 seconds? Have you tried raising the global log level?

Yes, still nothing showing in Kibana. I couldn't find how to raise the global log level...

Set this in your logstash.yml

log.level: debug

or use '--log.level debug' on the command line.

Thank you, I did it, and the following lines are in the Logstash logs

[2018-08-02T15:58:16,598][DEBUG][logstash.pipeline        ] Pushing flush onto pipeline {:pipeline_id=>"main", :thread=>"#<Thread:0x7d6b6d7a sleep>"}
[2018-08-02T15:58:16,780][DEBUG][logstash.inputs.exec     ] Running exec {:command=>"/usr/bin/python /home/maksym/PathFolder/pythonpractice/postsample.py"}
[2018-08-02T15:58:16,830][DEBUG][logstash.inputs.exec     ] Command completed {:command=>"/usr/bin/python /home/maksym/PathFolder/pythonpractice/postsample.py", :duration=>0.049361}
[2018-08-02T15:58:20,326][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2018-08-02T15:58:20,327][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[2018-08-02T15:58:21,598][DEBUG][logstash.pipeline        ] Pushing flush onto pipeline {:pipeline_id=>"main", :thread=>"#<Thread:0x7d6b6d7a sleep>"}
[2018-08-02T15:58:25,336][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2018-08-02T15:58:25,337][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[2018-08-02T15:58:26,600][DEBUG][logstash.pipeline        ] Pushing flush onto pipeline {:pipeline_id=>"main", :thread=>"#<Thread:0x7d6b6d7a sleep>"}
[2018-08-02T15:58:30,349][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2018-08-02T15:58:30,350][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[2018-08-02T15:58:31,601][DEBUG][logstash.pipeline        ] Pushing flush onto pipeline {:pipeline_id=>"main", :thread=>"#<Thread:0x7d6b6d7a sleep>"}
[2018-08-02T15:58:35,359][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2018-08-02T15:58:35,360][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[2018-08-02T15:58:36,601][DEBUG][logstash.pipeline        ] Pushing flush onto pipeline {:pipeline_id=>"main", :thread=>"#<Thread:0x7d6b6d7a sleep>"}
[2018-08-02T15:58:36,781][DEBUG][logstash.inputs.exec     ] Running exec {:command=>"/usr/bin/python /home/maksym/PathFolder/pythonpractice/postsample.py"}
[2018-08-02T15:58:36,848][DEBUG][logstash.inputs.exec     ] Command completed {:command=>"/usr/bin/python /home/maksym/PathFolder/pythonpractice/postsample.py", :duration=>0.06631100000000001}
[2018-08-02T15:58:40,372][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2018-08-02T15:58:40,373][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[2018-08-02T15:58:41,602][DEBUG][logstash.pipeline        ] Pushing flush onto pipeline {:pipeline_id=>"main", :thread=>"#<Thread:0x7d6b6d7a sleep>"}
[2018-08-02T15:58:45,385][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
[2018-08-02T15:58:45,385][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
[2018-08-02T15:58:46,602][DEBUG][logstash.pipeline        ] Pushing flush onto pipeline {:pipeline_id=>"main", :thread=>"#<Thread:0x7d6b6d7a sleep>"}

Please look at the log after turning on debug. I posted it in another message.

Can you comment out the codec on the input? Your script prints commentary on what it is doing, so its output is not JSON.

Yes, still nothing showing in Kibana.

No, of course not, since the elasticsearch output has been commented out. Once we get Logstash to produce the events we can enable that output again.

Can you comment out the codec on the input? Your script prints commentary on what it is doing, so its output is not JSON.

I suspect the codec needs to be json_lines rather than json.
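The difference matters because the `json` codec tries to parse the command's entire stdout as one JSON document, while `json_lines` parses line by line. A small illustration (the mixed output string below is a hypothetical stand-in for what postsample.py prints):

```python
import json

# Hypothetical stand-in for postsample.py's stdout: a commentary line
# followed by the actual JSON payload.
stdout = 'Sending POST request...\n{"status": 200, "ok": true}\n'

# The `json` codec parses the whole buffer at once, so the commentary
# line makes the entire output unparseable.
try:
    json.loads(stdout)
    whole_ok = True
except ValueError:
    whole_ok = False
print("whole output is valid JSON:", whole_ok)  # False

# `json_lines` splits on newlines first, so the JSON line still becomes
# an event (the commentary line would just fail to parse on its own,
# which Logstash marks with a _jsonparsefailure tag).
events = []
for line in stdout.splitlines():
    try:
        events.append(json.loads(line))
    except ValueError:
        pass
print("recovered events:", events)
```

So even with the commentary present, `json_lines` would salvage the JSON line, whereas `json` drops everything. Removing the stray prints from the script (or commenting out the codec entirely, as suggested above) avoids the problem at the source.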

My logstash.conf after commenting out the codec

input {
  exec {
    command => "/usr/bin/python /home/maksym/PathFolder/pythonpractice/postsample.py"
    interval => 20
    #codec => "json_lines"
  }
}

output {
  # elasticsearch {
  #   hosts => ["localhost:9200"]
  #   index => "logstash-%{+YYYY.MM.dd}"
  # }
  stdout {
    codec => rubydebug
  }
}