Webhdfs Exception: undefined method `read_uint32'

ENV:

  • Ambari 2.7
  • HDP 3.1
  • Enabled Kerberos and AD

Following the fix described in dengshaochun's earlier topic, "use kerberos webhdfs start error with exception 'no such file to load -- gssapi'", I installed the gssapi gem for Logstash. Running it again then fails with a different error:

    [2019-04-30T10:36:39,079][DEBUG][logstash.javapipeline    ] Starting pipeline {:pipeline_id=>"main"}
    [2019-04-30T10:36:39,658][ERROR][logstash.outputs.webhdfs ] Webhdfs check request failed. (namenode: m1.node.hadoop:50070, Exception: undefined method `read_uint32' for #<FFI::MemoryPointer address=0x7f39740901a0 size=4>)
    [2019-04-30T10:36:39,665][DEBUG][logstash.outputs.stdout  ] Closing {:plugin=>"LogStash::Outputs::Stdout"}
    [2019-04-30T10:36:39,711][ERROR][logstash.javapipeline    ] Pipeline aborted due to error {:pipeline_id=>"main", :exception=>#<WebHDFS::KerberosError: undefined method `read_uint32' for #<FFI::MemoryPointer address=0x7f39740901a0 size=4>>, :backtrace=>["/usr/local/share/logstash-7.0.0/vendor/bundle/jruby/2.5.0/gems/webhdfs-0.8.0/lib/webhdfs/client_v1.rb:323:in `request'", "/usr/local/share/logstash-7.0.0/vendor/bundle/jruby/2.5.0/gems/webhdfs-0.8.0/lib/webhdfs/client_v1.rb:275:in `operate_requests'", "/usr/local/share/logstash-7.0.0/vendor/bundle/jruby/2.5.0/gems/webhdfs-0.8.0/lib/webhdfs/client_v1.rb:138:in `list'", "/usr/local/share/logstash-7.0.0/vendor/bundle/jruby/2.5.0/gems/logstash-output-webhdfs-3.0.6/lib/logstash/outputs/webhdfs_helper.rb:49:in `test_client'", "/usr/local/share/logstash-7.0.0/vendor/bundle/jruby/2.5.0/gems/logstash-output-webhdfs-3.0.6/lib/logstash/outputs/webhdfs.rb:155:in `register'", "org/logstash/config/ir/compiler/OutputStrategyExt.java:106:in `register'", "org/logstash/config/ir/compiler/AbstractOutputDelegatorExt.java:48:in `register'", "/usr/local/share/logstash-7.0.0/logstash-core/lib/logstash/java_pipeline.rb:191:in `block in register_plugins'", "org/jruby/RubyArray.java:1792:in `each'", "/usr/local/share/logstash-7.0.0/logstash-core/lib/logstash/java_pipeline.rb:190:in `register_plugins'", "/usr/local/share/logstash-7.0.0/logstash-core/lib/logstash/java_pipeline.rb:445:in `maybe_setup_out_plugins'", "/usr/local/share/logstash-7.0.0/logstash-core/lib/logstash/java_pipeline.rb:203:in `start_workers'", "/usr/local/share/logstash-7.0.0/logstash-core/lib/logstash/java_pipeline.rb:145:in `run'", "/usr/local/share/logstash-7.0.0/logstash-core/lib/logstash/java_pipeline.rb:104:in `block in start'"], :thread=>"#<Thread:0x7cac5f13 run>"}
    [2019-04-30T10:36:39,738][ERROR][logstash.agent           ] Failed to execute action {:id=>:main, :action_type=>LogStash::ConvergeResult::FailedAction, :message=>"Could not execute action: PipelineAction::Create<main>, action_result: false", :backtrace=>nil}
    [2019-04-30T10:36:39,794][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
    [2019-04-30T10:36:39,795][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
    [2019-04-30T10:36:39,834][DEBUG][logstash.instrument.periodicpoller.os] Stopping
    [2019-04-30T10:36:39,877][DEBUG][logstash.agent           ] Starting puma
    [2019-04-30T10:36:39,883][DEBUG][logstash.instrument.periodicpoller.jvm] Stopping
    [2019-04-30T10:36:39,888][DEBUG][logstash.instrument.periodicpoller.persistentqueue] Stopping
    [2019-04-30T10:36:39,894][DEBUG][logstash.instrument.periodicpoller.deadletterqueue] Stopping
    [2019-04-30T10:36:39,893][DEBUG][logstash.agent           ] Trying to start WebServer {:port=>9600}
    [2019-04-30T10:36:39,940][DEBUG][logstash.api.service     ] [api-service] start
    [2019-04-30T10:36:40,068][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
    [2019-04-30T10:36:44,971][DEBUG][logstash.agent           ] Shutting down all pipelines {:pipelines_count=>0}
    [2019-04-30T10:36:44,981][DEBUG][logstash.agent           ] Converging pipelines state {:actions_count=>0}
    [2019-04-30T10:36:44,987][INFO ][logstash.runner          ] Logstash shut down.

Below is the log from running the same setup in Docker:

    [2019-04-30T03:32:59,304][ERROR][logstash.outputs.webhdfs ] Webhdfs check request failed. (namenode: m1.node.hadoop:50070, Exception: undefined method `read_uint32' for #<FFI::MemoryPointer address=0x7fa824119b90 size=4>
    Did you mean?  read_uint
                   read_int
                   read_array_of_uint32
                   read_array_of_int32
                   read_pointer
                   read_ulong
                   read_string
                   read_ushort
                   read_array_of_uint64
                   read_array_of_uint16
                   get_uint32)

Thank you for your help.

When I first ran the webhdfs output, with the parameters configured as shown further below, the run reported the following error:

    [2019-04-30T10:08:09,493][DEBUG][logstash.javapipeline    ] Starting pipeline {:pipeline_id=>"main"}
    warning: thread "[main]-pipeline-manager" terminated with exception (report_on_exception is true):
    LoadError: no such file to load -- gssapi
                      require at org/jruby/RubyKernel.java:984
                      require at /usr/local/share/logstash-7.0.0/vendor/bundle/jruby/2.5.0/gems/polyglot-0.3.5/lib/polyglot.rb:65
               prepare_client at /usr/local/share/logstash-7.0.0/vendor/bundle/jruby/2.5.0/gems/logstash-output-webhdfs-3.0.6/lib/logstash/outputs/webhdfs_helper.rb:27
                     register at /usr/local/share/logstash-7.0.0/vendor/bundle/jruby/2.5.0/gems/logstash-output-webhdfs-3.0.6/lib/logstash/outputs/webhdfs.rb:153
                     register at org/logstash/config/ir/compiler/OutputStrategyExt.java:106
                     register at org/logstash/config/ir/compiler/AbstractOutputDelegatorExt.java:48
             register_plugins at /usr/local/share/logstash-7.0.0/logstash-core/lib/logstash/java_pipeline.rb:191
                         each at org/jruby/RubyArray.java:1792
             register_plugins at /usr/local/share/logstash-7.0.0/logstash-core/lib/logstash/java_pipeline.rb:190
      maybe_setup_out_plugins at /usr/local/share/logstash-7.0.0/logstash-core/lib/logstash/java_pipeline.rb:445
                start_workers at /usr/local/share/logstash-7.0.0/logstash-core/lib/logstash/java_pipeline.rb:203
                          run at /usr/local/share/logstash-7.0.0/logstash-core/lib/logstash/java_pipeline.rb:145
                        start at /usr/local/share/logstash-7.0.0/logstash-core/lib/logstash/java_pipeline.rb:104
    [2019-04-30T10:08:09,758][ERROR][logstash.agent           ] Failed to execute action {:id=>:main, :action_type=>LogStash::ConvergeResult::FailedAction, :message=>"Could not execute action: PipelineAction::Create<main>, action_result: false", :backtrace=>nil}
    [2019-04-30T10:08:09,890][DEBUG][logstash.agent           ] Starting puma
    [2019-04-30T10:08:09,926][DEBUG][logstash.agent           ] Trying to start WebServer {:port=>9600}
    [2019-04-30T10:08:09,935][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ParNew"}
    [2019-04-30T10:08:09,937][DEBUG][logstash.instrument.periodicpoller.jvm] collector name {:name=>"ConcurrentMarkSweep"}
    [2019-04-30T10:08:10,017][DEBUG][logstash.api.service     ] [api-service] start
    [2019-04-30T10:08:10,042][ERROR][org.logstash.Logstash    ] java.lang.IllegalStateException: Logstash stopped processing because of an error: (LoadError) no such file to load -- gssapi


    kevin@ubuntu-kevin:~/test$ ll /home/kevin/keytab/whg.keytab 
    -rw-rw-r-- 1 kevin kevin 442 Apr 30 09:52 /home/kevin/keytab/whg.keytab
    kevin@ubuntu-kevin:~/test$ tail -10 test.conf 
      webhdfs {
        host => "m1.node.hadoop"
        port => 50070
        path => "/user/whg/test-logstash.text"
        user => "whg"
        kerberos_keytab => "/home/kevin/keytab/whg.keytab"
        use_kerberos_auth => true
      }

    }



    kevin@ubuntu-kevin:~/test$ klist 
    Ticket cache: FILE:/tmp/krb5cc_1000
    Default principal: whg@EXAMPLE.CN

    Valid starting       Expires              Service principal
    2019-04-30T09:54:37  2019-05-01T09:54:37  krbtgt/EXAMPLE.CN@EXAMPLE.CN
    2019-04-30T09:54:47  2019-05-01T09:54:37  HTTP/m1.node.hadoop@
    2019-04-30T09:54:47  2019-05-01T09:54:37  HTTP/m1.node.hadoop@EXAMPLE.CN
