Loading events from a database into Elasticsearch crashes Logstash

Hi all,

I am building a proof of concept to validate log analysis with the ELK stack.

My setup:

Elasticsearch 2.3.1
Logstash 2.3.1

Logstash configuration:

input {    
    jdbc {
        type => "MonitoringBroker"
        jdbc_driver_library => "C:\Oracle\ora11g_64\jdbc\lib\ojdbc6.jar"
        jdbc_driver_class => "Java::oracle.jdbc.driver.OracleDriver"
        jdbc_connection_string => "jdbc:oracle:thin:@bb-oracrp1:1521/oracr"
        jdbc_user =>  "xxxxx"
        jdbc_password => "xxxxx"
        statement => " 
                        SELECT a.*,
                               b.data_type,
                               CASE (NVL (b.payload_size, 0))
                                  WHEN 0 THEN 0
                                  ELSE b.payload_size / 1024
                               END
                                  AS payload_size
                          FROM mbrecord.WMB_MSGS a, mbrecord.wmb_msgs_details b
                         WHERE a.wmb_msgkey = b.wmb_msgkey(+)
                               AND event_timestamp >
                                      TO_CHAR (TRUNC (SYSDATE) , 'yyyy-mm-dd hh24:mi:ss')                                      
                    "
        tags => ["MonitoringBroker"]
    }
}
filter {
    date {
         match => [ "event_timestamp" , "yyyy-MM-dd HH:mm:ss.SSS" ]
         target => "@timestamp"
         timezone => "UTC"
    }
}
output {
    elasticsearch {
        hosts => ["localhost:9200"]
        index => "logstash-broker"
        workers => 8
        document_id => "%{wmb_msgkey}"
    }
}

After running for a while and loading 5806 events into Elasticsearch, Logstash crashes with the following error:

Pipeline main has been shutdown
stopping pipeline {:id=>"main"}
The signal HUP is in use by the JVM and will not work correctly on this platform

In the Elasticsearch console I see the following error:

[2016-05-19 11:58:18,581][WARN ][http.netty ] [Sunspot] Caught exception while handling client http traffic, closing connection [id: 0xd61858f9, /127.0.0.1:60745 => /127.0.0.1:9200]
java.io.IOException: Uma ligação existente foi forçada a fechar pelo anfitrião remoto (An existing connection was forcibly closed by the remote host)
    at sun.nio.ch.SocketDispatcher.read0(Native Method)
    at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:43)
    at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
    at sun.nio.ch.IOUtil.read(IOUtil.java:192)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
    at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:64)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
    at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
    at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

How can I investigate further to find the cause of this error?
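
One idea I have for narrowing this down (a minimal sketch, not tested yet) is to keep the same jdbc input but replace the elasticsearch output with a plain stdout output, to see whether the pipeline still dies when no HTTP connection to Elasticsearch is involved:

output {
    # hypothetical test output: write events to the console instead of Elasticsearch
    stdout {
        codec => rubydebug
    }
}

If that run completes cleanly, the problem is probably on the elasticsearch output / connection side rather than in the jdbc input itself.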

I ran some tests with SQL Server and hit the same issue :rolling_eyes:

After 15,000 rows are read from SQL Server and indexed into Elasticsearch, Logstash crashes with the same error message.

Adding

jdbc_fetch_size => 3000

to the jdbc input seems to solve the problem :slight_smile: Presumably it helps because it limits how many rows the JDBC driver fetches and buffers at a time.

The working Logstash configuration is below:

input {
    jdbc {
        type => "MonitoringBroker"
        jdbc_driver_library => "C:\Oracle\ora11g_64\jdbc\lib\ojdbc6.jar"
        jdbc_driver_class => "Java::oracle.jdbc.driver.OracleDriver"
        jdbc_connection_string => "jdbc:oracle:thin:@bb-oracrp1:1521/oracr"
        jdbc_user => "xxxxx"
        jdbc_password => "xxxxx"
        jdbc_fetch_size => 3000
        statement => "
                        SELECT a.*,
                               b.data_type,
                               CASE (NVL (b.payload_size, 0))
                                  WHEN 0 THEN 0
                                  ELSE b.payload_size / 1024
                               END
                                  AS payload_size
                          FROM mbrecord.WMB_MSGS a, mbrecord.wmb_msgs_details b
                         WHERE a.wmb_msgkey = b.wmb_msgkey(+)
                               AND event_timestamp >
                                      TO_CHAR (TRUNC (SYSDATE) , 'yyyy-mm-dd hh24:mi:ss')
                    "
        tags => ["MonitoringBroker"]
    }
}

filter {
    date {
         match => [ "event_timestamp" , "yyyy-MM-dd HH:mm:ss.SSS" ]
         target => "@timestamp"
         timezone => "UTC"
    }
}

output {
    elasticsearch {
        hosts => ["localhost:9200"]
        index => "logstash-broker"
        workers => 8
        document_id => "%{wmb_msgkey}"
    }
}
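
For reference, the jdbc input also has paging options that might help with large result sets. I have only tested jdbc_fetch_size here, so the snippet below is just a sketch based on the logstash-input-jdbc options:

jdbc {
    # same driver, connection and statement settings as above, plus
    # (untested) letting the plugin page through the result set in chunks
    # instead of relying only on the driver fetch size
    jdbc_paging_enabled => true
    jdbc_page_size => 3000
}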

Best regards,

Rui Madaleno