Error connecting MongoDB and Elasticsearch through Logstash

I can't get ingestion working between the MongoDB database and Elasticsearch using Logstash. Below is my .conf file:

    input {
        uri => 'mongodb://localhost:27017/exemplo'
        placeholder_db_dir => '/opt/logstash-mongodb/'
        placeholder_db_name => 'logstash_sqlite.db'
        collection => 'clientes'
        batch_size => 5000
    }
    output {
        stdout {
            codec => rubydebug
        }
        elasticsearch {
            action => "index"
            index => "mongo_log_data"
            hosts => ["localhost:9200"]
        }
    }

The following error occurs:

[2020-05-28T07:16:16,770][ERROR][logstash.agent ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::ConfigurationError", :message=>"Expected one of [ \t\r\n], "#", "{" at line 2, column 13 (byte 22) after input {\r\n uri ", :backtrace=>["C:/Users/Administrator/Documents/Elastic/versao7.7/logstash-7.7.0/logstash-core/lib/logstash/compiler.rb:58:in compile_imperative'", "C:/Users/Administrator/Documents/Elastic/versao7.7/logstash-7.7.0/logstash-core/lib/logstash/compiler.rb:66:in compile_graph'", "C:/Users/Administrator/Documents/Elastic/versao7.7/logstash-7.7.0/logstash-core/lib/logstash/compiler.rb:28:in block in compile_sources'", "org/jruby/RubyArray.java:2577:in map'", "C:/Users/Administrator/Documents/Elastic/versao7.7/logstash-7.7.0/logstash-core/lib/logstash/compiler.rb:27:in compile_sources'", "org/logstash/execution/AbstractPipelineExt.java:181:in initialize'", "org/logstash/execution/JavaBasePipelineExt.java:67:in initialize'", "C:/Users/Administrator/Documents/Elastic/versao7.7/logstash-7.7.0/logstash-core/lib/logstash/java_pipeline.rb:43:in initialize'", "C:/Users/Administrator/Documents/Elastic/versao7.7/logstash-7.7.0/logstash-core/lib/logstash/pipeline_action/create.rb:52:in execute'", "C:/Users/Administrator/Documents/Elastic/versao7.7/logstash-7.7.0/logstash-core/lib/logstash/agent.rb:342:in block in converge_state'"]}
[2020-05-28T07:16:17,473][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2020-05-28T07:16:22,380][INFO ][logstash.runner ] Logstash shut down.

Note that I performed the installation with the command:
C:\Users\Administrator\Documents\Elastic\versao7.7\logstash-7.7.0\bin\logstash-plugin install logstash-output-mongodb

I have seen the blog post you followed, and it is wrong. You have to tell Logstash which input plugin you want to use. So, to quote the example from GitHub:

    input {
      mongodb {
        uri => 'mongodb://10.0.0.30/my-logs?ssl=true'
        placeholder_db_dir => '/opt/logstash-mongodb/'
        placeholder_db_name => 'logstash_sqlite.db'
        collection => 'events_'
        batch_size => 5000
      }
    }
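
Note that the mongodb input comes from the logstash-input-mongodb plugin, which is separate from the logstash-output-mongodb plugin you installed. If it is not present yet, something along these lines should install it (assuming the same install path as the command you quoted):

    C:\Users\Administrator\Documents\Elastic\versao7.7\logstash-7.7.0\bin\logstash-plugin install logstash-input-mongodb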

Thanks for the answer!
I made the changes and the following error appeared.
Could you help me?

[2020-05-28T09:33:05,831][ERROR][logstash.agent ] An exception happened when converging configuration {:exception=>LogStash::Error, :message=>"Don't know how to handle Java::JavaLang::IllegalStateException for PipelineAction::Create<main>", :backtrace=>["org/logstash/execution/ConvergeResultExt.java:129:in create'", "org/logstash/execution/ConvergeResultExt.java:57:in add'", "C:/Users/Administrator/Documents/Elastic/versao7.7/logstash-7.7.0/logstash-core/lib/logstash/agent.rb:355:in block in converge_state'"]}
[2020-05-28T09:33:05,908][FATAL][logstash.runner ] An unexpected error occurred! {:error=>#<LogStash::Error: Don't know how to handle Java::JavaLang::IllegalStateException for PipelineAction::Create<main>>, :backtrace=>["org/logstash/execution/ConvergeResultExt.java:129:in create'", "org/logstash/execution/ConvergeResultExt.java:57:in add'", "C:/Users/Administrator/Documents/Elastic/versao7.7/logstash-7.7.0/logstash-core/lib/logstash/agent.rb:355:in block in converge_state'"]}
[2020-05-28T09:33:05,967][ERROR][org.logstash.Logstash ] java.lang.IllegalStateException: Logstash stopped processing because of an error: (SystemExit) exit
warning: thread "Api Webserver" terminated with exception (report_on_exception is true):
NameError: uninitialized constant Rack::Builder
Did you mean? Rack::Builder
const_missing at org/jruby/RubyModule.java:3760
app at C:/Users/Administrator/Documents/Elastic/versao7.7/logstash-7.7.0/logstash-core/lib/logstash/api/rack_app.rb:97
start_webserver at C:/Users/Administrator/Documents/Elastic/versao7.7/logstash-7.7.0/logstash-core/lib/logstash/webserver.rb:99
run at C:/Users/Administrator/Documents/Elastic/versao7.7/logstash-7.7.0/logstash-core/lib/logstash/webserver.rb:60
each at org/jruby/RubyRange.java:526
each_with_index at org/jruby/RubyEnumerable.java:1258
run at C:/Users/Administrator/Documents/Elastic/versao7.7/logstash-7.7.0/logstash-core/lib/logstash/webserver.rb:55
start_webserver at C:/Users/Administrator/Documents/Elastic/versao7.7/logstash-7.7.0/logstash-core/lib/logstash/agent.rb:393

    input {
        mongodb {
            uri => 'mongodb://localhost:27017/exemplo'
            placeholder_db_dir => '/opt/logstash-mongodb/'
            placeholder_db_name => 'logstash_sqlite.db'
            collection => 'clientes'
            batch_size => 5000
        }
    }
    output {
        stdout {
            codec => rubydebug
        }
        elasticsearch {
            action => "index"
            index => "mongo_log_data"
            hosts => ["localhost:9200"]
        }
    }

You seem to have loaded a version of the rack library that does not have a Builder class. I do not know much about dependency management in Ruby but I would have expected the system to detect that your rack is incompatible with your mongodb input.

I changed to the previous Logstash version and ran the plugin install commands again:
Validating logstash-output-mongodb
Installing logstash-output-mongodb
Installation successful
Validating logstash-input-mongodb
Installing logstash-input-mongodb
Installation successful
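
To double-check that both plugins were picked up, listing the installed plugins should show them (the path below assumes the 7.6.1 install visible in the logs further down):

    C:\Users\Administrator\Documents\Elastic\logstash-7.6.1\bin\logstash-plugin list --verbose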

Then I ran my .conf file and a different error appeared:

   2020-05-28T10:29:06,578][INFO ][logstash.outputs.elasticsearch][main] Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
D, [2020-05-28T10:29:09.206000 #6008] DEBUG -- : MONGODB | EVENT: #<TopologyOpening topology=Unknown[]>
D, [2020-05-28T10:29:09.252000 #6008] DEBUG -- : MONGODB | Topology type 'unknown' initializing.
D, [2020-05-28T10:29:09.494000 #6008] DEBUG -- : MONGODB | EVENT: #<TopologyChanged prev=Unknown[] new=Unknown[localhost:27017]>
D, [2020-05-28T10:29:09.565000 #6008] DEBUG -- : MONGODB | Topology type 'Unknown' changed to type 'Unknown'.
D, [2020-05-28T10:29:09.630000 #6008] DEBUG -- : MONGODB | EVENT: #<ServerOpening address=localhost:27017 topology=Unknown[localhost:27017]>
D, [2020-05-28T10:29:09.814000 #6008] DEBUG -- : MONGODB | Server localhost:27017 initializing.
D, [2020-05-28T10:29:09.895000 #6008] DEBUG -- : MONGODB | Waiting for up to 29.79 seconds for servers to be scanned: #<Cluster topology=Unknown[localhost:27017] servers=[#<Server address=localhost:27017 UNKNOWN>]>
D, [2020-05-28T10:29:10.536000 #6008] DEBUG -- : MONGODB | Error running ismaster on localhost:27017: Mongo::Error::SocketError: OpenSSL::SSL::SSLError: An existing connection was forcibly closed by the remote host (for 127.0.0.1:27017 (localhost:27017, TLS)) (MongoDB may not be configured with SSL support):
C:/Users/Administrator/Documents/Elastic/logstash-7.6.1/vendor/bundle/jruby/2.5.0/gems/mongo-2.12.1/lib/mongo/socket.rb:343:in `handle_errors'
C:/Users/Administrator/Documents/Elastic/logstash-7.6.1/vendor/bundle/jruby/2.5.0/gems/mongo-2.12.1/lib/mongo/socket/ssl.rb:56:in `block in connect!'
C:/Users/Administrator/Documents/Elastic/logstash-7.6.1/vendor/bundle/jruby/2.5.0/gems/mongo-2.12.1/lib/mongo/timeout.rb:43:in `block in timeout'
org/jruby/ext/timeout/Timeout.java:99:in `timeout'
org/jruby/ext/timeout/Timeout.java:75:in `timeout'
C:/Users/Administrator/Documents/Elastic/logstash-7.6.1/vendor/bundle/jruby/2.5.0/gems/mongo-2.12.1/lib/mongo/timeout.rb:42:in `timeout'

Well that is an improvement. Are there any errors in the mongodb log at that time?

I can access the database normally through Robo 3T and the terminal, and there is no username or password for testing. Do you have any idea where this error could be coming from?

MongoDB is closing the connection. If it does not log a reason why then I cannot think of a way to find out. Check the MongoDB logs.
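
If you are not sure where the MongoDB log is written, the location is whatever systemLog.path points at in the server's mongod.cfg. A minimal sketch of that section, with a placeholder path that will differ on your machine:

    # mongod.cfg -- only the logging section is shown;
    # the path below is an example, not your actual location
    systemLog:
      destination: file
      path: C:\data\log\mongod.log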


There were these entries in the log. Do you know what it might be? I researched it but found nothing conclusive:
2020-05-28T12:05:17.108-0700 I COMMAND [conn4] killcursors: found 0 of 1
2020-05-28T12:05:17.197-0700 I COMMAND [conn4] killcursors exemplo.clientes appName: "MongoDB Shell" numYields:0 locks:{ Global: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 165ms
2020-05-28T12:05:17.197-0700 I NETWORK [conn4] end connection 127.0.0.1:50297 (2 connections now open)
2020-05-28T12:06:13.188-0700 I NETWORK [listener] connection accepted from 127.0.0.1:50861 #7 (3 connections now open)
2020-05-28T12:06:13.193-0700 I NETWORK [conn7] received client metadata from 127.0.0.1:50861 conn7: { application: { name: "robo3t" }, driver: { name: "MongoDB Internal Client", version: "3.4.3-10-g865d2fb" }, os: { type: "Windows", name: "Microsoft Windows Server 2012", architecture: "x86_64", version: "6.2 (build 9200)" } }
2020-05-28T12:06:13.229-0700 I NETWORK [listener] connection accepted from 127.0.0.1:50862 #8 (4 connections now open)
2020-05-28T12:06:13.230-0700 I NETWORK [conn8] received client metadata from 127.0.0.1:50862 conn8: { application: { name: "MongoDB Shell" }, driver: { name: "MongoDB Internal Client", version: "3.4.3-10-g865d2fb" }, os: { type: "Windows", name: "Microsoft Windows Server 2012", architecture: "x86_64", version: "6.2 (build 9200)" } }
2020-05-28T12:16:30.176-0700 I NETWORK [listener] connection accepted from 127.0.0.1:50891 #9 (5 connections now open)
2020-05-28T12:16:30.294-0700 I NETWORK [conn9] received client metadata from 127.0.0.1:50891 conn9: { driver: { name: "mongo-ruby-driver", version: "2.12.1" }, os: { type: "mswin", name: "mswin32", architecture: "x86_64" }, platform: "2.5.7, java, java1.8" }
2020-05-28T12:16:38.279-0700 I NETWORK [conn9] end connection 127.0.0.1:50891 (4 connections now open)

We need to see the MongoDB messages around 10:29:10, or from another occurrence of that SocketError.

In the previous example the URI was as follows:
mongodb://localhost:27017/exemplo?ssl=true

But SSL is not enabled, so I removed that parameter and left the URI without it.
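
For reference, a sketch of how the uri line looks now, matching the config posted earlier (same local database, no ssl parameter):

    uri => 'mongodb://localhost:27017/exemplo'

With that change, the following error appeared: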

[2020-05-28T14:01:38,070][ERROR][logstash.javapipeline ][main] Pipeline aborted due to error {:pipeline_id=>"main", :exception=>#<Sequel::DatabaseConnectionError: Java::JavaSql::SQLException: path to '/opt/logstash-mongodb/logstash_sqlite.db': 'C:\opt' does not exist>

MongoDB log at that time:

2020-05-28T14:01:35.748-0700 I NETWORK  [listener] connection accepted from 127.0.0.1:51183 #16 (7 connections now open)
2020-05-28T14:01:35.844-0700 I NETWORK  [conn16] received client metadata from 127.0.0.1:51183 conn16: { driver: { name: "mongo-ruby-driver", version: "2.12.1" }, os: { type: "mswin", name: "mswin32", architecture: "x86_64" }, platform: "2.5.7, java, java1.8" }
2020-05-28T14:01:43.471-0700 I NETWORK  [conn16] end connection 127.0.0.1:51183 (6 connections now open)

That seems quite clear to me.

Yes, but what are these files? Do you know if they are needed?

The placeholder db is required. The input uses it, I think, to store information about the most recent document it has seen in each mongodb collection.

Can you explain more about this file? Do you have a solution to propose?

Everything I know about that file I learned from a quick review of the code after you posted your message.

The solution is to point placeholder_db_dir to a directory on your machine that exists and is writeable by the user running logstash.
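
For example, something like this in the input block, using any existing directory the Logstash user can write to (the path is only an illustration):

    placeholder_db_dir => 'C:\Users\Administrator'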


Okay, thank you so much for helping me!
I have just one more question. I used the following .conf file:

    input {
        mongodb {
            uri => 'mongodb://localhost:27017/PNAD'
            placeholder_db_dir => 'C:\Users\Administrator'
            placeholder_db_name => 'logstash_sqlite.db'
            collection => 'PNAD_'
            batch_size => 5000
        }
    }
    output {
        stdout {
            codec => rubydebug
        }
        elasticsearch {
            action => "index"
            index => "mongo_log_data"
            hosts => ["localhost:9200"]
        }
    }

And it returns the following:

D, [2020-05-29T06:12:49.740000 #5896] DEBUG -- : MONGODB | [167] localhost:27017 #1 | PNAD.listCollections | STARTED | {"listCollections"=>1, "cursor"=>{}, "nameOnly"=>true, "$db"=>"PNAD", "lsid"=>{"id"=><BSON::Binary:0x2064 type=uuid data=0xd0e9ee3087f84957...>}}
D, [2020-05-29T06:12:49.747000 #5896] DEBUG -- : MONGODB | [167] localhost:27017 | PNAD.listCollections | SUCCEEDED | 0.003s

It creates the index but does not insert the collection data.
PNAD is the name of both the collection and the database.
