In the Logstash configuration I changed the JVM heap settings to 512m and 8g, but after restarting, Logstash still gives the same error.
Please help me to resolve the issue.
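If the change was made in config/jvm.options (which is where the heap is normally set in a default install; the path is assumed here), it would look roughly like this:

# config/jvm.options (default install layout assumed)
-Xms512m
-Xmx8g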
Have you tried temporarily increasing the heap to at least the size of the database? Alternatively, you could add a WHERE clause to the queries to reduce the result set so it fits within the heap.
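As a rough sketch of what that could look like, assuming a MySQL source (the connection details, table, and date column below are purely illustrative, not taken from the original post):

input {
  jdbc {
    jdbc_driver_library => "/path/to/mysql-connector-java.jar"
    jdbc_driver_class => "com.mysql.cj.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://localhost:3306/mydb"
    jdbc_user => "user"
    jdbc_password => "password"
    # WHERE clause restricts the result set so it fits in the heap (condition is illustrative)
    statement => "SELECT * FROM my_table WHERE created_at >= '2020-01-01' AND created_at < '2020-02-01'"
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "my_table"
  }
}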
You should be able to assign that heap size... as long as you have enough free memory. Even so, you would probably be so close to the limit that the system would start to swap, which is obviously not a good thing at all.
So we'll wait for your test with the WHERE statements, to see whether it helps.
If the machine is a VM, I would definitely go for increasing the RAM and raising the max heap size to the full size of the database.
If the machine is physical, you could stop any non-essential services and check how much RAM is available (with Logstash stopped). If you then have more than those 13 GB free, I would go for temporarily increasing the heap size.
If the RAM is not enough, I would try doing the migration in parts, that is, run first with one particular SELECT ... WHERE, then with another, and so on.
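As a rough sketch, assuming the table has a numeric id column (the column name and ranges are illustrative), each run would reuse the same connection settings as above and only change the bounded statement, for example:

input {
  jdbc {
    # ... same driver and connection settings as the earlier example ...
    # First run: rows 1..1,000,000; next run 1,000,001..2,000,000, and so on
    statement => "SELECT * FROM my_table WHERE id > 0 AND id <= 1000000"
  }
}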
Another possibility, so this doesn't take as much time, would be to define several jdbc input entries, but I'm not sure whether that would work (it depends on whether Logstash processes those entries serially, in which case it should work, or in parallel, in which case Java would run out of memory again). That said, the most probable thing is that inputs are handled in parallel, so this might not be the solution (the individual WHERE'd statements should be); see the sketch below.
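For completeness, "several jdbc input entries" would look roughly like the sketch below (same illustrative names as before). As far as I know, Logstash gives each input plugin its own thread, so these would indeed run in parallel:

input {
  jdbc {
    # ... same driver and connection settings as the earlier example ...
    statement => "SELECT * FROM my_table WHERE id <= 1000000"
  }
  jdbc {
    # ... same driver and connection settings as the earlier example ...
    statement => "SELECT * FROM my_table WHERE id > 1000000 AND id <= 2000000"
  }
}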