Does the community suggest running Java garbage collection after executing subjobs, or is there a better way to deal with these errors when it's not possible to add more hardware to the solution?
Did you get the "java.lang.OutOfMemoryError: GC overhead limit exceeded" error?
When it fails with "java.lang.OutOfMemoryError: GC overhead limit exceeded", the JVM is signalling that your application is spending too much time in garbage collection with little to show for it.
Usually, increasing the heap size with the -Xmx parameter is a quick fix for this kind of issue.
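As a quick sanity check, you can print the heap ceiling the JVM actually runs with, to confirm your -Xmx setting took effect (the class name here is just illustrative):

```java
public class HeapCheck {
    public static void main(String[] args) {
        // Maximum heap the JVM will attempt to use, as set by -Xmx
        long maxBytes = Runtime.getRuntime().maxMemory();
        System.out.println("Max heap (MB): " + maxBytes / (1024 * 1024));
    }
}
```

Run it with, for example, `java -Xmx2048m HeapCheck` and verify the printed value matches what you configured for the job.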
It is also recommended that you check or profile the job design; performance issues are usually caused by the DB connection or the job design itself.
Hope this link helps: https://plumbr.eu/outofmemoryerror