Does the community suggest forcing Java garbage collection after executing subjobs, or is there a better way to deal with these errors when it's not possible to add more hardware to the solution?
Are you getting the "java.lang.OutOfMemoryError: GC overhead limit exceeded" error?
When a job fails with “java.lang.OutOfMemoryError: GC overhead limit exceeded”, the JVM is signalling that your application is spending too much time in garbage collection while reclaiming very little memory. Forcing GC manually will not help here; the collector is already running constantly.
Usually, increasing the maximum heap size with the -Xmx parameter is a quick fix for this kind of issue.
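Before raising -Xmx, it can help to confirm what heap ceiling the job's JVM actually got. A minimal sketch (the class name `HeapCheck` is my own, not part of Talend):

```java
// Minimal sketch: report the heap limits of the running JVM.
// Max heap reflects -Xmx; used heap shows current pressure.
public class HeapCheck {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long maxHeapMb = rt.maxMemory() / (1024 * 1024);               // ceiling set by -Xmx
        long usedMb = (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
        System.out.println("Max heap (MB):  " + maxHeapMb);
        System.out.println("Used heap (MB): " + usedMb);
    }
}
```

If used heap sits close to the maximum for most of the run, a larger -Xmx (or a leaner job design) is the likely fix.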
It is also recommended to check or profile the job design; performance issues of this kind are usually caused by the DB connection or by the job design itself (for example, loading an entire result set into memory instead of streaming it).
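If you run the job as an exported standalone script, the heap settings live in the generated launcher's java command line. A sketch of the relevant line, as a config fragment (the script paths, project, and class names below are placeholders, not taken from the original post):

```shell
# Generated Talend job launcher (names and paths are placeholders).
# JVM arguments must appear before the main class name.
java -Xms256m -Xmx4096m \
     -cp "$ROOT_PATH:$ROOT_PATH/../lib/*" \
     myproject.myjob_0_1.MyJob "$@"
```

Raising -Xmx in the launcher (or in the Studio's Run tab under Advanced settings > JVM arguments) gives every subjob the same enlarged heap.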
This link may also help: https://plumbr.eu/outofmemoryerror