Does the community suggest running Java garbage collection after executing subjobs, or is there a better way to deal with these errors when it's not possible to add more hardware to the solution?
Did you get the "java.lang.OutOfMemoryError: GC overhead limit exceeded" error?
When the JVM fails with "java.lang.OutOfMemoryError: GC overhead limit exceeded", it is signalling that your application is spending too much time in garbage collection with too little to show for it: by default, the error is thrown when the JVM spends more than 98% of its time in GC while recovering less than 2% of the heap.
Usually, increasing the heap size with the -Xmx parameter is a quick fix for this kind of issue.
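For example, you could raise the initial and maximum heap like this (the sizes and the jar name are only illustrative; in Talend you would typically set these under the job's Run tab > Advanced settings as JVM arguments):

```shell
# Illustrative JVM memory settings -- tune the values to your machine:
#   -Xms  initial heap size
#   -Xmx  maximum heap size
java -Xms1024m -Xmx4096m -jar myjob.jar
```

Note that this only buys headroom; if the job keeps accumulating data faster than it can be collected, the same error will eventually return at the larger heap size.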
It is also recommended that you check or profile the job design. Performance issues are usually caused by the DB connection or the job design.
Hope this link will help: https://plumbr.eu/outofmemoryerror