We are facing a problem with a large data load into a Redshift database.
There is a job which loads data from a Redshift stage1 table to a stage2 table, applying all of the given transformations. All of the lookups used in this job load fine, but the main flow is not able to fetch the large volume of data (around 10 million rows). After some time the job fails with the error "Exception in thread "Thread-0" java.lang.OutOfMemoryError: Java heap space".
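For context, Talend generates Java under the hood, and with a plain JDBC read the usual way to keep millions of rows from landing in the heap at once is to turn off autocommit and set a fetch size, so the driver streams results through a server-side cursor instead of materializing the whole result set. A minimal sketch of that pattern (the endpoint, credentials, and query are placeholders, not our real values, and this is not the exact code Talend generates):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class Stage1StreamingRead {
    public static void main(String[] args) throws SQLException {
        // Placeholder endpoint and credentials -- replace with your cluster's values.
        String url = "jdbc:redshift://example-cluster:5439/dev";
        try (Connection conn = DriverManager.getConnection(url, "user", "password")) {
            // The PostgreSQL-based Redshift driver only uses a cursor when
            // autocommit is off and a non-zero fetch size is set.
            conn.setAutoCommit(false);
            try (Statement stmt = conn.createStatement()) {
                stmt.setFetchSize(10_000); // pull rows in bounded batches, not all 10M at once
                try (ResultSet rs = stmt.executeQuery("SELECT * FROM stage1")) {
                    while (rs.next()) {
                        // transform and write each row here; heap usage stays
                        // proportional to the fetch size, not the table size
                    }
                }
            }
        }
    }
}
```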
We have tried the below options to execute the job (example settings are shown after this list):
1. Increasing the JVM memory parameters.
2. Enabling the disk storage ("Store temp data") option in the tMap.
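For reference, these are roughly the settings we changed; the paths and values below are examples, not our exact production values:

```
Run tab > Advanced settings > Use specific JVM arguments:
    -Xms1024M
    -Xmx8192M

tMap > tMap settings:
    Store temp data          = true
    Temp data directory path = "/data/talend/tmp"   (a directory with enough free disk space)
```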
We are using Talend version 6.5.1. Please find the job attached.
Could you please let us know if this online KB article helps?