During a Job execution, an OutOfMemory Java exception may occur, meaning that the Job ran out of memory. This article explains the common causes of the memory overflow behind the OutOfMemory Java exception, and offers some solutions and best practices.
The exception most commonly occurs when a Job processes more data than the memory allocated to it can hold, for example when buffering large lookup tables or sorting large data sets in memory.
The error displays on the console as shown below:
Exception in thread "Thread-0" java.lang.OutOfMemoryError: Java heap space
    at java.util.LinkedList.listIterator(Unknown Source)
    at java.util.AbstractList.listIterator(Unknown Source)
    at java.util.AbstractSequentialList.iterator(Unknown Source)
    at routines.system.RunStat.sendMessages(RunStat.java:261)
    at routines.system.RunStat.run(RunStat.java:225)
    at java.lang.Thread.run(Unknown Source)
This issue can usually be solved by optimizing the Job as described in the following section. If the Job cannot be optimized, or the problem persists after optimization, you need to allocate more memory so the Job can process large amounts of data; this article also provides advice on how to do so. Note, however, that the default memory settings should be sufficient for normal Job executions.
There are different ways of optimizing your Job designs to improve performance. A number of components provide native parameters that help you optimize your Jobs.
In Jobs that contain buffer components such as tSortRow and tMap, you can change the basic configuration to store temporary data on disk rather than in memory. For example, on tMap, open the tMap settings of the Lookup flow and set the Store temp data option to true.
Then set a directory path for the temporary data in the Basic settings panel.
You can also optimize Jobs that use a tMysqlInput component. In this case, select the Enable stream option in the Advanced settings panel to replace the usual buffered processing with stream processing, which allows the code to read from a large table without consuming a large amount of memory.
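The principle behind the Enable stream option can be sketched outside Talend. The example below is not Talend or Connector/J code; it is a minimal, self-contained illustration of why streaming rows one at a time keeps memory flat while buffering grows with the table size (the reader-based "rows" are a stand-in for a database result set):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;

public class StreamVsBuffer {

    // Buffered approach: every row is held in memory at once,
    // so heap usage grows with the size of the table.
    static List<String> readAll(BufferedReader reader) {
        List<String> rows = new ArrayList<>();
        try {
            String line;
            while ((line = reader.readLine()) != null) {
                rows.add(line); // memory grows with each row
            }
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
        return rows;
    }

    // Streamed approach: each row is processed and then discarded,
    // so memory usage stays constant regardless of table size.
    static long countStreaming(BufferedReader reader) {
        long count = 0;
        try {
            while (reader.readLine() != null) {
                count++; // process the row here, then let it be collected
            }
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
        return count;
    }

    public static void main(String[] args) {
        String data = "row1\nrow2\nrow3";
        System.out.println(
            countStreaming(new BufferedReader(new StringReader(data))));
    }
}
```

With a genuinely large source, `readAll` is the pattern that triggers the OutOfMemoryError, while the streaming loop processes the same data in constant memory.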
If you cannot optimize the Job design, you can at least allocate more memory to the Job. The following sections describe how to allocate more memory to one Job via the Studio, to all Jobs via the Studio, or to a Job script outside the Studio.
Allocate more memory to the active Job by double-clicking the default JVM arguments and editing them:
Note: This change applies only to the active Job. The JVM settings persist in the Job script and take effect when the Job is exported and executed outside Talend Studio.
Allocate more memory by editing the JVM parameters in the Job Run VM arguments table:
Note: This change is effective for all Jobs.
The default settings should be sufficient for normal Job executions, so this global change of JVM arguments is not recommended.
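After changing the -Xms/-Xmx arguments, you can confirm that the new heap limit actually took effect. The small standalone class below (not part of Talend, just a hypothetical check you can run with the same VM arguments) prints the maximum heap the JVM was granted:

```java
public class HeapCheck {
    public static void main(String[] args) {
        // maxMemory() reports the largest heap the JVM will attempt
        // to use, which reflects the -Xmx argument it was started with.
        long maxBytes = Runtime.getRuntime().maxMemory();
        System.out.println("Max heap (MB): " + maxBytes / (1024 * 1024));
    }
}
```

For example, running it with `java -Xmx2048M HeapCheck` should report a value close to 2048 MB.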
If you have exported the Job as a Job script, or you only have access to the Job script, you can allocate more memory by modifying the script file. To do so:
Modify the JVM arguments (-Xms and -Xmx) as follows:
%~d0
cd %~dp0
java -Xms256M -Xmx2048M -cp classpath.jar; shong.test_0_1.test --context=Default %*
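If the Job runs on Linux rather than Windows, the exported Job includes an equivalent shell script, and the same -Xms and -Xmx arguments can be raised there. The sketch below mirrors the batch example above; the jar name, class name, and context are carried over from that example and are assumptions, not values from your own Job:

```shell
#!/bin/sh
# Change to the directory containing this script, then launch the Job
# with an enlarged heap. Note that on Linux the classpath separator
# is ":" rather than the ";" used on Windows.
cd "$(dirname "$0")"
java -Xms256M -Xmx2048M -cp classpath.jar:. shong.test_0_1.test --context=Default "$@"
```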