During a Job execution, a java.lang.OutOfMemoryError exception may occur, meaning that the Job ran out of memory. This article explains the common causes of the memory overflow that leads to this exception and offers some solutions and best practices.
There are several possible reasons for an OutOfMemoryError exception to occur. The most common reasons include:
The error displays on the console as below:
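A typical console message looks like the following (the exact wording depends on which memory region is exhausted; Java heap space is the most common case):

```
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
```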
Usually, this issue can be solved by optimizing the Job as described in the following section. However, if the Job cannot be optimized, or if the problem persists even after the Job has been optimized, you need to allocate more memory so that the Job can process large amounts of data. This article also provides some advice on how to allocate more memory. Note, however, that the default memory settings should be sufficient for all normal Job executions.
There are different ways to optimize your Job designs in order to improve performance. A number of components natively include parameters that help you optimize your Jobs.
In Jobs that contain buffer components, such as tSortRow and tMap, you can change the basic configuration to store temporary data on disk rather than in memory. For example, on tMap, click tMap settings on the Lookup flow and set the Store temp data option to true.
Then, in the Basic settings panel, browse to a directory path for the temporary data.
You can also optimize Jobs that use the tMysqlInput component. In this case, select the Enable stream option in the Advanced settings panel to replace the usual buffered processing with stream processing, which allows the code to read from a large table without consuming a large amount of memory. This improves performance.
If you cannot optimize the Job design, you can at least allocate more memory to the Job. The following sections describe how to allocate more memory to a single Job via the Studio, to all Jobs via the Studio, and to a Job script outside the Studio.
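In all three cases, heap memory is controlled through the standard JVM arguments. As an illustration, the settings look like the following; the values shown are examples only and should be adjusted to your data volume and available physical memory:

```
-Xms256M   (initial heap size)
-Xmx1024M  (maximum heap size)
```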
This change is effective for all Jobs.
Note that the default setting should be sufficient for all normal Job executions, therefore this global change of JVM arguments is not recommended.
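To confirm which maximum heap size a JVM actually received from these arguments, you can run a minimal check such as the one below. This is a sketch for verification purposes only, not part of the Studio or of generated Job code; the class name HeapCheck is arbitrary.

```java
// Prints the maximum heap size the JVM was started with.
// Run with, for example: java -Xmx1024M HeapCheck
public class HeapCheck {
    public static void main(String[] args) {
        long maxBytes = Runtime.getRuntime().maxMemory();
        System.out.println("Max heap (MB): " + maxBytes / (1024 * 1024));
    }
}
```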
If you have exported the Job as a Job script, or if you only have access to the Job script, you can allocate more memory by modifying the script file. To do so:
Modify the JVM arguments (-Xms and -Xmx) as follows:
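For example, the java command line in the generated .sh or .bat launcher could be changed to use larger values, along the lines of the following. The class path and Job class shown here are placeholders, not values taken from this article; keep the ones already present in your script.

```
java -Xms256M -Xmx1024M -cp <classpath> <jobClass>
```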