Connection to TAC from Studio fails with the error "Connect to server timeout". You can change the timeout value in the Studio performance preferences.
This error occurs when the value of Connection timeout with Administration center in Studio is set to a low value.
In Studio, navigate to Window > Preferences > Talend > Performance and increase the parameter Connection timeout with Administration center.
When connecting to a MapR cluster and validating MapR Ticket connectivity, you may get the following error:
org.talend.designer.hdfsbrowse.exceptions.HadoopServerException: org.talend.designer.hdfsbrowse.exceptions.HadoopServerException: java.util.concurrent.ExecutionException: java.lang.reflect.InvocationTargetException at org.talend.designer.hdfsbrowse.hadoop.service.check.AbstractCheckedServiceProvider.checkService
Caused by: org.talend.designer.hdfsbrowse.exceptions.HadoopServerException: java.util.concurrent.ExecutionException: java.lang.reflect.InvocationTargetException
Caused by: java.util.concurrent.ExecutionException: java.lang.reflect.InvocationTargetException
Caused by: java.lang.reflect.InvocationTargetException
Caused by: java.util.ServiceConfigurationError: org.apache.hadoop.fs.FileSystem: Provider org.apache.hadoop.fs.viewfs.ViewFileSystem could not be instantiated
Caused by: java.io.IOException: failure to login: No LoginModules configured for hadoop_default
The MapR distribution for Hadoop uses the Java Authentication and Authorization Service (JAAS) to control security features. The /opt/mapr/conf/mapr.login.conf file specifies configuration parameters for JAAS. When the error shown above occurs in Studio, it means Studio is unable to find this file.
To resolve the error, add the following line to the studio.ini file in the Studio installation directory, or to the JVM arguments of the connection component.
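The exact line is not shown in the original post. As an assumption based on the standard JAAS system property (java.security.auth.login.config) and the mapr.login.conf path mentioned above, the line typically points the JVM at the MapR JAAS configuration file, for example:

```
-Djava.security.auth.login.config=/opt/mapr/conf/mapr.login.conf
```

On Windows, adjust the path to match your MapR client installation.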
Spark Jobs fail with errors such as "Diagnostics: Container killed on request. Exit code is 143" and "Lost executor 3".
Cores, memory, and memory overhead are the three settings you can tune to make a Job succeed in this case. Changing a few parameters in the Spark configuration usually resolves the issue.
The number of cores you configure (for example, four or eight) is very significant, as it determines how many tasks can run concurrently. With four cores, an executor runs four tasks in parallel, and the Spark executor memory is shared among those tasks, so the core count directly affects how much execution memory each task gets. Here are the two relevant parameters: spark.executor.cores and spark.driver.cores.
Memory is important too: the heap available to each executor and driver must accommodate the concurrent tasks defined by the core count. Here are the two relevant parameters: spark.executor.memory and spark.driver.memory.
memoryOverhead is the off-heap memory allocated to a container (the driver or an executor) on top of its heap. A container can run until its total memory use exceeds the sum of its memory and memoryOverhead allocations; once it exceeds that limit, it is generally killed by YARN. Here are the two relevant parameters: spark.yarn.executor.memoryOverhead and spark.yarn.driver.memoryOverhead (renamed spark.executor.memoryOverhead and spark.driver.memoryOverhead in Spark 2.3+).
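To make the kill condition concrete, here is a minimal sketch of how the total container size is derived. It assumes Spark's documented default overhead of max(384 MB, 10% of executor memory); the function name is illustrative, not a Spark API.

```python
def container_size_mb(executor_memory_mb, overhead_mb=None):
    """Total memory YARN reserves for one executor container.

    If no explicit memoryOverhead is set, Spark's default is
    max(384 MB, 10% of the executor heap).
    """
    if overhead_mb is None:
        overhead_mb = max(384, int(0.10 * executor_memory_mb))
    return executor_memory_mb + overhead_mb

# A 4 GB heap requests a 4096 + 409 = 4505 MB container; using more
# than that total is what triggers the "Exit code is 143" kill.
print(container_size_mb(4096))
```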
Increase the values of the following parameters according to your system capacity:
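The parameter list is truncated in the original post. As an illustrative sketch (the property names are real Spark settings, but the values are examples, not recommendations), the entries in spark-defaults.conf, or in the Spark configuration of the Job, might look like:

```
spark.executor.cores                 4
spark.executor.memory                8g
spark.yarn.executor.memoryOverhead   1g
spark.driver.memory                  4g
spark.yarn.driver.memoryOverhead     512m
```

Raise memory and memoryOverhead first when containers are killed with exit code 143, since that error indicates the container exceeded its total allocation.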