Receiving a "bad substitution" error when running Job

Talend Version: 6.1.1

Summary

A "bad substitution" error appears when running a Job, along with exit code 11 in the logs.
Additional Versions: 6.0.1, 5.6.2, 5.6.1, 6.4.1, 6.3.1
Keywords: Hadoop, bad substitution, hdp, spark
Product: Talend Big Data
Component: Studio
Article Type: Configuration
Problem Description

You are receiving the following errors on your Hadoop cluster in the HUE logs:

framework/hadoop/share/hadoop/tools/lib/*:/usr/hdp/${hdp.version}/hadoop/lib/hadoop-lzo-0.6.0.${hdp.version}.jar:/etc/hadoop/conf/secure: bad substitution
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:576)
    at org.apache.hadoop.util.Shell.run(Shell.java:487)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:753)
    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)


A second error appears in the logs:

2016-04-12 13:58:16,093 WARN nodemanager.DefaultContainerExecutor (DefaultContainerExecutor.java:launchContainer(230)) - Exception from container-launch with container ID: container# and exit code: 11
ExitCodeException exitCode=11:
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:576)
    at org.apache.hadoop.util.Shell.run(Shell.java:487)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:753)
    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Problem root cause

The spark-defaults.conf file was not completely configured, so the hdp.version property is never substituted. The literal placeholder ${hdp.version} is left in the container classpath, which the launch shell rejects as a "bad substitution".
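The mechanics can be sketched as follows (HdpVersionCheck and the sample value are illustrative only, not Talend or Hadoop source code):

```java
// Illustrative sketch: hdp.version is a plain JVM system property. When it is
// not set, classpath entries containing the literal ${hdp.version} placeholder
// are never resolved to a real path, which is what surfaces as the
// "bad substitution" error when YARN launches the container.
public class HdpVersionCheck {
    public static void main(String[] args) {
        String entry = "/usr/hdp/${hdp.version}/hadoop/lib/hadoop-lzo-0.6.0.${hdp.version}.jar";
        String hdpVersion = System.getProperty("hdp.version");
        if (hdpVersion == null) {
            // Launched without -Dhdp.version=..., the placeholder stays literal.
            System.out.println("unresolved: " + entry);
        } else {
            // Launched with e.g. -Dhdp.version=<your HDP version>, it resolves cleanly.
            System.out.println("resolved: " + entry.replace("${hdp.version}", hdpVersion));
        }
    }
}
```

Running this class with and without a `-Dhdp.version=...` argument shows the two behaviors; the fixes below simply ensure the property is set for the Spark driver and application master JVMs.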
Solution or Workaround

Solution 1

You will need to set the hdp.version Java option in the $SPARK_HOME/conf/spark-defaults.conf file:

spark.driver.extraJavaOptions -Dhdp.version={hadoop version}
spark.yarn.am.extraJavaOptions -Dhdp.version={hadoop version}
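As a concrete illustration, with a hypothetical HDP version string (replace it with the value your own cluster reports; on an HDP node the installed version typically appears as a directory name under /usr/hdp):

```
spark.driver.extraJavaOptions -Dhdp.version=2.4.0.0-169
spark.yarn.am.extraJavaOptions -Dhdp.version=2.4.0.0-169
```

Both properties are needed in yarn-client mode: the first covers the driver JVM, the second the YARN application master.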


Solution 2

In Studio, add the JVM argument -Dhdp.version={hadoop version} under the Advanced settings of the Spark Job's Run properties.

Version history
Revision #: 6 of 6
Last update: 12-07-2017 12:42 PM