tPigLoad component issue with Cloudera 5

One Star

tPigLoad component issue with Cloudera 5

Hi Gurus.
Quick question: has anyone ever gotten the tPigLoad component working with Cloudera 5?
I am using Cloudera 5 with Talend 5.4 and 5.5 (neither worked).
Wherever MapReduce is involved, Talend complains.
The error is as follows:
Exception in component tPigLoad_1
org.apache.pig.impl.logicalLayer.FrontendException: ERROR 1000: Error during parsing. Unable to check name hdfs://txwlcloud2:8020/user/thor
at org.apache.pig.PigServer$Graph.parseQuery(PigServer.java:1608)
at org.apache.pig.PigServer$Graph.registerQuery(PigServer.java:1547)
at org.apache.pig.PigServer.registerQuery(PigServer.java:518)
at org.apache.pig.PigServer.registerQuery(PigServer.java:531)
at bigdata.testpig_0_1.testpig.tPigLoad_1Process(testpig.java:348)
at bigdata.testpig_0_1.testpig.runJobInTOS(testpig.java:599)
at bigdata.testpig_0_1.testpig.main(testpig.java:458)
Caused by: Failed to parse: Pig script failed to parse: 
<line 1, column 21> pig script failed to validate: org.apache.pig.backend.datastorage.DataStorageException: ERROR 6007: Unable to check name hdfs://txwlcloud2:8020/user/thor
at org.apache.pig.parser.QueryParserDriver.parse(QueryParserDriver.java:191)
at org.apache.pig.PigServer$Graph.parseQuery(PigServer.java:1600)
... 6 more
Caused by: 
<line 1, column 21> pig script failed to validate: org.apache.pig.backend.datastorage.DataStorageException: ERROR 6007: Unable to check name hdfs://txwlcloud2:8020/user/thor
at org.apache.pig.parser.LogicalPlanBuilder.buildLoadOp(LogicalPlanBuilder.java:835)
at org.apache.pig.parser.LogicalPlanGenerator.load_clause(LogicalPlanGenerator.java:3236)
at org.apache.pig.parser.LogicalPlanGenerator.op_clause(LogicalPlanGenerator.java:1315)
at org.apache.pig.parser.LogicalPlanGenerator.general_statement(LogicalPlanGenerator.java:799)
at org.apache.pig.parser.LogicalPlanGenerator.statement(LogicalPlanGenerator.java:517)
at org.apache.pig.parser.LogicalPlanGenerator.query(LogicalPlanGenerator.java:392)
at org.apache.pig.parser.QueryParserDriver.parse(QueryParserDriver.java:184)
... 7 more
Caused by: org.apache.pig.backend.datastorage.DataStorageException: ERROR 6007: Unable to check name hdfs://txwlcloud2:8020/user/thor
at org.apache.pig.backend.hadoop.datastorage.HDataStorage.isContainer(HDataStorage.java:207)
at org.apache.pig.backend.hadoop.datastorage.HDataStorage.asElement(HDataStorage.java:128)
at org.apache.pig.backend.hadoop.datastorage.HDataStorage.asElement(HDataStorage.java:138)
at org.apache.pig.parser.QueryParserUtils.getCurrentDir(QueryParserUtils.java:91)
at org.apache.pig.parser.LogicalPlanBuilder.buildLoadOp(LogicalPlanBuilder.java:827)
... 13 more
Caused by: java.io.IOException: Failed on local exception: com.google.protobuf.InvalidProtocolBufferException: Message missing required fields: callId, status; Host Details : local host is: "TXWLHPW295/10.215.206.241"; destination host is: "txwlcloud2":8020; 
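The last "Caused by" (InvalidProtocolBufferException: Message missing required fields: callId, status) looks like the classic symptom of an RPC/protocol mismatch, i.e. the Hadoop client jars the job loads do not match the CDH 5 NameNode (for example when the component's distribution/version setting pulls in pre-2.x client libraries). A minimal sketch, using only the standard org.apache.hadoop.util.VersionInfo API, to see which client version the job actually picks up:

// Sketch only: print which Hadoop client libraries end up on the job's
// classpath, to rule out a client/cluster version mismatch behind the
// protobuf parsing error. Assumes hadoop-common is on the classpath
// (it is in a Talend Pig job).
import org.apache.hadoop.util.VersionInfo;

public class CheckHadoopClientVersion {
    public static void main(String[] args) {
        // Against a CDH 5 cluster this should print something like 2.3.0-cdh5.x.x;
        // a 1.x / 0.20.x value here would explain the failed RPC above.
        System.out.println("Hadoop client version: " + VersionInfo.getVersion());
        System.out.println("Build version        : " + VersionInfo.getBuildVersion());
    }
}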
Moderator

Re: tPigLoad component issue with Cloudera 5

Hi,
ERROR 6007: Unable to check name hdfs://txwlcloud2:8020/user/thor

Make sure your input parameters (the NameNode URI and the file path) are correct. Have you checked the component reference TalendHelpCenter:tPigLoad?
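For example, a minimal sanity check along these lines (just a sketch, reusing the NameNode URI and path from your trace) would tell you whether the path is reachable at all with the client libraries your job uses:

// Sketch only: verify that the HDFS URI and path from the error message are
// reachable with the same client libraries the job uses. Host and path are
// taken from the stack trace above; adjust them to your environment.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CheckPigLoadPath {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://txwlcloud2:8020");
        try (FileSystem fs = FileSystem.get(conf)) {
            Path p = new Path("/user/thor");
            System.out.println(p + " exists: " + fs.exists(p));
        }
    }
}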
Best regards
Sabrina
--
Don't forget to give kudos when a reply is helpful and click Accept the solution when you think you're good with it.
One Star

Re: tPigLoad component issue with Cloudera 5

Hi Team, I am getting the following error while storing data into HBase using tPigStoreResult (store function: HBaseStorage).
Please refer to the attachment for the job design; the complete stack trace is below.
Starting job Main_e at 15:15 28/10/2014.
connecting to socket on port 3342
connected
: org.apache.hadoop.conf.Configuration.deprecation - mapred.job.queue.name is deprecated. Instead, use mapreduce.job.queuename
: org.apache.hadoop.conf.Configuration.deprecation - fs.default.name is deprecated. Instead, use fs.defaultFS
: org.apache.hadoop.conf.Configuration.deprecation - mapred.job.tracker is deprecated. Instead, use mapreduce.jobtracker.address
: org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting to hadoop file system at: hdfs://localhost.localdomain:8022
: org.apache.hadoop.util.NativeCodeLoader - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
: org.apache.hadoop.conf.Configuration.deprecation - fs.default.name is deprecated. Instead, use fs.defaultFS
: org.apache.hadoop.conf.Configuration.deprecation - fs.default.name is deprecated. Instead, use fs.defaultFS
: org.apache.hadoop.conf.Configuration.deprecation - fs.default.name is deprecated. Instead, use fs.defaultFS
: org.apache.hadoop.conf.Configuration.deprecation - fs.default.name is deprecated. Instead, use fs.defaultFS
: org.apache.hadoop.conf.Configuration.deprecation - fs.default.name is deprecated. Instead, use fs.defaultFS
: org.apache.hadoop.conf.Configuration.deprecation - fs.default.name is deprecated. Instead, use fs.defaultFS
: org.apache.hadoop.conf.Configuration.deprecation - fs.default.name is deprecated. Instead, use fs.defaultFS
: org.apache.hadoop.conf.Configuration.deprecation - fs.default.name is deprecated. Instead, use fs.defaultFS
: org.apache.hadoop.conf.Configuration.deprecation - fs.default.name is deprecated. Instead, use fs.defaultFS
: org.apache.pig.tools.pigstats.ScriptState - Pig features used in the script: FILTER
: org.apache.pig.newplan.logical.optimizer.LogicalPlanOptimizer - {RULES_ENABLED=}
: org.apache.pig.newplan.logical.rules.ColumnPruneVisitor - Columns pruned for tPigLoad_4_row7_RESULT: $3
: org.apache.hadoop.conf.Configuration.deprecation - fs.default.name is deprecated. Instead, use fs.defaultFS
: org.apache.zookeeper.ZooKeeper - Client environment:zookeeper.version=3.4.5-cdh5.0.0--1, built on 03/28/2014 04:21 GMT
: org.apache.zookeeper.ZooKeeper - Client environment:host.name=localhost.localdomain
: org.apache.zookeeper.ZooKeeper - Client environment:java.version=1.7.0_45
: org.apache.zookeeper.ZooKeeper - Client environment:java.vendor=Oracle Corporation
: org.apache.zookeeper.ZooKeeper - Client environment:java.home=/usr/java/jdk1.7.0_45-cloudera/jre
: org.apache.zookeeper.ZooKeeper - Client environment:java.class.path=/opt/talend/Talend-Tools-Studio-r118616-V5.5.1/workspace/.Java/lib/hadoop-mapreduce-client-core-2.3.0-cdh5.0.0.jar:/opt/talend/Talend-Tools-Studio-r118616-V5.5.1/workspace/.Java/lib/htrace-core-2.01.jar:/opt/talend/Talend-Tools-Studio-r118616-V5.5.1/workspace/.Java/lib/hbase-protocol-0.96.1.1-cdh5.0.0.jar:/opt/talend/Talend-Tools-Studio-r118616-V5.5.1/workspace/.Java/lib/hadoop-yarn-api-2.3.0-cdh5.0.0.jar:/opt/talend/Talend-Tools-Studio-r118616-V5.5.1/workspace/.Java/lib/commons-codec-1.4.jar:/opt/talend/Talend-Tools-Studio-r118616-V5.5.1/workspace/.Java/lib/hadoop-yarn-common-2.3.0-cdh5.0.0.jar:/opt/talend/Talend-Tools-Studio-r118616-V5.5.1/workspace/.Java/lib/hadoop-hdfs-2.3.0-cdh5.0.0.jar:/opt/talend/Talend-Tools-Studio-r118616-V5.5.1/workspace/.Java/lib/commons-logging-1.1.3.jar:/opt/talend/Talend-Tools-Studio-r118616-V5.5.1/workspace/.Java/lib/protobuf-java-2.5.0.jar:/opt/talend/Talend-Tools-Studio-r118616-V5.5.1/workspace/.Java/lib/snappy-java-1.0.5.jar:/opt/talend/Talend-Tools-Studio-r118616-V5.5.1/workspace/.Java/lib/talendcsv.jar:/opt/talend/Talend-Tools-Studio-r118616-V5.5.1/workspace/.Java/lib/hadoop-conf.jar:/opt/talend/Talend-Tools-Studio-r118616-V5.5.1/workspace/.Java/lib/hadoop-auth-2.3.0-cdh5.0.0.jar:/opt/talend/Talend-Tools-Studio-r118616-V5.5.1/workspace/.Java/lib/hadoop-common-2.3.0-cdh5.0.0.jar:/opt/talend/Talend-Tools-Studio-r118616-V5.5.1/workspace/.Java/lib/log4j-1.2.17.jar:/opt/talend/Talend-Tools-Studio-r118616-V5.5.1/workspace/.Java/lib/slf4j-api-1.7.5.jar:/opt/talend/Talend-Tools-Studio-r118616-V5.5.1/workspace/.Java/lib/hadoop-mapreduce-client-jobclient-2.3.0-cdh5.0.0.jar:/opt/talend/Talend-Tools-Studio-r118616-V5.5.1/workspace/.Java/lib/slf4j-log4j12-1.7.5.jar:/opt/talend/Talend-Tools-Studio-r118616-V5.5.1/workspace/.Java/lib/commons-collections-3.2.1.jar:/opt/talend/Talend-Tools-Studio-r118616-V5.5.1/workspace/.Java/lib/hbase-common-0.96.1.1-cdh5.0.0.jar:/opt/talend/Talend-Tools-Studio-r118616-V5.5.1/workspace/.Java/lib/zookeeper-3.4.5-cdh5.0.0.jar:/opt/talend/Talend-Tools-Studio-r118616-V5.5.1/workspace/.Java/lib/commons-cli-1.2.jar:/opt/talend/Talend-Tools-Studio-r118616-V5.5.1/workspace/.Java/lib/commons-configuration-1.6.jar:/opt/talend/Talend-Tools-Studio-r118616-V5.5.1/workspace/.Java/lib/talend_file_enhanced_20070724.jar:/opt/talend/Talend-Tools-Studio-r118616-V5.5.1/workspace/.Java/lib/guava-12.0.1.jar:/opt/talend/Talend-Tools-Studio-r118616-V5.5.1/workspace/.Java/lib/dom4j-1.6.1.jar:/opt/talend/Talend-Tools-Studio-r118616-V5.5.1/workspace/.Java/lib/pig-0.12.0-cdh5.0.0-withouthadoop.jar:/opt/talend/Talend-Tools-Studio-r118616-V5.5.1/workspace/.Java/lib/jaxen-1.1.1.jar:/opt/talend/Talend-Tools-Studio-r118616-V5.5.1/workspace/.Java/lib/hbase-client-0.96.1.1-cdh5.0.0.jar:/opt/talend/Talend-Tools-Studio-r118616-V5.5.1/workspace/.Java/lib/avro-1.7.5-cdh5.0.0.jar:/opt/talend/Talend-Tools-Studio-r118616-V5.5.1/workspace/.Java/lib/netty-3.6.6.Final.jar:/opt/talend/Talend-Tools-Studio-r118616-V5.5.1/workspace/.Java/lib/hadoop-yarn-client-2.3.0-cdh5.0.0.jar:/opt/talend/Talend-Tools-Studio-r118616-V5.5.1/workspace/.Java/lib/commons-lang-2.6.jar:/opt/talend/Talend-Tools-Studio-r118616-V5.5.1/workspace/.Java/lib/hadoop-mapreduce-client-common-2.3.0-cdh5.0.0.jar:/opt/talend/Talend-Tools-Studio-r118616-V5.5.1/workspace/.Java/lib/hbase-server-0.96.1.1-cdh5.0.0.jar:.:/opt/talend/Talend-Tools-Studio-r118616-V5.5.1/workspace/.Java/classes::/opt/talend/Talend-Tools-Studio-r118616-V5.5.1/workspace/.Java/l
ib:
: org.apache.zookeeper.ZooKeeper - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
: org.apache.zookeeper.ZooKeeper - Client environment:java.io.tmpdir=/tmp
: org.apache.zookeeper.ZooKeeper - Client environment:java.compiler=<NA>
: org.apache.zookeeper.ZooKeeper - Client environment:os.name=Linux
: org.apache.zookeeper.ZooKeeper - Client environment:os.arch=amd64
: org.apache.zookeeper.ZooKeeper - Client environment:os.version=2.6.32-220.el6.x86_64
: org.apache.zookeeper.ZooKeeper - Client environment:user.name=cloudera
: org.apache.zookeeper.ZooKeeper - Client environment:user.home=/home/cloudera
: org.apache.zookeeper.ZooKeeper - Client environment:user.dir=/opt/talend/Talend-Tools-Studio-r118616-V5.5.1
: org.apache.zookeeper.ZooKeeper - Initiating client connection, connectString=localhost.localdomain:2181 sessionTimeout=90000 watcher=hconnection-0x4cfe7e93, quorum=localhost.localdomain:2181, baseZNode=/hbase
: org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper - Process identifier=hconnection-0x4cfe7e93 connecting to ZooKeeper ensemble=localhost.localdomain:2181
: org.apache.zookeeper.ClientCnxn - Opening socket connection to server localhost.localdomain/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
: org.apache.zookeeper.ClientCnxn - Socket connection established to localhost.localdomain/127.0.0.1:2181, initiating session
: org.apache.zookeeper.ClientCnxn - Session establishment complete on server localhost.localdomain/127.0.0.1:2181, sessionid = 0x14958bf635b0013, negotiated timeout = 60000
: org.apache.hadoop.hbase.mapreduce.TableOutputFormat - Created table instance for M7CmpnEventsTable
: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MRCompiler - File concatenation threshold: 100 optimistic? false
: org.apache.hadoop.conf.Configuration.deprecation - fs.default.name is deprecated. Instead, use fs.defaultFS
: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - MR plan size before optimization: 1
: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - MR plan size after optimization: 1
: org.apache.hadoop.yarn.client.RMProxy - Connecting to ResourceManager at localhost.localdomain/127.0.0.1:8032
: org.apache.pig.tools.pigstats.ScriptState - Pig script settings are added to the job
: org.apache.hadoop.conf.Configuration.deprecation - mapred.job.reduce.markreset.buffer.percent is deprecated. Instead, use mapreduce.reduce.markreset.buffer.percent
: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - mapred.job.reduce.markreset.buffer.percent is not set, set to default 0.3
: org.apache.hadoop.conf.Configuration.deprecation - mapred.output.compress is deprecated. Instead, use mapreduce.output.fileoutputformat.compress
: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - creating jar file Job3683028916991293111.jar
: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - jar file Job3683028916991293111.jar created
: org.apache.hadoop.conf.Configuration.deprecation - mapred.jar is deprecated. Instead, use mapreduce.job.jar
: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Setting up single store job
: org.apache.pig.data.SchemaTupleFrontend - Key is false, will not generate code.
: org.apache.pig.data.SchemaTupleFrontend - Starting process to move generated code to distributed cache
: org.apache.pig.data.SchemaTupleFrontend - Setting key with classes to deserialize []
: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 1 map-reduce job(s) waiting for submission.
: org.apache.hadoop.conf.Configuration.deprecation - mapred.job.tracker.http.address is deprecated. Instead, use mapreduce.jobtracker.http.address
: org.apache.hadoop.yarn.client.RMProxy - Connecting to ResourceManager at localhost.localdomain/127.0.0.1:8032
: org.apache.hadoop.conf.Configuration.deprecation - mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
: org.apache.hadoop.conf.Configuration.deprecation - fs.default.name is deprecated. Instead, use fs.defaultFS
: org.apache.hadoop.conf.Configuration.deprecation - fs.default.name is deprecated. Instead, use fs.defaultFS
: org.apache.hadoop.hbase.mapreduce.TableOutputFormat - Created table instance for M7CmpnEventsTable
: org.apache.hadoop.mapreduce.JobSubmitter - Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
: org.apache.hadoop.mapreduce.lib.input.FileInputFormat - Total input paths to process : 2
: org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil - Total input paths to process : 21
: org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil - Total input paths (combined) to process : 1
: org.apache.hadoop.mapreduce.JobSubmitter - number of splits:1
: org.apache.hadoop.mapreduce.JobSubmitter - Submitting tokens for job: job_1414533216854_0001
: org.apache.hadoop.mapreduce.v2.util.MRApps - cache file (mapreduce.job.cache.files) hdfs://localhost.localdomain:8022/tmp/temp222487979/tmp896164830/zookeeper-3.4.5-cdh5.0.0.jar conflicts with cache file (mapreduce.job.cache.files) hdfs://localhost.localdomain:8022/tmp/hadoop-yarn/staging/talend/.staging/job_1414533216854_0001/libjars/zookeeper-3.4.5-cdh5.0.0.jar This will be an error in Hadoop 2.0
: org.apache.hadoop.mapreduce.v2.util.MRApps - cache file (mapreduce.job.cache.files) hdfs://localhost.localdomain:8022/tmp/temp222487979/tmp1258536381/guava-12.0.1.jar conflicts with cache file (mapreduce.job.cache.files) hdfs://localhost.localdomain:8022/tmp/hadoop-yarn/staging/talend/.staging/job_1414533216854_0001/libjars/guava-12.0.1.jar This will be an error in Hadoop 2.0
: org.apache.hadoop.mapreduce.v2.util.MRApps - cache file (mapreduce.job.cache.files) hdfs://localhost.localdomain:8022/tmp/temp222487979/tmp-913424322/hbase-common-0.96.1.1-cdh5.0.0.jar conflicts with cache file (mapreduce.job.cache.files) hdfs://localhost.localdomain:8022/tmp/hadoop-yarn/staging/talend/.staging/job_1414533216854_0001/libjars/hbase-common-0.96.1.1-cdh5.0.0.jar This will be an error in Hadoop 2.0
: org.apache.hadoop.mapreduce.v2.util.MRApps - cache file (mapreduce.job.cache.files) hdfs://localhost.localdomain:8022/tmp/temp222487979/tmp-890186032/htrace-core-2.01.jar conflicts with cache file (mapreduce.job.cache.files) hdfs://localhost.localdomain:8022/tmp/hadoop-yarn/staging/talend/.staging/job_1414533216854_0001/libjars/htrace-core-2.01.jar This will be an error in Hadoop 2.0
: org.apache.hadoop.mapreduce.v2.util.MRApps - cache file (mapreduce.job.cache.files) hdfs://localhost.localdomain:8022/tmp/temp222487979/tmp1335906876/hbase-client-0.96.1.1-cdh5.0.0.jar conflicts with cache file (mapreduce.job.cache.files) hdfs://localhost.localdomain:8022/tmp/hadoop-yarn/staging/talend/.staging/job_1414533216854_0001/libjars/hbase-client-0.96.1.1-cdh5.0.0.jar This will be an error in Hadoop 2.0
: org.apache.hadoop.mapreduce.v2.util.MRApps - cache file (mapreduce.job.cache.files) hdfs://localhost.localdomain:8022/tmp/temp222487979/tmp-1626527593/hbase-server-0.96.1.1-cdh5.0.0.jar conflicts with cache file (mapreduce.job.cache.files) hdfs://localhost.localdomain:8022/tmp/hadoop-yarn/staging/talend/.staging/job_1414533216854_0001/libjars/hbase-server-0.96.1.1-cdh5.0.0.jar This will be an error in Hadoop 2.0
: org.apache.hadoop.mapreduce.v2.util.MRApps - cache file (mapreduce.job.cache.files) hdfs://localhost.localdomain:8022/tmp/temp222487979/tmp-1377727885/hbase-protocol-0.96.1.1-cdh5.0.0.jar conflicts with cache file (mapreduce.job.cache.files) hdfs://localhost.localdomain:8022/tmp/hadoop-yarn/staging/talend/.staging/job_1414533216854_0001/libjars/hbase-protocol-0.96.1.1-cdh5.0.0.jar This will be an error in Hadoop 2.0
: org.apache.hadoop.yarn.client.api.impl.YarnClientImpl - Submitted application application_1414533216854_0001
: org.apache.hadoop.mapreduce.Job - The url to track the job:
: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - HadoopJobId: job_1414533216854_0001
: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Processing aliases tPigFilterRow_1_row11_RESULT,tPigLoad_4_row7_RESULT,tPigMap_10_out_RESULT,tPigMap_5_CNRS_RESULT,tPigMap_6_row8_RESULT,tPigMap_8_row31_RESULT
: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - detailed locations: M: tPigLoad_4_row7_RESULT,tPigLoad_4_row7_RESULT,tPigFilterRow_1_row11_RESULT,tPigMap_6_row8_RESULT,tPigMap_8_row31_RESULT,tPigMap_5_CNRS_RESULT,tPigMap_5_CNRS_RESULT,tPigMap_10_out_RESULT C:  R: 
: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 0% complete
: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 50% complete
: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Ooops! Some job has failed! Specify -stop_on_failure if you want Pig to stop immediately on failure.
: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - job job_1414533216854_0001 has failed! Stop running all dependent jobs
: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 100% complete
: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.Launcher - There is no log file to write to.
: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.Launcher - Backend error message
AttemptID:attempt_1414533216854_0001_m_000000_0 Info:Error: java.lang.ClassCastException: java.lang.String cannot be cast to java.util.Map
at org.apache.pig.backend.hadoop.hbase.HBaseStorage.putNext(HBaseStorage.java:916)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:139)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:98)
at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.write(MapTask.java:635)
at org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89)
at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.write(WrappedMapper.java:112)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigMapOnly$Map.collect(PigMapOnly.java:48)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.runPipeline(PigGenericMapBase.java:284)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.map(PigGenericMapBase.java:277)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.map(PigGenericMapBase.java:64)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.Launcher - There is no log file to write to.
: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.Launcher - Backend error message
AttemptID:attempt_1414533216854_0001_m_000000_0 Info:Container killed by the ApplicationMaster.
: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.Launcher - There is no log file to write to.
: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.Launcher - Backend error message
AttemptID:attempt_1414533216854_0001_m_000000_1 Info:Error: java.lang.ClassCastException: java.lang.String cannot be cast to java.util.Map
at org.apache.pig.backend.hadoop.hbase.HBaseStorage.putNext(HBaseStorage.java:916)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:139)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:98)
at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.write(MapTask.java:635)
at org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89)
at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.write(WrappedMapper.java:112)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigMapOnly$Map.collect(PigMapOnly.java:48)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.runPipeline(PigGenericMapBase.java:284)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.map(PigGenericMapBase.java:277)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.map(PigGenericMapBase.java:64)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.Launcher - There is no log file to write to.
: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.Launcher - Backend error message
AttemptID:attempt_1414533216854_0001_m_000000_1 Info:Container killed by the ApplicationMaster.
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.Launcher - There is no log file to write to.
: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.Launcher - Backend error message
AttemptID:attempt_1414533216854_0001_m_000000_2 Info:Error: java.lang.ClassCastException: java.lang.String cannot be cast to java.util.Map
at org.apache.pig.backend.hadoop.hbase.HBaseStorage.putNext(HBaseStorage.java:916)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:139)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:98)
at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.write(MapTask.java:635)
at org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89)
at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.write(WrappedMapper.java:112)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigMapOnly$Map.collect(PigMapOnly.java:48)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.runPipeline(PigGenericMapBase.java:284)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.map(PigGenericMapBase.java:277)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.map(PigGenericMapBase.java:64)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.Launcher - There is no log file to write to.
: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.Launcher - Backend error message
AttemptID:attempt_1414533216854_0001_m_000000_2 Info:Container killed by the ApplicationMaster.
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.Launcher - There is no log file to write to.
: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.Launcher - Backend error message
AttemptID:attempt_1414533216854_0001_m_000000_3 Info:Error: java.lang.ClassCastException: java.lang.String cannot be cast to java.util.Map
at org.apache.pig.backend.hadoop.hbase.HBaseStorage.putNext(HBaseStorage.java:916)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:139)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:98)
at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.write(MapTask.java:635)
at org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89)
at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.write(WrappedMapper.java:112)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigMapOnly$Map.collect(PigMapOnly.java:48)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.runPipeline(PigGenericMapBase.java:284)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.map(PigGenericMapBase.java:277)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.map(PigGenericMapBase.java:64)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.Launcher - There is no log file to write to.
: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.Launcher - Backend error message
AttemptID:attempt_1414533216854_0001_m_000000_3 Info:Container killed by the ApplicationMaster.
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.Launcher - There is no log file to write to.
: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.Launcher - java.lang.ClassCastException: java.lang.String cannot be cast to java.util.Map
at org.apache.pig.backend.hadoop.hbase.HBaseStorage.putNext(HBaseStorage.java:916)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:139)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:98)
at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.write(MapTask.java:635)
at org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89)
at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.write(WrappedMapper.java:112)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigMapOnly$Map.collect(PigMapOnly.java:48)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.runPipeline(PigGenericMapBase.java:284)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.map(PigGenericMapBase.java:277)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.map(PigGenericMapBase.java:64)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.Launcher - There is no log file to write to.
: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.Launcher - java.lang.ClassCastException: java.lang.String cannot be cast to java.util.Map
at org.apache.pig.backend.hadoop.hbase.HBaseStorage.putNext(HBaseStorage.java:916)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:139)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:98)
at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.write(MapTask.java:635)
at org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89)
at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.write(WrappedMapper.java:112)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigMapOnly$Map.collect(PigMapOnly.java:48)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.runPipeline(PigGenericMapBase.java:284)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.map(PigGenericMapBase.java:277)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.map(PigGenericMapBase.java:64)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.Launcher - There is no log file to write to.
: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.Launcher - java.lang.ClassCastException: java.lang.String cannot be cast to java.util.Map
at org.apache.pig.backend.hadoop.hbase.HBaseStorage.putNext(HBaseStorage.java:916)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:139)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:98)
at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.write(MapTask.java:635)
at org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89)
at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.write(WrappedMapper.java:112)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigMapOnly$Map.collect(PigMapOnly.java:48)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.runPipeline(PigGenericMapBase.java:284)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.map(PigGenericMapBase.java:277)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.map(PigGenericMapBase.java:64)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.Launcher - There is no log file to write to.
: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.Launcher - java.lang.ClassCastException: java.lang.String cannot be cast to java.util.Map
at org.apache.pig.backend.hadoop.hbase.HBaseStorage.putNext(HBaseStorage.java:916)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:139)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:98)
at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.write(MapTask.java:635)
at org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89)
at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.write(WrappedMapper.java:112)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigMapOnly$Map.collect(PigMapOnly.java:48)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.runPipeline(PigGenericMapBase.java:284)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.map(PigGenericMapBase.java:277)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.map(PigGenericMapBase.java:64)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
: org.apache.pig.tools.pigstats.SimplePigStats - ERROR: java.lang.String cannot be cast to java.util.Map
: org.apache.pig.tools.pigstats.PigStatsUtil - 1 map reduce job(s) failed!
: org.apache.pig.tools.pigstats.SimplePigStats - Script Statistics: 
HadoopVersion PigVersion UserId StartedAt FinishedAt Features
2.3.0-cdh5.0.0 0.12.0-cdh5.0.0 cloudera 2014-10-28 15:15:28 2014-10-28 15:19:47 FILTER
Failed!
Failed Jobs:
JobId Alias Feature Message Outputs
job_1414533216854_0001 tPigFilterRow_1_row11_RESULT,tPigLoad_4_row7_RESULT,tPigMap_10_out_RESULT,tPigMap_5_CNRS_RESULT,tPigMap_6_row8_RESULT,tPigMap_8_row31_RESULT MAP_ONLY Message: Job failed! M7CmpnEventsTable,
Input(s):
Failed to read data from "/user/talend/test/Output"
Output(s):
Failed to produce result in "M7CmpnEventsTable"
Counters:
Total records written : 0
Total bytes written : 0
Spillable Memory Manager spill count : 0
Total bags proactively spilled: 0
Total records proactively spilled: 0
Job DAG:
job_1414533216854_0001

: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Failed!
disconnected
Job Main_e ended at 15:19 28/10/2014.
Attachment: Hadoop_Error.txt_20141029-0029.txt
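For reference, the repeated java.lang.ClassCastException (java.lang.String cannot be cast to java.util.Map) thrown from HBaseStorage.putNext typically appears when the HBaseStorage column list contains a whole-column-family mapping such as "cf:*", which Pig expects to be a map field, while the relation being stored supplies a plain chararray in that position. A hedged sketch of a mapping that lines up (field names are hypothetical; the table name and input path are taken from the log above), registered the way the generated Talend job does through PigServer:

// Sketch only (hypothetical field names): the HBaseStorage column list must
// line up with the Pig schema. Qualified columns such as "cf:event_name"
// accept plain chararray fields; a wildcard like "cf:*" requires a map field,
// and feeding it a chararray raises exactly the ClassCastException above.
import org.apache.pig.ExecType;
import org.apache.pig.PigServer;

public class HBaseStoreSketch {
    public static void main(String[] args) throws Exception {
        PigServer pig = new PigServer(ExecType.MAPREDUCE);

        // The first field is used as the HBase row key; the remaining fields
        // map one-to-one onto the qualified columns listed in HBaseStorage.
        pig.registerQuery("A = LOAD '/user/talend/test/Output' USING PigStorage(';') "
                + "AS (event_id:chararray, event_name:chararray, event_ts:chararray);");
        pig.registerQuery("STORE A INTO 'hbase://M7CmpnEventsTable' USING "
                + "org.apache.pig.backend.hadoop.hbase.HBaseStorage('cf:event_name cf:event_ts');");
    }
}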
One Star

Re: tPigLoad component issue with Cloudera 5

Hi Team, Am getting the following error while storing data into HBase using tPigStoreResult (store function : HBaseStorage)
Please to the attachment for job design and complete stacktrace below.
Starting job Main_e at 15:15 28/10/2014.
connecting to socket on port 3342
connected
: org.apache.hadoop.conf.Configuration.deprecation - mapred.job.queue.name is deprecated. Instead, use mapreduce.job.queuename
: org.apache.hadoop.conf.Configuration.deprecation - fs.default.name is deprecated. Instead, use fs.defaultFS
: org.apache.hadoop.conf.Configuration.deprecation - mapred.job.tracker is deprecated. Instead, use mapreduce.jobtracker.address
: org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting to hadoop file system at: hdfs://localhost.localdomain:8022
: org.apache.hadoop.util.NativeCodeLoader - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
: org.apache.hadoop.conf.Configuration.deprecation - fs.default.name is deprecated. Instead, use fs.defaultFS
: org.apache.hadoop.conf.Configuration.deprecation - fs.default.name is deprecated. Instead, use fs.defaultFS
: org.apache.hadoop.conf.Configuration.deprecation - fs.default.name is deprecated. Instead, use fs.defaultFS
: org.apache.hadoop.conf.Configuration.deprecation - fs.default.name is deprecated. Instead, use fs.defaultFS
: org.apache.hadoop.conf.Configuration.deprecation - fs.default.name is deprecated. Instead, use fs.defaultFS
: org.apache.hadoop.conf.Configuration.deprecation - fs.default.name is deprecated. Instead, use fs.defaultFS
: org.apache.hadoop.conf.Configuration.deprecation - fs.default.name is deprecated. Instead, use fs.defaultFS
: org.apache.hadoop.conf.Configuration.deprecation - fs.default.name is deprecated. Instead, use fs.defaultFS
: org.apache.hadoop.conf.Configuration.deprecation - fs.default.name is deprecated. Instead, use fs.defaultFS
: org.apache.pig.tools.pigstats.ScriptState - Pig features used in the script: FILTER
: org.apache.pig.newplan.logical.optimizer.LogicalPlanOptimizer - {RULES_ENABLED=}
: org.apache.pig.newplan.logical.rules.ColumnPruneVisitor - Columns pruned for tPigLoad_4_row7_RESULT: $3
: org.apache.hadoop.conf.Configuration.deprecation - fs.default.name is deprecated. Instead, use fs.defaultFS
: org.apache.zookeeper.ZooKeeper - Client environment:zookeeper.version=3.4.5-cdh5.0.0--1, built on 03/28/2014 04:21 GMT
: org.apache.zookeeper.ZooKeeper - Client environment:host.name=localhost.localdomain
: org.apache.zookeeper.ZooKeeper - Client environment:java.version=1.7.0_45
: org.apache.zookeeper.ZooKeeper - Client environment:java.vendor=Oracle Corporation
: org.apache.zookeeper.ZooKeeper - Client environment:java.home=/usr/java/jdk1.7.0_45-cloudera/jre
: org.apache.zookeeper.ZooKeeper - Client environment:java.class.path=/opt/talend/Talend-Tools-Studio-r118616-V5.5.1/workspace/.Java/lib/hadoop-mapreduce-client-core-2.3.0-cdh5.0.0.jar:/opt/talend/Talend-Tools-Studio-r118616-V5.5.1/workspace/.Java/lib/htrace-core-2.01.jar:/opt/talend/Talend-Tools-Studio-r118616-V5.5.1/workspace/.Java/lib/hbase-protocol-0.96.1.1-cdh5.0.0.jar:/opt/talend/Talend-Tools-Studio-r118616-V5.5.1/workspace/.Java/lib/hadoop-yarn-api-2.3.0-cdh5.0.0.jar:/opt/talend/Talend-Tools-Studio-r118616-V5.5.1/workspace/.Java/lib/commons-codec-1.4.jar:/opt/talend/Talend-Tools-Studio-r118616-V5.5.1/workspace/.Java/lib/hadoop-yarn-common-2.3.0-cdh5.0.0.jar:/opt/talend/Talend-Tools-Studio-r118616-V5.5.1/workspace/.Java/lib/hadoop-hdfs-2.3.0-cdh5.0.0.jar:/opt/talend/Talend-Tools-Studio-r118616-V5.5.1/workspace/.Java/lib/commons-logging-1.1.3.jar:/opt/talend/Talend-Tools-Studio-r118616-V5.5.1/workspace/.Java/lib/protobuf-java-2.5.0.jar:/opt/talend/Talend-Tools-Studio-r118616-V5.5.1/workspace/.Java/lib/snappy-java-1.0.5.jar:/opt/talend/Talend-Tools-Studio-r118616-V5.5.1/workspace/.Java/lib/talendcsv.jar:/opt/talend/Talend-Tools-Studio-r118616-V5.5.1/workspace/.Java/lib/hadoop-conf.jar:/opt/talend/Talend-Tools-Studio-r118616-V5.5.1/workspace/.Java/lib/hadoop-auth-2.3.0-cdh5.0.0.jar:/opt/talend/Talend-Tools-Studio-r118616-V5.5.1/workspace/.Java/lib/hadoop-common-2.3.0-cdh5.0.0.jar:/opt/talend/Talend-Tools-Studio-r118616-V5.5.1/workspace/.Java/lib/log4j-1.2.17.jar:/opt/talend/Talend-Tools-Studio-r118616-V5.5.1/workspace/.Java/lib/slf4j-api-1.7.5.jar:/opt/talend/Talend-Tools-Studio-r118616-V5.5.1/workspace/.Java/lib/hadoop-mapreduce-client-jobclient-2.3.0-cdh5.0.0.jar:/opt/talend/Talend-Tools-Studio-r118616-V5.5.1/workspace/.Java/lib/slf4j-log4j12-1.7.5.jar:/opt/talend/Talend-Tools-Studio-r118616-V5.5.1/workspace/.Java/lib/commons-collections-3.2.1.jar:/opt/talend/Talend-Tools-Studio-r118616-V5.5.1/workspace/.Java/lib/hbase-common-0.96.1.1-cdh5.0.0.jar:/opt/talend/Talend-Tools-Studio-r118616-V5.5.1/workspace/.Java/lib/zookeeper-3.4.5-cdh5.0.0.jar:/opt/talend/Talend-Tools-Studio-r118616-V5.5.1/workspace/.Java/lib/commons-cli-1.2.jar:/opt/talend/Talend-Tools-Studio-r118616-V5.5.1/workspace/.Java/lib/commons-configuration-1.6.jar:/opt/talend/Talend-Tools-Studio-r118616-V5.5.1/workspace/.Java/lib/talend_file_enhanced_20070724.jar:/opt/talend/Talend-Tools-Studio-r118616-V5.5.1/workspace/.Java/lib/guava-12.0.1.jar:/opt/talend/Talend-Tools-Studio-r118616-V5.5.1/workspace/.Java/lib/dom4j-1.6.1.jar:/opt/talend/Talend-Tools-Studio-r118616-V5.5.1/workspace/.Java/lib/pig-0.12.0-cdh5.0.0-withouthadoop.jar:/opt/talend/Talend-Tools-Studio-r118616-V5.5.1/workspace/.Java/lib/jaxen-1.1.1.jar:/opt/talend/Talend-Tools-Studio-r118616-V5.5.1/workspace/.Java/lib/hbase-client-0.96.1.1-cdh5.0.0.jar:/opt/talend/Talend-Tools-Studio-r118616-V5.5.1/workspace/.Java/lib/avro-1.7.5-cdh5.0.0.jar:/opt/talend/Talend-Tools-Studio-r118616-V5.5.1/workspace/.Java/lib/netty-3.6.6.Final.jar:/opt/talend/Talend-Tools-Studio-r118616-V5.5.1/workspace/.Java/lib/hadoop-yarn-client-2.3.0-cdh5.0.0.jar:/opt/talend/Talend-Tools-Studio-r118616-V5.5.1/workspace/.Java/lib/commons-lang-2.6.jar:/opt/talend/Talend-Tools-Studio-r118616-V5.5.1/workspace/.Java/lib/hadoop-mapreduce-client-common-2.3.0-cdh5.0.0.jar:/opt/talend/Talend-Tools-Studio-r118616-V5.5.1/workspace/.Java/lib/hbase-server-0.96.1.1-cdh5.0.0.jar:.:/opt/talend/Talend-Tools-Studio-r118616-V5.5.1/workspace/.Java/classes::/opt/talend/Talend-Tools-Studio-r118616-V5.5.1/workspace/.Java/l
ib:
: org.apache.zookeeper.ZooKeeper - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
: org.apache.zookeeper.ZooKeeper - Client environment:java.io.tmpdir=/tmp
: org.apache.zookeeper.ZooKeeper - Client environment:java.compiler=<NA>
: org.apache.zookeeper.ZooKeeper - Client environmentSmiley Surpriseds.name=Linux
: org.apache.zookeeper.ZooKeeper - Client environmentSmiley Surpriseds.arch=amd64
: org.apache.zookeeper.ZooKeeper - Client environmentSmiley Surpriseds.version=2.6.32-220.el6.x86_64
: org.apache.zookeeper.ZooKeeper - Client environment:user.name=cloudera
: org.apache.zookeeper.ZooKeeper - Client environment:user.home=/home/cloudera
: org.apache.zookeeper.ZooKeeper - Client environment:user.dir=/opt/talend/Talend-Tools-Studio-r118616-V5.5.1
: org.apache.zookeeper.ZooKeeper - Initiating client connection, connectString=localhost.localdomain:2181 sessionTimeout=90000 watcher=hconnection-0x4cfe7e93, quorum=localhost.localdomain:2181, baseZNode=/hbase
: org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper - Process identifier=hconnection-0x4cfe7e93 connecting to ZooKeeper ensemble=localhost.localdomain:2181
: org.apache.zookeeper.ClientCnxn - Opening socket connection to server localhost.localdomain/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
: org.apache.zookeeper.ClientCnxn - Socket connection established to localhost.localdomain/127.0.0.1:2181, initiating session
: org.apache.zookeeper.ClientCnxn - Session establishment complete on server localhost.localdomain/127.0.0.1:2181, sessionid = 0x14958bf635b0013, negotiated timeout = 60000
: org.apache.hadoop.hbase.mapreduce.TableOutputFormat - Created table instance for M7CmpnEventsTable
: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MRCompiler - File concatenation threshold: 100 optimistic? false
: org.apache.hadoop.conf.Configuration.deprecation - fs.default.name is deprecated. Instead, use fs.defaultFS
: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - MR plan size before optimization: 1
: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MultiQueryOptimizer - MR plan size after optimization: 1
: org.apache.hadoop.yarn.client.RMProxy - Connecting to ResourceManager at localhost.localdomain/127.0.0.1:8032
: org.apache.pig.tools.pigstats.ScriptState - Pig script settings are added to the job
: org.apache.hadoop.conf.Configuration.deprecation - mapred.job.reduce.markreset.buffer.percent is deprecated. Instead, use mapreduce.reduce.markreset.buffer.percent
: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - mapred.job.reduce.markreset.buffer.percent is not set, set to default 0.3
: org.apache.hadoop.conf.Configuration.deprecation - mapred.output.compress is deprecated. Instead, use mapreduce.output.fileoutputformat.compress
: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - creating jar file Job3683028916991293111.jar
: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - jar file Job3683028916991293111.jar created
: org.apache.hadoop.conf.Configuration.deprecation - mapred.jar is deprecated. Instead, use mapreduce.job.jar
: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler - Setting up single store job
: org.apache.pig.data.SchemaTupleFrontend - Key is false, will not generate code.
: org.apache.pig.data.SchemaTupleFrontend - Starting process to move generated code to distributed cache
: org.apache.pig.data.SchemaTupleFrontend - Setting key with classes to deserialize []
: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 1 map-reduce job(s) waiting for submission.
: org.apache.hadoop.conf.Configuration.deprecation - mapred.job.tracker.http.address is deprecated. Instead, use mapreduce.jobtracker.http.address
: org.apache.hadoop.yarn.client.RMProxy - Connecting to ResourceManager at localhost.localdomain/127.0.0.1:8032
: org.apache.hadoop.conf.Configuration.deprecation - mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
: org.apache.hadoop.conf.Configuration.deprecation - fs.default.name is deprecated. Instead, use fs.defaultFS
: org.apache.hadoop.conf.Configuration.deprecation - fs.default.name is deprecated. Instead, use fs.defaultFS
: org.apache.hadoop.hbase.mapreduce.TableOutputFormat - Created table instance for M7CmpnEventsTable
: org.apache.hadoop.mapreduce.JobSubmitter - Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
: org.apache.hadoop.mapreduce.lib.input.FileInputFormat - Total input paths to process : 2
: org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil - Total input paths to process : 21
: org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil - Total input paths (combined) to process : 1
: org.apache.hadoop.mapreduce.JobSubmitter - number of splits:1
: org.apache.hadoop.mapreduce.JobSubmitter - Submitting tokens for job: job_1414533216854_0001
: org.apache.hadoop.mapreduce.v2.util.MRApps - cache file (mapreduce.job.cache.files) hdfs://localhost.localdomain:8022/tmp/temp222487979/tmp896164830/zookeeper-3.4.5-cdh5.0.0.jar conflicts with cache file (mapreduce.job.cache.files) hdfs://localhost.localdomain:8022/tmp/hadoop-yarn/staging/talend/.staging/job_1414533216854_0001/libjars/zookeeper-3.4.5-cdh5.0.0.jar This will be an error in Hadoop 2.0
: org.apache.hadoop.mapreduce.v2.util.MRApps - cache file (mapreduce.job.cache.files) hdfs://localhost.localdomain:8022/tmp/temp222487979/tmp1258536381/guava-12.0.1.jar conflicts with cache file (mapreduce.job.cache.files) hdfs://localhost.localdomain:8022/tmp/hadoop-yarn/staging/talend/.staging/job_1414533216854_0001/libjars/guava-12.0.1.jar This will be an error in Hadoop 2.0
: org.apache.hadoop.mapreduce.v2.util.MRApps - cache file (mapreduce.job.cache.files) hdfs://localhost.localdomain:8022/tmp/temp222487979/tmp-913424322/hbase-common-0.96.1.1-cdh5.0.0.jar conflicts with cache file (mapreduce.job.cache.files) hdfs://localhost.localdomain:8022/tmp/hadoop-yarn/staging/talend/.staging/job_1414533216854_0001/libjars/hbase-common-0.96.1.1-cdh5.0.0.jar This will be an error in Hadoop 2.0
: org.apache.hadoop.mapreduce.v2.util.MRApps - cache file (mapreduce.job.cache.files) hdfs://localhost.localdomain:8022/tmp/temp222487979/tmp-890186032/htrace-core-2.01.jar conflicts with cache file (mapreduce.job.cache.files) hdfs://localhost.localdomain:8022/tmp/hadoop-yarn/staging/talend/.staging/job_1414533216854_0001/libjars/htrace-core-2.01.jar This will be an error in Hadoop 2.0
: org.apache.hadoop.mapreduce.v2.util.MRApps - cache file (mapreduce.job.cache.files) hdfs://localhost.localdomain:8022/tmp/temp222487979/tmp1335906876/hbase-client-0.96.1.1-cdh5.0.0.jar conflicts with cache file (mapreduce.job.cache.files) hdfs://localhost.localdomain:8022/tmp/hadoop-yarn/staging/talend/.staging/job_1414533216854_0001/libjars/hbase-client-0.96.1.1-cdh5.0.0.jar This will be an error in Hadoop 2.0
: org.apache.hadoop.mapreduce.v2.util.MRApps - cache file (mapreduce.job.cache.files) hdfs://localhost.localdomain:8022/tmp/temp222487979/tmp-1626527593/hbase-server-0.96.1.1-cdh5.0.0.jar conflicts with cache file (mapreduce.job.cache.files) hdfs://localhost.localdomain:8022/tmp/hadoop-yarn/staging/talend/.staging/job_1414533216854_0001/libjars/hbase-server-0.96.1.1-cdh5.0.0.jar This will be an error in Hadoop 2.0
: org.apache.hadoop.mapreduce.v2.util.MRApps - cache file (mapreduce.job.cache.files) hdfs://localhost.localdomain:8022/tmp/temp222487979/tmp-1377727885/hbase-protocol-0.96.1.1-cdh5.0.0.jar conflicts with cache file (mapreduce.job.cache.files) hdfs://localhost.localdomain:8022/tmp/hadoop-yarn/staging/talend/.staging/job_1414533216854_0001/libjars/hbase-protocol-0.96.1.1-cdh5.0.0.jar This will be an error in Hadoop 2.0
: org.apache.hadoop.yarn.client.api.impl.YarnClientImpl - Submitted application application_1414533216854_0001
: org.apache.hadoop.mapreduce.Job - The url to track the job:
: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - HadoopJobId: job_1414533216854_0001
: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Processing aliases tPigFilterRow_1_row11_RESULT,tPigLoad_4_row7_RESULT,tPigMap_10_out_RESULT,tPigMap_5_CNRS_RESULT,tPigMap_6_row8_RESULT,tPigMap_8_row31_RESULT
: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - detailed locations: M: tPigLoad_4_row7_RESULT,tPigLoad_4_row7_RESULT,tPigFilterRow_1_row11_RESULT,tPigMap_6_row8_RESULT,tPigMap_8_row31_RESULT,tPigMap_5_CNRS_RESULT,tPigMap_5_CNRS_RESULT,tPigMap_10_out_RESULT C:  R: 
: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 0% complete
: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 50% complete
: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Ooops! Some job has failed! Specify -stop_on_failure if you want Pig to stop immediately on failure.
: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - job job_1414533216854_0001 has failed! Stop running all dependent jobs
: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - 100% complete
: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.Launcher - There is no log file to write to.
: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.Launcher - Backend error message
AttemptID:attempt_1414533216854_0001_m_000000_0 Info:Error: java.lang.ClassCastException: java.lang.String cannot be cast to java.util.Map
at org.apache.pig.backend.hadoop.hbase.HBaseStorage.putNext(HBaseStorage.java:916)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:139)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:98)
at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.write(MapTask.java:635)
at org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89)
at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.write(WrappedMapper.java:112)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigMapOnly$Map.collect(PigMapOnly.java:48)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.runPipeline(PigGenericMapBase.java:284)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.map(PigGenericMapBase.java:277)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.map(PigGenericMapBase.java:64)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.Launcher - There is no log file to write to.
: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.Launcher - Backend error message
AttemptID:attempt_1414533216854_0001_m_000000_0 Info:Container killed by the ApplicationMaster.
: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.Launcher - There is no log file to write to.
: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.Launcher - Backend error message
AttemptID:attempt_1414533216854_0001_m_000000_1 Info:Error: java.lang.ClassCastException: java.lang.String cannot be cast to java.util.Map
at org.apache.pig.backend.hadoop.hbase.HBaseStorage.putNext(HBaseStorage.java:916)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:139)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:98)
at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.write(MapTask.java:635)
at org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89)
at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.write(WrappedMapper.java:112)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigMapOnly$Map.collect(PigMapOnly.java:48)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.runPipeline(PigGenericMapBase.java:284)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.map(PigGenericMapBase.java:277)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.map(PigGenericMapBase.java:64)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.Launcher - There is no log file to write to.
: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.Launcher - Backend error message
AttemptID:attempt_1414533216854_0001_m_000000_1 Info:Container killed by the ApplicationMaster.
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.Launcher - There is no log file to write to.
: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.Launcher - Backend error message
AttemptID:attempt_1414533216854_0001_m_000000_2 Info:Error: java.lang.ClassCastException: java.lang.String cannot be cast to java.util.Map (same stack trace as attempt _1)
AttemptID:attempt_1414533216854_0001_m_000000_2 Info:Container killed by the ApplicationMaster. Container exited with a non-zero exit code 143
AttemptID:attempt_1414533216854_0001_m_000000_3 Info:Error: java.lang.ClassCastException: java.lang.String cannot be cast to java.util.Map (same stack trace as attempt _1)
AttemptID:attempt_1414533216854_0001_m_000000_3 Info:Container killed by the ApplicationMaster. Container exited with a non-zero exit code 143
: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.Launcher - java.lang.ClassCastException: java.lang.String cannot be cast to java.util.Map (the Launcher then re-logs this same stack trace once per failed attempt)
: org.apache.pig.tools.pigstats.SimplePigStats - ERROR: java.lang.String cannot be cast to java.util.Map
: org.apache.pig.tools.pigstats.PigStatsUtil - 1 map reduce job(s) failed!
: org.apache.pig.tools.pigstats.SimplePigStats - Script Statistics: 
HadoopVersion PigVersion UserId StartedAt FinishedAt Features
2.3.0-cdh5.0.0 0.12.0-cdh5.0.0 cloudera 2014-10-28 15:15:28 2014-10-28 15:19:47 FILTER
Failed!
Failed Jobs:
JobId Alias Feature Message Outputs
job_1414533216854_0001 tPigFilterRow_1_row11_RESULT,tPigLoad_4_row7_RESULT,tPigMap_10_out_RESULT,tPigMap_5_CNRS_RESULT,tPigMap_6_row8_RESULT,tPigMap_8_row31_RESULT MAP_ONLY Message: Job failed! M7CmpnEventsTable,
Input(s):
Failed to read data from "/user/talend/test/Output"
Output(s):
Failed to produce result in "M7CmpnEventsTable"
Counters:
Total records written : 0
Total bytes written : 0
Spillable Memory Manager spill count : 0
Total bags proactively spilled: 0
Total records proactively spilled: 0
Job DAG:
job_1414533216854_0001

: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Failed!
disconnected
Job Main_e ended at 15:19 28/10/2014.
Attachment: Hadoop_Error.txt
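For reference, this particular ClassCastException ("java.lang.String cannot be cast to java.util.Map" at HBaseStorage.putNext) usually points to a column-mapping problem in the store function rather than to the cluster: when a column descriptor passed to HBaseStorage names only a column family (for example 'cf:*' or a bare 'cf:'), HBaseStorage expects the matching Pig field to be a map of qualifier to value, so a plain chararray field fails with exactly this cast. In tPigStoreResult that typically means the HBaseStorage column list contains a family-only entry instead of one family:qualifier per output column. A minimal Pig Latin sketch of the failing and working column specs follows; the field names, the delimiter and the column family 'cf' are made up for illustration, and only the input path and the M7CmpnEventsTable name come from the log above.

-- The first field of the stored relation becomes the HBase row key;
-- the remaining fields are matched, in order, against the column
-- descriptors given to HBaseStorage.
events = LOAD '/user/talend/test/Output' USING PigStorage(';')
         AS (rowkey:chararray, name:chararray, amount:chararray);

-- Fails: 'cf:*' designates a whole column family, so HBaseStorage tries to
-- cast the matching field ('name', a chararray) to java.util.Map.
-- STORE events INTO 'hbase://M7CmpnEventsTable'
--   USING org.apache.pig.backend.hadoop.hbase.HBaseStorage('cf:* cf:amount');

-- Works: one explicit family:qualifier per non-key field.
STORE events INTO 'hbase://M7CmpnEventsTable'
  USING org.apache.pig.backend.hadoop.hbase.HBaseStorage('cf:name cf:amount');

If the mapping already uses one family:qualifier per column, the next place to look would be the actual container logs; since the Launcher reports it has no log file to write to, they should still be retrievable (provided YARN log aggregation is enabled) with yarn logs -applicationId application_1414533216854_0001.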