MapReduce Job - Getting server.namenode.LeaseExpiredException Error

I am running ten bz2 files, each about 200 MB, and my MapReduce job failed with the error below. The job manages to run with one or two files. Any idea what setting I am missing?
No lease on /user/cloudera/Messenger_Demo/DS7/20150731200055919/SUCCESS/part-00000.avro (inode 24546): File does not exist.
15/07/31 21:53:51 INFO client.RMProxy: Connecting to ResourceManager at localhost/127.0.0.1:8032
15/07/31 21:53:51 INFO client.RMProxy: Connecting to ResourceManager at localhost/127.0.0.1:8032
15/07/31 21:53:57 INFO Configuration.deprecation: mapred.input.dir is deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
15/07/31 21:53:57 INFO mapred.FileInputFormat: Total input paths to process : 10
15/07/31 21:53:57 INFO Configuration.deprecation: mapred.min.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize
15/07/31 21:53:57 INFO net.NetworkTopology: Adding a new node: /default/127.0.0.1:50010
15/07/31 21:53:57 INFO mapreduce.JobSubmitter: number of splits:10
15/07/31 21:53:57 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1438398605238_0010
15/07/31 21:53:58 INFO impl.YarnClientImpl: Submitted application application_1438398605238_0010
15/07/31 21:53:58 INFO mapreduce.Job: The url to track the job:
15/07/31 21:53:58 INFO Configuration.deprecation: jobclient.output.filter is deprecated. Instead, use mapreduce.client.output.filter
Running job: job_1438398605238_0010
 map 0% reduce 0%
 map 10% reduce 0%
 map 30% reduce 0%
 map 40% reduce 0%
 map 50% reduce 0%
 map 100% reduce 0%
Job complete: job_1438398605238_0010
Counters: 32
    File System Counters
        FILE: Number of bytes read=0
        FILE: Number of bytes written=789970
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=1431459923
        HDFS: Number of bytes written=5565119895
        HDFS: Number of read operations=25
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=20
    Job Counters
        Failed map tasks=1
        Killed map tasks=5
        Launched map tasks=11
        Data-local map tasks=11
        Total time spent by all maps in occupied slots (ms)=2338547
        Total time spent by all reduces in occupied slots (ms)=0
        Total time spent by all map tasks (ms)=2338547
        Total vcore-seconds taken by all map tasks=2338547
        Total megabyte-seconds taken by all map tasks=2394672128
    Map-Reduce Framework
        Map input records=20000000
        Map output records=0
        Input split bytes=1698
        Spilled Records=0
        Failed Shuffles=0
        Merged Map outputs=0
        GC time elapsed (ms)=14717
        CPU time spent (ms)=936940
        Physical memory (bytes) snapshot=1091555328
        Virtual memory (bytes) snapshot=4501733376
        Total committed heap usage (bytes)=986185728
    File Input Format Counters
        Bytes Read=0
    File Output Format Counters
        Bytes Written=0
Job Failed: Task failed task_1438398605238_0010_m_000000
Job failed as tasks failed. failedMaps:1 failedReduces:0
java.io.IOException: Job failed
    at org.talend.hadoop.mapred.lib.MRJobClient.runJob(MRJobClient.java:154)
    at ds7.ds7_mapreduce_test_0_1.DS7_MapReduce_Test.runMRJob(DS7_MapReduce_Test.java:2029)
    at ds7.ds7_mapreduce_test_0_1.DS7_MapReduce_Test.access$1(DS7_MapReduce_Test.java:2019)
    at ds7.ds7_mapreduce_test_0_1.DS7_MapReduce_Test$1.run(DS7_MapReduce_Test.java:1854)
    at ds7.ds7_mapreduce_test_0_1.DS7_MapReduce_Test$1.run(DS7_MapReduce_Test.java:1)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
    at ds7.ds7_mapreduce_test_0_1.DS7_MapReduce_Test.tHDFSInput_1Process(DS7_MapReduce_Test.java:1748)
    at ds7.ds7_mapreduce_test_0_1.DS7_MapReduce_Test.run(DS7_MapReduce_Test.java:1997)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at ds7.ds7_mapreduce_test_0_1.DS7_MapReduce_Test.runJobInTOS(DS7_MapReduce_Test.java:1955)
    at ds7.ds7_mapreduce_test_0_1.DS7_MapReduce_Test.main(DS7_MapReduce_Test.java:1940)
disconnected

When I checked the job history, it shows the error below:
Error: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException): No lease on /user/cloudera/Messenger_Demo/DS7/20150731200055919/SUCCESS/part-00000.avro (inode 24546): File does not exist.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:3319)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFileInternal(FSNamesystem.java:3407)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFile(FSNamesystem.java:3377)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.complete(NameNodeRpcServer.java:673) at
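As background: a LeaseExpiredException ("No lease on ... File does not exist") during output commit is, in HDFS generally, a symptom of two writers racing for the same output file, e.g. a speculative duplicate task attempt recreating a file whose lease the original attempt still holds. It is not confirmed that this is the cause here, but one low-risk thing to try is disabling speculative execution via a Hadoop configuration fragment (Hadoop 2.x property names):

```xml
<!-- Hedged sketch, not a confirmed fix for this job: disable speculative
     execution so a duplicate task attempt cannot delete and recreate the
     output file the original attempt is still writing. -->
<property>
  <name>mapreduce.map.speculative</name>
  <value>false</value>
</property>
<property>
  <name>mapreduce.reduce.speculative</name>
  <value>false</value>
</property>
```

In a Talend MapReduce job these can be passed as extra Hadoop properties in the job's Hadoop Configuration panel rather than edited in mapred-site.xml.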

Re: MapReduce Job - Getting server.namenode.LeaseExpiredException Error

Ok, I think I solved this issue by unticking the option "Compress intermediate map output to reduce network traffic." in the Hadoop Configuration of the MapReduce job. After this I get the following error instead:
Error: Java heap space
Container killed by the ApplicationMaster.
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
How can I solve this?
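Exit code 143 means YARN killed the container, and combined with "Java heap space" it usually indicates the map JVM ran out of memory, which is plausible with 200 MB bz2 inputs that decompress to much larger records. A common mitigation is to raise the map container size and the heap inside it; the values below are illustrative assumptions, not figures tuned for this cluster (Hadoop 2.x property names):

```xml
<!-- Hedged sketch: enlarge the map container and its JVM heap.
     2048 MB / -Xmx1638m are illustrative values; the heap should stay
     roughly 20% below the container size to leave room for non-heap memory. -->
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>2048</value>
</property>
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx1638m</value>
</property>
```

As with the previous fragment, these can be supplied as additional Hadoop properties in the Talend job's Hadoop Configuration rather than cluster-wide.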

Re: MapReduce Job - Getting server.namenode.LeaseExpiredException Error

Ok, I found out the root cause.
I am using a tMap in my MapReduce job. When I use one output link it works fine, but when I use two output links, this error occurs. Is there any setting I need to set?

Re: MapReduce Job - Getting server.namenode.LeaseExpiredException Error

Hello feige82

 

What was the final solution to this issue?

Ok, I found out the root cause.
I am using a tMap in my MapReduce job. When I use one output link it works fine, but when I use two output links, this error occurs. Is there any setting I need to set?