Talend Job failing with 4 million records

Hi Experts,
We need your urgent help. We have a simple Standard job that reads data from Hive and loads it into HANA.
The record count is almost 4 million.
The job consistently fails after running for 12 hours.
Error message:
: Hive_to_Hana_LoadtHiveInput_8 java.io.IOException: java.io.IOException: Error reading file: hdfs://**/**/hive/**.db/testtable/ingested_date=2017-02-27/000000_0
We have checked, and this file exists.
The job has 3 components:
Hive input -> tMap -> HANA output
We have provided the "Temp data directory path" on the server where the job executes, and we have also increased the maximum buffer size in the tMap component to 5 million rows.
But the job still fails.
We have another job that reads 2 million records; it executes fine, although it also takes many hours.
Please suggest what we can do.
We also tried a Big Data Batch job, but that job template has no components to read from Hive and load into HANA.
Any suggestion would be a great help!

Re: Talend Job failing with 4 million records


Could you please clarify which Talend version/edition you are using? Would you mind posting the full stack trace on the forum?
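
In the meantime, since the stack trace points at a specific HDFS file, it may be worth checking that the file is fully readable, not just that it is listed. A sketch using standard HDFS shell commands (the path below is a placeholder; substitute the real path that is masked in your error message):

```shell
# Listing only proves the file entry exists in NameNode metadata
hdfs dfs -ls /path/to/testtable/ingested_date=2017-02-27/000000_0

# Forcing a full read surfaces missing or corrupt block errors
hdfs dfs -cat /path/to/testtable/ingested_date=2017-02-27/000000_0 > /dev/null

# fsck reports corrupt or under-replicated blocks for the file
hdfs fsck /path/to/testtable/ingested_date=2017-02-27/000000_0 -files -blocks
```

If the `cat` fails partway through, the problem is on the HDFS side (block corruption or replication) rather than in the Talend job itself.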

Best regards


