Talend Job failing with 4 million records

One Star


Hi Experts,
Need your urgent help. We have a simple standard job that reads data from Hive and loads it into HANA.
The record count is almost 4 million.
The job consistently fails after running for about 12 hours.
Error message:
Hive_to_Hana_LoadtHiveInput_8 java.io.IOException: java.io.IOException: Error reading file: hdfs://**/**/hive/**.db/testtable/ingested_date=2017-02-27/000000_0
We have checked, and this file exists.
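
For what it's worth, a file can exist in HDFS and still fail partway through a read if one of its blocks is corrupt or missing a replica, which matches this kind of IOException. Below is a minimal standalone sketch (our own illustrative check, not Talend's generated code) that streams the whole file with the standard Hadoop client API; running hdfs fsck on the same path should report the same thing from the command line:

import java.io.InputStream;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsReadCheck {
    public static void main(String[] args) throws Exception {
        // Pass the exact path from the error message as the first argument.
        Path file = new Path(args[0]);
        // Picks up core-site.xml / hdfs-site.xml from the classpath.
        Configuration conf = new Configuration();
        try (FileSystem fs = FileSystem.get(file.toUri(), conf);
             InputStream in = fs.open(file)) {
            byte[] buf = new byte[1 << 20];
            long total = 0;
            int n;
            // Read end to end: a corrupt block throws an IOException here
            // even though an existence check on the same path succeeds.
            while ((n = in.read(buf)) != -1) {
                total += n;
            }
            System.out.println("Read OK, bytes=" + total);
        }
    }
}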
The job has three components:
tHiveInput -> tMap -> HANA output
We have set the "Temp data directory path" in tMap to a directory on the server where the job runs, and we have also increased the maximum buffer size to 5 million rows in the tMap component.
But the job is still failing.
We have another job that reads 2 million records, and it executed fine, although it also took many hours.
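
For reference, multi-hour runtimes on a few million rows usually point to small insert batches or row-by-row commits on the output side. The chunked pattern we are after looks roughly like this in plain JDBC (a sketch only; both connection URLs, the credentials, the table, and the column list are made-up placeholders, and in Talend this is configured on the components rather than hand-coded):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveToHanaBatch {
    public static void main(String[] args) throws Exception {
        // All connection details and column names below are illustrative.
        try (Connection hive = DriverManager.getConnection(
                 "jdbc:hive2://hive-host:10000/testdb", "user", "pass");
             Connection hana = DriverManager.getConnection(
                 "jdbc:sap://hana-host:30015/", "user", "pass")) {

            hana.setAutoCommit(false); // commit in chunks, not once per row

            try (Statement st = hive.createStatement();
                 ResultSet rs = st.executeQuery("SELECT id, payload FROM testtable");
                 PreparedStatement ins = hana.prepareStatement(
                     "INSERT INTO TARGET_TABLE (ID, PAYLOAD) VALUES (?, ?)")) {

                final int batchSize = 10_000;
                long rows = 0;
                while (rs.next()) {
                    ins.setLong(1, rs.getLong(1));
                    ins.setString(2, rs.getString(2));
                    ins.addBatch();
                    if (++rows % batchSize == 0) {
                        ins.executeBatch(); // one round trip per 10k rows
                        hana.commit();      // keeps transactions small on long runs
                    }
                }
                ins.executeBatch(); // flush the last partial batch
                hana.commit();
            }
        }
    }
}

In Talend terms this corresponds to the output component's batch size and commit-interval settings rather than custom code.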
Please suggest what we can do.
There is also the Big Data Batch job type; I tried that, but the template has no component to read from Hive and load into HANA.
Any suggestion would be a great help!
Moderator

Re: Talend Job failing with 4 million records

Hello,

Could you please clarify which Talend version/edition you are using? Would you mind posting the full stack trace on the forum?

Best regards

Sabrina

