Designing Spark batch Job for implementing ETL on Hive tables


Hi Everyone,

 

I am converting legacy ETL logic (C++, shell scripts) into a Talend and big data environment.

I have imported the data into Hive tables from Sybase, and now I want to read the data, perform transformations, and load the results into target Hive tables. The primary challenges are the data volume and the complex transformation logic. Which of the approaches below would give better performance, and why:

 

1. Creating a Standard Job using the ELT Hive components

2. Creating a Spark Batch Job

 

Or, if there is any other way, please share it.

 

Thanks,

Rohini


Re: Designing Spark batch Job for implementing ETL on Hive tables

Hi Rohini,

 

I believe a Spark Batch Job will be better in your case, and you can use the tHiveInput and tHiveOutput components. For all Sybase DB-related operations you can use a Standard Job, so it is really a matter of sequencing your jobs one after another.

 

Perform your normal tasks with a Standard Job and any big data activities with a Big Data (Spark Batch) Job; a rough sketch of the Spark side is below.
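To illustrate the Spark Batch approach, here is a minimal sketch of the kind of read-transform-write logic such a job runs against Hive. All table and column names are hypothetical placeholders, and the transformation is just an example; your actual job would encode your own logic:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object HiveEtlSketch {
  def main(args: Array[String]): Unit = {
    // enableHiveSupport lets Spark read and write metastore-managed Hive tables.
    val spark = SparkSession.builder()
      .appName("hive-etl-sketch")
      .enableHiveSupport()
      .getOrCreate()

    // Read the data previously imported from Sybase (hypothetical table name).
    val staged = spark.table("staging_db.orders_raw")

    // Example transformation: filter, derive a column, then aggregate.
    val transformed = staged
      .filter(col("status") === "ACTIVE")
      .withColumn("order_year", year(col("order_date")))
      .groupBy(col("customer_id"), col("order_year"))
      .agg(sum(col("amount")).as("total_amount"))

    // Write the result to the target Hive table (hypothetical name).
    transformed.write
      .mode("overwrite")
      .saveAsTable("target_db.customer_yearly_totals")

    spark.stop()
  }
}
```

Because the transformations execute in Spark rather than as HiveQL pushed down to Hive, this approach generally scales better for large volumes and complex, multi-step logic.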

 

Warm Regards,
Nikhil Thampi

Please appreciate our Talend community members by giving Kudos for sharing their time for your query. If your query is answered, please mark the topic as resolved :-)
