Oracle to Hadoop Data Migration - Best/Efficient Way to do a One-time/Delta (Ongoing) Load - Please guide?


How can I build a Data Warehouse from operational data systems stored in Oracle, move the data to a Hadoop cluster, and then implement processes that update the Data Warehouse from the source systems daily (or periodically)?

Please guide me on how to do the following in Talend in the best/most efficient way:

 

1. Initial load into the Data Warehouse (source: Oracle, target: Hadoop cluster/HDFS) - what are the detailed steps?

 

2. Periodic delta loads into the Data Warehouse (source: Oracle, target: Hadoop cluster/HDFS) - what are the detailed steps?


Accepted Solutions
Employee

Re: Oracle to Hadoop Data Migration - Best/Efficient Way to do a One-time/Delta (Ongoing) Load - Please guide?

Hi,

 

    If you have to extract data from a source table without any joins or subqueries, you can do it directly in a Big Data Batch job, as shown below.

[Screenshot: Big Data Batch job layout]
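Under the hood, a Big Data Batch job running on the Spark framework generates Spark code for this kind of extract. Purely for orientation, here is a minimal hand-written Spark (Java) sketch of the same single-table pull to HDFS; the JDBC URL, table name, credentials and HDFS path are illustrative assumptions, not values from this thread.

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SaveMode;
import org.apache.spark.sql.SparkSession;

public class OracleToHdfsInitialLoad {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("oracle_to_hdfs_initial_load")
                .getOrCreate();

        // Straight single-table extract over JDBC (no joins or subqueries needed).
        Dataset<Row> orders = spark.read()
                .format("jdbc")
                .option("url", "jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1") // assumed connection
                .option("driver", "oracle.jdbc.OracleDriver")
                .option("dbtable", "SALES.ORDERS")                         // assumed table
                .option("user", "etl_user")
                .option("password", "********")
                .option("fetchsize", "10000")
                .load();

        // One-time (initial) load: land the full table on HDFS as Parquet.
        orders.write()
                .mode(SaveMode.Overwrite)
                .parquet("hdfs://namenode:8020/warehouse/sales/orders");   // assumed path

        spark.stop();
    }
}
```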

 

But if you have join conditions or complex queries, I would recommend using a Talend Standard job to extract the data and the HDFS components in that Standard job to load the data into the target Hadoop cluster.
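In a Standard job this is essentially a tOracleInput with your custom SQL flowing into a tHDFSOutput. As a rough hand-coded equivalent (again with an assumed join query, connection details and HDFS path), it boils down to streaming the JDBC result set into a file on HDFS:

```java
import java.io.BufferedWriter;
import java.io.OutputStreamWriter;
import java.nio.charset.StandardCharsets;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class OracleJoinToHdfs {
    public static void main(String[] args) throws Exception {
        // Complex extract: the join/subquery work stays on the Oracle side.
        String sql = "SELECT o.ORDER_ID, c.CUSTOMER_NAME, o.AMOUNT "
                   + "FROM SALES.ORDERS o "
                   + "JOIN SALES.CUSTOMERS c ON c.CUSTOMER_ID = o.CUSTOMER_ID"; // assumed tables

        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://namenode:8020"); // assumed NameNode URI

        try (Connection cn = DriverManager.getConnection(
                     "jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1", "etl_user", "********");
             Statement st = cn.createStatement();
             ResultSet rs = st.executeQuery(sql);
             FileSystem fs = FileSystem.get(conf);
             BufferedWriter out = new BufferedWriter(new OutputStreamWriter(
                     fs.create(new Path("/warehouse/sales/orders_enriched/part-0000.csv"), true),
                     StandardCharsets.UTF_8))) {

            // Stream rows straight to HDFS as delimited text, which is roughly
            // what the tHDFSOutput component does for you.
            while (rs.next()) {
                out.write(rs.getLong("ORDER_ID") + ";"
                        + rs.getString("CUSTOMER_NAME") + ";"
                        + rs.getBigDecimal("AMOUNT"));
                out.newLine();
            }
        }
    }
}
```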

 

The difference between the one-off load and the daily load will be the data volume and the filter condition. You also need to make sure adequate memory is allocated to the job by adjusting its memory parameters.
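For the periodic delta run, the practical change is in the extract query: filter on a change-tracking column and feed in the timestamp of the last successful load (from a Talend context variable, a control table, or a parameter file). The helper below is only a sketch under the assumption that the source table has a LAST_UPDATED audit column; the table name is illustrative.

```java
/**
 * Hypothetical helper that builds the delta-extract query. "lastRunTs" would come
 * from wherever the previous successful load time is kept (a Talend context
 * variable, a control table, a parameter file); LAST_UPDATED is an assumed audit
 * column on the source table.
 */
public class DeltaQueryBuilder {

    public static String deltaQuery(String lastRunTs) {
        return "SELECT * FROM SALES.ORDERS "  // assumed table
             + "WHERE LAST_UPDATED >= TO_TIMESTAMP('" + lastRunTs + "', 'YYYY-MM-DD HH24:MI:SS')";
    }

    public static void main(String[] args) {
        // Example: pick up every row changed since the previous run.
        System.out.println(deltaQuery("2019-09-01 02:00:00"));
    }
}
```

On the write side, the delta load appends to the existing HDFS data instead of overwriting it (with the Spark sketch above, SaveMode.Append instead of SaveMode.Overwrite), and the job's memory is normally tuned through its JVM arguments (-Xms/-Xmx).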

 

Warm Regards,
Nikhil Thampi

Please appreciate our Talend community members by giving kudos for sharing their time on your query. If your query is answered, please mark the topic as resolved.

 
