Process based on source count data


Hi,

 

We are using Talend Data Integration 7.x. I have an issue with one of my requirements.

 

My pipeline consists of the components tDBInput -> tMap -> tDBOutput. A huge amount of data comes in from the source tDBInput (around 10 million rows), so I need to filter the data into ranges, for example 1 to 1,000,000 and 1,000,001 to 2,000,000.

Can I process each filter sequentially so that the job does not fail?

I tried to use row --> Iterate, but an Iterate connection cannot be used from tDBInput to tMap.

 

Please suggest a better approach. I can't run the batches in parallel, because the data is huge and it might affect the server capacity.
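Outside of Talend, the sequential batching idea above can be sketched in plain Java: compute consecutive ID ranges and process them one at a time. This is a minimal sketch, not Talend-generated code; the row count, chunk size, and method names are illustrative assumptions.

```java
import java.util.ArrayList;
import java.util.List;

public class ChunkRanges {
    // Split [1..totalRows] into consecutive [start, end] ranges of at most chunkSize rows.
    static List<long[]> ranges(long totalRows, long chunkSize) {
        List<long[]> out = new ArrayList<>();
        for (long start = 1; start <= totalRows; start += chunkSize) {
            long end = Math.min(start + chunkSize - 1, totalRows);
            out.add(new long[]{start, end});
        }
        return out;
    }

    public static void main(String[] args) {
        // e.g. 10 million rows in 1-million-row batches -> 10 sequential ranges.
        for (long[] r : ranges(10_000_000L, 1_000_000L)) {
            // Each range could drive one batch, e.g. a WHERE clause such as
            // "WHERE id BETWEEN <start> AND <end>" in the tDBInput query.
            System.out.println(r[0] + " - " + r[1]);
        }
    }
}
```

In a Talend job the same effect could come from iterating over context variables that hold the start and end of each range and substituting them into the tDBInput query.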


Re: Process based on source count data

@xdshi, any suggestions?


Re: Process based on source count data

You may use tDBInput -> tFileOutputDelimited (with the number of lines for each file defined in the tFileOutputDelimited Advanced settings).
This will create 1 to n files; then you can iterate over these files and do what you want.
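The split-then-iterate approach above can be sketched in plain Java: write the rows into numbered files of at most N lines each, then process the files one by one. This is a hedged illustration of the idea, not the code Talend generates; the file naming and method names are assumptions.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

public class SplitAndIterate {
    // Write rows into numbered files of at most linesPerFile lines each,
    // mimicking tFileOutputDelimited's split-output behavior.
    static List<Path> split(List<String> rows, int linesPerFile, Path dir) throws IOException {
        List<Path> files = new ArrayList<>();
        for (int i = 0; i < rows.size(); i += linesPerFile) {
            Path f = dir.resolve("part_" + (files.size() + 1) + ".csv");
            Files.write(f, rows.subList(i, Math.min(i + linesPerFile, rows.size())));
            files.add(f);
        }
        return files;
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("chunks");
        List<String> rows = new ArrayList<>();
        for (int i = 1; i <= 25; i++) rows.add("row" + i);
        // 25 rows at 10 lines per file -> 3 files, processed sequentially.
        for (Path f : split(rows, 10, dir)) {
            System.out.println(f.getFileName() + ": " + Files.readAllLines(f).size() + " lines");
        }
    }
}
```

In the Talend job itself, a tFileList component can iterate over the generated files and feed each one into the downstream subjob.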

TRF

Re: Process based on source count data

@srkalakonda, does this help?

If so, please mark your case as solved.


TRF
