How to handle the structure of source tables/files dynamically to load the data into a file-format target on S3?


Hi, 

I have different sources (tables and files), each with a different structure. I need a way to import these sources dynamically, or with a metadata-driven approach, and also to transform the columns by applying business logic before loading into the target.

 

For example:

Source table-1 (structure changes daily) --> target file 1

Source file-1 (structure changes daily) --> target file 2

 

Kindly suggest how we can achieve this in Talend.

 

NOTE: I'm aware of the Dynamic column functionality in various Talend components, but:

1. How can we transform the columns that arrive in the dynamic column?

2. I'm creating a Spark job, and Spark jobs don't support this functionality.

Moderator

Re: How to handle the structure of source tables/files dynamically to load the data into a file-format target on S3?

Hello,

The Dynamic schema type can be used with components in a DI Standard Job, but it is not available in a Big Data Spark Job in the Talend Studio designer.
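
To answer your first question: in a DI Standard Job you can apply business logic to the columns carried inside a Dynamic column, for example in a tJavaRow, by iterating over the routines.system.Dynamic object. A minimal sketch (the column name dynCol is illustrative; input_row and output_row are tJavaRow's built-in variables):

```java
// tJavaRow body (DI Standard Job): transform every column carried
// inside a Dynamic column. "dynCol" is an illustrative column name;
// input_row / output_row are tJavaRow's built-in variables.
routines.system.Dynamic dyn = input_row.dynCol;

for (int i = 0; i < dyn.getColumnCount(); i++) {
    routines.system.DynamicMetadata meta = dyn.getColumnMetadata(i);
    Object value = dyn.getColumnValue(i);

    // Example business rule: upper-case every String column.
    // meta.getName() / meta.getType() can drive name- or type-based
    // rules that are only resolved at runtime.
    if (value instanceof String) {
        dyn.setColumnValue(i, ((String) value).toUpperCase());
    }
}

output_row.dynCol = dyn;
```

Because the loop resolves column names and types at runtime, the same job keeps working when the source structure changes from day to day.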

There is an open feature request for this in JIRA: https://jira.talendforge.org/browse/TBD-3302
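
For the Spark case, until that feature request is implemented, one possible workaround is to handle the changing structure in plain Spark code rather than through component schemas, since Spark can infer a file's schema at read time. A minimal sketch in Spark's Java API (this is not a Studio-generated job; the bucket names, paths, and business rule are placeholders):

```java
import static org.apache.spark.sql.functions.col;
import static org.apache.spark.sql.functions.upper;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.types.StructField;

public class DynamicS3Load {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("dynamic-s3-load")
                .getOrCreate();

        // The schema is inferred from the file at read time, so daily
        // structure changes are picked up automatically.
        Dataset<Row> df = spark.read()
                .option("header", "true")
                .option("inferSchema", "true")
                .csv("s3a://my-bucket/input/source-file-1.csv");

        // Example business rule: upper-case every string column,
        // whatever columns the file happens to contain today.
        for (StructField f : df.schema().fields()) {
            if (f.dataType().typeName().equals("string")) {
                df = df.withColumn(f.name(), upper(col(f.name())));
            }
        }

        df.write().mode("overwrite").parquet("s3a://my-bucket/target/file-2/");
    }
}
```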

Best regards

Sabrina
