
How to build schemas at runtime when writing data from Oracle tables into CSV files?

Hi everyone,


I am using Talend Open Studio (TOS) to write data from Oracle tables into CSV files.

I have an input CSV file with schema names and table names, as below:


Schema_01,table_01

Schema_02,table_02

...

My job reads the records from this file one by one; for each record it reads the corresponding Oracle table and writes its data to a CSV file. In the end, the output files I expect look like:

Schema_01_table_01_data.csv

Schema_02_table_02_data.csv

...

But how can Talend understand and build the schemas at runtime, once it has connected to and read the right tables? I have used the tOracleOutputBulk component to write data into CSV files, and as far as I know I have to define these schemas manually at design time in TOS.

I have found a useful link at http://bigdatadimension.com/writing-arbitrary-database-tables-file-without-dynamic-schema-talend/. The author uses the Object type when defining the schema and the Java ResultSetMetaData API to write the data in the correct CSV format. However, this only works well for small tables; it takes too much time and memory for large ones.
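
For reference, the core of that approach boils down to plain JDBC plus ResultSetMetaData. Below is a minimal standalone sketch (the connection URL, credentials and file names are placeholders, there is no CSV quoting or escaping, and it assumes the Oracle ojdbc driver is on the classpath). Note that setting a fetch size keeps the driver streaming rows with a forward-only cursor, so memory use stays flat even on large tables:

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.PrintWriter;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.Statement;

public class TablesToCsv {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                     "jdbc:oracle:thin:@//dbhost:1521/ORCL", "user", "password");
             BufferedReader list = new BufferedReader(new FileReader("tables.csv"))) {
            String line;
            while ((line = list.readLine()) != null) {
                if (line.trim().isEmpty()) continue;
                String[] parts = line.split(",");          // schema,table
                dumpTable(con, parts[0].trim(), parts[1].trim());
            }
        }
    }

    static void dumpTable(Connection con, String schema, String table) throws Exception {
        String out = schema + "_" + table + "_data.csv";
        try (Statement st = con.createStatement();
             PrintWriter pw = new PrintWriter(new FileWriter(out))) {
            st.setFetchSize(1000);                         // stream, don't buffer
            try (ResultSet rs = st.executeQuery(
                         "SELECT * FROM " + schema + "." + table)) {
                // The column names and count come from the metadata at runtime,
                // so no schema has to be defined at design time.
                ResultSetMetaData md = rs.getMetaData();
                int cols = md.getColumnCount();
                StringBuilder header = new StringBuilder();
                for (int i = 1; i <= cols; i++) {
                    if (i > 1) header.append(',');
                    header.append(md.getColumnName(i));
                }
                pw.println(header);
                while (rs.next()) {
                    StringBuilder row = new StringBuilder();
                    for (int i = 1; i <= cols; i++) {
                        if (i > 1) row.append(',');
                        String v = rs.getString(i);        // every type rendered as text
                        row.append(v == null ? "" : v);
                    }
                    pw.println(row);
                }
            }
        }
    }
}

Inside Talend the same logic could live in a tJavaFlex, but as a plain class it also runs on its own.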

Do we have other solutions for this situation?


Thank you very much.


1 REPLY
Employee

Re: How to build schemas at runtime when writing data from Oracle tables into CSV files?

My first recommendation would be to use the enterprise version of Talend, which gives you access to the dynamic schema functionality and much more.
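
For a sense of what that buys you: with a dynamic schema the whole job stays tiny, something along these lines (component names from memory; exact options vary by version):

tFileInputDelimited (reads the schema/table list)
    -> tFlowToIterate
    -> tOracleInput (schema is a single column of type Dynamic;
       query built as "SELECT * FROM " + schema + "." + table from the iterate variables)
    -> tFileOutputDelimited (output file name built from the same variables)

No per-table schema is defined anywhere; the Dynamic column picks up the real columns at runtime.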


Another approach would be to build a job with the complete input schema of one table and the target components.

Then try to understand how the .item file is generated, and use Talend itself to generate the .item files; in effect you build a job generator. However, given the effort you would spend designing this job generator (without support, and with the risk of not getting it right), you can easily save time and money by just going enterprise and getting a product you can use for far more than this one use case.