Scenario: Migration Project from Database 1 to Database 2
Use one Talend job to load a variety of tables from database 1 to database 2.
Example: I want to use the same Talend job repeatedly for different tables with different schemas, loading from database 1 to database 2.
This is possible in Informatica, because the mapping is treated as logical and the source and target details are specified at the session level, so the same mapping can be reused for different tables with different schemas. I want to know whether this is possible in Talend. If yes, please let me know the procedure for building such jobs.
To achieve the same:
Define context variables for the database connection details and for the table and schema details.
Instead of hard-coding the database, table, and schema details, use the defined context variables.
In Talend, we have context variables / global variables.
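At run time, Talend context variables boil down to key/value pairs loaded from an external source and substituted wherever you would otherwise hard-code a value. As a minimal Java sketch of that idea (the variable names `dbHost`, `dbPort`, `dbName` are hypothetical, and a real job would load them from a context file or a tContextLoad flow rather than a string):

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.Properties;

public class ContextSketch {
    // Load key/value pairs, as a Talend context file or tContextLoad would.
    static Properties loadContext(String raw) {
        Properties ctx = new Properties();
        try {
            ctx.load(new StringReader(raw));
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
        return ctx;
    }

    // Build the JDBC URL from context values instead of hard-coding it,
    // so the same job can point at different databases per run.
    static String jdbcUrl(Properties ctx) {
        return "jdbc:mysql://" + ctx.getProperty("dbHost") + ":"
                + ctx.getProperty("dbPort") + "/" + ctx.getProperty("dbName");
    }

    public static void main(String[] args) {
        Properties ctx = loadContext("dbHost=localhost\ndbPort=3306\ndbName=sales");
        System.out.println(jdbcUrl(ctx));
    }
}
```

In a Talend component you would type `context.dbHost` (or similar) in the connection fields instead of a literal value, and supply a different context per table or per environment.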
Hope this helps.
Please refer to the Talend documentation for detailed information.
Thank you for the quick reply.
Can you please give more detail?
I understand there are context/global variables that can be defined dynamically from some source.
If you can help me out with an example, it will be much appreciated.
Here is a related scenario from the Talend Help Center: Reading data from different MySQL databases using dynamically loaded connec...
Feel free to let us know if it is OK with you.
The different tables, and even different databases, aren't a problem. You could use the external-text-file approach suggested in the article @xdshi linked to in her post, or, assuming this is a one-off and not something you'll be modifying after deployment, a tFixedFlowInput in "Inline Table" mode fed through a tFlowToIterate, which achieves the same thing but keeps everything within your job.
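The inline-table iteration just described amounts to looping over a fixed list of source/target table pairs and running the copy subjob once per pair. A rough Java analogue (table names here are made up for illustration; in a real job each iteration would set globalMap values that drive the subjob):

```java
import java.util.Arrays;
import java.util.List;

public class IterateTables {
    // Iterate over source/target pairs, one copy operation per pair,
    // mimicking tFixedFlowInput ("Inline Table") -> tFlowToIterate.
    static int copyAll(List<String[]> pairs) {
        int copied = 0;
        for (String[] p : pairs) {
            // Real job: trigger the subjob that moves p[0] to p[1].
            System.out.println("copy " + p[0] + " -> " + p[1]);
            copied++;
        }
        return copied;
    }

    public static void main(String[] args) {
        List<String[]> pairs = Arrays.asList(
                new String[]{"customers", "customers"},
                new String[]{"orders", "orders_archive"});
        System.out.println(copyAll(pairs) + " tables copied");
    }
}
```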
However, you're really not going to be able to do the dynamic schema thing, which is key to what you're trying to achieve.
This is because Talend is actually a code generator: when you run/build a job, it generates Java code from the code templates for each component, combined with the context variables, the input and output schema row names, and the individual column names. The individual schema columns are hard-coded as variables (actually properties on objects representing each row within ArrayLists, the classes for which are also defined at run/build time based on the schema) and referenced directly in the code, so they need to be defined at design/compile time. This has many advantages, but unfortunately it makes the task you're trying to achieve all but impossible.
In order to do any manipulation of the data in your job you'll need to have a schema defined, as all of the components working with flows require one. And if you think about it, without knowing which columns will flow through the job, there's very little you could do even if a dynamic schema were possible, except pipe the input directly to the output, and so...
...if the databases are on the same DBMS and server, or could be moved there temporarily for the purposes of this migration, then I'd suggest just using a series of SQL INSERT INTO ... SELECT statements (or the equivalent for your DBMS) to move the data between the databases, which would be straightforward and about as efficient as it gets.
This would allow you to implement any basic logic and carry out manipulation on the columns in the source SELECT before they are inserted into the destination table.
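As a sketch of that approach issued through JDBC: the statement below is built per table, with the select list as the place to apply per-column manipulation. All schema and table names are placeholders, and `UPPER(name)` just stands in for whatever transformation is needed:

```java
public class SqlCopy {
    // Build an INSERT INTO ... SELECT that copies one table between
    // two databases on the same server, applying any column logic
    // in the select list.
    static String buildCopy(String sourceDb, String targetDb,
                            String table, String selectList) {
        return "INSERT INTO " + targetDb + "." + table
                + " SELECT " + selectList
                + " FROM " + sourceDb + "." + table;
    }

    public static void main(String[] args) {
        String sql = buildCopy("src_db", "tgt_db", "customers",
                "id, UPPER(name), created_at");
        System.out.println(sql);
        // In practice: connection.createStatement().executeUpdate(sql);
    }
}
```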