Hello guys,

For ETL we have a Java-based app that reads database info from a configuration file, so when we develop, test, or deploy the app we only have to update the DB credentials, name, and server in that configuration file. We are trying to replace this Java app with a Talend job, so we are looking for the best way to create a Talend job that accepts database credentials as parameters. I have checked many forums, but they are either very old or the information given is difficult to follow.

To summarize:

1. I know context variables can be used for passing parameter values to a job dynamically, so I referred to the link below. The example used there is exactly what I want:
TalendOpenStudioComponentsReferenceGuide521EN/16.5+tContextLoad
But I am not able to understand the following:
A. How are values from tFileInputDelimited passed to tContextLoad? Where are the context variables defined?
B. How is tMysqlInput able to read the values passed?
I am not sure the example given in the link is complete.

Can anybody please help by providing the list of components required to implement my requirement? Also, please let me know if there is a better way to read parameters dynamically at runtime?

Thanks,
EVC
There are many ways to achieve the same goal. Two variants I use:

1) Multiple context groups
- Define three (or more) context groups: dev, qual, prod.
- Each context group holds only two hardcoded values (in my case): the group name (dev, qual, prod) and the path for the config files (if it is not ".").
- If the job is built to use the prod context, it will look for .\connection_prod.csv; if it is built to use the qual context, it will look for .\connection_qual.csv.

2) One context group plus an environment variable
- Define one context group only.
- Define a system environment variable TALEND_RUN, which on each server can be dev, qual, or prod.
- The context hardcodes only the path for the config files (for example ".").
- Talend then looks for .\connection_$TALEND_RUN.csv.
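The second variant can be sketched in plain Java (which is what a Talend job compiles to anyway). This is a minimal illustration under my own assumptions, not Talend-generated code: the TALEND_RUN variable name comes from the description above, while the ConfigResolver class and resolveConfigPath helper are hypothetical names.

```java
// Sketch of variant 2: pick the connection file from an environment
// variable. Hypothetical helper, not actual Talend-generated code.
public class ConfigResolver {

    // Build the config file name from the environment name,
    // e.g. "prod" -> "./connection_prod.csv"
    static String resolveConfigPath(String envName) {
        return "./connection_" + envName + ".csv";
    }

    public static void main(String[] args) {
        // On a real server TALEND_RUN would be set to dev, qual, or prod;
        // fall back to "dev" here so the sketch runs anywhere.
        String env = System.getenv().getOrDefault("TALEND_RUN", "dev");
        System.out.println("Loading context from " + resolveConfigPath(env));
    }
}
```

The point of the design is that the same built job runs unchanged on every server; only the server's environment decides which connection file is loaded.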
All context variables MUST be defined in the Talend repository (or in the job built from the repository), just the key names; they can hold any value. The CSV file contains simple KEY;VALUE pairs. You can define what to do if the CSV does not cover all keys from the context: it can stop the job or fall back to the default value (NULL if nothing is defined).

In tMysqlInput you just use the context key name, for example for the database host: context.db_host.

You can do the same with global variables as well. You can also redefine these values at any point in the job, e.g. to loop over XX similar servers:
- store XX rows in the CSV, one per server
- read the values from a row and store them in global variables
- connect to that server and run all the other steps
- read the next row
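What tContextLoad does with that KEY;VALUE file can be mimicked in a few lines of plain Java. A minimal sketch, assuming a semicolon-separated file and the default-value fallback described above; the ContextLoader class, loadContext helper, and the db_host/db_port keys are my own illustrative names, not Talend API:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch of what tContextLoad does: merge KEY;VALUE rows from the file
// over the defaults defined in the repository context. Hypothetical helper.
public class ContextLoader {

    static Map<String, String> loadContext(List<String> csvLines,
                                           Map<String, String> defaults) {
        // Start from the repository defaults (absent/NULL if nothing defined)
        Map<String, String> context = new LinkedHashMap<>(defaults);
        for (String line : csvLines) {
            String[] kv = line.split(";", 2);   // one KEY;VALUE pair per row
            if (kv.length == 2) {
                context.put(kv[0].trim(), kv[1].trim());
            }
        }
        return context;
    }

    public static void main(String[] args) {
        Map<String, String> defaults = new LinkedHashMap<>();
        defaults.put("db_host", "localhost");   // repository default
        defaults.put("db_port", "3306");

        // Rows as they might come from connection_prod.csv
        List<String> rows = List.of("db_host;prod-db.example.com");

        Map<String, String> context = loadContext(rows, defaults);
        // In tMysqlInput you would then reference context.db_host
        System.out.println("host=" + context.get("db_host")
                + " port=" + context.get("db_port"));
    }
}
```

Keys missing from the CSV (db_port here) keep their repository defaults, which is the "take default value" behaviour mentioned above.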
I think your second option is the best, Vapukov, and I use it all the time (but with a database instead of flat files). Using multiple contexts just leads to issues with testing and means that you still need to open and recompile your jobs if a context variable within a particular context changes.