I am new to Talend and trying to figure out the best approach for defining context variables. I am using the Big Data edition with TAC. I am planning to define all Hadoop cluster parameters and connection details through contexts. The dilemma is whether to use an implicit context load or repository contexts (Dev, Test, Prod).
Here are some of my questions:
a) If I am using an external file or implicit context load, how do I run the Big Data job standalone? Big Data jobs do not have the "Extra" tab needed to configure the implicit context load, so currently I use a standard job --> implicit context load in that job --> pass those variables to the child Big Data job.
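For reference, this is roughly what one of my per-environment context files looks like (all names and values are illustrative, and the field separator is whatever the implicit context load is configured with, `=` here):

```properties
# dev_context.properties - read by the parent standard job's implicit context load
namenode_uri=hdfs://dev-namenode:8020
resourcemanager_host=dev-rm.example.com
hive_jdbc_url=jdbc:hive2://dev-hive:10000/default
hdfs_user=talend_dev
# this is the concern in (b): the password sits here in plain text,
# readable by anyone with file access on the Talend server
hive_password=PlainTextPassword123
```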
b) How do I secure passwords when using an implicit context load? Any user with access to the file on the Talend server would be able to see the password.
c) If we go with repository contexts (DEV/TEST/PROD), do I need to change anything when I deploy the job to a different environment and try to run it? I want it to be seamless and don't want to touch the job.
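Regarding (c), my understanding is that a job exported as a standalone build ships with a launcher script that accepts the context name (and per-variable overrides) at launch time, so the same build could in principle be promoted unchanged; the script name below is from my environment and may differ:

```shell
# run the same exported build against the Prod context group
./MyBigDataJob_run.sh --context=Prod

# or additionally override individual context variables at launch time
./MyBigDataJob_run.sh --context=Prod --context_param hdfs_user=talend_prod
```

Is this the intended way to keep deployments seamless, or does TAC handle the context selection differently?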
d) If we go with repository contexts (DEV/TEST/PROD), are passwords secured and encrypted? From some of the posts I have read, it seems the password gets stored in some file. Please provide the file name if that is true.
e) What are the best practices, and the pros and cons of each approach?
Thanks in advance