I am new to Talend and trying to figure out the best approach for defining context variables. I am using the Big Data edition with TAC. I plan to define all Hadoop cluster parameters and connection details through contexts. The dilemma is whether to use an implicit context or repository contexts (Dev, Test, Prod).
Here are some of my questions:
a) If I am using an external or implicit context, how do I run a Big Data job standalone? Big Data jobs do not have an "Extra" tab for loading an implicit context. My current workaround is a standard job --> implicit context load in that job --> pass those variables to the child Big Data job.
b) How do I secure passwords when using an implicit context? Any user with access to the file on the Talend server would be able to see the password.
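For illustration, here is the kind of key=value file an Implicit Context Load typically points at (all hostnames, variable names, and values below are hypothetical, not from my actual setup) -- this is what makes question b) a concern, since the password sits in plain text:

```properties
# context_dev.properties - read by Implicit Context Load (hypothetical values)
hdfs_namenode_uri=hdfs://namenode.dev.example.com:8020
resourcemanager_host=rm.dev.example.com
hive_user=etl_user
# Plain text: anyone with read access to this file on the server can see it
hive_password=NotSoSecret123
```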
c) If we go with repository contexts (DEV/TEST/PROD), do I need to change anything when I deploy the job to a different environment and run it? I want it to be seamless and don't want to touch the job.
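As I understand it, an exported standalone job accepts `--context` and `--context_param` flags in its launcher script, so the same build could be pointed at a different repository context per environment without editing the job (job and variable names below are made up for the example):

```shell
# Select the PROD repository context at launch (hypothetical job name)
./MyBigDataJob_run.sh --context=PROD

# Or override an individual variable at launch time
./MyBigDataJob_run.sh --context=PROD --context_param hdfs_namenode_uri=hdfs://nn.prod.example.com:8020
```

Is this the recommended way to keep deployment seamless, or does TAC handle the context selection itself?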
d) If we go with repository contexts (DEV/TEST/PROD), is the password secured and encrypted? From some posts it seems the password gets stored in a file somewhere. Please provide the file name if that is true.
e) What are the best practices, and the pros and cons of each approach?
Thanks in advance