I'm converting this job to a Big Data batch job. It also reads context variables from a separate context-load job, but here is the main job.
When converting a DI (Standard) job to a Big Data job, note that not all components available in a Standard job exist in a BD job. For example, file-manipulation components such as tFileDelete are not available. You will therefore have to orchestrate your flow so that a series of DI and BD jobs together fulfills your use case.
Any processing that needs the compute power of Spark can be converted to BD format, while file handling and ordinary ETL steps can stay in a DI job and be called before or after the BD stage, depending on your use case (see the sketch below).
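Talend jobs are assembled in the Studio rather than hand-written, but the code Talend generates is Java, so here is a minimal Java sketch of the orchestration pattern described above: a DI-style file step (the tFileDelete equivalent) running before the Spark compute step, with room for a DI-style post-step after it. All paths, the app name, and the filter condition are hypothetical placeholders, not taken from your job.

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SaveMode;
import org.apache.spark.sql.SparkSession;

public class OrchestrationSketch {
    public static void main(String[] args) throws Exception {
        // DI-style pre-step: plain file manipulation outside Spark,
        // the equivalent of a tFileDelete in a Standard job.
        Path staleOutput = Paths.get("/tmp/output/previous_run"); // hypothetical path
        Files.deleteIfExists(staleOutput);

        // BD-style step: only the compute-heavy transformation runs on Spark.
        SparkSession spark = SparkSession.builder()
                .appName("bd-batch-step") // hypothetical app name
                .getOrCreate();

        Dataset<Row> input = spark.read()
                .option("header", "true")
                .csv("/tmp/input/data.csv"); // hypothetical input file

        input.filter("amount > 0")           // placeholder transformation
             .write()
             .mode(SaveMode.Overwrite)
             .csv("/tmp/output");            // hypothetical output directory

        spark.stop();

        // DI-style post-step (archiving, notifications, cleanup) would follow here,
        // again as an ordinary Standard job step rather than a Spark one.
    }
}
```

The point of the split is the same as in the Studio: chain the Standard (DI) jobs and the Big Data job from a parent job, so each engine only does the work it is suited for.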