I have data coming in from a file.
I want to store each "RawDataX" record in the output location corresponding to its X value.
I don't want to create 26 tFileOutputDelimited components in the job.
Is there a way to use a single tFileOutputDelimited for all records?
In a DI job, we can use tFlowToIterate and a context variable in the tFileOutputDelimited file path to meet this requirement.
Can anyone give some ideas on how to implement the same thing in a Spark or MapReduce job?
So far, tFlowToIterate is available in Standard (DI) jobs only.
Here is a KB article about this: https://community.talend.com/t5/Architecture-Best-Practices-and/Spark-Dynamic-Context/ta-p/33038.
Hope it will help.
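Outside of Talend components, the routing idea itself can be sketched in plain Python (the field layout and file names here are assumptions for illustration, not the poster's actual schema): a single writer loop opens one output file per distinct X value on demand, so one output stage serves all 26 keys instead of 26 fixed components.

```python
import csv
import os
import tempfile

def write_by_key(records, out_dir):
    """Route each (x, raw_data) record to a per-X CSV file.

    One loop replaces 26 fixed outputs: a file is opened lazily
    the first time each X value is seen in the data.
    """
    handles = {}  # open file objects keyed by X value
    try:
        for x, raw in records:
            if x not in handles:
                handles[x] = open(
                    os.path.join(out_dir, f"{x}.csv"), "w", newline=""
                )
            csv.writer(handles[x]).writerow([x, raw])
    finally:
        for f in handles.values():
            f.close()

# Hypothetical sample data: (X value, RawDataX payload)
records = [("A", "RawDataA1"), ("B", "RawDataB1"), ("A", "RawDataA2")]
out_dir = tempfile.mkdtemp()
write_by_key(records, out_dir)  # creates A.csv and B.csv in out_dir
```

In a native Spark job, a single partitioned write gives the same per-key splitting, e.g. `df.write.partitionBy("X").csv(path)`, which creates one subdirectory per distinct X value with a single output stage.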