I am running a Spark job on a Hadoop cluster.
The job contains tRowGenerator, tJavaRow, and tFileOutputDelimited components.
I want to write all the rows to the delimited file and also print the values of all fields of each row from tJavaRow.
I want to see those printed values in the log of the Spark application (job).
Is this possible in a Big Data Spark job using tJavaRow?
So far I am not able to get anything to print.
PS: I know this is possible in standard jobs.
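For reference, here is a minimal standalone sketch of the kind of per-row logic I mean. In Talend, `input_row` and `output_row` are generated from the component schema; the `Row` class and the column names (`id`, `name`) below are hypothetical stand-ins. Note that in a Spark job, `System.out.println` output from the executors ends up in the executor stdout logs rather than the Studio console.

```java
// Standalone sketch of tJavaRow-style per-row logic.
// Row is a stand-in for the schema-generated input_row/output_row;
// the columns (id, name) are hypothetical.
public class TJavaRowSketch {
    static class Row {
        int id;
        String name;
    }

    // Equivalent of the tJavaRow "Code" box: copy fields and print them.
    static Row process(Row input_row) {
        Row output_row = new Row();
        output_row.id = input_row.id;
        output_row.name = input_row.name;
        // In a Spark job this goes to the executor's stdout log,
        // not the local console.
        System.out.println("id=" + input_row.id + " name=" + input_row.name);
        return output_row;
    }

    public static void main(String[] args) {
        Row r = new Row();
        r.id = 1;
        r.name = "alice";
        process(r);
    }
}
```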
I replied to your other thread, which asks the same question in a little more detail. Let me know if that answer helps: