I am working on a Big Data Spark requirement where I have to use tCacheOut and tCacheIn. The attached job screenshot works fine, but in one scenario, when tCacheOut has nothing to store (i.e. the filter does not let any row flow to the next component), the job throws a NullPointerException.
I know there are alternatives, such as writing the output to disk and reading it back in the next step, but I don't want to do that because disk reads and writes are always an overhead.
How can we handle the NullPointerException in this case?
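To illustrate the failure mode outside Talend, here is a minimal plain-Python sketch (not Talend-generated code, and the function and cache names are invented for illustration): a cache writer that stores nothing when the filter passes no rows, and two readers, one that blindly dereferences the cache entry (the analogue of the NullPointerException) and one that falls back to an empty flow.

```python
# Hypothetical stand-ins for the cache components; the real fix would be
# applied in the Talend job design, not in code like this.
cache = {}

def cache_out(key, rows):
    # Mimics the cache writer: only stores something when rows exist,
    # i.e. when the upstream filter let at least one row through.
    if rows:
        cache[key] = rows

def cache_in_unsafe(key):
    # Mimics the failing read: assumes the entry exists, so an empty
    # cache raises an error (Python's analogue of the NPE).
    return list(cache[key])

def cache_in_safe(key):
    # Guarded read: fall back to an empty list so downstream steps
    # simply process zero rows instead of crashing.
    return list(cache.get(key) or [])

cache_out("filtered", [])            # filter passed no rows
print(cache_in_safe("filtered"))     # [] -- empty flow, no exception
```

The point of the sketch is only that the empty-cache case needs an explicit guard somewhere between the writer and the reader, so the downstream flow receives zero rows rather than a null.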