I have a similar scenario where I need to write data into separate files based on the value of a column; for example, my data contains 2015 and 2016 as column values.
I am able to achieve this with a Standard job, but I am having trouble doing the same with a Big Data Batch job (Spark).
Could you suggest how to do this, or any other optimized approach using a Spark batch job?
If you just want to split the output into several files based on row count, use tFileOutputDelimited.
Go to the Advanced settings, tick "Split output in several files", and specify the number of rows you need in each output file.
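For splitting by a column value in the Spark job itself, if you can run Spark code directly, the DataFrame writer's partitionBy does exactly this. Below is a minimal sketch; the paths and the column name year are placeholders for illustration, so adjust them to your schema:

```scala
import org.apache.spark.sql.SparkSession

object SplitByYear {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("SplitByYear")
      .getOrCreate()

    // Hypothetical input path and schema -- adjust to your data.
    val df = spark.read
      .option("header", "true")
      .csv("/data/input.csv")

    // partitionBy writes one subdirectory per distinct value of the
    // column, e.g. year=2015/ and year=2016/, each containing only
    // that year's rows.
    df.write
      .partitionBy("year")
      .option("header", "true")
      .mode("overwrite")
      .csv("/data/output")

    spark.stop()
  }
}
```

If you also want the row-count style of splitting within Spark, the writer option maxRecordsPerFile (available since Spark 2.2) caps the number of rows written to each output file, e.g. .option("maxRecordsPerFile", 10000).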