You have a Spark Job with two subflows linked by an OnSubjobOk trigger.
At execution time, the second tJavaRow component finds the context variable to be NULL instead of the value set in the first tJavaRow.
In a standard Job, you can update context variables in one subflow and read them in another because all the code runs in the same Java virtual machine (JVM). This is not possible in a Spark Job, where execution is distributed across several executors, each running in its own JVM.
When a function passed to a Spark operation is executed on a remote cluster node, it works on separate copies of all the variables used in the function. These variables are copied to each machine, and no updates to the variables on the remote machine are propagated back to the driver program.
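The same copy-on-distribute behavior can be demonstrated without a Spark cluster. The sketch below is an analogy only (it uses Python's standard `multiprocessing` module rather than Talend or Spark): a worker process receives its own copy of a variable, and mutating that copy never propagates back to the parent process, just as an executor's updates never reach the driver.

```python
from multiprocessing import Process

# Plays the role of a "driver-side" context variable.
counter = 0

def worker():
    # The worker process gets its own copy of the variable.
    global counter
    counter += 1  # modifies only the worker's copy

if __name__ == "__main__":
    p = Process(target=worker)
    p.start()
    p.join()
    # The parent's copy is unchanged, even though the worker
    # incremented "its" counter before exiting.
    print(counter)
```

The parent still prints `0`: the increment happened in a separate process whose memory is never merged back, which is why a context variable set in one Spark subflow is invisible to the next.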
Instead of passing data between subflows through context variables, use the tCacheOut and tCacheIn components, which are designed to share data between subflows of a Spark Job.