I am trying to consume Kafka messages in a Big Data Streaming job, and I want to pass the connection and keystore details from a file, as we do in a Standard job. However, there is no tContextLoad component available, and we also can't run a subjob inside a Big Data Streaming job. How can this be achieved?
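Until a tContextLoad equivalent exists for streaming jobs, one common workaround is to read the key=value pairs yourself with plain Java (for example, in a tJava component or a custom routine) using `java.util.Properties`. The sketch below assumes a hypothetical `context.properties` file; the file path, keys, and values are illustrative only, not from any real job:

```java
import java.io.FileReader;
import java.io.FileWriter;
import java.util.Properties;

public class ContextLoader {

    // Load key=value pairs from a file, similar to what tContextLoad
    // does in a Standard job. The path is supplied by the caller.
    public static Properties loadContext(String path) throws Exception {
        Properties props = new Properties();
        try (FileReader reader = new FileReader(path)) {
            props.load(reader);
        }
        return props;
    }

    public static void main(String[] args) throws Exception {
        // Create a sample context file (illustrative values only).
        try (FileWriter w = new FileWriter("context.properties")) {
            w.write("kafka.brokers=broker1:9092\n");
            w.write("keystore.path=/etc/security/client.jks\n");
        }

        Properties ctx = loadContext("context.properties");
        System.out.println(ctx.getProperty("kafka.brokers"));
        System.out.println(ctx.getProperty("keystore.path"));
    }
}
```

The loaded values can then be assigned to the job's context variables (e.g. `context.kafka_brokers = ctx.getProperty("kafka.brokers");`) before the Kafka components use them. Note that in a Spark job this code runs on the driver, so the file must be readable from wherever the driver is launched.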
There is a new-feature JIRA issue tracking this: "Create tContextLoad for Big Data Batch and Spark Streaming".