We are passing both 'spark.yarn.queue' and 'mapred.job.queue.name' within the 'hive.execution.engine' parameter string.
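For context, the equivalent standalone Hive JDBC connection would look roughly like this; it's a minimal sketch, with the host, port, user, and queue name ('hive-host', 'etl_queue') as placeholders for our environment (note that 'mapreduce.job.queuename' is the current name of the deprecated 'mapred.job.queue.name'):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class HiveQueueCheck {
    public static void main(String[] args) throws Exception {
        // In a HiveServer2 JDBC URL, Hive conf properties go after '?',
        // separated by ';'. Host and queue names below are placeholders.
        String url = "jdbc:hive2://hive-host:10000/default"
                + "?hive.execution.engine=spark"
                + ";spark.yarn.queue=etl_queue"
                + ";mapreduce.job.queuename=etl_queue";
        try (Connection conn = DriverManager.getConnection(url, "user", "");
             Statement st = conn.createStatement()) {
            // Smoke test: run a trivial query, then check in the YARN
            // ResourceManager UI which queue the job actually landed in.
            st.execute("SELECT 1");
        }
    }
}
```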
All the Hive on Spark jobs land on the correct queue as expected.
However, all the MapReduce tasks triggered by that same Talend job end up in the root.default queue!
I have attached a screenshot for reference; if you look, these are TestExportMapper tasks.
It is critical for our environment that all tasks from a job land in the dedicated pool.
Need your help getting this sorted.
Thanks in advance
We need more information. Could you please create a support case on the Talend support portal? Our colleagues from the support team will help you.