I'm trying to load data from 3 different Hive tables in DB1 into a target table in DB2. Both source and target are Hive.
I'm unable to load using the tHiveLoad component after doing some joins with tMap. I tried the ELT components, but they work only when my source and target are in the same DB. My target tables in the Hive DB are internal (managed) tables stored in ORC format.
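As a side note: if DB1 and DB2 are just two databases (namespaces) in the same Hive metastore, plain HiveQL can address both with fully qualified names, even though the ELT components can't. A tHiveRow component in a standard job can execute arbitrary HiveQL, so something like the sketch below might work; the table and column names here are made up for illustration:

    -- Hypothetical sketch, assuming DB1 and DB2 sit in the same Hive
    -- metastore and the target is a managed table STORED AS ORC.
    -- Table and column names are placeholders.
    INSERT INTO TABLE db2.target_table
    SELECT a.id, a.col1, b.col2, c.col3
    FROM db1.table_a a
    JOIN db1.table_b b ON a.id = b.id
    JOIN db1.table_c c ON a.id = c.id;

If the two DBs are on different Hive instances, this won't apply and the data really does have to move through the job.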
My current job design is:
I suspect this is not the optimal way to design this job. I'm developing a Big Data standard job. Can anyone suggest a better way to design it as a Big Data standard job?
For example: tOracleInput -> tMap -> tOracleOutput. Is there any way of doing the same with Hive in Talend Data Fabric 6.3?
I found that the Hive input and output components are available only in Big Data batch jobs.
The ELT components take only one connection, which means they can only operate within a single database connection.
Can you find Hive input in a Big Data standard job?
Thanks for your reply.
I mentioned my job design in my earlier post. I didn't find the tHiveOutput component in a standard job; it is only available in a batch job. So technically, in a standard job I need to push the data to HDFS first and then into Hive, using a tHDFSOutput component followed (OnComponentOk) by a tHiveLoad.
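For anyone hitting the same wall, here is a rough HiveQL sketch of what that two-step flow amounts to; the file path, delimiter, and table names are placeholders. One caveat: LOAD DATA only moves files, it doesn't convert formats, so a delimited file written by tHDFSOutput can't be loaded straight into an ORC table. The usual workaround is a TEXTFILE staging table plus an INSERT ... SELECT:

    -- Staging table matching the delimited file that tHDFSOutput wrote
    -- (placeholder columns and delimiter).
    CREATE TABLE IF NOT EXISTS db2.target_staging (
      id INT, col1 STRING, col2 STRING
    )
    ROW FORMAT DELIMITED FIELDS TERMINATED BY ';'
    STORED AS TEXTFILE;

    -- Step 1: roughly what tHiveLoad issues; this moves the HDFS file
    -- into the staging table's location.
    LOAD DATA INPATH '/user/talend/staging/out.csv' INTO TABLE db2.target_staging;

    -- Step 2: rewrite into the managed ORC target; Hive performs the
    -- ORC encoding during the insert.
    INSERT INTO TABLE db2.target_table
    SELECT id, col1, col2 FROM db2.target_staging;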