This article explains how to use Talend Hive components in MapR Spark Batch Jobs to read from Hive MapR-DB tables. As MapR provides the ability to query MapR-DB tables through a Hive view, this article also covers how to set up Talend Jobs to read from a Hive view of a MapR-DB table.
After setting up the MapR Client, generate a MapR ticket that your Job can use to communicate with the cluster:
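For example, on the machine running the Studio, a ticket can be generated with the maprlogin CLI. The cluster name below is a placeholder; substitute the name of your own cluster:

```shell
# Authenticate with your cluster password to generate a MapR ticket.
# Replace my.cluster.com with your cluster name (a placeholder here).
maprlogin password -cluster my.cluster.com

# Verify the ticket and check its expiration time.
maprlogin print
```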
Because Talend uses the YARN-client mode, the Spark driver runs on the machine from which the Job is launched. Ensure that your Studio can reach all of the cluster nodes, and that the nodes can reach back to your Studio, as described in the Spark security documentation.
Configure the Hadoop Cluster connection in metadata in Studio.
Right-click Hadoop Cluster, then click Create Hadoop Cluster.
Select the distribution and version of your Hadoop cluster, then select Import configuration from local files. Click Next.
Ensure your system has a local copy of the hive-site.xml, mapred-site.xml, and yarn-site.xml files to import into the Hadoop metadata wizard.
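If the files are not already on your workstation, one way to retrieve them is to copy them from a cluster node over SSH. The hostname, user, and version-specific paths below are assumptions based on a typical default MapR layout; adjust them for your installation:

```shell
# Hostname, user, and MapR/Hive/Hadoop version paths are placeholders;
# adjust them to match your cluster's actual layout.
scp mapr@cluster-node1:/opt/mapr/hive/hive-2.1/conf/hive-site.xml .
scp mapr@cluster-node1:/opt/mapr/hadoop/hadoop-2.7.0/etc/hadoop/mapred-site.xml .
scp mapr@cluster-node1:/opt/mapr/hadoop/hadoop-2.7.0/etc/hadoop/yarn-site.xml .
```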
Import the cluster configuration files.
Notice that after the configuration files are imported, not all of the information on the next screen is populated, and the wizard warns that the Resource Manager needs to be specified. This is because the configuration files do not include specific hostnames for the Resource Manager and CLDB nodes. You will still need these files later in this article, because they contain properties that enable Resource Manager high availability (HA).
To fully utilize the CLDB and Resource Manager HA, complete the wizard as shown below:
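As a sketch, Resource Manager HA is typically expressed in yarn-site.xml with properties like the following; the hostnames are placeholders, and your imported file may already contain equivalent entries:

```xml
<property>
  <name>yarn.resourcemanager.ha.enabled</name>
  <value>true</value>
</property>
<property>
  <name>yarn.resourcemanager.ha.rm-ids</name>
  <value>rm1,rm2</value>
</property>
<!-- Hostnames below are placeholders for your Resource Manager nodes -->
<property>
  <name>yarn.resourcemanager.hostname.rm1</name>
  <value>rm-node1.example.com</value>
</property>
<property>
  <name>yarn.resourcemanager.hostname.rm2</name>
  <value>rm-node2.example.com</value>
</property>
```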
Once the cluster information is populated, click Check Services to ensure that Studio can connect successfully to the cluster.
Right-click Job Designs, click Create Big Data Batch Job, then give it a name.
From the Hadoop Cluster connection you created earlier, drag the HDFS connection to the canvas, then select the tHDFSConfiguration component from the component list. Notice that the component is populated right away, and in the Run tab, the Spark Configuration information is completed for you. This information tells the Job how to communicate with Spark.
Again, using the Hadoop Cluster connection you created earlier, drag the Hive connection to the canvas, then select the tHiveConfiguration component from the component list.
For each of the following libraries, add a tLibraryLoad component that references it. The Hive components use these libraries to retrieve the data from the Hive view of your MapR-DB table:
Add a tHiveInput component and configure it to read from the Hive View of your MapR-DB table.
Configure this component to output the values of the table to a tLogRow to ensure you can successfully read the table.
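Before running the Job, it can help to confirm that the view returns rows outside of Talend. The view name below is a placeholder for the name of your own Hive view:

```shell
# Query the Hive view directly from a cluster node to confirm it returns rows.
# maprdb_view is a placeholder for the name of your Hive view.
hive -e "SELECT * FROM maprdb_view LIMIT 10"
```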
The complete Job should look like this:
Run the Job to verify that you can connect to the Hive view and read the MapR-DB table data.
The same Job design will work for MapR 5.2.0 and above.
You can use MapR 6.0.1 with Talend 6.5.1 through a patch, available from Talend Support, that adds it as a supported version.