How to connect to ADLS Gen2 using Azure Databricks


This article shows you how to design a Talend Spark Databricks Job to interact with and connect securely to Azure Data Lake Storage (ADLS) Gen2.


Environment

  • Talend Studio 7.2.1
  • Databricks 5.4
  • ADLS Gen2 Storage




Configuring Azure


Create a service principal


  1. Create a service principal from the Azure Portal by navigating to Azure Active Directory and selecting App Registrations, then select New registration.



  2. Provide the name of the registration, then select the Accounts in this organizational directory only (Talend only - Single tenant) radio button under Supported account types. For a service principal, the Redirect URI is optional, so leave it blank. Click Register.



  3. Record the Application (client) ID information. You'll need it later.



  4. Create a key for your service principal by selecting Certificates & secrets from the menu on the left, then select New client secret.



  5. Enter a description and select an expiration. Click Add.



  6. Record the key value. Important: this is the only opportunity you'll have to capture the key value. Make sure you save this information, as you'll need it later.
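As an alternative to the portal steps above, the registration, service principal, and client secret can be created in a single Azure CLI call. This is a hedged sketch; the display name is an example, not a value from this article:

```shell
# Creates an app registration plus service principal and prints
# appId (the Application/client ID), password (the client secret),
# and tenant in one call. The password is shown only once: record it.
az ad sp create-for-rbac --name talend-adlsgen2-sp
```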


Capture the OAuth 2.0 token endpoint

  1. On the Overview menu, select Endpoints.



  2. After the Endpoints window opens, use the copy button next to OAuth 2.0 token endpoint to capture the information; you'll need it in the Databricks Job. The endpoint has the form https://login.microsoftonline.com/&lt;tenant-id&gt;/oauth2/token.



Access ADLS Gen2 storage

This section shows you how to use the Azure CLI and Azure Storage Explorer applications to verify that your service principal has permission to access the ADLS Gen2 storage.

  1. In Azure CLI, pass the Application (client) ID you captured in the Create a service principal section of this article to get the Object ID of your service principal, which you'll use to set permissions on the ADLS Gen2 storage:

    az ad sp show --id <application-client-id>

    The return output looks like this:
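For example, with a hypothetical Application ID, the Object ID can also be extracted directly with a JMESPath query; note that depending on your Azure CLI version the field is named objectId (older versions) or id (newer versions):

```shell
# Show the full service principal record (the Object ID is in the
# objectId/id field of the JSON output)
az ad sp show --id 11111111-2222-3333-4444-555555555555

# Or extract only the Object ID (use --query id on newer CLI versions)
az ad sp show --id 11111111-2222-3333-4444-555555555555 --query objectId -o tsv
```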



  2. Using the Object ID captured in Step 1, open Azure Storage Explorer and locate your ADLS Gen2 storage. You'll find the Blob Containers under your ADLS Gen2 storage; you'll use the container in your Talend Job.



  3. Right-click the container and select Manage Access. In the new window that opens, add your Object ID (captured in Step 1), then set the permissions to meet your requirements.



Creating Databricks secrets

This section shows you how to use Databricks secrets to store the credentials for the ADLS Gen2 storage, and reference them in your Jobs.

  1. Create a new secret scope.
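A secret scope can be created with the Databricks CLI. The scope name below matches the talendadlsgen2 scope referenced by the notebook code later in this article:

```shell
# Requires the Databricks CLI configured with your workspace URL and token
databricks secrets create-scope --scope talendadlsgen2
```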



  2. Add a secret to the scope by running the following command:
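The command shown in the original screenshot is not reproduced here; with the Databricks CLI it would likely be the following, which opens an editor (the Notepad window mentioned in the next step) for you to paste the secret value into. The key name matches the one the notebook code uses later in this article:

```shell
# Opens an editor in which you paste the service principal key
databricks secrets put --scope talendadlsgen2 --key adlscredentials
```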



  3. After the Notepad window opens, add and save your service principal key.



  4. Repeat the process to create secrets for the Client ID and the Endpoint.
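Assuming the key names used by the notebook code later in this article, the two remaining secrets can be created the same way, or non-interactively with --string-value; the bracketed values are placeholders for the values you captured earlier:

```shell
databricks secrets put --scope talendadlsgen2 --key adlsclientid --string-value "<application-client-id>"
databricks secrets put --scope talendadlsgen2 --key adlsendpoint --string-value "<oauth2-token-endpoint>"
```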



  5. Following the instructions in the Process data stored in Azure Data Lake Store with Databricks using Talend article, complete the steps in the Create a Cluster section to create a Databricks cluster. That article uses a Databricks 3.5 LTS cluster as an example, but the same steps apply when creating a 5.4 cluster.

  6. After the cluster is created and running, navigate to the main Azure Databricks Workspace page, then select Create a Blank Notebook.



  7. Name the Notebook, select Scala on the Language pull-down list, then select the 5.4 cluster you created in Step 5, on the Cluster pull-down list. Click Create.



  8. To leverage the Databricks secrets you created to mount the ADLS Gen2 storage in DBFS and validate that you can read from it, add the following Scala code. The fs.azure.* keys are the standard ABFS OAuth configuration settings; replace <container-name> and <storage-account> with your own values:

    val configs = Map(
      "fs.azure.account.auth.type" -> "OAuth",
      "fs.azure.account.oauth.provider.type" -> "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider",
      "fs.azure.account.oauth2.client.id" -> dbutils.secrets.get(scope = "talendadlsgen2", key = "adlsclientid"),
      "fs.azure.account.oauth2.client.secret" -> dbutils.secrets.get(scope = "talendadlsgen2", key = "adlscredentials"),
      "fs.azure.account.oauth2.client.endpoint" -> dbutils.secrets.get(scope = "talendadlsgen2", key = "adlsendpoint"))

    // Optionally, you can add a directory path to the source URI of your mount point.
    dbutils.fs.mount(
      source = "abfss://<container-name>@<storage-account>",
      mountPoint = "/mnt/adls",
      extraConfigs = configs)

    // Validate that you can read from the mount (use the reader matching your file format)
    // val df ="/mnt/adls/databrickstest2/part-00000")

    // Optionally, you can unmount the mount point from DBFS
    // dbutils.fs.unmount("/mnt/adls")

  9. Run the Notebook. Notice that the ADLS Gen2 storage is mounted in DBFS, and is accessible by your Databricks Cluster.


    Optional: consider adding the following before building your Talend Studio Job:

    • To mount and unmount the ADLS Gen2 storage from DBFS, or to verify that it is mounted in your Talend pipeline, in a DI Job you can leverage a tRESTClient component to call the Notebook using the Databricks Jobs API, as defined on the Databricks Runs submit page.

    • Leverage the Databricks Jobs API to call the Notebook you created, by adding a tJavaRow component with the following JSON request:

      row8.string = new String("{\"run_name\": \"sparkadlsgen2\", \"new_cluster\": {\"spark_version\": \""+ context.Notebook_Spark_Version + "\",\"node_type_id\": \"" + context.Node_Type + "\",\"num_workers\": " + context.databricks_Workers +"},\"notebook_task\": { \"notebook_path\":\"/Users/" + context.databricks_user + "/adlsgen2mount\"}}");



    • Connect the tRESTClient component to the tJavaRow component using a Row > Main.



    • Configure the tRESTClient component, as shown below:
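The tJavaRow request above can also be reproduced outside Talend for testing. This sketch builds the same JSON payload, with hypothetical values standing in for the Talend context variables, and shows the equivalent Runs submit call:

```shell
# Hypothetical values standing in for the Talend context variables
SPARK_VERSION="5.4.x-scala2.11"
NODE_TYPE="Standard_DS3_v2"
NUM_WORKERS=2
DATABRICKS_USER="user@example.com"

# Build the same payload the tJavaRow component constructs
cat > payload.json <<EOF
{
  "run_name": "sparkadlsgen2",
  "new_cluster": {
    "spark_version": "${SPARK_VERSION}",
    "node_type_id": "${NODE_TYPE}",
    "num_workers": ${NUM_WORKERS}
  },
  "notebook_task": {
    "notebook_path": "/Users/${DATABRICKS_USER}/adlsgen2mount"
  }
}
EOF

# Validate the payload locally before sending it
python3 -m json.tool payload.json

# Submit the run (requires your workspace URL and a personal access token):
# curl -X POST -H "Authorization: Bearer <token>" \
#      -d @payload.json https://<databricks-instance>/api/2.0/jobs/runs/submit
```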



Building a Talend Studio Job

  1. In the Repository, right-click Job designs, click Create Big Data Batch Job, then name the Job. Click Finish.



  2. Add a tFileInputDelimited component to the designer. In the Folder/File path, enter the path to the file using the mountpoint you created in DBFS. Notice that the Define a storage configuration component check box is not selected: if no storage configuration is defined, by default, the component reads from the DBFS filesystem.



  3. Click the [...] button next to Edit schema and define the schema.



  4. Add a tFileOutputDelimited component and connect it to the tFileInputDelimited using the Row > Main connection. In the Folder path, enter the path to the file using the DBFS mountpoint and select the folder in ADLS Gen2 where the output will be written. Again, don't select the Define a storage configuration component check box because, by default, it uses DBFS.



  5. Click the Run tab and select Spark Configuration, then, using the information you collected when creating the Databricks cluster, configure the connection to your Databricks cluster.

    Note: you can leave the DBFS dependencies folder blank, or, if you want the Job dependencies to be uploaded to a specific path, you can set the path. Talend 7.2.1 offers a patch that adds support for Databricks 5.4 clusters; if those dependencies are needed, you can request the patch from Talend Support.



  6. Your Job should look like this:



  7. Run the Job.

  8. Review the output and verify that you have successfully connected to ADLS Gen2 using your Databricks cluster.



  9. Open Azure Storage Explorer and verify that the folder exists and that the output is correct.




This article showed you how to use Azure and Databricks secrets to design a Talend Spark Databricks Job that securely interacts with Azure Data Lake Storage (ADLS) Gen2.

Version history
Revision #: 16 of 16
Last update: 11-04-2019 05:34 AM