I am currently trying to understand how to use Talend to ingest CSV files on Azure Data Lake (Gen2) into a Staging Delta Lake Table on Databricks.
I am new to Talend and trying to understand all the components that can be used, but to be honest I am overwhelmed by the number of options available...
As far as I could research, I think the first step would be to configure a tJDBCConfiguration component to connect to the Databricks cluster, but I don't know which driver to install or select for this to work — basically, what should be configured under the "Drivers" option. I have tried downloading the Databricks Simba 4.1 driver, but it does not seem to work.
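For reference, the JDBC URL I have been testing follows the Simba Spark driver format taken from the cluster's JDBC/ODBC tab (the host, httpPath, and token below are placeholders, not my real values):

```
jdbc:spark://adb-1234567890123456.7.azuredatabricks.net:443/default;
  transportMode=http;ssl=1;
  httpPath=sql/protocolv1/o/1234567890123456/0123-456789-abcde123;
  AuthMech=3;UID=token;PWD=<personal-access-token>
```

I am not sure whether this URL format, or the driver JAR itself, is what tJDBCConfiguration expects in its "Drivers" table.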
Second, I would like to know if anyone has already worked with Delta Lake, and how an incremental approach could be implemented... Are there any components for that, or does it have to be scripted?
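To clarify what I mean by incremental: something like a Delta Lake MERGE upsert run on Databricks (table and column names below are made up for illustration), which I assume could be triggered from Talend via a tDBRow-style component or a notebook job:

```sql
-- Upsert new/changed CSV rows from a staging view into the Delta table
MERGE INTO staging_customers AS target
USING csv_updates AS source
  ON target.customer_id = source.customer_id
WHEN MATCHED THEN
  UPDATE SET *
WHEN NOT MATCHED THEN
  INSERT *
```

Is there a native Talend component that generates this kind of statement, or is hand-written SQL the usual approach?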
Any help would be greatly appreciated.