I'm not able to read ORC files using the HDFS components.
Can someone please help me understand which components should be used to read
ORC, Avro, and Parquet files for processing?
A sample Hive command to read an ORC file from HDFS -
Read ORC file data:
hive --orcfiledump -d <HDFS path to the file> > orcFileDump.txt
Now, to run a Hive command or script through Talend, you need to use the tSystem component. A local installation of Hive is required on the machine where you run your Talend jobs, because tSystem will invoke the Hive CLI, which in turn requires Hive to be installed there.
Thanks for the reply @iamabhishek.
The second part of your answer isn't clear, can you please elaborate more.
Btw can't i use Hiveinput component to read orc data?
@Lucifer_18 What I meant was: to run the Hive command you have to use the tSystem component, and you need to make sure Hive is installed on the server where you run your jobs from, as the Hive commands need the Hive CLI to be present.
No, we can't use tHiveInput to read an ORC file directly - tHiveInput is the dedicated component for the Hive database (the Hive data warehouse system). It executes a given HiveQL query in order to extract data from Hive. However, if an external table has been created pointing to the ORC file location, then you can use tHiveInput to query that table.
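For example, a minimal sketch of such an external table - the table name, columns, and LOCATION path here are hypothetical and would need to match your actual ORC data:

```sql
-- Hypothetical external table over an ORC directory in HDFS;
-- Hive reads the files in place, no data is copied.
CREATE EXTERNAL TABLE my_orc_table (
  id INT,
  name STRING
)
STORED AS ORC
LOCATION '/user/hadoop/orc_data/';

-- tHiveInput could then be configured with a query such as:
SELECT id, name FROM my_orc_table;
```

The column types must match the schema the ORC files were written with; `hive --orcfiledump` on one of the files is a handy way to check that schema first.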