
Passing Context Variables in BigData Spark Job

Hi Team,

I have designed a Job to convert a CSV file to Parquet format, as shown in the attached screenshot (1.PNG).

In tFileInputDelimited I am using a context variable to define the file path, but I am getting the following error stating "No input paths specified".

Can't we parameterize a Big Data Spark Job? How can this be resolved? Is there any other approach?

Please help me out.
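(For reference: a Big Data Spark Job can be parameterized like any other Talend Job, and in an exported standalone build a context variable can be overridden at launch with the `--context_param` flag. The launcher script name below is a hypothetical stand-in for the exported job:)

```shell
# Hypothetical launcher name from an exported Talend build; --context_param is
# Talend's standard flag for overriding a context variable at run time.
./CsvToParquetJob/CsvToParquetJob_run.sh \
  --context_param par_dir_name="/data/in/HR Services Action Report 02.07.2016.csv"
```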

 

Error message:

par_dir_name...HR Services Action Report 02.07.2016.csv
[ERROR]: org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand - Aborting job.
java.io.IOException: No input paths specified in job
at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:201)
at org.talend.hadoop.mapred.lib.file.TDelimitedFileInputFormat.listStatus(TDelimitedFileInputFormat.java:70)
at org.talend.hadoop.mapred.lib.file.TDelimitedFileInputFormat.getSplits(TDelimitedFileInputFormat.java:96)
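(For context: the "Folder/File" field of tFileInputDelimited is a Java expression, so a context variable is referenced there as `context.<name>`. The stack trace above is what Spark reports when that expression resolves to an empty value at run time. A minimal sketch of the resolution and an early guard, in plain Java outside Talend, where the generated `context` object is simulated with a map and `par_dir_name` is an illustrative stand-in:)

```java
import java.util.HashMap;
import java.util.Map;

public class ContextPathDemo {
    // Mirrors the expression typed into the component's "Folder/File" field:
    // context.par_dir_name. Throws early if the variable resolved empty,
    // which is the situation Spark otherwise reports as
    // "No input paths specified in job".
    static String resolveInputPath(Map<String, String> context) {
        String path = context.get("par_dir_name");
        if (path == null || path.isEmpty()) {
            throw new IllegalStateException("context variable par_dir_name is not set");
        }
        return path;
    }

    public static void main(String[] args) {
        // Stand-in for Talend's generated context object; in a real job,
        // context.par_dir_name is a typed field loaded from the context group.
        Map<String, String> context = new HashMap<>();
        context.put("par_dir_name", "/data/in/HR Services Action Report 02.07.2016.csv");
        System.out.println("Input path: " + resolveInputPath(context));
    }
}
```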


Thanks

1 ACCEPTED SOLUTION

Moderator

Re: Passing Context Variables in BigData Spark Job

Hello,

Can you successfully execute your Spark job when you use the file path directly in the tFileInputDelimited component, without context variables?

Best regards

Sabrina
