We have a use case where we need to read AWS S3 file contents and pass them to context variables. In a Spark job, I configured the S3 connection with tS3Configuration and tried to read the S3 file with tFileInputDelimited, but the job fails with an "invalid Host name in S3 URI" error.
Is there a better way to read S3 data? Most of the Talend S3 components handle file operations (move/copy) rather than reading the data inside an S3 file. Please suggest.
The workflow tS3Configuration-->tFileInputDelimited-->output is the correct flow. Regarding your "invalid Host name in S3 URI" error, could you please also post screenshots of your job settings to the forum? That will help us address your issue.
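In the meantime, one thing worth checking: this error often appears when the bucket name is not a valid DNS hostname (for example, it contains underscores, uppercase letters, or consecutive dots), because the S3 client builds a virtual-hosted-style URL from the bucket name. A minimal sketch of such a check, using hypothetical bucket names (this is a rough subset of the S3 naming rules, not Talend's actual validation):

```python
import re

# Rough subset of S3 bucket naming rules: 3-63 characters, lowercase
# letters, digits, hyphens, and dots; must start and end with a letter
# or digit. Underscores and uppercase letters are not allowed.
BUCKET_RE = re.compile(r"^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$")

def is_valid_bucket_name(name: str) -> bool:
    """Return True if `name` looks like a DNS-compatible S3 bucket name."""
    if not BUCKET_RE.match(name):
        return False
    # Consecutive dots pass the regex but are rejected by S3.
    if ".." in name:
        return False
    return True

# Hypothetical examples:
print(is_valid_bucket_name("my-data-bucket"))  # valid
print(is_valid_bucket_name("My_Bucket"))       # invalid: uppercase and underscore
print(is_valid_bucket_name("bucket..name"))    # invalid: consecutive dots
```

If your bucket name fails a check like this, renaming the bucket (or using a path-style endpoint where your setup supports it) may resolve the hostname error.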