Writing from tOracleInput to tHDFSOutput Issue

I am attempting to write from an Oracle table directly to HDFS. When I run the job, it aborts after writing 7 records to HDFS with the following error:
Starting job PIG_JOB at 09:34 12/10/2013.

connecting to socket on port 3447
connected
Configuration WARN fs.default.name is deprecated. Instead, use fs.defaultFS
Exception in component tHDFSOutput_1
java.lang.NullPointerException
at demo.pig_job_0_1.PIG_JOB.tOracleInput_1Process(PIG_JOB.java:14718)
at demo.pig_job_0_1.PIG_JOB.runJobInTOS(PIG_JOB.java:15595)
at demo.pig_job_0_1.PIG_JOB.main(PIG_JOB.java:15461)
disconnected
Job PIG_JOB ended at 09:34 12/10/2013.
I have also attempted to first write to a delimited file and then load that file into HDFS, but I receive the same error. Am I missing something?
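In case it helps to show where I think the NPE could come from: my understanding is that the generated job code builds each output line from the row's fields, so a NULL column coming back from Oracle could blow up exactly like this. Here is a minimal sketch of the same write in plain Hadoop Java API terms, with a null guard added; the host, path, and column values are all made up, not taken from my job.

import java.io.OutputStream;
import java.nio.charset.StandardCharsets;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsWriteSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // fs.defaultFS is the current name of the deprecated fs.default.name
        // property that the warning in the log above refers to.
        conf.set("fs.defaultFS", "hdfs://edgenode.example.com:8020");

        try (FileSystem fs = FileSystem.get(conf);
             OutputStream out = fs.create(new Path("/user/demo/pig_job/out.csv"))) {
            // Pretend these values came from the Oracle query; NAME is NULL
            // for the second row.
            String[][] rows = { { "1", "ALPHA" }, { "2", null } };
            for (String[] row : rows) {
                // Without this guard, calling toString() on a null field
                // (which is effectively what the generated code does) throws
                // a NullPointerException like the one in the log.
                String name = (row[1] == null) ? "" : row[1];
                out.write((row[0] + ";" + name + "\n")
                        .getBytes(StandardCharsets.UTF_8));
            }
        }
    }
}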
Moderator

Re: Writing from tOracleInput to tHDFSOutput Issue

Hi,
Did you use a tHDFSConnection component in your job design? Did you link it to the rest of the job with a trigger connection (OnSubjobOk or OnComponentOk)? If not, please upload screenshots of your job to the forum so that we can address your issue more quickly.
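For example, one typical design (just a sketch, your component names may differ) is:
tHDFSConnection_1 --OnSubjobOk--> tOracleInput_1 --Main(row1)--> tHDFSOutput_1
with the "Use an existing connection" box checked in tHDFSOutput_1 so that it reuses tHDFSConnection_1.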
Best regards
Sabrina

Re: Writing from tOracleInput to tHDFSOutput Issue

I have set it up both ways: using tHDFSConnection, and also configuring the connection to Cloudera directly in tHDFSOutput.
If I limit the query with ROWNUM < 14, the job completes successfully and I can see all 13 records in HDFS. But if I raise the ROWNUM limit or attempt to extract all records, the job aborts with the error above. Any thoughts on why the volume of data would matter? I am curious whether I am just having a connection issue with my edge node.
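Before blaming the connection, I may run a quick JDBC probe outside of Talend to check whether record 14 (the first one that fails) actually has a NULL column, since that could explain the NPE. This is just a sketch; the URL, credentials, and table name are placeholders for my real ones.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.Statement;

public class NullProbe {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@dbhost.example.com:1521:ORCL", "user", "pass");
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SELECT * FROM MY_TABLE")) {
            ResultSetMetaData md = rs.getMetaData();
            int rowNum = 0;
            while (rs.next()) {
                rowNum++;
                // Report every NULL column so I can see if row 14 is the culprit.
                for (int c = 1; c <= md.getColumnCount(); c++) {
                    if (rs.getObject(c) == null) {
                        System.out.println("NULL at row " + rowNum
                                + ", column " + md.getColumnName(c));
                    }
                }
            }
        }
    }
}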
I did not set an OnSubjobOk or OnComponentOk trigger; I will look into that. For now I am going to deploy the job to Hadoop and run it with Oozie. I have had connection issues in the past and want to verify that is not the problem here.