JDBCOutput - Die on error - continues processing

One Star

I've got a flow configured...
tJDBCInput --> tMap --> tJDBCOutput -- (on component error) --> tAssert
and "die on error" set to true for tJDBCOutput. However, when a PK or FK constraint error occurs during an insert, the process continues running. The error is reported at the end, once all input rows have been processed, but if I have 1,000,000 records and the first insert fails I would like it to quit immediately. Currently the UI seems to hang while, I imagine, it continues to process the remaining 999,999 rows.
Is this possible, or am I using the wrong components?  Thanks!
Moderator

Re: JDBCOutput - Die on error - continues processing

Hi,
The 'die on error' option stops the job as soon as an error occurs.
Do you want your job to continue running until it completes?
You can implement a "Reject" row on your output component:
tJDBCInput --> tMap --> tJDBCOutput --rejects--> file output (to record the rows rejected by the DB).
Best regards
Sabrina
--
Don't forget to give kudos when a reply is helpful and click Accept the solution when you think you're good with it.
One Star

Re: JDBCOutput - Die on error - continues processing

Thanks. To use that feature I had to disable both Die on Error and the batch functionality. Since that isn't really an option for me, I'll have to stick with the Die on Error approach. I'm still not sure why it doesn't die right away on the first error, though, instead of continuing with the rest of the input query and tMap...
Five Stars

Re: JDBCOutput - Die on error - continues processing

Hello 
I came across a similar problem.
Is it possible in any way to hold the commit back in a cache until the process has been checked, so that when an error occurs the cached work is discarded, and otherwise it is applied?
That function would be great.
Is there any way to do this?
regards john s. 
Seventeen Stars

Re: JDBCOutput - Die on error - continues processing

The problem here is the batch mode. The batch mode collects the statement values locally and sends them to the database as one batch when the batch size is reached.
That is why you see thousands of records being processed even though perhaps one of the first ones is erroneous.
The database receives them all at once.
The commit size should have the same value as the batch size. In that case the commit happens for exactly one batch of records, which is usually what you want.
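A minimal sketch of this batch/commit mechanism, using Python's sqlite3 as a stand-in for the JDBC connection (the table, column names, and batch size here are made up for illustration; this is not Talend's generated code):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE target (id INTEGER PRIMARY KEY, name TEXT)")
con.commit()

BATCH_SIZE = 3                        # small on purpose; a real job might use 10,000
rows = [(i, "name%d" % i) for i in range(10)]

batch = []
for row in rows:
    batch.append(row)                 # statement values are collected locally first
    if len(batch) == BATCH_SIZE:
        con.executemany("INSERT INTO target VALUES (?, ?)", batch)  # sent at once
        con.commit()                  # commit size == batch size: one commit per batch
        batch = []
if batch:                             # flush the final, not fully filled batch
    con.executemany("INSERT INTO target VALUES (?, ?)", batch)
    con.commit()

print(con.execute("SELECT COUNT(*) FROM target").fetchone()[0])  # -> 10
```

Until the batch is flushed, the database has seen nothing, which is why an erroneous early row is only reported once its whole batch is sent.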
Five Stars

Re: JDBCOutput - Die on error - continues processing

Hello jlolling 
I'm not sure that I have understood this correctly.
Maybe an example would help me understand the whole thing.
But... do I understand it right that it is not possible to hold the commit back until the process has been checked for errors, because the batch size is always different and each batch is executed as one complete unit?
regards, fireskyer
Seventeen Stars

Re: JDBCOutput - Die on error - continues processing

Think about a source which provides 1 million rows, and imagine you have a tJDBCOutput with the batch size configured to 10,000 and the same value for the commit size.
In this case tJDBCOutput takes the first 10,000 records and builds a local batch. So far the database has received nothing.
After receiving the 10,000 records, the batch (containing all values of the 10,000 records) is sent to the database, and right after that tJDBCOutput sends a commit.
This repeats 100 times until all input records are processed.
At the end, tJDBCOutput checks whether there is still a not fully filled batch (because the number of input rows is typically not an integer multiple of the batch size ;-) and sends that remaining batch to the database as the last one, also with a commit.
What happens if records cannot be processed within the database (constraint violation, etc.)? In this case the whole affected batch is rejected by the database.
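This can be sketched as follows, with sqlite3 standing in for the database (a hedged illustration only; real databases may differ in exactly which rows of a failed batch they reject):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE target (id INTEGER PRIMARY KEY)")
con.commit()

good_batch = [(1,), (2,), (3,)]
bad_batch = [(4,), (2,), (5,)]        # (2,) violates the primary key

con.executemany("INSERT INTO target VALUES (?)", good_batch)
con.commit()                           # the first batch is committed and safe

try:
    con.executemany("INSERT INTO target VALUES (?)", bad_batch)
    con.commit()
except sqlite3.IntegrityError:
    con.rollback()                     # the whole affected batch is discarded

print(con.execute("SELECT id FROM target ORDER BY id").fetchall())  # -> [(1,), (2,), (3,)]
```

The already-committed batches survive; only the batch containing the bad row is lost.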
Five Stars

Re: JDBCOutput - Die on error - continues processing

Hello Jlolling 
It seems that the topic is a little bit off from my case (or maybe my knowledge limits my understanding of it).
My constellation is:
Oracle database --> tReplace --> tConvert --> tMap --> Salesforce instance
What I would like is to hold the process back until the error check has taken place. When no errors occur, the process will be executed. When an error occurs, no action will take place.
I'm not sure if that works, because I'm not so familiar with database mechanisms.
Sorry for that.
regards, fireskyer
Five Stars

Re: JDBCOutput - Die on error - continues processing

Hello again,
So I've read a little bit more; let me check whether I have understood this correctly.
I would like to pick up your example again: 1 million rows, batch size 10,000.
Every batch except the last one has the same input and output row counts, so I think 99 batches are already committed in the database.
On the last one, the output rows differ from the input ones, so the last batch won't take effect because the in/out row counts are not equal.
But that's not what I want. I want to roll back all of the rows.
So, if I set the batch size to 1 million rows, do I have the chance to prevent the whole commit?
That is what I want to achieve.
In short: on one error in the whole process, nothing should be committed.
Am I right, or at least on the right way?
Additionally:
I've seen a Rollback component for SAP; some rollback function for Salesforce, or some construct with which I could emulate this behaviour, would also be fine.

regards john
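The all-or-nothing behaviour asked about above can be emulated by keeping every batch in one open transaction and committing only once the whole load has succeeded. A hedged sketch, again with sqlite3 standing in for the target system (Salesforce itself has no such transaction, so there this would have to be emulated, e.g. by deleting the already-created records on failure):

```python
import sqlite3

def load_all_or_nothing(con, rows, batch_size=3):
    """Send rows in batches but commit only once, at the very end."""
    try:
        for i in range(0, len(rows), batch_size):
            con.executemany("INSERT INTO target VALUES (?)", rows[i:i + batch_size])
            # no commit here: everything stays in one open transaction
        con.commit()                   # single commit after the last batch
    except sqlite3.IntegrityError:
        con.rollback()                 # one error -> nothing is committed

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE target (id INTEGER PRIMARY KEY)")
con.commit()

load_all_or_nothing(con, [(1,), (2,), (2,), (4,)])               # duplicate id -> rollback
print(con.execute("SELECT COUNT(*) FROM target").fetchone()[0])  # -> 0
```

Note the trade-off: the transaction stays open for the whole load, which can strain the database's undo/rollback resources for very large row counts.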