Five Stars

Problem with tSalesforceOutput main|reject flow when Commit level > 1

Hello, 

I have an issue with tSalesforceOutput in Talend 6.4.1.

When using tSalesforceOutput with a commit level above 1, the first batch (successfully updated in the Salesforce org) is never logged to the output files.

Commit lvl: 200 | Records updated: 2513

 

As you can see, 199 records (the first ones to be updated) were lost in the execution, but a quick check in Salesforce shows that those records were successfully updated.

 

Could you point me in the right direction to resolve this issue, or at least tell me which version of Talend is free of this error?

Thanks.

 

Moderator

Re: Problem with tSalesforceOutput main|reject flow when Commit level > 1

Hi,

Have you tried clearing the "Extend Insert" check box to see if the rejected rows can be logged into your Excel file? Did you get NULL values?

 

Best regards

Sabrina

--
Don't forget to give kudos when a reply is helpful and click Accept the solution when you think you're good with it.
Five Stars

Re: Problem with tSalesforceOutput main|reject flow when Commit level > 1

Hi Sabrina, 

 

I've tried it: all the records are successfully updated and logged in the Excel file (no null values), but the run now takes nearly 9 minutes, which is way too long.

 

Best regards, 

Nicolas

 

Five Stars

Re: Problem with tSalesforceOutput main|reject flow when Commit level > 1

Hi, 

I've been trying to resolve this problem for 15 days, but I am still stuck at the same point. Have you found anything?

 

Thanks,

Nicolas

Five Stars

Re: Problem with tSalesforceOutput main|reject flow when Commit level > 1

Hello,

 

Try adding a tLogCatcher component to retrieve the errors.

You can write the information to a file or send it by email.

I think it will help you.

You can also add a tLogRow component between tSalesforceOutput and the error file.

 

Best regards,

 

Marine

Moderator

Re: Problem with tSalesforceOutput main|reject flow when Commit level > 1

Hi,

Please make sure you have selected the 'Cease on Error' check box in the Advanced settings of tSalesforceOutput to stop the execution of the Job when an error occurs.

Best regards

Sabrina

--
Don't forget to give kudos when a reply is helpful and click Accept the solution when you think you're good with it.
Five Stars

Re: Problem with tSalesforceOutput main|reject flow when Commit level > 1


Hi,

@MarineTiphon: I tried your method. It successfully logged all the records to the Excel file (which is a good thing), but it didn't separate the bad records from the good ones. It makes me wonder whether the bug only affects the Reject flow?

 

@xdshi: I selected the 'Cease on error' check box. It stops the Job at the first bad record, but it doesn't tell me exactly which row the error occurs on, although it does give me the error details.


This gets closer to what I expect this component to do, but I would like (if possible) the import Job to go through all of my records, then separate all the successes/errors into two distinct files.

 

Best regards 

 

Nicolas

Thirteen Stars

Re: Problem with tSalesforceOutput main|reject flow when Commit level > 1

Hi,

@ndemoulin, the following will not solve your case but may help.

 

It sounds like a bug, or at least strange behaviour, in the tSalesforceOutput component or the Salesforce API.

I've run the following tests with 253 records (plus 1 header line) from an input file, with 1 record expected to be rejected (due to a blank name).

Here are the results:

  1. "Extend Insert" checked and "Commit level 200" : 54 record into the success flow and 54 into the error flow. Always the same record in this error flow but not the expected record and always the same error message! 
    252 records inserted into Salesforce as expected.
    (200 - 1) + 54 = 253 ==> which corresponds to the number of input records.
  2. "Extend Insert" checked and "Commit level 50" : 204 record into the success flow and 50 into the error flow. Always the same record in this error flow but not the expected record and always the same error message
    (50 - 1) + 204 = 253 ==> which corresponds to the number of input records.
  3. "Extend Insert" checked and "Commit level 1" :  252 record to the success flow and 1 to the error flow (as expected).
    Records are inserted 1 by 1 so response time is very bad and API consumption is high.
  4. "Extend Insert" unchecked and "Retrieve Id" checked or not : 252 records to the success flow and 1 to the error flow!
    Records are inserted 1 by 1 so response time is very bad and API consumption is high.

My conclusions:

When "Extend Insert" option is ticked, we know records are sended to the API by batches of n records where n depends on the value of "Commit Level" parameter (refer to Salesforce documentation).

With this option, as soon as an error occurs, result for success and reject flows is wrong (except when "Commit Level" = 1) but the operation on Salesforce side is rigth.
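
To make the expected behaviour concrete, here is a minimal sketch of what correct main/reject routing could look like on top of the Salesforce SOAP API (the WSC PartnerConnection which, as far as I know, these components use under the hood). This is my own illustration, not Talend's actual code, and routeToSuccess/routeToReject are hypothetical placeholders for the two output flows. The key point: create() returns one SaveResult per record, in the same order, so the component only has to keep the batch and the results aligned.

    import com.sforce.soap.partner.PartnerConnection;
    import com.sforce.soap.partner.SaveResult;
    import com.sforce.soap.partner.sobject.SObject;
    import com.sforce.ws.ConnectionException;
    import java.util.ArrayList;
    import java.util.List;

    public class BatchedInsert {
        private final PartnerConnection conn;
        private final int commitLevel;               // the "Commit Level" parameter
        private final List<SObject> buffer = new ArrayList<>();

        public BatchedInsert(PartnerConnection conn, int commitLevel) {
            this.conn = conn;
            this.commitLevel = commitLevel;
        }

        public void write(SObject record) throws ConnectionException {
            buffer.add(record);
            if (buffer.size() >= commitLevel) flush();   // send a full batch
        }

        public void close() throws ConnectionException {
            if (!buffer.isEmpty()) flush();              // flush the last, partial batch too
        }

        private void flush() throws ConnectionException {
            SObject[] batch = buffer.toArray(new SObject[0]);
            SaveResult[] results = conn.create(batch);   // one SaveResult per record, same order
            for (int i = 0; i < results.length; i++) {
                if (results[i].isSuccess()) {
                    routeToSuccess(batch[i]);            // main flow
                } else {
                    // each record carries its own error; reusing the first error for a
                    // whole batch would produce exactly the symptom described above
                    routeToReject(batch[i], results[i].getErrors()[0].getMessage());
                }
            }
            buffer.clear();
        }

        private void routeToSuccess(SObject r) { /* write to the success file */ }
        private void routeToReject(SObject r, String error) { /* write to the reject file */ }
    }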

My question:

Where is this f**k**g bug?

My advice:

If you care about error messages, avoid options 1 and 2.

If you care about response time (with low data volume), choose option 1.

If you care about both error messages and response time, consider the Bulk option (it should be reserved for high data volume, but sometimes...).

 

Feel free to complete or correct if necessary.


TRF
Thirteen Stars

Re: Problem with tSalesforceOutput main|reject flow when Commit level > 1

@ndemoulin, does this help or not?

Please let us know, and if it does, don't forget to give kudos and to accept the solution if it solved your case.


TRF
Four Stars

Re: Problem with tSalesforceOutput main|reject flow when Commit level > 1

Hello,

 

 

I also faced this problem. What I noticed is that the header of the source file is not removed before the content is split into batches, but after. Consequently, the first batch of data sent to Salesforce holds (commit level - 1) data records. Probably an event is not triggered in this case, because neither the main nor the reject flow continues. It is more obvious if you execute the Job with a file that has fewer records than the batch size: you will see the status "Starting" on both flows.

 

Analyzing your 1st example:

"

  1. "Extend Insert" checked and "Commit level 200" : 54 record into the success flow and 54 into the error flow. Always the same record in this error flow but not the expected record and always the same error message! 
    252 records inserted into Salesforce as expected.
    (200 - 1) + 54 = 253 ==> which corresponds to the number of input records.

"

Commit level 200

Input file total number of records: 254 = 253 data records + 1 header record;

Batch 1: 200 lines - 1 header line = 199 data records (below the commit level, so these records are not transferred to the success/reject files);

Batch 2: 54 remaining data records. Probably there is special behaviour implemented for the last batch, since the total number of records will not always be a multiple of the batch size. However, this behaviour does not apply if the file has ONLY one incomplete batch.
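
If this hypothesis is right, the arithmetic of your 1st test falls out directly. A tiny sketch (my own illustration, again not Talend's actual code) of the suspected off-by-one:

    public class BatchMath {
        public static void main(String[] args) {
            int commitLevel = 200;
            int dataRecords = 253;                          // plus 1 header line in the file
            // Suspected behaviour: the header occupies one slot of the first batch,
            // so the first flush holds commitLevel - 1 real records, and their
            // results are never propagated to the success/reject flows.
            int lostInFirstBatch = commitLevel - 1;         // 199
            int reported = dataRecords - lostInFirstBatch;  // 253 - 199 = 54
            System.out.println("silently inserted: " + lostInFirstBatch);
            System.out.println("reported in the flows: " + reported);
        }
    }

This prints 199 and 54, which matches the 54 records seen in each flow in your first test.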

 

Please try to execute the job with a file that does not have a header and let me know if it works. For me it did.

 

Have fun,

Florentina

Five Stars

Re: Problem with tSalesforceOutput main|reject flow when Commit level > 1

Hello. First, I'd like to apologize for the time I took to respond; I was away from the office. I'd also like to thank you all for your responses: although they didn't resolve my issue, they did help me figure out other tests to run.

 

Debugging:

Input: 2521 records

Commit Level: 200

Cf: debugOver200.png, debugUnder200.png
With this, I tried to understand where exactly the component is bugged. I put a duplicate record into the first batch. Before row 200, no records go beyond the Salesforce component; after it, they are put into the success file the way I expected (but starting at record 200, the first 199 are lost). The errors (always the same error message, the first one encountered) simply add up to the size of the batch (200 lines).

 

debugUnder200.PNG, debugOver200.PNG

 

Execution without header and without error:

Input: 2521 records

Commit Level: 200

Cf: executionWithoutHeaderAndErrorFree.png

I tried different runs without a header, as @Florentina suggested. In this one, we see 2322 rows in the success file.

2521 - 2322 = 199 records lost.

 executionWithoutHeaderAndErrorFree.PNG

 

 

Execution with header and an error in the first batch:

Input: 2521 records

Commit Level: 200

Cf: executionWithHeaderAndErrorFirstBatch.png

With the header on and a duplicate record in the first batch, we get 200 lines in the reject file (expected: 2) and 2322 in the success file.

200 + 2322 = 2522, one more than the 2521 input records. The error messages are always the same.

 

executionWithHeaderAndErrorFirstBatch.PNG

 

Execution without header and an error in the first batch:

Input: 2521 records

Commit Level: 200

Cf: executionWithoutHeaderAndErrorFirstBatch.png

Same execution, but without a header. The result is exactly the same as before.

The log can be seen in logExecutionWithoutHeaderAndErrorFirstBatch.png

 

 executionWithoutHeaderAndErrorFirstBatch.PNG

logExecutionWithoutHeaderAndErrorFirstBatch.PNG

 

Execution without header and errors in multiple batches:

Input: 2521 records

Commit Level: 200

Cf: executionWithoutHeaderAndErrorsInSameBatchMultipleTimes.png

In this test, I put errors in multiple batches by duplicating records in different batches. That's where it gets interesting. For every batch containing flawed records, you get 200 error messages printed out. No matter how many errors a single batch contains, it will always print the first occurrence, 200 times. Also, errors aren't separated from successes. In the example I provided, you can see 400 errors + 2322 successes = 2722 records (?!?), and that is with only 2 flawed batches. I tried with more errors in more batches and went up to 1200 records in the error file, and still 2322 in the success file. Why? Absolutely no idea.

 

executionWithoutHeaderAndErrorsInSameBatchMultipleTimes.PNG

 

Execution with a small dataset:

No header, no error:

Input: 179 records

Commit Level: 200

Cf: executionWithoutHeaderWithoutErrorDatasetSmallerThanBatch.png

I tried with a smaller dataset to see if I could reproduce @Florentina's results. No successes, no errors, nothing.

 

executionWithoutHeaderWithoutErrorDatasetSmallerThanBatch.PNG

 

No header, error:

Input: 179 records

Commit Level: 200

Cf: executionWithoutHeaderWithErrorDatasetSmallerThanBatch.png,
executionWithoutHeaderWithErrorDatasetSmallerThanBatchCeaseOnError.png

Same as the previous one. I tried Cease on error to see if the Job detected the error, and it did. But without it: no successes, no errors.

 

executionWithoutHeaderWithErrorDatasetSmallerThanBatch.PNG,
executionWithoutHeaderWithErrorDatasetSmallerThanBatchCeaseOnError.PNG

 

@TRF

Your tests were really helpful, and your advice was pretty much what I came up with on my own. The problem is that we need both the logs and the response time, but we can't use Bulk because of its asynchronous behaviour.

Let me be clearer: we use these Talend Jobs in workflows inside Salesforce, meaning that we import data, then operate on it. We need a synchronous way to load the records, because we need to be sure that all of them are loaded before the operations on them start.
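
One idea I am considering for that constraint: the Bulk API is only asynchronous on the wire; the caller can block on the batch state until Salesforce is done, which makes the load synchronous from the workflow's point of view. Here is a rough sketch with the Salesforce WSC library (com.sforce.async). To be clear, SyncBulkLoad and the 5-second polling interval are my own choices, and I would still need to download and parse the batch result stream to split successes from rejects:

    import com.sforce.async.AsyncApiException;
    import com.sforce.async.BatchInfo;
    import com.sforce.async.BatchStateEnum;
    import com.sforce.async.BulkConnection;
    import com.sforce.async.ContentType;
    import com.sforce.async.JobInfo;
    import com.sforce.async.OperationEnum;
    import java.io.ByteArrayInputStream;

    public class SyncBulkLoad {
        // Sends one CSV batch through the Bulk API, then blocks until Salesforce
        // has processed it, so downstream workflow steps can start safely.
        public static void load(BulkConnection conn, String objectName, byte[] csv)
                throws AsyncApiException, InterruptedException {
            JobInfo job = new JobInfo();
            job.setObject(objectName);
            job.setOperation(OperationEnum.insert);
            job.setContentType(ContentType.CSV);
            job = conn.createJob(job);

            BatchInfo batch = conn.createBatchFromStream(job, new ByteArrayInputStream(csv));
            conn.closeJob(job.getId());                  // no more batches will be added

            // Polling the batch state is what turns the asynchronous Bulk API
            // into a synchronous load from the workflow's point of view.
            while (true) {
                BatchInfo info = conn.getBatchInfo(job.getId(), batch.getId());
                if (info.getState() == BatchStateEnum.Completed) return;
                if (info.getState() == BatchStateEnum.Failed)
                    throw new IllegalStateException("Bulk batch failed: " + info.getStateMessage());
                Thread.sleep(5000);                      // wait before checking again
            }
        }
    }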

 

Thank you for your time

 

Nico