I know this thread is a bit old, but I wanted to give an update on the same issue we were facing and share our solution. We spent days researching and trying many different things, such as adding BT/ET statements, changing JDBC parameters on our Teradata connection, and changing auto-commit settings. All with no luck!
Someone on my team opened a ticket in the Talend Bug Tracker and we were told that this is not a Talend bug, but rather a limitation with the Teradata JDBC driver (see ticket TDI-40898).
Here are our findings:
We have a requirement to execute a large number of queries consecutively in our jobs, so breaking our queries into separate tTeradataRow components to run one by one was very cumbersome and would not work for us.
I have therefore gotten creative and coded my own solution to this issue, which I hope others can copy and benefit from, as I found this very frustrating!
Here is how my solution works:
Query File: Our set of SQL queries is stored in a text file in a shared folder. Every query in the file ends with a semicolon, except for the last one. I read in the file using a tFileInputRaw component, which stores the entire contents of the file as one long string in the job. By default, that string is available as row1.content.
Parse Queries: Next, I added a tJavaRow component with some custom Java code that splits the string into individual queries. The comments in the code explain what each part does.
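For readers who want a starting point, here is a simplified, standalone sketch of that kind of splitting logic (not the exact tJavaRow code; it assumes semicolons appear only as statement terminators, never inside string literals or comments):

```java
import java.util.ArrayList;
import java.util.List;

public class QuerySplitter {
    // Split the raw file contents (what tFileInputRaw exposes as
    // row1.content) into individual queries on the semicolon terminator.
    public static List<String> split(String content) {
        List<String> queries = new ArrayList<>();
        for (String part : content.split(";")) {
            String q = part.trim();
            if (!q.isEmpty()) {
                queries.add(q);   // keep non-empty statements only
            }
        }
        return queries;
    }

    public static void main(String[] args) {
        String content = "SELECT 1;\nDELETE FROM stage_t;\n"
                + "INSERT INTO t SELECT * FROM stage_t";
        for (String q : QuerySplitter.split(content)) {
            System.out.println(q);
        }
    }
}
```

In an actual tJavaRow you would typically stash each parsed query where later components can reach it, e.g. globalMap.put("query_" + i, q).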
Steps 3 - 4.
Loop: Loop through the variables and execute the Teradata command for each one, passing the query text for the current iteration into the tTeradataRow component.
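Outside of Talend, the loop step amounts to executing each parsed query on its own statement, one per iteration. A minimal sketch (the executor callback stands in for the tTeradataRow; names and structure are illustrative, not the job's actual code):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class QueryLoop {
    // Run each query, one at a time, through the supplied executor --
    // mirroring one tTeradataRow execution per loop iteration.
    // Returns the number of queries executed.
    public static int runAll(List<String> queries, Consumer<String> execute) {
        int count = 0;
        for (String query : queries) {
            execute.accept(query);   // one statement per round trip
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        // With a real JDBC connection the executor would be something like:
        //   q -> { try (java.sql.Statement s = conn.createStatement()) {
        //              s.execute(q);
        //          } catch (java.sql.SQLException e) { throw new RuntimeException(e); } }
        List<String> executed = new ArrayList<>();
        int n = runAll(List.of("DELETE FROM stage_t",
                               "INSERT INTO t SELECT * FROM stage_t"),
                       executed::add);
        System.out.println(n + " queries executed");
    }
}
```

Running queries individually like this is exactly why the approach sidesteps the multi-statement limitation: the driver only ever sees one statement at a time.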
And there you have it!
This has proven to work very well for us. It is a shame that I had to jump through hoops and take matters into my own hands to custom-code this basic functionality. Even though we were told this is a Teradata JDBC driver limitation and not a Talend bug, I don't see why Talend couldn't handle it behind the scenes the same way I have above.
Hope someone else finds this helpful!