Using TAC (Big Data Edition), I have a job that needs to be deployed to many Job Servers in a cluster. The first few generate, deploy and run just fine, but then a job gets stuck in the Generating step and never completes. Even after stopping and restarting TAC (on Linux) and the Job Servers, it remains stuck. In the CommandLine view of TAC, the step in the running phase is ExportJobCommand. Even if I kill this job and create a duplicate, the duplicate hangs as well.
Hi Barry,

I had a similar problem about a year ago. The jobs had many context variables, and they exceeded the length of the database fields in the H2 database running TAC: http://www.talendforge.org/bugs/view.php?id=17860

I ran these ALTER TABLE statements:

alter table "executiontaskjobprm" alter column "label" VARCHAR(510);
alter table "executiontaskjobprm" alter column "originalvalue" CLOB(4147483647);
alter table "executiontaskjobprm" alter column "defaultvalue" CLOB(4147483647);
alter table "taskexecutionhistory" alter column "contextvalues" CLOB(4147483647);

You might not have the same problem I had, but it certainly sounds like similar symptoms.

Thanks,
Ben
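If it helps, you can check the current column definitions before (and after) applying those ALTER statements, to confirm whether truncated context fields are really the issue. This is only a sketch: the table and column names come from the statements above, but the H2 database URL, credentials, and whether TAC must be stopped first depend on your install, so treat those as assumptions.

```sql
-- Inspect current column types/sizes in the TAC H2 database.
-- Table and column names are taken from the ALTER statements above;
-- connect with your own TAC H2 URL and credentials (install-specific).
SELECT table_name, column_name, type_name, character_maximum_length
FROM information_schema.columns
WHERE table_name IN ('executiontaskjobprm', 'taskexecutionhistory')
  AND column_name IN ('label', 'originalvalue', 'defaultvalue', 'contextvalues');
```

If the reported lengths are smaller than your longest context values, that would point to the same symptom Ben describes.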