Hi, I am new to Talend. I have created a job that transfers input file data into a MySQL database, but the transfer rate is very low: it is showing 2.14 rows/s. I have 3 lakh (300,000) records in my input file. I have changed the tMap temp data directory to a disk location and also increased the JVM max heap in the Run tab to 2048 MB, but the issue remains. Can anyone suggest how to improve the performance of my job? Also, can I trace a log of my job to see which component is the bottleneck? If yes, please share the details.
Does the job run faster without the tMysqlOutput? If so, see if increasing the "Number of rows per insert" or "Commit every" values in the tMysqlOutput advanced settings improves throughput. If the database is not the culprit, you'll need to isolate the problem component and see if there are any settings changes or alternate components that might improve performance.
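To see why those two settings matter, here is a rough sketch (plain Python, not Talend code) of how "Number of rows per insert" and "Commit every" reduce the number of database round trips for a 300,000-row load; the function and numbers are illustrative assumptions, not measurements.

```python
def round_trips(total_rows, rows_per_insert, commit_every):
    """Rough estimate of network round trips for a bulk load:
    one INSERT statement per batch plus one COMMIT per interval."""
    inserts = -(-total_rows // rows_per_insert)   # ceiling division
    commits = -(-total_rows // commit_every)
    return inserts + commits

# One row per INSERT, committing every row: 600,000 round trips.
print(round_trips(300_000, 1, 1))             # 600000
# 10,000 rows per INSERT, commit every 10,000 rows: only 60 round trips.
print(round_trips(300_000, 10_000, 10_000))   # 60
```

Real throughput also depends on index maintenance, network latency, and server-side settings, but cutting round trips is usually the first big win.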
Hi cterenzi, thank you for the reply. Yes, I have set both "Number of rows per insert" and "Commit every" to 10,000, and performance increased to 170 rows/s. If I remove the tMysqlOutput component and insert into an Excel file instead, I get 2,000 rows/s. Can you please tell me how to isolate the problem component? Regards, Rekha
Hi, when I kept "Number of rows per insert" at 10,000 and ran the job, it was fine at the start, but then records stopped inserting into the database: the row count stays constant at 76,200. I see the same issue when running on the server.
- Have you tried doing a manual insert from the machine where your job is executing?
- Have you tried executing this job directly on the database server?
- Have you tried replacing tMysqlOutput with tLogRow?
Have you cleared the DB configuration, or is that how it is? Also, you have set the component to truncate your table. You said in a previous post that the component is not inserting, since a constant number of records remains in the table. If you truncate on each run and test against the same data, you will always see the same number of records in the table after the job finishes.
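To illustrate the point above, here is a tiny sketch (plain Python, not Talend) of why a truncate-before-insert job always ends with the same row count when run repeatedly against the same input; the function names are made up for illustration.

```python
table = []

def run_job(input_rows, truncate=True):
    """Mimics a tMysqlOutput configured with a truncate-table action."""
    global table
    if truncate:
        table = []            # truncate wipes the previous run's rows
    table.extend(input_rows)
    return len(table)

rows = list(range(76_200))            # same input data each run
print(run_job(rows))                  # 76200
print(run_job(rows))                  # still 76200: looks like nothing inserted
print(run_job(rows, truncate=False))  # 152400 once truncation is disabled
```

So a constant count does not prove the inserts are failing; it may just mean each run replaces the previous one.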
Hi rhall, yes, I am testing on the same input data. I tried batch processing with 80,000 rows per batch and commit every 20,000, and I get some X records. If I disable extended insert mode, I get more records than before (X+). As speed increases, more records go missing. One more clarification I need: if I use batch mode, should I use auto-commit, explicit commit, or is either okay?
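One common cause of "missing" rows in batched loads is a final partial batch that is never flushed or committed before the connection closes. This is a hedged sketch of that failure mode (plain Python, not Talend internals; `rows_persisted` is a hypothetical model, not a real API):

```python
def rows_persisted(total_rows, commit_every, final_commit):
    """Model: rows survive only up to the last commit boundary,
    unless a final commit flushes the trailing partial batch."""
    if final_commit:
        return total_rows
    return (total_rows // commit_every) * commit_every

# With commit every 20,000 and no final commit, the last
# partial batch of a 76,543-row file is silently lost.
print(rows_persisted(76_543, 20_000, final_commit=False))  # 60000
print(rows_persisted(76_543, 20_000, final_commit=True))   # 76543
```

If the shortfall you see is always a multiple of the commit interval away from the expected total, this pattern is worth investigating.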
This is very strange. Are you getting any errors when records go missing? Can you prove that they are going missing? For example, if you have rows rejected from the source files and duplicates rejected, the number of rows that end up in those reject locations plus the rows in the final DB table should add up to the same total on every run, unless you are doing some sort of dynamic filtering in the tMap component.
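The reconciliation described above can be sketched as a simple arithmetic check (the counts below are hypothetical, for illustration only): source rows should equal inserted rows plus rejected rows plus rejected duplicates, and any nonzero remainder is genuinely missing data.

```python
def reconcile(source_rows, inserted, rejected, duplicates):
    """Returns the number of unaccounted-for rows; 0 means all rows
    ended up either in the table or in a reject flow."""
    return source_rows - (inserted + rejected + duplicates)

# All 300,000 rows accounted for between the table and reject flows.
print(reconcile(300_000, 295_000, 3_000, 2_000))  # 0
# Here 1,500 rows vanished between the file and the table.
print(reconcile(300_000, 293_500, 3_000, 2_000))  # 1500
```

Running this check after each test run (with counts taken from the component statistics and a `SELECT COUNT(*)` on the table) tells you immediately whether rows are truly being lost.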