Hi, After executing a complex job in the Talend ETL tool, we got the tMap and lookup temp files below:

EQX_SDE_ORA_SalesOrderLinesFact_tMapData_Svra5G_row1_TEMP_0 - 871,658 KB (BIN format)
EQX_SDE_ORA_SalesOrderLinesFact_tMapData_Svra5G_row1_TEMP_1 - 871,650 KB (BIN format)
EQX_SDE_ORA_SalesOrderLinesFact_tMapData_Svra5G_Lookup_row18_ValuesData_0 - 12,138 KB
EQX_SDE_ORA_SalesOrderLinesFact_tMapData_Svra5G_Lookup_row18_KeysData_0 - 1,157 KB
..... plus a number of other lookup cache files.
I need clarification on these two files:

EQX_SDE_ORA_SalesOrderLinesFact_tMapData_Svra5G_row1_TEMP_0 - 871,658 KB (BIN format)
EQX_SDE_ORA_SalesOrderLinesFact_tMapData_Svra5G_row1_TEMP_1 - 871,650 KB (BIN format)

Why does this tMap generate such huge cache files? The lookup cache files are a reasonable size, but is it advisable to generate tMap cache files this large? What is the purpose of the tMap cache files, and how can we avoid them? Thanks, Srinivasan
Hi Srinivasan, This is because you have checked "Store temp data" in tMap. The data is huge and memory is limited, so tMap has to save temp data to disk for its calculations. All these temp files are deleted when the job completes. Trading disk space for better performance is acceptable. Regards, Pedro
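To picture what Pedro describes, here is a minimal sketch (illustrative only, not Talend's actual implementation) of a buffer that keeps rows in memory up to a limit and spills the overflow to a temp file on disk, which is conceptually what "Store temp data" enables:

```java
import java.io.*;
import java.nio.file.*;
import java.util.*;

// Conceptual sketch of "Store temp data": rows beyond the in-memory
// limit are serialized to a temp file instead of being held in RAM.
public class SpillBuffer {
    private final int maxRowsInMemory;              // analogous to tMap's buffer size
    private final List<String> memory = new ArrayList<>();
    private final Path spillFile;
    private BufferedWriter spill;

    public SpillBuffer(int maxRowsInMemory) throws IOException {
        this.maxRowsInMemory = maxRowsInMemory;
        this.spillFile = Files.createTempFile("tMapData_row1_TEMP_", ".bin");
    }

    public void add(String row) throws IOException {
        if (memory.size() < maxRowsInMemory) {
            memory.add(row);                        // cheap: stays in RAM
        } else {
            if (spill == null) {
                spill = Files.newBufferedWriter(spillFile);
            }
            spill.write(row);                       // expensive: goes to disk
            spill.newLine();
        }
    }

    public long spilledBytes() throws IOException {
        if (spill != null) spill.flush();
        return Files.size(spillFile);
    }

    public static void main(String[] args) throws IOException {
        SpillBuffer buf = new SpillBuffer(1000);
        for (int i = 0; i < 5000; i++) {
            buf.add("orderline-" + i);              // 4000 rows overflow to disk
        }
        System.out.println("bytes spilled to disk: " + buf.spilledBytes());
    }
}
```

With hundreds of millions of rows flowing through the main flow, this is why the `TEMP_0`/`TEMP_1` files grow so large: everything that does not fit in the heap ends up on disk.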
Hi Pedro, Thanks for the explanation. To improve performance, we have set the "Store temp data" option to true in tMap. The cache files created for the lookup tables are a reasonable size, but the multiple temp files created by tMap are very large, and writing them is too slow.
I designed the job as follows: 1. Only the required columns are taken from the source and lookup tables. 2. The "Store temp data" option is set to true in all tMap components. 3. tMap default buffer size: 2,000,000. Can I reduce the buffer size? If yes, by how much? 4. Memory setup: Xms256M, Xmx1536M. Server configuration: 4 GB RAM, Windows 32-bit.
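Since -Xmx1536M is already close to the practical heap ceiling of a 32-bit JVM, it is worth verifying the settings actually reached the job's JVM. A quick check (runnable standalone, or the two `Runtime` calls could be dropped into a tJava component) is:

```java
// Verifies the -Xms/-Xmx flags took effect by reading the JVM's own
// view of its heap limits; compare maxMemory() against -Xmx1536M.
public class HeapCheck {
    public static void main(String[] args) {
        long maxMb   = Runtime.getRuntime().maxMemory()   / (1024 * 1024);
        long totalMb = Runtime.getRuntime().totalMemory() / (1024 * 1024);
        System.out.println("max heap (MB): " + maxMb);
        System.out.println("current heap (MB): " + totalMb);
    }
}
```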
Please suggest which methods I should follow to speed up tMap's temp file creation (the disk-based calculation), or any other approach.
Hi Srinivasan, For No. 1, No. 2 and No. 4, I'm sure these methods will improve performance. For No. 3, the right value depends on the current system situation (e.g. available memory). Another suggestion: try to reduce the number of lookups and optimize your job logic. Regards, Pedro
Hi Pedro, Thanks for the valuable reply. I tried to reduce the lookup table data volume, but the temp files are still very large. (Each lookup table holds around 1 billion rows, and only 3 columns are used from each lookup table.)
Is there any compression option for the temp files? Their size keeps growing (around 1 GB each, with 6 files created in total for a single job), and creating them takes a long time. We are planning to run all the jobs concurrently (almost 20 jobs), so this temp file size issue may occur in every one of them. Please suggest how to proceed with tuning.
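I'm not aware of a built-in compression switch for tMap's temp files, but the trade-off such an option would make can be sketched with plain JDK streams: repetitive row data compresses very well, at the cost of extra CPU time per row. The row layout below is made up for illustration:

```java
import java.io.*;
import java.nio.file.*;
import java.util.zip.GZIPOutputStream;

// Illustration of the size/CPU trade-off of compressing temp data:
// the same synthetic rows are written uncompressed and gzip-compressed.
public class TempFileCompression {
    // Returns {rawBytes, gzipBytes} so the savings can be compared.
    static long[] writeBoth(int rows) throws IOException {
        Path raw = Files.createTempFile("temp_raw_", ".bin");
        Path gz  = Files.createTempFile("temp_gz_", ".bin.gz");
        try (BufferedWriter plain = Files.newBufferedWriter(raw);
             Writer zipped = new OutputStreamWriter(
                     new GZIPOutputStream(Files.newOutputStream(gz)))) {
            for (int i = 0; i < rows; i++) {
                String row = "SalesOrderLine;" + i + ";qty=1;amount=100.00\n";
                plain.write(row);
                zipped.write(row);
            }
        }
        return new long[] { Files.size(raw), Files.size(gz) };
    }

    public static void main(String[] args) throws IOException {
        long[] sizes = writeBoth(100_000);
        System.out.println("raw bytes:  " + sizes[0]);
        System.out.println("gzip bytes: " + sizes[1]);
    }
}
```

Note that when the temp files are slow to create, the bottleneck is often disk I/O, in which case compression can actually help (fewer bytes written) despite the CPU cost; with 20 concurrent jobs sharing one disk, pointing each job's temp directory at a fast, dedicated drive is usually the first thing to try.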