I have a requirement from a client: we have 3 sources (txt and csv files). After a few transformations and filtrations I have to join and map these sources together. The final output file contains 1000 fields. But when I try to map these 1000 fields in tMap, the Studio hangs and mapping there becomes very difficult. Does anyone have a solution for this?
1000 fields is a lot of fields! Do you really need all of them? If so, you should be able to do this, but there are no tricks. How much RAM does your machine have? Have you tried editing the Studio .ini file to increase the RAM used by the Studio?
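Since Talend Studio is Eclipse-based, its .ini file accepts standard JVM arguments after `-vmargs`. A minimal sketch of the memory-related lines (the exact file name varies by platform, and the values below are assumptions you should adapt to your machine's RAM):

```ini
-vmargs
-Xms1024m
-Xmx8192m
-XX:MaxMetaspaceSize=512m
```

`-Xmx` is the setting that most often matters for a very wide tMap; restart the Studio after changing it.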
I have already allocated enough memory to Talend Studio and updated the .ini file, but I'm still struggling to map and provide the transformation logic in tMap. Is there any alternative method to solve this kind of issue? I don't want to use anything database-related, like a big SQL query or a stored procedure.
I need to know a bit more about this. How is the job crashing? Does it just hang or does it close? Does the tMap actually show all of the columns you want to map?
It sounds like a memory issue. How much memory does your machine have? Keep in mind that 16GB RAM is what I would recommend as a minimum for a machine running the Studio.
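As an alternative to wiring 1000 columns by hand in the tMap editor, one option is to do the keyed join in code, for example inside a tJavaRow/tJavaFlex component or a routine. This is only a sketch under assumed conditions (rows held as lists of field values keyed by the join column; all names here are hypothetical, not Talend APIs):

```java
import java.util.*;

// Sketch: join a main flow and a lookup flow on a shared key, appending the
// lookup row's fields to the matching main row. With very wide records this
// avoids dragging each column individually in the tMap editor.
public class KeyedJoin {

    // Left-join semantics: main rows without a lookup match are kept as-is.
    public static Map<String, List<String>> joinByKey(
            Map<String, List<String>> main,
            Map<String, List<String>> lookup) {
        Map<String, List<String>> out = new LinkedHashMap<>();
        for (Map.Entry<String, List<String>> e : main.entrySet()) {
            List<String> joined = new ArrayList<>(e.getValue());
            List<String> extra = lookup.get(e.getKey());
            if (extra != null) {
                joined.addAll(extra); // append lookup fields on a key match
            }
            out.put(e.getKey(), joined);
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, List<String>> main = new LinkedHashMap<>();
        main.put("1", Arrays.asList("alice", "NY"));
        Map<String, List<String>> lookup = new LinkedHashMap<>();
        lookup.put("1", Arrays.asList("premium"));
        System.out.println(joinByKey(main, lookup));
    }
}
```

The same idea extends to the filtering and transformation steps: generate or loop over the field list in code instead of maintaining 1000 expressions in the mapper UI.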