I have a file that contains 4920 lines, and among the fields there is a name that repeats itself 4920 times. I have to eliminate the redundancy with the tUniqRow component and get the value of this field with tFlowToIterate, as a variable called "NameFile". Then I will use it in my query:
["select Trends2.chrono, Trends2.name, Trends2.value, Trends2.quality from Trends2
where Trends2.name='"+(String)globalMap.get("NameFile")+"'"] with the tMSSqlInput component, in order to do a lookup and inner join.
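A minimal Java sketch of how that query string is assembled (assuming, as in the job above, that tFlowToIterate stores the value in globalMap under the key "NameFile"; the class and method names here are illustrative, not Talend API). If the key is not set, or the value carries stray whitespace, the query silently matches nothing, which is one common cause of 0 retrieved rows:

```java
import java.util.HashMap;
import java.util.Map;

public class QuerySketch {
    // globalMap stands in for Talend's global variable map; "NameFile"
    // is the key assumed to be set by tFlowToIterate earlier in the job.
    static String buildQuery(Map<String, Object> globalMap) {
        String nameFile = (String) globalMap.get("NameFile");
        if (nameFile == null || nameFile.trim().isEmpty()) {
            // An unset variable would otherwise produce a query matching no rows.
            throw new IllegalStateException(
                "NameFile is not set - tFlowToIterate must run before tMSSqlInput");
        }
        return "select Trends2.chrono, Trends2.name, Trends2.value, Trends2.quality"
             + " from Trends2 where Trends2.name='" + nameFile.trim() + "'";
    }

    public static void main(String[] args) {
        Map<String, Object> globalMap = new HashMap<>();
        globalMap.put("NameFile", "Trend_0001");
        System.out.println(buildQuery(globalMap));
    }
}
```

Checking that the variable is actually populated before tMSSqlInput runs (e.g. by printing it in a tJava step) is usually the fastest way to diagnose the "0 rows" symptom.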
Here is my job :
I would like that, before inserting the rows into OutputTrend, my query in tMSSqlInput is applied first, to compare the lines in the table with the name I get from tFlowToIterate.
My problem is that the number of rows retrieved from the table is always 0.
Thanks in advance.
1) You do not need to use tUniqRow if, as you wrote, all rows have the same value. Just read the file twice: the first time read only 1 row, the second time read all of them. It will be faster and use fewer resources.
2) Use the following sequence:
Lecture_DBF -> READ 1st row -> tFlowToIterate -> ITERATE -> read 2nd time -> all other components
In this case your variable will work.
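The "read the file twice" idea above can be sketched outside Talend like this (a hypothetical example: the semicolon delimiter and the position of the name field are assumptions, not taken from the actual files):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;
import java.io.UncheckedIOException;

public class ReadTwice {
    // Pass 1: read only the first row and pull out the name field.
    // Since every row carries the same name, one row is enough -
    // no deduplication pass over all 4920 lines is needed.
    static String firstName(Reader source, int nameFieldIndex) {
        try (BufferedReader r = new BufferedReader(source)) {
            String first = r.readLine();
            if (first == null) return null;
            return first.split(";")[nameFieldIndex].trim();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        String sample = "2023-01-01;Trend_0001;42.0;GOOD\n"
                      + "2023-01-02;Trend_0001;43.5;GOOD\n";
        // Pass 1: grab the name from the first row only.
        String name = firstName(new StringReader(sample), 1);
        System.out.println(name);
        // Pass 2 (not shown): re-read the file fully and process all rows.
    }
}
```

In Talend terms, pass 1 is the first read feeding tFlowToIterate, and pass 2 is the full read that runs inside the iterate link.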
First, thank you for your reply, as usual!
Second, what do you mean by reading the file twice?
To clarify my job: I read several files with tSystem (Lecture_DBF), then I extract the fields of the files, and after that I transform them with tMap.
So I would like to get each file name. For example:
I have 2630 files, so I want to read 2630 file names in order to use them in my select query, and after that in the inner join.
(I can retrieve the file names only with tExtractDelimitedFields or tMap, because the filename in tMap is different from the filename in tFileList.)
In addition, each file name is repeated according to the number of lines.
For example :
a file that contains 2000 lines contains the name 2000 times
The final need: instead of doing an inner join with the table and reading all the lines in the table, I want to read only the lines that match the filename. For this reason I wrote [select ... where Trends2.name='"+(String)globalMap.get("NameFile")+"'"].