I am trying to build a job where a list of table names is loaded from a CSV file, and for each row a tMap component determines which database and schema to redirect it to.
As you can see, the table names are loaded from the CSV file; however, the tMap is not performing the redirection/filtering.
So inside the tMap, I am trying to say:
If Schema equals "IBCDM" AND Has_PK_Value equals "NO", then select the row, which effectively becomes the IBCDM output coming out of the tMap.
However, the tMap component is not filtering the rows that match that condition.
I checked the data in the CSV file and the matching rows do exist. I also debugged the job with tLogRow and can see the data being loaded.
Am I doing something wrong?
PS: I also tried the following expressions:
row11.Schema.equals("IBCDM") && row11.Has_PK_Value.equals("NO") == 0, which gives a syntax error.
row11.Schema.equals("IBCDM") && row11.Has_PK_Value.equals("NO") == true, which still does not filter rows.
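For reference, a tMap filter expression is just a Java boolean expression, so the `== 0` form fails to compile because `&&` already yields a `boolean`, not an `int`, and `== true` is redundant. Here is a minimal standalone sketch of the same condition outside Talend; the method name is hypothetical, and the `trim()` calls are an assumption to guard against stray whitespace from a malformed CSV:

```java
public class TMapFilterSketch {
    // Mirrors the tMap filter: a row passes when Schema is "IBCDM"
    // and Has_PK_Value is "NO". The null checks and trim() are defensive
    // additions, not part of the original expression.
    static boolean matchesIbcdm(String schema, String hasPkValue) {
        return schema != null && hasPkValue != null
                && schema.trim().equals("IBCDM")
                && hasPkValue.trim().equals("NO");
    }

    public static void main(String[] args) {
        System.out.println(matchesIbcdm("IBCDM", "NO"));   // true
        System.out.println(matchesIbcdm(" IBCDM ", "NO")); // true: whitespace trimmed
        System.out.println(matchesIbcdm("IBCDM", "no"));   // false: equals() is case-sensitive
    }
}
```

Note that `String.equals` is case-sensitive, so "no" in the data would never match "NO" in the expression.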
Set "Catch output reject" to false. There is no point to this setting when you only have one output; it only matters when you have more than one output, and all it does is catch the records that were not picked up by the other outputs.
Edit: I see @TRF posted this.
My other guess would be that your expression....
row11.Schema.equals("IBCDM") && row11.Has_PK_Value.equals("NO") == true
...is failing because of case sensitivity of your data. What does "Has_PK_Value" hold? What does "Schema" hold?
Thanks guys, it looks like the format of my CSV got corrupted somehow, so it was not loading the data correctly into the schema.
I corrected the data and re-ran the job; now the whole file loads and the filter works in the tMap.
So one question: if I want to add additional outputs to the tMap based on different conditions, will I need to set "Catch output reject" to TRUE for every output?
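Building on the accepted answer above, the semantics can be sketched outside Talend: each regular output keeps only rows matching its own filter, while a single reject-catching output receives whatever no other output took. This is an illustrative plain-Java sketch of that routing, not Talend API code; the row layout and names are assumptions:

```java
import java.util.ArrayList;
import java.util.List;

public class OutputRejectSketch {
    public static void main(String[] args) {
        // Each row: { Schema, Has_PK_Value }
        List<String[]> rows = List.of(
            new String[]{"IBCDM", "NO"},
            new String[]{"IBCDM", "YES"},
            new String[]{"OTHER", "NO"});

        List<String[]> ibcdmOut = new ArrayList<>(); // output with its own filter
        List<String[]> rejects = new ArrayList<>();  // output with "Catch output reject" = true

        for (String[] row : rows) {
            if (row[0].equals("IBCDM") && row[1].equals("NO")) {
                ibcdmOut.add(row);  // matched this output's filter
            } else {
                rejects.add(row);   // not taken by any filtered output
            }
        }
        System.out.println(ibcdmOut.size() + " matched, " + rejects.size() + " rejected");
        // prints "1 matched, 2 rejected"
    }
}
```

With more filtered outputs, each would get its own condition in the loop, and only the one reject-catching output would collect the leftovers.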