I have a requirement to validate an incoming message against a schema, and one of the columns is a decimal element.
My design is tFileInputDelimited -> tUniqRow -> tSchemaComplianceCheck -> tMap -> tOracleOutput.
Initially, I defined my schema in tFileInputDelimited and set the decimal column as Float with a length and precision, then tried to validate all the columns in tSchemaComplianceCheck. It does not validate the decimal: my precision is 2, but when I send data with 10 digits after the decimal point, it still passes successfully.
Then I tried to validate the data in tFileInputDelimited by defining the column as BigDecimal, but even that didn't work. However many digits I send after the decimal point, the row is treated as valid.
I also played with options like "Advanced separator for number" and "Double data format", but nothing validates the precision.
Could you share your thoughts on this?
Format (length and precision) doesn't matter for content validation.
However, you should use a tFilterRow component to control which values are valid and which aren't.
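One way to express such a filter is a pattern check in tFilterRow's advanced mode. Below is a minimal, self-contained sketch of that idea; the column name `amount`, the limit of 7 integer digits, and the limit of 2 decimal digits are assumptions for illustration, not part of the original thread.

```java
public class AmountFilter {
    // Regex mirroring a possible tFilterRow advanced-mode condition:
    // an optional sign, up to 7 integer digits, and optionally a
    // decimal point followed by at most 2 digits.
    static final String PATTERN = "-?\\d{1,7}(\\.\\d{1,2})?";

    // Returns true when the raw string respects the assumed format.
    static boolean isValid(String amount) {
        return amount.matches(PATTERN);
    }

    public static void main(String[] args) {
        System.out.println(isValid("9999999.99"));       // true
        System.out.println(isValid("55.182048495449"));  // false: too many decimals
    }
}
```

In a job, the condition in tFilterRow's advanced mode would be the single expression `input_row.amount.matches("-?\\d{1,7}(\\.\\d{1,2})?")`, assuming the column arrives as a String before conversion.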
Excuse me, but I don't understand what tFilterRow would do here. Use the Advanced option, parse the incoming value as a double with a given precision, and then route accordingly?
So tSchemaComplianceCheck does not validate the length for decimals? Because then I will have to add a tFilterRow just for this decimal column.
Yes, this is what I want to do.
Do you suggest doing this in tFilterRow's advanced mode with Java code? Because if, for example, I receive a decimal like 55.182048495449 and I only validate that it is less than 9999999.99, it would pass.
I don't see an option in normal mode to count the number of digits after the decimal point and validate that count against the allowed precision. Just wanted to check whether I'm missing something; otherwise I'll write Java code in advanced mode.