Seven Stars

[resolved] Records with data truncation error not treated as rejected.

Hi,
I have designed a job that reads data from a CSV file and writes it to a database table through a tMap.
In the schema defined for the CSV file, the data type of every field is String and no lengths are specified.
The schema of the database table, however, is defined with the exact field lengths.
In the tMap, I have added variables for the database fields whose data type is not VARCHAR.
I have also unchecked the "Die on error" setting in the tMap configuration, and the ErrorReject output is directed to a separate CSV file.
When I execute the job, any records that fail the tMap transformation (String to Date, int, float, or some other data type) are marked as rejected and written to the CSV output file.
However, if the string value read for a field is longer than the length defined in the database, an error message is displayed but the record is not marked as rejected.
How can I mark these records as rejected and move them to the output CSV file?
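
For reference, the tMap variable expressions are plain Java; mine are along these lines (the column names here are only examples):

    // Example tMap variable expressions (Java fragments); column names are illustrative.
    // String -> Date, using Talend's built-in TalendDate routine:
    TalendDate.parseDate("yyyy-MM-dd", row1.order_date)
    // String -> int:
    Integer.parseInt(row1.quantity)
    // String -> float:
    Float.parseFloat(row1.unit_price)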
4 REPLIES
Four Stars

Re: [resolved] Records with data truncation error not treated as rejected.

Can you show a screenshot of the tMap configuration you used?
A string value that exceeds the defined length should be caught at the input component, and from there you can use a reject link...
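
Alternatively, if you want to handle it inside tMap itself, each tMap output can carry a filter expression, which is a plain Java boolean. Assuming (just for illustration) the column is called name and the database length is 50, something like this would route the over-length rows:

    // Filter on the main output: keep only rows that fit the DB column
    row1.name == null || row1.name.length() <= 50
    // Filter on a dedicated reject output: the inverse condition
    row1.name != null && row1.name.length() > 50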
Vaibhav
Seven Stars

Re: [resolved] Records with data truncation error not treated as rejected.

I have attached screenshots of the job, the tMap mapping, and the tMap schemas of the input and output rows.
Four Stars

Re: [resolved] Records with data truncation error not treated as rejected.

The following is not a direct solution to your problem, but have you looked at the tSchemaComplianceCheck component?
https://help.talend.com/search/all?query=tSchemaComplianceCheck&content-lang=en
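
As I understand it, you give the component a reference schema with the exact lengths and tick the checks you want; rows that violate it go out on the Rejects link, together with errorCode and errorMessage columns you can write to your CSV. Very roughly, it does this kind of check per row (this is only a sketch of the idea, not the component's generated code, and the column names and lengths are made up):

    import java.util.List;
    import java.util.Map;

    // Illustrative sketch only: roughly the kind of length check a
    // schema-compliance step performs on each row.
    public class LengthCheckSketch {
        static final Map<String, Integer> MAX_LEN = Map.of("name", 50, "city", 30);

        public static void main(String[] args) {
            List<Map<String, String>> rows = List.of(
                    Map.of("name", "Alice", "city", "Pune"),
                    Map.of("name", "x".repeat(60), "city", "Mumbai"));

            for (Map<String, String> row : rows) {
                boolean compliant = true;
                for (Map.Entry<String, Integer> e : MAX_LEN.entrySet()) {
                    String value = row.get(e.getKey());
                    if (value != null && value.length() > e.getValue()) {
                        compliant = false;
                        break;
                    }
                }
                // Compliant rows continue on the main flow; the rest are rejected.
                System.out.println((compliant ? "MAIN   " : "REJECT ") + row);
            }
        }
    }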
Vaibhav
One Star

Re: [resolved] Records with data truncation error not treated as rejected.

You can do this in tFileInputDelimited.
In the Advanced settings, enable "Check each row structure against schema".
This will reject the bad data at the source itself, which saves time and memory.
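If you connect the component's reject link to a tFileOutputDelimited, the bad rows land directly in your CSV. One caveat, as far as I know: this option validates the row structure (such as the column count) against the schema, so since your CSV schema has no lengths defined, you may still need tSchemaComplianceCheck or a tMap filter to reject values that are too long for the database columns.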