Dynamic schema for multiple positional file inputs


Hello!
I have multiple (13, to be exact) positional file inputs, and they all have different schemas: the column counts and field lengths vary, so the positional patterns differ.
Is it possible to create a loop in Integration Studio (Enterprise DQ Professional) to process each of the files, dynamically/automatically change the schema to match the file, and write the contents to an Oracle DB table?
Maybe it is doable via the tSetDynamicSchema component? My goal is to create a job that doesn't need a separate tFileInputPositional/DB write component for each file.
So far I have dabbled with the tFileList component, which locates the files, iterates through them, and outputs each to a temporary table named after the filename (picked from globalMap).
I also have the patterns for each of the files in globalMap, but I haven't been able to create a check that matches a positional file pattern to a filename (tFileList doesn't connect to tJavaRow).
Basically what I need is a dynamic file input that iterates through every file (.txt) in a folder, creates a dynamic schema based on the file content, and writes the content to Oracle database tables.
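Since the patterns are already in globalMap, the filename-to-pattern check could be done in a tJava step inside the tFileList iterate loop rather than through a row connection. A minimal plain-Java sketch of that lookup, where the `PATTERNS` map stands in for globalMap and the prefixes and pattern strings are purely illustrative assumptions:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class SchemaLookup {
    // Hypothetical mapping: filename prefix -> positional pattern
    // (the comma-separated field lengths a positional input would expect).
    // In a real job these entries would come from globalMap.
    static final Map<String, String> PATTERNS = new LinkedHashMap<>();
    static {
        PATTERNS.put("customers", "10,30,30,8");
        PATTERNS.put("orders",    "10,10,12,5,20");
    }

    // Returns the pattern whose key the filename starts with, or null
    // if no known prefix matches.
    static String patternFor(String fileName) {
        for (Map.Entry<String, String> e : PATTERNS.entrySet()) {
            if (fileName.startsWith(e.getKey())) {
                return e.getValue();
            }
        }
        return null;
    }

    public static void main(String[] args) {
        System.out.println(patternFor("customers_20240101.txt")); // 10,30,30,8
        System.out.println(patternFor("unknown.txt"));            // null
    }
}
```

In a Talend tJava the current filename would typically be fetched from the iterate loop's globalMap entry and the resulting pattern stored back into globalMap for a downstream component to use.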

Re: Dynamic schema for multiple positional file inputs

I may be misunderstanding you,
but you seem to have 13 files with different schemas,
which implies that each file has its own schema,
and that you do *not* have one file containing two or more schemas.
Could you please clarify that?
thanks,

Re: Dynamic schema for multiple positional file inputs

Hi,
Yes, I have 13 different positional files with 13 different schemas. Each file has its own.
I am trying to find a reusable, more dynamic solution, compared to having 13 different input/mapping/output components with their schemas in metadata.
Br,
jm

Re: Dynamic schema for multiple positional file inputs

You would need to load these files with something like tFileInputFullRow (reading each full row as a single field).
If you have a header line on these files, then use that to decide which schema applies, and parse the rows accordingly.
I am just thinking aloud - but it is probably doable.
In a similar situation we have used file headers to define the 'file type' and then called a 'child' job to process each one.
The idea you had seems tempting at first, but you will probably find that each file has its own quirks, and these are best dealt with in a separate process, i.e. a separate job.
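The header-based dispatch above could be sketched as follows. This is a plain-Java illustration, not Talend component code; the header convention (a type code before the first `;`) is an assumption, and in a real job the returned token would drive a tRunJob call to the matching child job:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;

public class HeaderDispatch {
    // Reads the first line of a file's content and returns a 'file type'
    // token the parent job could use to pick the right child job.
    // Assumed convention: the type code precedes the first ';' in the header.
    static String fileType(BufferedReader reader) throws IOException {
        String header = reader.readLine();
        if (header == null) {
            return "EMPTY"; // no header line at all
        }
        int sep = header.indexOf(';');
        return sep >= 0 ? header.substring(0, sep) : header;
    }

    public static void main(String[] args) throws IOException {
        // Sample content standing in for one of the positional files.
        String sample = "ORDERS;2024-01-01\n0000000001SOME FIXED WIDTH DATA";
        System.out.println(fileType(new BufferedReader(new StringReader(sample)))); // ORDERS
    }
}
```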
regards