Data Mapping: XML to CSV with tHMapFile throws SAXParseException


Hello Community,
I am trying to perform a simple XML-to-CSV mapping in Data Mapping, using tHMapFile. The XML used to build the structure is very simple, as shown below.
<?xml version="1.0" encoding="UTF-8"?>
<Customers xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
                <Customer id="1">
                                <CustomerName>Griffith Paving and Sealcoatin</CustomerName>
                                <LabelState>Connecticut</LabelState>
                                <RegTime>03-11-2006</RegTime>
                                <Fresh>67852.0</Fresh>
                                <Frozen>61521.4852</Frozen>
                </Customer>
                <Customer id="2">
                                <CustomerName>Bill's Dive Shop</CustomerName>
                                <LabelState>zona</LabelState>
                                <RegTime>19-11-2004</RegTime>
                                <Fresh>88792.0</Fresh>
                                <Frozen>15434.1</Frozen>
                </Customer>
</Customers>
My CSV structure is as follows:
id,CustomerName,LabelState,RegTime,Fresh,Frozen 
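For clarity, the output I expect from the mapping (taken directly from the sample XML above, one row per Customer) is:

```
id,CustomerName,LabelState,RegTime,Fresh,Frozen
1,Griffith Paving and Sealcoatin,Connecticut,03-11-2006,67852.0,61521.4852
2,Bill's Dive Shop,zona,19-11-2004,88792.0,15434.1
```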
I am using Studio 6.2.1 to run a Big Data Batch job with Spark (in local mode).
 

I am seeing the following error. I have validated the XML with several validators and it is fine; the same file also works fine with tFileInputXML, tXMLMap, and tHMap. Any help is appreciated.
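One thing I noticed while digging through the log below: the error says `Failed to transform the record start=0 length=38`, and 38 is exactly the length of the XML declaration line `<?xml version="1.0" encoding="UTF-8"?>`. That suggests the file is being split line by line and each line is handed to the map as a separate record, and a lone XML declaration is of course not a well-formed document. A minimal sketch (plain JAXP, not Talend code) reproduces the same parser message:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.SAXParseException;
import org.xml.sax.helpers.DefaultHandler;

public class FragmentParse {
    public static void main(String[] args) throws Exception {
        // The first "record" the map would see if the input is split line
        // by line: just the XML declaration, which is 38 characters long,
        // matching "start=0 length=38" in the job log.
        String fragment = "<?xml version=\"1.0\" encoding=\"UTF-8\"?>";
        System.out.println("length=" + fragment.length());
        try {
            SAXParserFactory.newInstance().newSAXParser()
                .parse(new ByteArrayInputStream(
                           fragment.getBytes(StandardCharsets.UTF_8)),
                       new DefaultHandler());
        } catch (SAXParseException e) {
            // Prints "Premature end of file.", the same message as in the log.
            System.out.println(e.getMessage());
        }
    }
}
```

If that is really what is happening, I would guess the fix is to configure the record boundaries in tHMapFile so that the whole document (or at least each complete `<Customer>` element) reaches the map as one record, but I am not sure how.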
========================================
Starting job XMLToCSV_Spark at 14:00 25/07/2016.
connecting to socket on port 3809
connected
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in
SLF4J: Found binding in
SLF4J: See  for an explanation.
SLF4J: Actual binding is of type
: org.apache.hadoop.util.NativeCodeLoader - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
: org.apache.spark.SparkConf - In Spark 1.0 and later spark.local.dir will be overridden by the value set by the cluster manager (via SPARK_LOCAL_DIRS in mesos/standalone and LOCAL_DIRS in YARN).
: org.apache.hadoop.hdfs.shortcircuit.DomainSocketFactory - The short-circuit local reads feature cannot be used because UNIX Domain sockets are not available on Windows.
: com.oaklandsw.util.XmlUtils - XML parser exception: 
org.xml.sax.SAXParseException; Premature end of file.
at org.apache.xerces.util.ErrorHandlerWrapper.createSAXParseException(Unknown Source)
at org.apache.xerces.util.ErrorHandlerWrapper.fatalError(Unknown Source)
at org.apache.xerces.impl.XMLErrorReporter.reportError(Unknown Source)
at org.apache.xerces.impl.XMLErrorReporter.reportError(Unknown Source)
at org.apache.xerces.impl.XMLErrorReporter.reportError(Unknown Source)
at org.apache.xerces.impl.XMLScanner.reportFatalError(Unknown Source)
at org.apache.xerces.impl.XMLDocumentScannerImpl$PrologDispatcher.dispatch(Unknown Source)
at org.apache.xerces.impl.XMLDocumentFragmentScannerImpl.scanDocument(Unknown Source)
at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
at org.apache.xerces.parsers.XMLParser.parse(Unknown Source)
at org.apache.xerces.parsers.AbstractSAXParser.parse(Unknown Source)
at org.apache.xerces.jaxp.SAXParserImpl$JAXPSAXParser.parse(Unknown Source)
at com.oaklandsw.transform.runtime.MapInputReader.createReaderAndParse(MapInputReader.java:496)
at com.oaklandsw.transform.runtime.MapInputReader.parse(MapInputReader.java:350)
at com.oaklandsw.transform.runtime.AbstractAdaptorImpl.parseDocument(AbstractAdaptorImpl.java:157)
at com.oaklandsw.transform.runtime.MapExecutionContextImpl.parseDocument(MapExecutionContextImpl.java:1162)
at com.oaklandsw.transform.runtime.xquery.saxon9.Saxon9AdaptorImpl.storeInput(Saxon9AdaptorImpl.java:434)
at com.oaklandsw.transform.runtime.xquery.XQueryAdaptorImpl.storeInput(XQueryAdaptorImpl.java:259)
at com.oaklandsw.transform.runtime.StandardMapRuntimeImpl.runQuery(StandardMapRuntimeImpl.java:374)
at com.oaklandsw.transform.runtime.StandardMapRuntimeImpl.runSubclass(StandardMapRuntimeImpl.java:297)
at com.oaklandsw.transform.runtime.MapRuntimeImpl$1.run(MapRuntimeImpl.java:401)
at com.oaklandsw.transform.runtime.RuntimeEngineImpl.runSansEditor(RuntimeEngineImpl.java:1137)
at com.oaklandsw.transform.runtime.MapRuntimeImpl.runMap(MapRuntimeImpl.java:392)
at com.oaklandsw.transform.runtime.MapOrStructRuntimeImpl.run(MapOrStructRuntimeImpl.java:805)
at org.talend.transform.dataflow.runtime.MapTransformRuntime.execute(MapTransformRuntime.java:49)
at org.talend.transform.dataflow.runtime.MapTransformRuntime.run(MapTransformRuntime.java:27)
at org.talend.transform.dataflow.spark.map.BytesToTextTransformation.transformRecord(BytesToTextTransformation.java:26)
at org.talend.transform.dataflow.spark.AbstractDataTransformation.transformRecord(AbstractDataTransformation.java:60)
at org.talend.transform.dataflow.spark.AbstractDataTransformation.call(AbstractDataTransformation.java:72)
at org.talend.transform.dataflow.spark.AbstractDataTransformation.call(AbstractDataTransformation.java:31)
at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$3$1.apply(JavaRDDLike.scala:149)
at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$3$1.apply(JavaRDDLike.scala:149)
at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13$$anonfun$apply$6.apply$mcV$sp(PairRDDFunctions.scala:1197)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13$$anonfun$apply$6.apply(PairRDDFunctions.scala:1197)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13$$anonfun$apply$6.apply(PairRDDFunctions.scala:1197)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1250)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13.apply(PairRDDFunctions.scala:1205)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13.apply(PairRDDFunctions.scala:1185)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
: com.oaklandsw.transform.runtime.MapInputReader - Reader IO Reader: Map Element: <root>  <Unknown function> Rep: XML namespace: URL: <map/struct source> exception, closing files: javax.xml.transform.stream.StreamSource@2b07a17a
org.xml.sax.SAXParseException; Premature end of file.
... (stack trace identical to the one above, omitted)
: org.apache.spark.executor.Executor - Exception in task 0.0 in stage 0.0 (TID 0)
java.io.IOException: Failed to transform the record start=0 length=38
at org.talend.transform.dataflow.spark.AbstractDataTransformation.transformRecord(AbstractDataTransformation.java:62)
at org.talend.transform.dataflow.spark.AbstractDataTransformation.call(AbstractDataTransformation.java:72)
at org.talend.transform.dataflow.spark.AbstractDataTransformation.call(AbstractDataTransformation.java:31)
at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$3$1.apply(JavaRDDLike.scala:149)
at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$3$1.apply(JavaRDDLike.scala:149)
at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13$$anonfun$apply$6.apply$mcV$sp(PairRDDFunctions.scala:1197)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13$$anonfun$apply$6.apply(PairRDDFunctions.scala:1197)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13$$anonfun$apply$6.apply(PairRDDFunctions.scala:1197)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1250)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13.apply(PairRDDFunctions.scala:1205)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13.apply(PairRDDFunctions.scala:1185)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: /LOCAL_PROJECT/Maps/SimpleXmlMap.xml execution error
Overall: Fatal
1: Info - Executing map. (328) 
  Map: /LOCAL_PROJECT/Maps/SimpleXmlMap  Properties: {}
2: Fatal - An error parsing the input was detected for this reader. (217) 
  IO Reader: Map Element: <root>  <Unknown function> Rep: XML namespace: URL: <map/struct source>
  Exception: org.xml.sax.SAXParseException; Premature end of file.
... (stack trace identical to the one above, omitted)
3: Fatal - The input document failed to parse. (203) 
  Exception: org.xml.sax.SAXParseException; Premature end of file.
... (stack trace identical to the one above, omitted)

at org.talend.transform.dataflow.runtime.AbstractTransformRuntime.checkStatus(AbstractTransformRuntime.java:158)
at org.talend.transform.dataflow.runtime.MapTransformRuntime.execute(MapTransformRuntime.java:50)
at org.talend.transform.dataflow.runtime.MapTransformRuntime.run(MapTransformRuntime.java:27)
at org.talend.transform.dataflow.spark.map.BytesToTextTransformation.transformRecord(BytesToTextTransformation.java:26)
at org.talend.transform.dataflow.spark.AbstractDataTransformation.transformRecord(AbstractDataTransformation.java:60)
... 17 more
: org.apache.spark.scheduler.TaskSetManager - Lost task 0.0 in stage 0.0 (TID 0, localhost): java.io.IOException: Failed to transform the record start=0 length=38
... (same exception report and stack traces as above, repeated verbatim for the lost task, omitted)
: org.apache.spark.scheduler.TaskSetManager - Task 0 in stage 0.0 failed 1 times; aborting job
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, localhost): java.io.IOException: Failed to transform the record start=0 length=38
at org.talend.transform.dataflow.spark.AbstractDataTransformation.transformRecord(AbstractDataTransformation.java:62)
at org.talend.transform.dataflow.spark.AbstractDataTransformation.call(AbstractDataTransformation.java:72)
at org.talend.transform.dataflow.spark.AbstractDataTransformation.call(AbstractDataTransformation.java:31)
at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$3$1.apply(JavaRDDLike.scala:149)
at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$3$1.apply(JavaRDDLike.scala:149)
at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13$$anonfun$apply$6.apply$mcV$sp(PairRDDFunctions.scala:1197)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13$$anonfun$apply$6.apply(PairRDDFunctions.scala:1197)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13$$anonfun$apply$6.apply(PairRDDFunctions.scala:1197)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1250)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13.apply(PairRDDFunctions.scala:1205)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13.apply(PairRDDFunctions.scala:1185)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: /LOCAL_PROJECT/Maps/SimpleXmlMap.xml execution error
Overall: Fatal
1: Info - Executing map. (328) 
  Map: /LOCAL_PROJECT/Maps/SimpleXmlMap  Properties: {}
2: Fatal - An error parsing the input was detected for this reader. (217) 
  IO Reader: Map Element: <root>  <Unknown function> Rep: XML namespace: URL: <map/struct source>
  Exception: org.xml.sax.SAXParseException; Premature end of file.
at org.apache.xerces.util.ErrorHandlerWrapper.createSAXParseException(Unknown Source)
at org.apache.xerces.util.ErrorHandlerWrapper.fatalError(Unknown Source)
at org.apache.xerces.impl.XMLErrorReporter.reportError(Unknown Source)
at org.apache.xerces.impl.XMLErrorReporter.reportError(Unknown Source)
at org.apache.xerces.impl.XMLErrorReporter.reportError(Unknown Source)
at org.apache.xerces.impl.XMLScanner.reportFatalError(Unknown Source)
at org.apache.xerces.impl.XMLDocumentScannerImpl$PrologDispatcher.dispatch(Unknown Source)
at org.apache.xerces.impl.XMLDocumentFragmentScannerImpl.scanDocument(Unknown Source)
at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
at org.apache.xerces.parsers.XMLParser.parse(Unknown Source)
at org.apache.xerces.parsers.AbstractSAXParser.parse(Unknown Source)
at org.apache.xerces.jaxp.SAXParserImpl$JAXPSAXParser.parse(Unknown Source)
at com.oaklandsw.transform.runtime.MapInputReader.createReaderAndParse(MapInputReader.java:496)
at com.oaklandsw.transform.runtime.MapInputReader.parse(MapInputReader.java:350)
at com.oaklandsw.transform.runtime.AbstractAdaptorImpl.parseDocument(AbstractAdaptorImpl.java:157)
at com.oaklandsw.transform.runtime.MapExecutionContextImpl.parseDocument(MapExecutionContextImpl.java:1162)
at com.oaklandsw.transform.runtime.xquery.saxon9.Saxon9AdaptorImpl.storeInput(Saxon9AdaptorImpl.java:434)
at com.oaklandsw.transform.runtime.xquery.XQueryAdaptorImpl.storeInput(XQueryAdaptorImpl.java:259)
at com.oaklandsw.transform.runtime.StandardMapRuntimeImpl.runQuery(StandardMapRuntimeImpl.java:374)
at com.oaklandsw.transform.runtime.StandardMapRuntimeImpl.runSubclass(StandardMapRuntimeImpl.java:297)
at com.oaklandsw.transform.runtime.MapRuntimeImpl$1.run(MapRuntimeImpl.java:401)
at com.oaklandsw.transform.runtime.RuntimeEngineImpl.runSansEditor(RuntimeEngineImpl.java:1137)
at com.oaklandsw.transform.runtime.MapRuntimeImpl.runMap(MapRuntimeImpl.java:392)
at com.oaklandsw.transform.runtime.MapOrStructRuntimeImpl.run(MapOrStructRuntimeImpl.java:805)
at org.talend.transform.dataflow.runtime.MapTransformRuntime.execute(MapTransformRuntime.java:49)
at org.talend.transform.dataflow.runtime.MapTransformRuntime.run(MapTransformRuntime.java:27)
at org.talend.transform.dataflow.spark.map.BytesToTextTransformation.transformRecord(BytesToTextTransformation.java:26)
at org.talend.transform.dataflow.spark.AbstractDataTransformation.transformRecord(AbstractDataTransformation.java:60)
at org.talend.transform.dataflow.spark.AbstractDataTransformation.call(AbstractDataTransformation.java:72)
at org.talend.transform.dataflow.spark.AbstractDataTransformation.call(AbstractDataTransformation.java:31)
at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$3$1.apply(JavaRDDLike.scala:149)
at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$3$1.apply(JavaRDDLike.scala:149)
at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13$$anonfun$apply$6.apply$mcV$sp(PairRDDFunctions.scala:1197)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13$$anonfun$apply$6.apply(PairRDDFunctions.scala:1197)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13$$anonfun$apply$6.apply(PairRDDFunctions.scala:1197)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1250)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13.apply(PairRDDFunctions.scala:1205)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13.apply(PairRDDFunctions.scala:1185)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
3: Fatal - The input document failed to parse. (203) 
  Exception: org.xml.sax.SAXParseException; Premature end of file.
at org.apache.xerces.util.ErrorHandlerWrapper.createSAXParseException(Unknown Source)
at org.apache.xerces.util.ErrorHandlerWrapper.fatalError(Unknown Source)
at org.apache.xerces.impl.XMLErrorReporter.reportError(Unknown Source)
at org.apache.xerces.impl.XMLErrorReporter.reportError(Unknown Source)
at org.apache.xerces.impl.XMLErrorReporter.reportError(Unknown Source)
at org.apache.xerces.impl.XMLScanner.reportFatalError(Unknown Source)
at org.apache.xerces.impl.XMLDocumentScannerImpl$PrologDispatcher.dispatch(Unknown Source)
at org.apache.xerces.impl.XMLDocumentFragmentScannerImpl.scanDocument(Unknown Source)
at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
at org.apache.xerces.parsers.XMLParser.parse(Unknown Source)
at org.apache.xerces.parsers.AbstractSAXParser.parse(Unknown Source)
at org.apache.xerces.jaxp.SAXParserImpl$JAXPSAXParser.parse(Unknown Source)
at com.oaklandsw.transform.runtime.MapInputReader.createReaderAndParse(MapInputReader.java:496)
at com.oaklandsw.transform.runtime.MapInputReader.parse(MapInputReader.java:350)
at com.oaklandsw.transform.runtime.AbstractAdaptorImpl.parseDocument(AbstractAdaptorImpl.java:157)
at com.oaklandsw.transform.runtime.MapExecutionContextImpl.parseDocument(MapExecutionContextImpl.java:1162)
at com.oaklandsw.transform.runtime.xquery.saxon9.Saxon9AdaptorImpl.storeInput(Saxon9AdaptorImpl.java:434)
at com.oaklandsw.transform.runtime.xquery.XQueryAdaptorImpl.storeInput(XQueryAdaptorImpl.java:259)
at com.oaklandsw.transform.runtime.StandardMapRuntimeImpl.runQuery(StandardMapRuntimeImpl.java:374)
at com.oaklandsw.transform.runtime.StandardMapRuntimeImpl.runSubclass(StandardMapRuntimeImpl.java:297)
at com.oaklandsw.transform.runtime.MapRuntimeImpl$1.run(MapRuntimeImpl.java:401)
at com.oaklandsw.transform.runtime.RuntimeEngineImpl.runSansEditor(RuntimeEngineImpl.java:1137)
at com.oaklandsw.transform.runtime.MapRuntimeImpl.runMap(MapRuntimeImpl.java:392)
at com.oaklandsw.transform.runtime.MapOrStructRuntimeImpl.run(MapOrStructRuntimeImpl.java:805)
at org.talend.transform.dataflow.runtime.MapTransformRuntime.execute(MapTransformRuntime.java:49)
at org.talend.transform.dataflow.runtime.MapTransformRuntime.run(MapTransformRuntime.java:27)
at org.talend.transform.dataflow.spark.map.BytesToTextTransformation.transformRecord(BytesToTextTransformation.java:26)
at org.talend.transform.dataflow.spark.AbstractDataTransformation.transformRecord(AbstractDataTransformation.java:60)
at org.talend.transform.dataflow.spark.AbstractDataTransformation.call(AbstractDataTransformation.java:72)
at org.talend.transform.dataflow.spark.AbstractDataTransformation.call(AbstractDataTransformation.java:31)
at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$3$1.apply(JavaRDDLike.scala:149)
at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$3$1.apply(JavaRDDLike.scala:149)
at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13$$anonfun$apply$6.apply$mcV$sp(PairRDDFunctions.scala:1197)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13$$anonfun$apply$6.apply(PairRDDFunctions.scala:1197)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13$$anonfun$apply$6.apply(PairRDDFunctions.scala:1197)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1250)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13.apply(PairRDDFunctions.scala:1205)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13.apply(PairRDDFunctions.scala:1185)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

at org.talend.transform.dataflow.runtime.AbstractTransformRuntime.checkStatus(AbstractTransformRuntime.java:158)
at org.talend.transform.dataflow.runtime.MapTransformRuntime.execute(MapTransformRuntime.java:50)
at org.talend.transform.dataflow.runtime.MapTransformRuntime.run(MapTransformRuntime.java:27)
at org.talend.transform.dataflow.spark.map.BytesToTextTransformation.transformRecord(BytesToTextTransformation.java:26)
at org.talend.transform.dataflow.spark.AbstractDataTransformation.transformRecord(AbstractDataTransformation.java:60)
... 17 more
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1431)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1419)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1418)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1418)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:799)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1640)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1599)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1588)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:620)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1832)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1845)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1922)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1.apply$mcV$sp(PairRDDFunctions.scala:1213)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1.apply(PairRDDFunctions.scala:1156)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1.apply(PairRDDFunctions.scala:1156)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
at org.apache.spark.rdd.PairRDDFunctions.saveAsHadoopDataset(PairRDDFunctions.scala:1156)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$4.apply$mcV$sp(PairRDDFunctions.scala:1060)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$4.apply(PairRDDFunctions.scala:1026)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$4.apply(PairRDDFunctions.scala:1026)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
at org.apache.spark.rdd.PairRDDFunctions.saveAsHadoopFile(PairRDDFunctions.scala:1026)
at org.apache.spark.api.java.JavaPairRDD.saveAsHadoopFile(JavaPairRDD.scala:780)
at org.talend.transform.dataflow.api.TalendSparkMapTransform.writeOutputData(TalendSparkMapTransform.java:114)
at org.talend.transform.dataflow.api.TalendSparkMapTransform.run(TalendSparkMapTransform.java:94)
at org.talend.transform.dataflow.thmapfile.THMapFileDataFlowBuilder.build(THMapFileDataFlowBuilder.java:116)
at local_project.xmltocsv_spark_0_1.XMLToCSV_Spark.tHMapFile_1Process(XMLToCSV_Spark.java:744)
at local_project.xmltocsv_spark_0_1.XMLToCSV_Spark.run(XMLToCSV_Spark.java:1082)
at local_project.xmltocsv_spark_0_1.XMLToCSV_Spark.runJobInTOS(XMLToCSV_Spark.java:967)
at local_project.xmltocsv_spark_0_1.XMLToCSV_Spark.main(XMLToCSV_Spark.java:857)
Caused by: java.io.IOException: Failed to transform the record start=0 length=38
at org.talend.transform.dataflow.spark.AbstractDataTransformation.transformRecord(AbstractDataTransformation.java:62)
at org.talend.transform.dataflow.spark.AbstractDataTransformation.call(AbstractDataTransformation.java:72)
at org.talend.transform.dataflow.spark.AbstractDataTransformation.call(AbstractDataTransformation.java:31)
at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$3$1.apply(JavaRDDLike.scala:149)
at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$3$1.apply(JavaRDDLike.scala:149)
at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13$$anonfun$apply$6.apply$mcV$sp(PairRDDFunctions.scala:1197)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13$$anonfun$apply$6.apply(PairRDDFunctions.scala:1197)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13$$anonfun$apply$6.apply(PairRDDFunctions.scala:1197)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1250)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13.apply(PairRDDFunctions.scala:1205)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13.apply(PairRDDFunctions.scala:1185)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: /LOCAL_PROJECT/Maps/SimpleXmlMap.xml execution error
[... identical map execution error and SAXParseException "Premature end of file" cause chain as printed above ...]
... 17 more
: org.apache.spark.SparkEnv - Exception while deleting Spark temp dir: C:\tmp\spark-ee671411-722d-41ab-8878-a9868e3e2f2f\userFiles-2df22ede-d85e-4672-b813-678ebd0759ed
java.io.IOException: Failed to delete: C:\tmp\spark-ee671411-722d-41ab-8878-a9868e3e2f2f\userFiles-2df22ede-d85e-4672-b813-678ebd0759ed
at org.apache.spark.util.Utils$.deleteRecursively(Utils.scala:928)
at org.apache.spark.SparkEnv.stop(SparkEnv.scala:119)
at org.apache.spark.SparkContext$$anonfun$stop$12.apply$mcV$sp(SparkContext.scala:1756)
at org.apache.spark.util.Utils$.tryLogNonFatalError(Utils.scala:1229)
at org.apache.spark.SparkContext.stop(SparkContext.scala:1755)
at org.apache.spark.api.java.JavaSparkContext.stop(JavaSparkContext.scala:643)
at local_project.xmltocsv_spark_0_1.XMLToCSV_Spark.run(XMLToCSV_Spark.java:1090)
at local_project.xmltocsv_spark_0_1.XMLToCSV_Spark.runJobInTOS(XMLToCSV_Spark.java:967)
at local_project.xmltocsv_spark_0_1.XMLToCSV_Spark.main(XMLToCSV_Spark.java:857)
disconnected
[The same org.apache.spark.SparkException ("Job aborted due to stage failure ... Failed to transform the record start=0 length=38") and its full cause chain are printed a second time here, identical to the trace above; omitted for brevity.]
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1250)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13.apply(PairRDDFunctions.scala:1205)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13.apply(PairRDDFunctions.scala:1185)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
3: Fatal - The input document failed to parse. (203) 
  Exception: org.xml.sax.SAXParseException; Premature end of file.

at org.talend.transform.dataflow.runtime.AbstractTransformRuntime.checkStatus(AbstractTransformRuntime.java:158)
at org.talend.transform.dataflow.runtime.MapTransformRuntime.execute(MapTransformRuntime.java:50)
at org.talend.transform.dataflow.runtime.MapTransformRuntime.run(MapTransformRuntime.java:27)
at org.talend.transform.dataflow.spark.map.BytesToTextTransformation.transformRecord(BytesToTextTransformation.java:26)
at org.talend.transform.dataflow.spark.AbstractDataTransformation.transformRecord(AbstractDataTransformation.java:60)
... 17 more
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1431)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1419)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1418)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1418)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:799)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1640)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1599)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1588)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:620)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1832)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1845)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1922)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1.apply$mcV$sp(PairRDDFunctions.scala:1213)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1.apply(PairRDDFunctions.scala:1156)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1.apply(PairRDDFunctions.scala:1156)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
at org.apache.spark.rdd.PairRDDFunctions.saveAsHadoopDataset(PairRDDFunctions.scala:1156)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$4.apply$mcV$sp(PairRDDFunctions.scala:1060)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$4.apply(PairRDDFunctions.scala:1026)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$4.apply(PairRDDFunctions.scala:1026)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
at org.apache.spark.rdd.PairRDDFunctions.saveAsHadoopFile(PairRDDFunctions.scala:1026)
at org.apache.spark.api.java.JavaPairRDD.saveAsHadoopFile(JavaPairRDD.scala:780)
at org.talend.transform.dataflow.api.TalendSparkMapTransform.writeOutputData(TalendSparkMapTransform.java:114)
at org.talend.transform.dataflow.api.TalendSparkMapTransform.run(TalendSparkMapTransform.java:94)
at org.talend.transform.dataflow.thmapfile.THMapFileDataFlowBuilder.build(THMapFileDataFlowBuilder.java:116)
at local_project.xmltocsv_spark_0_1.XMLToCSV_Spark.tHMapFile_1Process(XMLToCSV_Spark.java:744)
at local_project.xmltocsv_spark_0_1.XMLToCSV_Spark.run(XMLToCSV_Spark.java:1082)
at local_project.xmltocsv_spark_0_1.XMLToCSV_Spark.runJobInTOS(XMLToCSV_Spark.java:967)
at local_project.xmltocsv_spark_0_1.XMLToCSV_Spark.main(XMLToCSV_Spark.java:857)
Caused by: java.io.IOException: Failed to transform the record start=0 length=38
at org.talend.transform.dataflow.spark.AbstractDataTransformation.transformRecord(AbstractDataTransformation.java:62)
at org.talend.transform.dataflow.spark.AbstractDataTransformation.call(AbstractDataTransformation.java:72)
at org.talend.transform.dataflow.spark.AbstractDataTransformation.call(AbstractDataTransformation.java:31)
at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$3$1.apply(JavaRDDLike.scala:149)
at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$3$1.apply(JavaRDDLike.scala:149)
at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:371)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13$$anonfun$apply$6.apply$mcV$sp(PairRDDFunctions.scala:1197)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13$$anonfun$apply$6.apply(PairRDDFunctions.scala:1197)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13$$anonfun$apply$6.apply(PairRDDFunctions.scala:1197)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1250)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13.apply(PairRDDFunctions.scala:1205)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13.apply(PairRDDFunctions.scala:1185)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: /LOCAL_PROJECT/Maps/SimpleXmlMap.xml execution error
Exception in thread "main" java.lang.RuntimeException: TalendJob: 'XMLToCSV_Spark' - Failed with exit code: 1.
at local_project.xmltocsv_spark_0_1.XMLToCSV_Spark.main(XMLToCSV_Spark.java:867)
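One detail that may matter: the log says "Failed to transform the record start=0 length=38", and the XML declaration line `<?xml version="1.0" encoding="UTF-8"?>` is exactly 38 characters. That suggests the Spark job is splitting the file line by line and handing the mapper only the first line, which is not a complete XML document. This is just a hypothesis reproduced outside Talend; the sketch below (plain Python, not Talend code) shows that parsing that first line alone fails with the same "premature end of document" condition:

```python
# Hypothetical repro (outside Talend): if the input file is split into
# line-based records, the first record is only the XML declaration.
import xml.sax

first_line = '<?xml version="1.0" encoding="UTF-8"?>'
print(len(first_line))  # 38 -- matches "start=0 length=38" in the log

try:
    # Parsing the declaration alone: no root element ever arrives.
    xml.sax.parseString(first_line.encode("utf-8"), xml.sax.ContentHandler())
except xml.sax.SAXParseException as e:
    # expat's equivalent of Xerces' "Premature end of file."
    print(e.getMessage())
```

If that is the cause, the fix would presumably be on the tHMapFile side (how the input representation/record boundaries are configured), not in the XML itself.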
Moderator

Re: Data Mapping: XML to CSV with tHMapFile throws SAXParseException

Hi,
We can't see the screenshot on our side. Could you attach it to the forum post, please? That would be great.
Since you are on a subscription product, have you already opened a ticket on the Talend Support Portal? That way, we can give you remote assistance with priority through the support cycle.
Best regards
Sabrina
Not applicable

Re: Data Mapping: XML to CSV with tHMapFile throws SAXParseException

Hi,
1- Did you assign this input file as sample data to your input structure? If yes, is the input file read correctly when you view it in your input structure?
2- How did you create your input structure? From an XSD file or from your XML file?
Eric