I know I can download a specific execution log from the bottom of the Task Execution Details popup.
But sometimes I would like to tail the log directly on the Job server (whose log directory I know, of course). It is not so easy to find out which log is the right one, because we have many jobs deployed and sometimes 3-4 jobs running at the same time.
I see the "Internal job id" parameter in the "Advanced Information" tab of the Task Execution Details popup, but it seems useless because it is not logged.
Furthermore, the directory name of each job execution log does not seem to help either.
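As a stopgap, since the jobs that interest you are the ones running right now, you can simply tail whichever log file was modified most recently under the execution logs directory. A minimal sketch (the root path and the `.log` suffix are assumptions; adjust them to your own TAC setup):

```python
import os

def latest_log(log_root: str, suffix: str = ".log"):
    """Return the path of the most recently modified log file under log_root.

    Walks the whole tree, since TAC keeps one sub-directory per execution.
    """
    newest, newest_mtime = None, -1.0
    for dirpath, _dirs, files in os.walk(log_root):
        for name in files:
            if not name.endswith(suffix):
                continue
            path = os.path.join(dirpath, name)
            mtime = os.path.getmtime(path)
            if mtime > newest_mtime:
                newest, newest_mtime = path, mtime
    return newest

if __name__ == "__main__":
    # Hypothetical logs path -- use your "Archive and execution logs path".
    print(latest_log("/opt/talend/tac/executionLogs"))
```

On the Job server you could then run something like `tail -f "$(python latest_log.py)"`. It doesn't tell you *which* execution the file belongs to, but with 3-4 concurrent jobs it usually narrows things down quickly.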
You can find all log files via the TAC Configuration > Job conductor tab: see the "Archive and execution logs path" setting, which is where TAC keeps all execution log files. Are you referring to some kind of automated log?
If you know the taskId and the execution id, then you can generate the URL for it.
However, if you only have a root_pid, then you can obtain the taskId only through the TAC database.
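For the database route, the lookup could look roughly like the sketch below. The table and column names are my guesses, not the real TAC schema (which also varies between TAC versions), so inspect your own database before relying on them. An in-memory SQLite stand-in keeps the sketch runnable without a TAC instance:

```python
import sqlite3

# Hypothetical query: table and column names are illustrative only,
# NOT the real TAC schema -- check your own TAC database.
LOOKUP_SQL = """
SELECT task_id
FROM executiontaskmonitoring   -- hypothetical table name
WHERE root_pid = ?             -- hypothetical column name
"""

def task_id_for_root_pid(conn, root_pid):
    """Resolve a taskId from a root_pid, or return None if not found."""
    row = conn.execute(LOOKUP_SQL, (root_pid,)).fetchone()
    return row[0] if row else None

# Stand-in database so the sketch runs without a TAC instance.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE executiontaskmonitoring (task_id INTEGER, root_pid TEXT)")
conn.execute("INSERT INTO executiontaskmonitoring VALUES (42, 'myjob_20240101_120000_aBcDe')")
print(task_id_for_root_pid(conn, "myjob_20240101_120000_aBcDe"))
```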
It would be nice if we could download the logs through the metaservlet for a given root_pid. Or, even better, if we could involve the jobserver through its command port with the right request payload to receive the log file. I'm not sure how it is implemented on TAC, but I imagine it reads the whole log file, processes it internally, and then stores part of it in the database, from where you can see the updates.
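Metaservlet requests are JSON documents, base64-encoded and appended to the servlet URL. If such a log-download command existed, the call might look like the sketch below; only the base64-JSON URL mechanics are real, while the `getTaskExecutionLog` action name, the `rootPid` parameter, and the TAC URL are all invented for illustration:

```python
import base64
import json

TAC_URL = "http://tac.example.com:8080/org.talend.administrator"  # assumption

def metaservlet_url(params):
    """Encode a metaservlet request: JSON, base64-encoded, appended to the URL."""
    payload = base64.b64encode(json.dumps(params).encode("utf-8")).decode("ascii")
    return f"{TAC_URL}/metaServlet?{payload}"

# 'getTaskExecutionLog' is a WISHED-FOR command, not a real metaservlet action.
url = metaservlet_url({
    "actionName": "getTaskExecutionLog",       # hypothetical action
    "authUser": "admin@company.com",
    "authPass": "secret",
    "rootPid": "myjob_20240101_120000_aBcDe",  # hypothetical parameter
})
print(url)
```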
I see that the jobserver does have a GET_JOB_OUTPUT feature, but I'm not sure how we could invoke it.
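If someone wants to experiment, a generic client for the command port could be as simple as the sketch below. To be clear, the actual GET_JOB_OUTPUT wire format is not documented anywhere in this thread, so the request bytes you pass in are pure guesswork; capturing a real TAC-to-jobserver exchange (e.g. with tcpdump) would be needed to learn the real framing:

```python
import socket

JOBSERVER_HOST = "jobserver.example.com"  # assumption
COMMAND_PORT = 8000                       # TAC's default command port

def send_command(host, port, request, timeout=10.0):
    """Send raw bytes to the jobserver command port and return the reply.

    The GET_JOB_OUTPUT request format is unknown here, so 'request'
    is whatever framing you have reverse-engineered.
    """
    chunks = []
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall(request)
        sock.shutdown(socket.SHUT_WR)  # signal end of request
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks)
```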
Too bad that even if someone sets the log retention policy on the jobserver to, let's say, 5 years, it doesn't mean anything: once TAC purges the logs, you can't connect the console logs with your log database (AMC) execution (root_pid).
I wonder if there's a way to achieve this without brute force, i.e. without querying the TAC database...