I am new to the Talend tool.
I created a demo rule using the Drools functionality provided in Talend, and it worked fine.
Then I implemented this rule with the tBRMS component, under Standard Job Designs, over a small .csv file kept in HDFS on a different cluster. It ran and gave the desired result.
Below are three questions I hope someone can help me with:
1. Does Talend pull all the data from the HDFS cluster to the Talend server machine and then apply the Drools rule over it? If not, how does it work?
2. I tried to use the tBRMS component under Big Data Batch Job Designs, but it is not available there. Is there any way to implement a Drools rule for Big Data Batch jobs?
3. What is the best practice for applying a Drools rule over HDFS data kept on a different cluster, and why?
There is a new-feature Jira issue about "Add tRules and tBRMS in Spark / Big-data batch".
Please feel free to vote for this Jira issue: