While the Business Rule Maintenance Tool cannot help with testing mass data to ensure that the important rule logic is complete and works as designed, it does help the designer of the rule logic by testing individual sets of data.
Recently I implemented the concept of reference fields. It spares the user - possibly a business user - from having to type in field names and field types, and it also documents the interface to the target system where the rules will run.
As the next step, I have now implemented a way for the user to run the rules in the web application and receive feedback on which rules and groups of rules passed or failed, and whether each group as a whole passed or failed.
This gives the rule designer confidence that the rule logic works. But it does not mean that the rules should not also be tested against a larger set of data. When using the ruleengine in a production environment - to support the separation of responsibilities between IT and business users, to keep the IT code cleaner and to generally enhance the quality of the system - there should also be a testing concept in place, and that includes testing the rule logic.
The rule engine bundles all rules of a project into a single file. I have chosen files as the interface because they have several advantages over handling rules from, say, a database:
- A project file is a version of the truth for a defined point in time
- Files can easily be exchanged and backed up
- Files can easily be used to import a project back into the web application
- Project files can be signed and checksums can be built to ensure integrity
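The last point - checksums for integrity - needs nothing beyond standard Java. Below is a minimal sketch of building a SHA-256 checksum for a project file; the file name `myproject.zip` is only an illustration, not the ruleengine's actual naming:

```java
import java.nio.file.Files;
import java.nio.file.Paths;
import java.security.MessageDigest;

public class ProjectChecksum
{
    // returns the SHA-256 checksum of the given bytes as a lowercase hex string
    static String sha256Hex(byte[] data) throws Exception
    {
        MessageDigest digest = MessageDigest.getInstance("SHA-256");
        StringBuilder hex = new StringBuilder();
        for (byte b : digest.digest(data))
        {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }

    public static void main(String[] args) throws Exception
    {
        // "myproject.zip" stands in for an exported project file
        byte[] content = Files.readAllBytes(Paths.get("myproject.zip"));
        System.out.println("sha256: " + sha256Hex(content));
    }
}
```

Storing the checksum next to the file lets you verify before deployment that the project file has not been altered on its way to the production server.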
The ruleengine can be run standalone, or it can run on a server which expects connections and data from clients. It can also be used in the Pentaho ETL tool: a plugin is available that can be installed with a click of the mouse from within the tool. Pentaho PDI is open source - as is all of my software, including the plugin - and thus it can be used freely to design ETL workflows.
Such a workflow can read from virtually any data source, route the data through the rule engine plugin and further route or filter the data based on the results of the ruleengine. As Pentaho scales, you can also scale the business rule engine if you have a lot of data. You may also run the ruleengine in server mode, in, say, Docker containers.
So when the rules have been run against the appropriate data and all tests show that everything is working fine, the project file containing the rule logic can be moved to a production server.
Again - separate your IT logic from the business logic (the rules); it will benefit both and establish a clear separation of responsibilities. Your code is cleaner, and a change to one part will not necessarily affect the other.
The ruleengine was designed without a specific tool in mind, and as such it integrates into any Java application or JVM-based language such as Groovy or BeanShell, to name just two.
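To illustrate what such an integration looks like, the snippet below routes a record based on a rule result. Note that the `RuleCheck` interface and the discount rule are stand-ins invented for this sketch - the ruleengine's actual classes and method names differ:

```java
import java.util.HashMap;
import java.util.Map;

// stand-in for the rule engine: the IT code depends only on this small interface,
// while the actual rule logic is maintained separately (e.g. by business users)
interface RuleCheck
{
    boolean passes(Map<String, Object> record);
}

public class RuleIntegrationSketch
{
    public static void main(String[] args)
    {
        // hypothetical business rule: orders above 100 qualify for a discount
        RuleCheck discountRule = record -> ((Number) record.get("amount")).doubleValue() > 100.0;

        Map<String, Object> order = new HashMap<>();
        order.put("amount", 250.0);

        // the IT code only routes data based on the rule result;
        // changing the rule does not require changing this code
        if (discountRule.passes(order))
        {
            System.out.println("order qualifies for discount");
        }
        else
        {
            System.out.println("order does not qualify");
        }
    }
}
```

The point of the sketch is the boundary: the routing code stays the same no matter how often the business side revises the rule.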
Go ahead and give it a try. Everything is available on GitHub at https://github.com/uwegeercken, and I welcome your feedback and ideas.