I am using an expert system with a forward-chaining inference engine, and I would like to explain, in very simple terms, why it is better than a decision tree in one particular situation.
I know there is a similar question on Stack Overflow, but it's not the answer I'm looking for.
Here is my problem:
For customer relationship management, I am using a lot of different business rules (which induce dialog rules) to help the customer make a decision on one product.
Note: rules are added frequently (about 2 per day).
The customer answers a series of questions before getting his answer. The business rules mixed with the dialog rules make the resulting questionnaire look like one that would be generated by an optimal decision tree, even though the underlying reasoning is completely different.
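To make the setup concrete, here is a minimal sketch of the kind of rule base I mean (all rule and fact names below are made up for illustration): forward chaining over business facts both selects the next dialog question and narrows the recommendation.

```python
# A toy forward-chaining engine: each rule is (conditions, conclusion).
# When all conditions are in working memory, the conclusion is added.
# Rule and fact names are purely illustrative.
rules = [
    ({"wants_family_car"}, "ask_number_of_seats"),      # dialog rule
    ({"wants_family_car", "seats>=7"}, "suggest_minivan"),  # business rule
    ({"budget_low"}, "suggest_used_car"),
]

def forward_chain(facts, rules):
    """Fire rules until no new facts can be derived (forward chaining)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_chain({"wants_family_car", "seats>=7"}, rules)
print(derived)  # contains both the next question and the suggestion
```

The point is that a single flat list of rules produces question sequences that look tree-like to the customer, without any tree being written down.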
I would like to know the main arguments for (or perhaps against) the inference engine compared to a decision tree in such a case, in terms of scalability, robustness, complexity, and efficiency.
I already have some ideas, but since I need to convince someone, I feel like I can never have enough arguments.
Thanks in advance for your ideas, and I would be happy if you could recommend good papers to read on this subject.
Forward chaining inference engines support specifications in full first-order logic (translated to if-then rules), while decision trees can only march down a set to a specific subset. If you're using both for, say, determining which car a user wants, then in first-order logic you can say (CHR syntax; <=> replaces the LHS by the RHS):
user_likes_color(C), available_color(C) <=> car_color(C).
in addition to all the rules that determine the brand/type of car the user wants, and the inference engine will pick the color as well as the other attributes.
With decision trees, you'd have to set up an extra tree for the color. That's okay as long as color doesn't interact with other properties, but once it does, you're screwed: you may have to replicate the entire tree for every color, and for those colors that conflict with other properties you'd also need to modify the replicated tree.
(I admit color is a very stupid example, but I hope it gets the idea across.)
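The same one-rule pattern can be mimicked in a few lines (the color facts below are illustrative): a single rule with a variable C covers every color at once, whereas a decision tree would need a branch, or a whole replicated subtree, per color.

```python
# The CHR rule  user_likes_color(C), available_color(C) <=> car_color(C)
# as a single rule over a variable C: one rule handles all colors.
user_likes_color = {"red", "blue"}    # illustrative facts
available_color = {"blue", "green"}

# forward chaining derives car_color(C) for every C matching both premises
car_color = user_likes_color & available_color
print(car_color)  # {'blue'}
```

Adding a new color means adding one fact, not editing any tree structure.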
I should say up front that I have not used inference engines or decision trees in practice. In my opinion, you should use decision trees if you want to learn from a given training set and then predict outcomes. For example, suppose you have a data set stating whether you went out for a barbecue given the weather conditions (wind, temperature, rain, ...). With that data set you can build a decision tree. The nice thing about decision trees is that you can use pruning to avoid overfitting and therefore avoid modeling noise.
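As a sketch of that learning setting (the data set below is entirely made up), even a one-split "tree", a decision stump, can be fit to such data; a full decision tree learner just repeats this split selection recursively:

```python
# A hand-rolled decision stump (a depth-1 "tree") learned from a toy
# barbecue/weather data set. All data here is invented for illustration.

# features: (windy, temperature, raining) -> went out for a barbecue (1/0)
data = [
    ((0, 25, 0), 1), ((1, 18, 1), 0), ((0, 30, 0), 1), ((1, 10, 1), 0),
    ((0, 22, 0), 1), ((1, 15, 1), 0), ((0, 28, 1), 0), ((1, 20, 0), 1),
]

def learn_stump(data):
    """Pick the boolean feature whose negation best predicts the label."""
    best = None
    for i in (0, 2):  # the two boolean features: windy, raining
        acc = sum(label == (1 - x[i]) for x, label in data) / len(data)
        if best is None or acc > best[1]:
            best = (i, acc)
    return best

feature, accuracy = learn_stump(data)
print(feature, accuracy)  # here "raining" (index 2) separates the data perfectly
```

This is the regime where trees shine: the rules are not written by hand but induced from examples, and pruning (cutting the tree short) keeps it from memorizing noise.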
I think inference engines are better than decision trees if you have specific rules that you can use for reasoning. larsmans has already provided a good example.
I hope that helps.