Thursday, January 28, 2010
Discussion - Understanding the explanations of rule-based expert systems.
The most significant disadvantage of a rule-based expert system is its inability to justify a conclusion beyond reciting the sequence of rules that fired. Because real expertise is needed to reach a solution, a wrong decision can be expensive, and the decision-making process can be complicated, but it is worth diagnosing.
If we had a basic understanding of the domain we might hope for a 'human' explanation, but these systems operate in a narrow, specific slice of the whole domain. One option is to attach the relevant fundamental principles of the domain, expressed as character strings, to each rule. We could attach such a string to every rule, or at least to the high-level rules, and store them in the knowledge base. Using those strings as a guide, we could then read back (though perhaps not fully understand) an explanation of the fired rules by reviewing the textual list they produce.
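As a rough illustration (not from the text), here is a minimal Python sketch of that idea, assuming a toy forward-chaining engine; the Rule class, the sample rules, and the facts are all hypothetical:

```python
# A toy forward-chaining engine where each rule carries a "principle" string,
# so the fired-rule trace can be printed back as a rough textual explanation.
# The Rule class, the sample rules, and the facts below are all hypothetical.

class Rule:
    def __init__(self, name, condition, action, principle):
        self.name = name            # rule identifier
        self.condition = condition  # function: facts -> bool
        self.action = action        # function: facts -> updated facts
        self.principle = principle  # fundamental-principle text attached to the rule

def run(rules, facts):
    trace = []                      # rules fired, in order
    fired = set()
    changed = True
    while changed:
        changed = False
        for rule in rules:
            if rule.name not in fired and rule.condition(facts):
                facts = rule.action(facts)
                fired.add(rule.name)
                trace.append(rule)
                changed = True
    return facts, trace

rules = [
    Rule("R1: low voltage",
         lambda f: f.get("voltage", 99) < 11.0,
         lambda f: {**f, "battery": "weak"},
         "A reading below the rated voltage indicates a weak battery charge."),
    Rule("R2: replace battery",
         lambda f: f.get("battery") == "weak",
         lambda f: {**f, "advice": "replace battery"},
         "A weak battery cannot reliably start the engine and should be replaced."),
]

facts, trace = run(rules, {"voltage": 10.4})
print("Conclusion:", facts.get("advice"))
for rule in trace:                  # the textual list of fired rules plus principles
    print("-", rule.name + ":", rule.principle)
```

The printed list of fired rules, each paired with its principle string, is the kind of textual explanation described above.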
If we can attach string values to rules, why not attach rules to rules? That may sound redundant, since the rules themselves already lead to other rules, but I am thinking of validation rules rather than decisions. At each crossroads, the system could check the outcomes of future steps and seek the best destination after X moves. An even better approach might be to spin off multiple threads that evaluate more rules at once and return multiple results. If that were possible, I would imagine a throttle to set the strictness of rule selection: a less strict setting could choose a rule that was not the top option but sat further down the results list, while a stricter setting would take only the best. A rough sketch of both ideas follows.
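To make the lookahead and strictness ideas concrete, here is a small hypothetical Python sketch; score(), the toy rules, and the depth and strictness parameters are my own assumptions rather than anything from the course text:

```python
# Sketch of the lookahead and "strictness throttle" ideas above: each candidate
# rule is scored by simulating a few moves ahead, and a lower strictness setting
# may accept a rule further down the ranked results list.
# The rules, score(), depth, and strictness below are all hypothetical.
import random

def score(state):
    return state.get("progress", 0)             # hypothetical measure of how good a state is

def lookahead_score(state, rule, rules, depth):
    """Apply `rule`, then return the best score reachable within `depth` further moves."""
    state = rule["apply"](state)
    candidates = [r for r in rules if r["condition"](state)]
    if depth == 0 or not candidates:
        return score(state)
    return max(lookahead_score(state, r, rules, depth - 1) for r in candidates)

def choose_rule(state, rules, depth=2, strictness=1.0):
    """Rank applicable rules by lookahead; strictness < 1 sometimes picks a lower-ranked rule."""
    candidates = [r for r in rules if r["condition"](state)]
    if not candidates:
        return None
    ranked = sorted(candidates,
                    key=lambda r: lookahead_score(state, r, rules, depth),
                    reverse=True)
    if random.random() < strictness:
        return ranked[0]                        # strict: the best option after X moves
    return random.choice(ranked)                # loose: any option from the results list

# Toy usage: two rules, one of which is unavailable until some progress is made.
rules = [
    {"name": "small step", "condition": lambda s: True,
     "apply": lambda s: {**s, "progress": s.get("progress", 0) + 1}},
    {"name": "big step",   "condition": lambda s: s.get("progress", 0) >= 1,
     "apply": lambda s: {**s, "progress": s.get("progress", 0) + 3}},
]
best = choose_rule({"progress": 0}, rules, depth=2, strictness=1.0)
print("chosen:", best["name"])
```

Spinning the lookahead calls off into separate threads, as suggested above, would be a natural extension of choose_rule.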
I fear this is beginning to sound like conventional programming. Of course, not all of the elements I listed may be possible in expert systems, but the one vital processing unit is still the rule. What matters is not how many times we can go from start to finish, as long as the overhead is not too costly.
Negnevitsky, Michael. Artificial Intelligence: A Guide to Intelligent Systems. Addison-Wesley, 2005, pp. 31-35.
http://en.wikipedia.org/wiki/Expert_system
Posted by Lucas Shaffer at 6:45 PM
Labels: Artificial Intelligence, automation, decision, expert systems, intelligent systems, negnevitsky, original rules, rule based systems, semantic rules
Sunday, January 24, 2010
What is an original experience when programming an intelligent system?
A classmate in my Artificial Intelligence class wrote about AI possibly being able to create something unique, to which I responded below.
I agree that expectations can be a bit diluted by the fact that a system can only contain rules, or "intelligence", drawn from experiences it has consumed and ultimately learned from. Presumption, or 'pre-programming' other rules, can help, but most of the time I feel it would prejudge a situation. After all, we sometimes learn more about a situation when we experience it ourselves, and we often call that experience life, whether it turns out good or bad. So the program or entity must experience the good and the BAD.
That raises the question of what "original" really means. How can something be original and unique when the majority of the rules in the world have already been discovered? For example, humans are not given every rule or boundary when we are born. We start out essentially rule-less and spend the rest of our lives finding the boundaries within which we can exist in our individual cultures.
If computers are to be truly 'original', then they must learn on their own. The combination of one's experiences is the only unique factor we all have. Computers, because of their rule-based nature, can be programmed with those prejudices and are therefore given a head start on the outside world, but they still begin only with the rules learned by the programmer.
Posted by Lucas Shaffer at 10:24 AM
Labels: AI, artificial intelligence, original rules, rule based systems