Theorem proving
Besides model checking, entailment can be established by theorem proving: applying rules of inference directly to the sentences in our knowledge base to construct a proof of the desired sentence without consulting models. If the number of models is large but the length of the proof is short, then theorem proving can be more efficient than model checking.
1) Resolution
Resolution is similar to DPLL (when this is used to establish entailment): it starts from a CNF representation of the premises and the negation of the conclusion
Resolution applies (in all the possible ways) the resolution rule to the clauses (original ones and those derived by the previous applications of the resolution rule)
Suppose $C_1$, $C_2$ are clauses such that literal $\ell_1$ in $C_1$ and literal $\ell_2$ in $C_2$ are complementary (same propositional symbol with different sign); then $C_1$, $C_2$ can be resolved into a new clause $C$, called the resolvent: $C = (C_1 \setminus \{\ell_1\}) \cup (C_2 \setminus \{\ell_2\})$
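The rule above can be sketched in a few lines. This is a minimal illustration, assuming clauses are represented as frozensets of string literals ("A" for a positive literal, "~A" for its complement); the function names are my own, not from the course material:

```python
# Sketch of the resolution rule. Assumed representation: a clause is a
# frozenset of literals; a literal is a string like "A" or "~A".

def negate(lit):
    """Complement of a literal: A <-> ~A."""
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2):
    """All resolvents of clauses c1 and c2: for each pair of
    complementary literals, drop them and union the remainders."""
    resolvents = []
    for l1 in c1:
        if negate(l1) in c2:
            resolvents.append((c1 - {l1}) | (c2 - {negate(l1)}))
    return resolvents

# Resolving {~A, B} with {A, C} on A yields the resolvent {B, C}.
print(resolve(frozenset({"~A", "B"}), frozenset({"A", "C"})))
```

Note that when the two clauses share more than one complementary pair, each pair yields its own resolvent; resolving on both pairs at once would be unsound.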
Resolution properties
- Propositional resolution is an extremely powerful rule of inference for PL
- It works by refutation: to prove that $KB \models \alpha$ it proves that $KB \land \lnot\alpha$ is unsatisfiable, namely it builds a refutation for $KB \land \lnot\alpha$ by looking for the empty clause
- Using propositional resolution, it is possible to build a theorem prover that is sound and complete for PL
- Resolution rule is sound
- Theorem: the resolvent $C$ is satisfiable if and only if the clauses $C_1$, $C_2$ are simultaneously satisfiable
- Since only finitely many distinct clauses can be built from a finite set of propositional symbols, resolution reaches a fixed point and stops at some point
- Propositional resolution works on any set of sentences in clausal form (CNF)
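The refutation procedure described above can be sketched as follows. This is a naive, unoptimized illustration (no subsumption or tautology deletion), again assuming clauses as frozensets of string literals:

```python
# Sketch of resolution refutation: saturate the clause set with
# resolvents until the empty clause appears (unsatisfiable) or a
# fixed point is reached (satisfiable).

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2):
    return [(c1 - {l}) | (c2 - {negate(l)})
            for l in c1 if negate(l) in c2]

def refute(clauses):
    """True iff the empty clause is derivable, i.e. the set is unsatisfiable."""
    clauses = set(clauses)
    while True:
        new = set()
        for c1 in clauses:
            for c2 in clauses:
                for r in resolve(c1, c2):
                    if not r:           # empty clause: refutation found
                        return True
                    new.add(r)
        if new <= clauses:              # fixed point: no refutation exists
            return False
        clauses |= new

# KB = {A, A -> B} plus the negated query ~B is unsatisfiable, so KB |= B.
print(refute({frozenset({"A"}), frozenset({"~A", "B"}), frozenset({"~B"})}))
```

Termination is guaranteed because only finitely many distinct clauses exist over a finite set of symbols, so the saturation loop must reach a fixed point.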
2) Forward chaining
- Forward chaining considers definite clauses and applies Modus Ponens to generate new facts:
- Modus Ponens: given $p_1 \land p_2 \land \cdots \land p_n \to q$ and $p_1, p_2, \ldots, p_n$, infer $q$
- Forward chaining keeps applying this rule, adding new facts, until nothing more can be added
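The loop above can be sketched as follows. A minimal illustration, assuming rules are (premises, conclusion) pairs of atom strings; the function name is my own:

```python
# Minimal forward chaining for definite clauses: repeatedly apply
# Modus Ponens, adding new facts until nothing more can be added.

def forward_chain(rules, facts, query):
    """rules: iterable of (set_of_premises, conclusion) pairs;
    facts: set of atoms known true. Returns True iff query is derived."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in known and all(p in known for p in premises):
                known.add(conclusion)       # fire the rule: Modus Ponens
                changed = True
    return query in known

rules = [({"P", "Q"}, "R"), ({"R"}, "S")]
print(forward_chain(rules, {"P", "Q"}, "S"))   # P, Q derive R, then S
```

Note the sketch derives only new positive literals, exactly as the text says: rules are never added, only facts.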
Properties of forward chaining
Forward chaining is sound and complete for KBs composed of definite clauses
- Soundness: follows from soundness of Modus Ponens (easy to check)
- Completeness: the algorithm reaches a fixed point from which no new atomic sentences can be derived, and every atomic sentence entailed by the KB has been derived by then
Forward chaining can only derive new positive literals (facts), it cannot derive new rules
3) Backward chaining
Backward chaining works like forward chaining, but backward:
- Start from the goal (query) $q$, which is a positive literal
- To prove $q$, check if $q$ is already in the KB, otherwise prove all the premises of a rule concluding $q$
- Avoid loops: check if new subgoal is already on the goal stack
- Avoid repeated work: check if new subgoal has already been proved true, or has already failed
Backward chaining is sound and complete for KBs composed of definite clauses
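The backward procedure, including the loop check on the goal stack, can be sketched as follows. A minimal illustration with the same assumed (premises, conclusion) rule format; it omits the memoization of already-proved and already-failed subgoals that the text recommends:

```python
# Minimal backward chaining for definite clauses: work from the goal
# back to known facts, carrying the goal stack to avoid loops.

def backward_chain(rules, facts, goal, stack=frozenset()):
    if goal in facts:
        return True
    if goal in stack:                    # loop check: goal already on the stack
        return False
    for premises, conclusion in rules:
        if conclusion == goal and all(
                backward_chain(rules, facts, p, stack | {goal})
                for p in premises):
            return True
    return False

# Note the R/S cycle: without the stack check this would not terminate.
rules = [({"P", "Q"}, "R"), ({"R"}, "S"), ({"S"}, "R")]
print(backward_chain(rules, {"P", "Q"}, "S"))   # True despite the cycle
```

Caching proved and failed subgoals (e.g. in a dict keyed by atom) would avoid the repeated work mentioned above, at the cost of a little bookkeeping.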
Exam questions
2017 02 23 Q3 (7 points)
Consider the following knowledge base (KB) in propositional logic:
- $A \to \lnot B$
- $\lnot A \to \lnot B$
- $A \lor B$
Establish, using resolution with the input resolution strategy, whether $A \to B$ can be derived from the KB.
Solution
The first step is to transform the KB into CNF and add the negation of the conclusion:
- (1) $\lnot A \lor \lnot B$
- (2) $A \lor \lnot B$
- (3) $A \lor B$
- $\lnot (\lnot A \lor B)$, which by De Morgan's laws becomes:
  - (4) $A$
  - (5) $\lnot B$
Then apply resolution:
Resolving (1) $\lnot A \lor \lnot B$ with (3) $A \lor B$ produces only the tautologies $B \lor \lnot B$ and $A \lor \lnot A$. Resolving (1) with (4) yields $\lnot B$, already present as (5); resolving (3) with (5) yields $A$, already present as (4); resolving (2) with (3) again yields $A$. No step produces a new clause.
We have reached a fixed point because any other application of the resolution rule with the input resolution strategy will not produce any new knowledge.
So, $A \to B$ cannot be derived from the KB using resolution with the input resolution strategy.
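The conclusion can be double-checked by model checking, independently of any resolution strategy: if some model of the KB falsifies $A \to B$, the query is not entailed at all. A quick sketch, encoding each sentence as a Python predicate over the truth values of $A$ and $B$:

```python
# Enumerate all four interpretations and collect those that satisfy
# the KB but falsify the query A -> B.
from itertools import product

kb = [lambda A, B: (not A) or (not B),   # A -> ~B
      lambda A, B: A or (not B),         # ~A -> ~B
      lambda A, B: A or B]               # A v B

query = lambda A, B: (not A) or B        # A -> B

counterexamples = [(A, B) for A, B in product([False, True], repeat=2)
                   if all(s(A, B) for s in kb) and not query(A, B)]
print(counterexamples)   # [(True, False)]: a model of the KB where A -> B fails
```

The interpretation $A = \text{true}$, $B = \text{false}$ satisfies the KB but not $A \to B$, confirming that the query is not entailed, so no complete proof strategy could derive it.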