Tutorial 6. Logical conditionals
Exercise sheet
Boolean conditional
We’ve interpreted the conditional symbol → using the Booleans NOT
and OR. But we could also have directly defined a Boolean function IF
with the following truth-table:

X | Y | X IF Y
1 | 1 |   1
1 | 0 |   1
0 | 1 |   0
0 | 0 |   1

Note that X IF Y
is the conditional from the value of Y
to the value of X,
so that the Boolean aligns with its natural reading “X if Y”.
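The NOT/OR interpretation mentioned above can be sketched in Python (the function name and the encoding of truth-values as 0/1 are ours):

```python
def IF(x, y):
    """X IF Y: true in every case except Y = 1, X = 0.

    Defined from OR and NOT as X OR (NOT Y).
    """
    return int(x or (not y))

# Enumerate the truth-table.
for x in (1, 0):
    for y in (1, 0):
        print(x, y, IF(x, y))
```

Running the loop reproduces the truth-table above row by row.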
Find a representation of this Boolean function using only NOT and AND. That is, find a Boolean expression exp in the two variables X and Y, which contains only the Boolean functions NOT and AND, and which meets the specification that for all values of X and Y, we have:
exp = X IF Y.
Verify your work! That is, don’t just provide an expression, but show that for all values of X and Y the above equation holds.
Solution
The most straightforward solution is:
NOT(Y AND (NOT X))
Here’s a truth-table to show that NOT(Y AND (NOT X)) = X IF Y:

X | Y | NOT X | Y AND (NOT X) | NOT(Y AND (NOT X)) | X IF Y
1 | 1 |   0   |       0       |          1         |   1
1 | 0 |   0   |       0       |          1         |   1
0 | 1 |   1   |       1       |          0         |   0
0 | 0 |   1   |       0       |          1         |   1
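The verification can also be done by brute force over all four valuations. A minimal sketch in Python (the helper names are ours):

```python
def boolean_if(x, y):
    # X IF Y: the conditional from the value of Y to the value of X.
    return x or (not y)

def candidate(x, y):
    # The proposed representation using only NOT and AND.
    return not (y and (not x))

# Check the specification exp = X IF Y for all values of X and Y.
for x in (0, 1):
    for y in (0, 1):
        assert bool(candidate(x, y)) == bool(boolean_if(x, y))
print("exp = X IF Y holds for all values of X and Y")
```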
Equivalence
Remember the truth-table for XNOR from the last exercise set:

p | q | p XNOR q
1 | 1 |    1
1 | 0 |    0
0 | 1 |    0
0 | 0 |    1

Find a formula representation of this Boolean truth-function using only the propositional variables p and q and the connectives → and ∧! That is, find a formula A satisfying these constraints, which meets the specification that for all assignments v of truth-values to p and q, we have:
v(A) = v(p) XNOR v(q).
Verify your work! That is, don’t just provide a formula, but show that for each assignment the above equation holds.
Solution
One of the conceptually clearest solutions is the formula:
(p → q) ∧ (q → p)
Here’s a truth-table to verify our work:

p | q | p → q | q → p | (p → q) ∧ (q → p) | p XNOR q
1 | 1 |   1   |   1   |         1         |    1
1 | 0 |   0   |   1   |         0         |    0
0 | 1 |   1   |   0   |         0         |    0
0 | 0 |   1   |   1   |         1         |    1
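The same check, mechanized in Python (helper names are ours):

```python
def implies(a, b):
    # Material conditional on truth-values.
    return (not a) or b

def xnor(a, b):
    # XNOR is truth-value equality.
    return a == b

# (p -> q) and (q -> p) matches XNOR on every assignment.
for p in (False, True):
    for q in (False, True):
        assert (implies(p, q) and implies(q, p)) == xnor(p, q)
print("(p -> q) ∧ (q -> p) = p XNOR q on all assignments")
```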
Conditional inferences

Check the following conditional inferences for deductive validity using SAT-solving. You can use truth-tables or resolution, as you prefer.
- (RAIN → WIND), ¬RAIN ⊨ ¬WIND
- (RAIN → WIND) ⊨ (¬WIND → ¬RAIN)
- (¬RAIN → RAIN) ⊨ RAIN
- (RAIN → (SUN → RAINBOW)) ⊨ ((RAIN ∧ SUN) → RAINBOW)
- ¬(RAIN → WIND) ⊨ RAIN
Document your work carefully, that is, explain each step you’re carrying out,
and why the work you did shows that the inference in question is valid or invalid.
Solution
- (RAIN → WIND), ¬RAIN ⊨ ¬WIND. We show this using resolution.
  The aim is to show that { (RAIN → WIND), ¬RAIN, ¬¬WIND } is satisfiable.
  First, we transform into CNF. The conditional becomes ¬RAIN ∨ WIND using r₀, and ¬¬WIND becomes WIND using r₁.
  This leaves us with the sets:
  { ¬RAIN, WIND } { ¬RAIN } { WIND }
  No resolution is possible, and we can read off a counter-model where v(RAIN) = 0 and v(WIND) = 1. So the inference is invalid.
- (RAIN → WIND) ⊨ (¬WIND → ¬RAIN). We show this using resolution.
  The aim is to show that { (RAIN → WIND), ¬(¬WIND → ¬RAIN) } is unsatisfiable.
  First, we transform into CNF, beginning by transforming the conditionals using r₀, giving us ¬RAIN ∨ WIND and ¬(¬¬WIND ∨ ¬RAIN).
  Applying r₁ and r₃ recursively to the latter, we obtain ¬WIND ∧ RAIN. This gives us the sets:
  { ¬RAIN, WIND } { ¬WIND } { RAIN }
  We derive the empty set { } in two steps:
  - With { ¬RAIN, WIND } and { ¬WIND }, we resolve to { ¬RAIN }.
  - With { ¬RAIN } and { RAIN }, we resolve to the empty set { }, proving the unsatisfiability of the set. So the inference is valid.
- (¬RAIN → RAIN) ⊨ RAIN. We show this using truth-tables.
  The aim is to show that { (¬RAIN → RAIN), ¬RAIN } is unsatisfiable. Here’s the truth-table to that effect:

  RAIN | ¬RAIN | ¬RAIN → RAIN
    1  |   0   |      1
    0  |   1   |      0

  There is no row where both ¬RAIN → RAIN and ¬RAIN are 1, so the set is unsatisfiable and the inference is valid.
  In fact, you can see that ¬RAIN → RAIN is equivalent to RAIN. In logical theory, this is called Clavius’ Law.
- (RAIN → (SUN → RAINBOW)) ⊨ ((RAIN ∧ SUN) → RAINBOW). We use resolution.
  The task is to show that { (RAIN → (SUN → RAINBOW)), ¬((RAIN ∧ SUN) → RAINBOW) } is not satisfiable.
  First, we transform to CNF. Recursively applying r₀, we get
  ¬RAIN ∨ ¬SUN ∨ RAINBOW
  from RAIN → (SUN → RAINBOW).
  For the second formula, we calculate step by step:
  ¬((RAIN ∧ SUN) → RAINBOW)
  ¬(¬(RAIN ∧ SUN) ∨ RAINBOW)
  ¬¬(RAIN ∧ SUN) ∧ ¬RAINBOW
  (RAIN ∧ SUN) ∧ ¬RAINBOW
  RAIN ∧ SUN ∧ ¬RAINBOW
  This gives us the sets:
  { ¬RAIN, ¬SUN, RAINBOW } { RAIN } { SUN } { ¬RAINBOW }
  The derivation of { } using resolution is a simple, three-step affair:
  - { ¬RAIN, ¬SUN, RAINBOW } and { RAIN } give us { ¬SUN, RAINBOW }.
  - { ¬SUN, RAINBOW } and { SUN } give us { RAINBOW }.
  - { RAINBOW } and { ¬RAINBOW } give us { }.
  So the inference is valid.
- ¬(RAIN → WIND) ⊨ RAIN, which we show using truth-tables.
  The aim is to show that { ¬(RAIN → WIND), ¬RAIN } is unsatisfiable.
  Here’s the table:

  RAIN | WIND | RAIN → WIND | ¬(RAIN → WIND) | ¬RAIN
    1  |  1   |      1      |        0       |   0
    1  |  0   |      0      |        1       |   0
    0  |  1   |      1      |        0       |   1
    0  |  0   |      1      |        0       |   1

  Since there’s no row where both ¬(RAIN → WIND) and ¬RAIN are 1, the set is unsatisfiable, and the inference is valid.
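The verdicts above can be double-checked by brute force: an inference is valid iff the premises together with the negated conclusion are unsatisfiable. A small sketch in Python (the function names and encoding are ours; premises and conclusion are given as Boolean functions of the propositional variables):

```python
from itertools import product

def implies(a, b):
    # Material conditional on truth-values.
    return (not a) or b

def valid(premises, conclusion, n_vars):
    # Valid iff premises + negated conclusion are unsatisfiable.
    for vals in product([False, True], repeat=n_vars):
        if all(p(*vals) for p in premises) and not conclusion(*vals):
            return False  # found a counter-model
    return True

# 1. (RAIN -> WIND), not RAIN |= not WIND  -- invalid
print(valid([lambda r, w: implies(r, w), lambda r, w: not r],
            lambda r, w: not w, 2))                           # False
# 2. (RAIN -> WIND) |= (not WIND -> not RAIN)  -- valid
print(valid([lambda r, w: implies(r, w)],
            lambda r, w: implies(not w, not r), 2))           # True
# 3. (not RAIN -> RAIN) |= RAIN  -- valid
print(valid([lambda r: implies(not r, r)], lambda r: r, 1))   # True
# 5. not (RAIN -> WIND) |= RAIN  -- valid
print(valid([lambda r, w: not implies(r, w)],
            lambda r, w: r, 2))                               # True
```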
Valid inference and conditionals
There’s a deep connection between deductively valid inference in Boolean logic and material conditionals, which is given by the following important equivalence:

P₁, P₂, … ⊨ C if and only if not-SAT { ¬((P₁ ∧ P₂ ∧ …) → C) }
- A logical formula A is called a logical truth iff for all assignments v of truth-values to its propositional variables, the formula is true, i.e. v(A) = 1. Verify that the simple formula (RAIN ∨ ¬RAIN) is a logical truth in this sense.
- Rephrase the right-hand side of the above equivalence in terms of logical truth rather than unsatisfiability.
- Give an argument that the above equivalence is true.
Hint: To do so, you need to use the general form of the reduction of valid inference to unsatisfiability, which we’ve discussed in the lecture:
P₁, P₂, … ⊨ C if and only if not-SAT { P₁, P₂, …, ¬C }
Think about what the latter condition means for the truth of the corresponding conditional.
Solution
- We could do a truth-table, but let’s do a step-by-step calculation instead, where we go through the two possibilities: v(RAIN) = 1 or v(RAIN) = 0:
  - If v(RAIN) = 1, then v(RAIN ∨ ¬RAIN) = v(RAIN) OR v(¬RAIN) = v(RAIN) OR (NOT v(RAIN)) = 1 OR (NOT 1) = 1 OR 0 = 1.
  - If v(RAIN) = 0, then v(RAIN ∨ ¬RAIN) = v(RAIN) OR v(¬RAIN) = v(RAIN) OR (NOT v(RAIN)) = 0 OR (NOT 0) = 0 OR 1 = 1.
  So, in all possible cases, we have v(RAIN ∨ ¬RAIN) = 1.
- First, note that not-SAT { ¬((P₁ ∧ P₂ ∧ …) → C) } means that the formula ¬((P₁ ∧ P₂ ∧ …) → C) is unsatisfiable, meaning it has value 0 under every valuation. But the formula starts with a negation, and so
  v(¬((P₁ ∧ P₂ ∧ …) → C)) = NOT v((P₁ ∧ P₂ ∧ …) → C).
  But if we know that this expression evaluates to 0 under each valuation, this means that v((P₁ ∧ P₂ ∧ …) → C) = 1 under each valuation. In other words, (P₁ ∧ P₂ ∧ …) → C is a logical truth. This gives us an alternative criterion for valid inference, according to which:
  P₁, P₂, … ⊨ C if and only if (P₁ ∧ P₂ ∧ …) → C is a logical truth
  This criterion shows the particularly deep connection between valid inference and conditionals.
- This is the hardest part and requires more advanced logical reasoning. One way to proceed is to start from the known criterion that
  P₁, P₂, … ⊨ C if and only if not-SAT { P₁, P₂, …, ¬C }.
  Let’s think about not-SAT { P₁, P₂, …, ¬C }. This means that for each valuation, either v(P₁) = 0, v(P₂) = 0, …, or v(¬C) = 0. Using transformations, we can see that (P₁ ∧ P₂ ∧ …) → C is equivalent to ¬P₁ ∨ ¬P₂ ∨ … ∨ C. That is:
  v((P₁ ∧ P₂ ∧ …) → C) = v(¬P₁ ∨ ¬P₂ ∨ … ∨ C).
  Using the recursive rules, we get:
  v(¬P₁ ∨ ¬P₂ ∨ … ∨ C) = (NOT v(P₁)) OR (NOT v(P₂)) OR … OR v(C)
  But if v(P₁) = 0, then
  (NOT v(P₁)) OR (NOT v(P₂)) OR … OR v(C) = (NOT 0) OR (NOT v(P₂)) OR … OR v(C) = 1 OR (NOT v(P₂)) OR … OR v(C) = 1.
  Similarly, if v(P₂) = 0, then
  (NOT v(P₁)) OR (NOT v(P₂)) OR … OR v(C) = (NOT v(P₁)) OR (NOT 0) OR … OR v(C) = (NOT v(P₁)) OR 1 OR … OR v(C) = 1.
  And so on. Finally, if v(¬C) = 0, then v(C) = 1 and so
  (NOT v(P₁)) OR (NOT v(P₂)) OR … OR v(C) = (NOT v(P₁)) OR (NOT v(P₂)) OR … OR 1 = 1.
  Since these are all the possibilities if not-SAT { P₁, P₂, …, ¬C }, we know that v((P₁ ∧ P₂ ∧ …) → C) = 1 for all valuations. By similar reasoning, we can see that if v((P₁ ∧ P₂ ∧ …) → C) = 1 for all valuations, then not-SAT { P₁, P₂, …, ¬C }, since otherwise there would be a valuation v with v((P₁ ∧ P₂ ∧ …) → C) = 0.
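The three criteria — unsatisfiability of { P₁, P₂, …, ¬C }, unsatisfiability of the negated conditional, and the conditional being a logical truth — can be spot-checked mechanically. A sketch for a two-premise example (names and the modus-ponens example are ours):

```python
from itertools import product

def implies(a, b):
    # Material conditional on truth-values.
    return (not a) or b

# Example: P1 = RAIN -> WIND, P2 = RAIN, C = WIND (modus ponens).
P1 = lambda r, w: implies(r, w)
P2 = lambda r, w: r
C  = lambda r, w: w

vals = list(product([False, True], repeat=2))

# Criterion 1: { P1, P2, not-C } is unsatisfiable.
unsat = not any(P1(*v) and P2(*v) and not C(*v) for v in vals)
# Criterion 2: not ((P1 and P2) -> C) is unsatisfiable.
unsat_neg_cond = not any(not implies(P1(*v) and P2(*v), C(*v)) for v in vals)
# Criterion 3: (P1 and P2) -> C is a logical truth.
tautology = all(implies(P1(*v) and P2(*v), C(*v)) for v in vals)

print(unsat, unsat_neg_cond, tautology)  # True True True: the criteria agree
```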
Chaining
Consider the following KB:
- RAIN → CLOUDS
- (CLOUDS ∧ SNOW) → STORM
- RAIN → PUDDLES
- PUDDLES → HUMID
- HUMID → CLOUDS
- SUN → DRY
- (WIND ∧ SNOW) → DRIFTING
We add to this KB the following two facts: RAIN and SNOW.
- Run the forward-chaining and the backward-chaining algorithm to show that we can derive STORM from the KB. That is, describe the steps you’d take for each algorithm one by one, and why at some point you hit the termination condition.
- Use the example to illustrate how forward-chaining can find shorter derivations than backward-chaining.
- Use both forward and backward-chaining to show that we can’t derive DRIFTING from the KB using the facts. Does one algorithm outperform the other?
Solution
- Our goal is to derive STORM. The facts are RAIN and SNOW. First, we use forward chaining:
  - In the first iteration, we run through all the conditionals and see if we can derive anything from those facts using genMP. We come across the two conditionals RAIN → CLOUDS and RAIN → PUDDLES. We derive CLOUDS and PUDDLES and add them to our facts. But our goal is not yet reached.
  - In the second step, the facts are RAIN, SNOW, CLOUDS, and PUDDLES. Again, we check the conditionals for possible MP applications and find (CLOUDS ∧ SNOW) → STORM and PUDDLES → HUMID. We derive both STORM and HUMID. Our goal is reached and we terminate the search.
  Next, we use backward chaining:
  - Our goal is STORM, so we inspect the conditionals until we find one that contains STORM as the consequent. We find (CLOUDS ∧ SNOW) → STORM. We recognize that SNOW is already among our facts, so we replace the goal STORM temporarily with CLOUDS. Since we still have goals, we continue.
  - We inspect the rules for one with CLOUDS in the consequent and find RAIN → CLOUDS. Since RAIN is among our facts, we have no goals left and terminate the search.
  Both algorithms lead to the same result and, in fact, give the same derivation.
- In the forward-chaining algorithm, there were no choices involved: we simply looked through all chainings of MP by length until we found one. Since we went through the derivations by length, starting with the shortest, we were guaranteed to come across the shortest derivation first (if there is one).
  For backward-chaining, finding this particular derivation depended on the order in which we looked through the rules. If, for some implementation reason, we had first come across HUMID → CLOUDS in the second step, we would have added HUMID to our goals rather than RAIN. Then we’d have continued two more iterations, going through PUDDLES → HUMID and RAIN → PUDDLES until we hit a known fact. This would have led to a much longer derivation. So with backward-chaining, whether we come across the shortest derivation first depends heavily on external factors, like the ordering of the conditionals in our KB.
- To test this with forward-chaining, we go through all possible derivations. We’ve described the first two steps above, which gave us STORM and PUDDLES. Continuing further, we derive HUMID using PUDDLES and PUDDLES → HUMID, and then CLOUDS from HUMID and HUMID → CLOUDS. At this point, we have RAIN, SNOW, STORM, PUDDLES, HUMID, and CLOUDS among our facts and can’t apply genMP anymore. Since DRIFTING isn’t among these facts, we conclude it can’t be derived.
  With backward-chaining, instead, we check for conditionals with DRIFTING in the consequent and only find (WIND ∧ SNOW) → DRIFTING. This adds WIND to our goals, since SNOW is already a fact. In the second iteration, we can’t find a conditional that has WIND in the consequent, so we terminate our search and conclude that DRIFTING can’t be derived.
  Here, backward-chaining was far more efficient. This is because forward-chaining needs to go through all possible derivations to determine whether there is one that derives our desired goal. Backward-chaining is more “surgical” in that it only looks through promising candidates and terminates earlier because there are none.
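The forward-chaining runs described above can be sketched in a few lines of Python (the rule encoding as (premises, conclusion) pairs and the function name are ours):

```python
# Each rule encodes a conditional: premises -> conclusion.
KB = [
    ({"RAIN"}, "CLOUDS"),
    ({"CLOUDS", "SNOW"}, "STORM"),
    ({"RAIN"}, "PUDDLES"),
    ({"PUDDLES"}, "HUMID"),
    ({"HUMID"}, "CLOUDS"),
    ({"SUN"}, "DRY"),
    ({"WIND", "SNOW"}, "DRIFTING"),
]

def forward_chain(facts, goal):
    facts = set(facts)
    while goal not in facts:
        # One iteration: fire every rule whose premises are all known (genMP).
        new = {c for (ps, c) in KB if ps <= facts and c not in facts}
        if not new:
            return False  # fixed point reached without deriving the goal
        facts |= new
    return True

print(forward_chain({"RAIN", "SNOW"}, "STORM"))     # True
print(forward_chain({"RAIN", "SNOW"}, "DRIFTING"))  # False
```

The first call succeeds after two iterations, exactly as in the step-by-step description; the second reaches the fixed point {RAIN, SNOW, CLOUDS, PUDDLES, STORM, HUMID} without DRIFTING.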
Planning
We’ve made things a bit more difficult for AI by introducing a third block into the puzzle.
Adjust our planning solution to accommodate the more complicated setup. That is:
- Determine how we need to adjust the language to accommodate the third block.
- Determine which rules we need to add to our KB to accommodate the third block.
- Represent the initial setup state and the goal state in the language.
- Find a model that satisfies the KB, as well as the setup and goal state. Then read off a course of action. You don’t need to do this formally, using resolution or chaining, but can just find such a model using human intelligence.
Discussion
Check out the Wason selection task on Wikipedia.
Some researchers have argued that the experiment shows that people don’t reason with the material conditional in this case. Do you agree? Why?