Tutorial 6. Logical conditionals

Exercise sheet
Boolean conditional  

We’ve interpreted the conditional symbol Implication using the Boolean functions NOT and OR. But we could also have directly defined a Boolean function IF with the following truth-table:

X  Y  |  X IF Y
1  1  |    1
1  0  |    1
0  1  |    0
0  0  |    1

Note that X IF Y is the conditional from the value of Y to the value of X, so that the Boolean reading matches the natural reading of “X, if Y”.

Find a representation of this Boolean function using only NOT and AND. That is, find a Boolean expression exp in the two variables X and Y, which contains only the Boolean functions NOT and AND, and which meets the specification that for all values of X and Y, we have:

exp = X IF Y.

Verify your work! That is, don’t just provide an expression, but show that for all values of X and Y the above equation holds.

Solution

The most straightforward solution is:

NOT(Y AND (NOT X))

Here’s a truth-table to show that NOT(Y AND (NOT X)) = X IF Y:

X  Y  |  NOT X  |  Y AND (NOT X)  |  NOT(Y AND (NOT X))  |  X IF Y
1  1  |    0    |        0        |          1           |    1
1  0  |    0    |        0        |          1           |    1
0  1  |    1    |        1        |          0           |    0
0  0  |    1    |        0        |          1           |    1

The last two columns agree in every row, so the equation holds for all values of X and Y.
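
(Not part of the original solution, but if you want to double-check this mechanically, here’s a small Python sketch that enumerates all four value combinations. The helper name IF is our own; it models “X IF Y” as the conditional from Y to X.)

def IF(x, y):
    # "X IF Y": false only when Y = 1 and X = 0 (the conditional from Y to X).
    return 0 if (y == 1 and x == 0) else 1

for x in (0, 1):
    for y in (0, 1):
        exp = 1 - (y & (1 - x))  # NOT(Y AND (NOT X)), with NOT z computed as 1 - z
        assert exp == IF(x, y)
        print(f"X={x} Y={y}  NOT(Y AND (NOT X))={exp}  X IF Y={IF(x, y)}")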

Equivalence  

Remember the truth-table for XNOR from the last exercise set:

X  Y  |  X XNOR Y
1  1  |     1
1  0  |     0
0  1  |     0
0  0  |     1

Find a formula representation of this Boolean truth-function using only the propositional variables p and q and the connectives Conjunction and Implication. That is, find a formula A satisfying these constraints such that for all assignments v of truth-values to p and q, we have:

v(A) = v(p) XNOR v(q).

Verify your work! That is, don’t just provide a formula, but show that for each assignment the above equation holds.

Solution

One of the conceptually clearest solutions is the formula:

(p Implication q) Conjunction (q Implication p)

Here’s a truth-table to verify our work, where A is the formula (p Implication q) Conjunction (q Implication p):

v(p)  v(q)  |  v(p Implication q)  |  v(q Implication p)  |  v(A)  |  v(p) XNOR v(q)
  1     1   |          1           |          1           |   1    |        1
  1     0   |          0           |          1           |   0    |        0
  0     1   |          1           |          0           |   0    |        0
  0     0   |          1           |          1           |   1    |        1

The columns for v(A) and v(p) XNOR v(q) agree in every row, so the equation holds for each assignment.
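
(Again as an optional, mechanical double-check, here’s a small Python sketch; the helper name implies is our own stand-in for the Boolean conditional, and XNOR is computed as “the two values are equal”.)

def implies(a, b):
    # The Boolean conditional: false only when a = 1 and b = 0.
    return 0 if (a == 1 and b == 0) else 1

for p in (0, 1):
    for q in (0, 1):
        formula = implies(p, q) & implies(q, p)  # (p -> q) AND (q -> p)
        xnor = 1 if p == q else 0
        assert formula == xnor
        print(f"p={p} q={q}  (p->q) AND (q->p)={formula}  p XNOR q={xnor}")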

Conditional inferences  

Check the following conditional inferences for deductive validity using SAT-solving. You can use truth-tables or resolution, as you prefer.

  1. (RAIN Implication WIND), Negation RAIN Therefore Negation WIND

  2. (RAIN Implication WIND) Therefore ( Negation WIND Implication Negation RAIN)

  3. ( Negation RAIN Implication RAIN) Therefore RAIN

  4. (RAIN Implication ( SUN Implication RAINBOW)) Therefore ((RAIN Conjunction SUN) Implication RAINBOW)

  5. Negation (RAIN Implication WIND) Therefore RAIN

Document your work carefully. That is, explain each step you’re carrying out, and why the work you did shows that the inference in question is valid or invalid.

Solution
  1. (RAIN Implication WIND), Negation RAIN nvDash Negation WIND. We show this using resolution.

    The aim is to show that {(RAIN Implication WIND), Negation RAIN , Negation Negation WIND } is satisfiable.

    First, we transform into CNF. The conditional becomes Negation RAIN Disjunction WIND using r₀, and Negation Negation WIND becomes WIND using r₁.

    This leaves us with the sets

    { Negation RAIN, WIND }  { Negation RAIN }   { WIND }.
    No resolution steps are possible, since the clauses contain no complementary literals. So the set is satisfiable, and we can read off a counter-model where v(RAIN) = 0 and v(WIND) = 1.

  2. (RAIN Implication WIND) vDash ( Negation WIND Implication Negation RAIN). We show this using resolution.

    The aim is to show that { (RAIN Implication WIND), Negation ( Negation WIND Implication Negation RAIN) } is unsatisfiable.

    First, we transform into CNF, beginning by transforming the conditionals using r₀, giving us Negation RAIN Disjunction WIND and Negation ( Negation Negation WIND Disjunction Negation RAIN).

    Applying r₁ and r₃ recursively to the latter, we obtain Negation WIND Conjunction RAIN. This gives us the sets:

    { Negation RAIN, WIND }  { Negation WIND }  { RAIN }.

    We derive the empty set { } in two steps:

    • With { Negation RAIN, WIND } and { Negation WIND }, we resolve to { Negation RAIN }.

    • With { Negation RAIN } and { RAIN }, we resolve to the empty set { }, proving the unsatisfiability of the set.

  3. ( Negation RAIN Implication RAIN) vDash RAIN. We show this using truth-tables.

    The aim is to show that {( Negation RAIN Implication RAIN), Negation RAIN } is unsatisfiable. Here’s the truth-table to that effect:

    v(RAIN)  |  v( Negation RAIN)  |  v( Negation RAIN Implication RAIN)
       1     |          0          |                 1
       0     |          1          |                 0

    There is no row in which both Negation RAIN and Negation RAIN Implication RAIN have the value 1, so the set is unsatisfiable.

    In fact, you can see that Negation RAIN Implication RAIN is equivalent to RAIN. In logical theory, this is called Clavius’ Law.

  4. (RAIN Implication ( SUN Implication RAINBOW)) vDash ((RAIN Conjunction SUN) Implication RAINBOW). We use resolution.

    The task is to show that

    { (RAIN Implication ( SUN Implication RAINBOW)), Negation ((RAIN Conjunction SUN) Implication RAINBOW) }
    is not satisfiable.

    First, we transform to CNF. Recursively applying r₀, we get

    Negation RAIN Disjunction Negation SUN Disjunction RAINBOW
    from
    RAIN Implication ( SUN Implication RAINBOW).

    For the second formula,

    Negation ((RAIN Conjunction SUN) Implication RAINBOW),
    we get
    Negation ( Negation (RAIN Conjunction SUN) Disjunction RAINBOW)
    using r₀ and then
    Negation Negation (RAIN Conjunction SUN) Conjunction Negation RAINBOW
    using r₂. Finally, r₁ gives us:

    RAIN Conjunction SUN Conjunction Negation RAINBOW

    This gives us the sets:

    { Negation RAIN, Negation SUN, RAINBOW }  { RAIN }   { SUN }  { Negation RAINBOW }

    The derivation of { } using resolution is a simple, three-step affair:

    • { Negation RAIN, Negation SUN, RAINBOW} and { RAIN } give us { Negation SUN, RAINBOW}.

    • { Negation SUN, RAINBOW } and { SUN } give us { RAINBOW }.

    • { RAINBOW } and { Negation RAINBOW } give us { }.

  5. Negation (RAIN Implication WIND) vDash RAIN, which we show using truth-tables.

    The aim is to show that { Negation (RAIN Implication WIND), Negation RAIN } is unsatisfiable.

    Here’s the table:

    v(RAIN)  v(WIND)  |  v(RAIN Implication WIND)  |  v( Negation (RAIN Implication WIND))  |  v( Negation RAIN)
       1        1     |             1              |                   0                    |          0
       1        0     |             0              |                   1                    |          0
       0        1     |             1              |                   0                    |          1
       0        0     |             1              |                   0                    |          1

    Since there’s no row where both Negation (RAIN Implication WIND) and Negation RAIN are 1, the set is unsatisfiable.
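
(As an optional cross-check of the five verdicts above, here’s a small brute-force Python sketch. It does not implement the resolution procedure from the course; it simply enumerates all valuations and declares an inference valid iff no valuation makes all premises true and the conclusion false. The names implies and valid are our own.)

from itertools import product

def implies(a, b):
    return (not a) or b

def valid(premises, conclusion, names):
    # Valid iff { premises, negated conclusion } is unsatisfiable,
    # i.e. no valuation makes every premise true and the conclusion false.
    for values in product([False, True], repeat=len(names)):
        v = dict(zip(names, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return False  # counter-model found
    return True

inferences = [
    ([lambda v: implies(v["RAIN"], v["WIND"]), lambda v: not v["RAIN"]],
     lambda v: not v["WIND"], ["RAIN", "WIND"]),
    ([lambda v: implies(v["RAIN"], v["WIND"])],
     lambda v: implies(not v["WIND"], not v["RAIN"]), ["RAIN", "WIND"]),
    ([lambda v: implies(not v["RAIN"], v["RAIN"])],
     lambda v: v["RAIN"], ["RAIN"]),
    ([lambda v: implies(v["RAIN"], implies(v["SUN"], v["RAINBOW"]))],
     lambda v: implies(v["RAIN"] and v["SUN"], v["RAINBOW"]),
     ["RAIN", "SUN", "RAINBOW"]),
    ([lambda v: not implies(v["RAIN"], v["WIND"])],
     lambda v: v["RAIN"], ["RAIN", "WIND"]),
]

for i, (premises, conclusion, names) in enumerate(inferences, start=1):
    print(i, "valid" if valid(premises, conclusion, names) else "invalid")
# Expected output: 1 invalid, 2 valid, 3 valid, 4 valid, 5 valid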

Valid inference and conditionals  

There’s a deep connection between deductively valid inference in Boolean logic and material conditionals, which is given by the following important equivalence:

P₁, P₂, … vDash C  if and only if   not-SAT{ Negation ((P₁ Conjunction P₂ Conjunction … ) Implication C)}
  1. A logical formula A is called a logical truth iff for all assignments v of truth-values to its propositional variables, the formula is true, i.e. v(A) = 1. Verify that the simple formula

    (RAIN Disjunction Negation RAIN)
    is a logical truth in this sense.

  2. Rephrase the right-hand side of the above equivalence in terms of logical truth rather than unsatisfiability.

  3. Give an argument that the above equivalence is true.

    Hint: To do so, you need to use the general form of the reduction of valid inference to unsatisfiability, which we’ve discussed in the lecture:

    P₁, P₂, … vDash C  if and only if   not-SAT { P₁, P₂, … , Negation C }
    Think about what the latter condition means for the truth of the corresponding conditional.

Solution
  1. We could do a truth-table, but let’s do a step-by-step calculation instead, where we go through the two possibilities: v(RAIN) = 1 or v(RAIN) = 0:

    • If v(RAIN) = 1, then
      v(RAIN Disjunction Negation RAIN) = v(RAIN) OR v( Negation RAIN) = …
      … = v(RAIN) OR (NOT v(RAIN)) = 1 OR (NOT 1) = 1 OR 0 = 1.

    • If v(RAIN) = 0, then
      v(RAIN Disjunction Negation RAIN) = v(RAIN) OR v( Negation RAIN) = …
      … = v(RAIN) OR (NOT v(RAIN)) = 0 OR (NOT 0) = 0 OR 1 = 1.

    So, in all possible cases, we have v(RAIN Disjunction Negation RAIN) = 1.

  2. First, note that not-SAT{ Negation ((P₁ Conjunction P₂ Conjunction … ) Implication C)} means that the formula Negation ((P₁ Conjunction P₂ Conjunction … ) Implication C) is unsatisfiable, meaning it has value 0 under every valuation. But the formula starts with a Negation and so

    v( Negation ((P₁ Conjunction P₂ Conjunction … ) Implication C)) = NOT v((P₁ Conjunction P₂ Conjunction …) Implication C)
    But if we know that this expression evaluates to 0 under each valuation, this means that v((P₁ Conjunction P₂ Conjunction … ) Implication C) = 1 under each valuation. In other words,
    (P₁ Conjunction P₂ Conjunction … ) Implication C
    is a logical truth. This gives us an alternative criterion for valid inference according to which:
    P₁, P₂, … vDash C  if and only if (P₁ Conjunction P₂ Conjunction … ) Implication C is a logical truth
    This criterion shows the particularly deep connection between valid inference and conditionals.

  3. This is the hardest part and requires more advanced logical reasoning. One way to proceed is to start from the known criterion that

    P₁, P₂, … vDash C  if and only if   not-SAT { P₁, P₂, … , Negation C }.
    Let’s think about not-SAT { P₁, P₂, … , Negation C }. This means that for each valuation, either v(P₁) = 0, v(P₂) = 0, … , or v( Negation C) = 0. Using transformations, we can see that (P₁ Conjunction P₂ Conjunction …) Implication C is equivalent to Negation P₁ Disjunction Negation P₂ Disjunction … Disjunction C. That is:
    v((P₁ Conjunction P₂ Conjunction … ) Implication C) = v( Negation P₁ Disjunction Negation P₂ Disjunction … Disjunction C).
    Using the recursive rules, we get:
    v( Negation P₁ Disjunction Negation P₂ Disjunction … Disjunction C) = (NOT v(P₁)) OR (NOT v(P₂)) OR … OR v(C)
    But if v(P₁) = 0, then
    (NOT v(P₁)) OR (NOT v(P₂)) OR … OR v(C) = …
    … = (NOT 0) OR (NOT v(P₂)) OR … OR v(C) = …
    … = 1 OR (NOT v(P₂)) OR … OR v(C) = 1
    Similarly, if v(P₂) = 0, then
    (NOT v(P₁)) OR (NOT v(P₂)) OR … OR v(C) = …
    … = (NOT v(P₁)) OR (NOT 0) OR … OR v(C) = …
    … = (NOT v(P₁)) OR 1 OR … OR v(C) = 1
    And so on. Finally, if v( Negation C) = 0, then v(C) = 1 and so
    (NOT v(P₁)) OR (NOT v(P₂)) OR … OR v(C) = …
    … = (NOT v(P₁)) OR (NOT v(P₂)) OR … OR 1 = 1.
    Since these are all the possibilities if not-SAT { P₁, P₂, … , Negation C }, we know that v((P₁ Conjunction P₂ Conjunction … ) Implication C) = 1 for all valuations. Conversely, if v((P₁ Conjunction P₂ Conjunction … ) Implication C) = 1 for all valuations, then not-SAT { P₁, P₂, … , Negation C }: otherwise, there would be a valuation making all of P₁, P₂, … and Negation C true, and under that valuation we would have v((P₁ Conjunction P₂ Conjunction … ) Implication C) = 0, contradicting our assumption.
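
(Here’s an optional Python sketch illustrating the equivalence on inference 2 from the previous exercise: the set { RAIN Implication WIND, Negation ( Negation WIND Implication Negation RAIN) } is unsatisfiable exactly when the conditional from the premise to the conclusion is a logical truth. The variable and function names are our own.)

from itertools import product

def implies(a, b):
    return (not a) or b

def valuations(names):
    for values in product([False, True], repeat=len(names)):
        yield dict(zip(names, values))

names = ["RAIN", "WIND"]

def premise(v):
    return implies(v["RAIN"], v["WIND"])

def conclusion(v):
    return implies(not v["WIND"], not v["RAIN"])

# not-SAT { premise, Negation conclusion }: no valuation makes the premise
# true and the conclusion false.
unsat = not any(premise(v) and not conclusion(v) for v in valuations(names))

# The corresponding conditional (premise Implication conclusion) is a
# logical truth: it is true under every valuation.
logical_truth = all(implies(premise(v), conclusion(v)) for v in valuations(names))

print(unsat, logical_truth)  # True True: the two criteria agree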

Chaining  

Consider the following KB:

  • RAIN Implication CLOUDS
  • (CLOUDS Conjunction SNOW) Implication STORM
  • RAIN Implication PUDDLES
  • PUDDLES Implication HUMID
  • HUMID Implication CLOUDS
  • SUN Implication DRY
  • (WIND Conjunction SNOW) Implication DRIFTING

We add to this KB the following two facts:

RAIN, SNOW
  1. Run the forward-chaining and the backward-chaining algorithm to show that we can derive STORM from the KB. That is, describe the steps you’d take for each algorithm one-by-one, and why at some point you hit the termination condition.

  2. Use the example to illustrate how forward-chaining can find shorter derivations than backward-chaining.

  3. Use both forward and backward-chaining to show that we can’t derive DRIFTING from the KB using the facts. Does one algorithm outperform the other?

Solution
  1. Our goal is to derive STORM. The facts are RAIN and SNOW. First, we use forward chaining:

    • So, in the first iteration, we run through all the conditionals and see if we can derive anything from those facts using genMP. We come across the two conditionals RAIN Implication CLOUDS and RAIN Implication PUDDLES. We derive CLOUDS and PUDDLES and add them to our facts. But our goal is not reached.

    • So, in the second step, the facts are RAIN, SNOW, CLOUDS, and PUDDLES. Again, we check the conditionals for possible MP applications and find (CLOUDS Conjunction SNOW) Implication STORM and PUDDLES Implication HUMID. We derive both STORM and HUMID. Our goal is reached and we terminate the search.

    Next, we use backward chaining:

    • Our goal is STORM, so we inspect the conditionals until we find one that contains STORM as the consequent. We find (CLOUDS Conjunction SNOW) Implication STORM. We recognize that SNOW is already among our facts, so we replace the goal STORM temporarily with CLOUDS. Since we still have goals, we continue.

    • We inspect the rules for one with CLOUDS in the consequent and find RAIN Implication CLOUDS. Since RAIN is among our facts, we have no goals left and terminate the search.

    Both algorithms lead to the same result and, in fact, give the same derivation.

  2. In the forward-chaining algorithm, there were no choices involved and we simply looked through all chainings of MP by length until we found one. Since we went through the derivations by length starting with the shortest derivations, we were guaranteed to come across the shortest derivation first (if there is one).

    For backward-chaining, finding this particular derivation depended on the order in which we looked through the rules. If, for some implementation reason, we had first come across HUMID Implication CLOUDS in the second step, we would have added HUMID to our goals rather than RAIN. Then we’d have continued for two more iterations, going through PUDDLES Implication HUMID and RAIN Implication PUDDLES, until we hit a known fact. This would have led to a much longer derivation. This means that with backward-chaining, whether we come across the shortest derivation first depends heavily on external factors, such as the ordering of the conditionals in our KB.

  3. To test this with forward-chaining, we go through all possible derivations. We’ve described the first two steps above, which gave us CLOUDS, PUDDLES, STORM, and HUMID. Continuing further, nothing new can be derived: HUMID Implication CLOUDS only yields CLOUDS, which we already have, and SUN Implication DRY and (WIND Conjunction SNOW) Implication DRIFTING can’t be applied because SUN and WIND aren’t among our facts. At this point, we have RAIN, SNOW, CLOUDS, PUDDLES, STORM, and HUMID among our facts and can’t apply genMP anymore. Since DRIFTING isn’t among these facts, we conclude it can’t be derived.

    With backward-chaining, instead, we check for conditionals involving DRIFTING in the consequent and only find (WIND Conjunction SNOW) Implication DRIFTING. This adds WIND to our goals, since SNOW is already a fact. In the second iteration, we can’t find a conditional that has WIND in the consequent, so we terminate our search and conclude that DRIFTING can’t be derived.

    Here, backward-chaining was far more efficient. This is because forward-chaining needs to go through all possible derivations to determine whether there is one that derives our desired goal, whereas backward-chaining is more “surgical”: it only looks at promising candidates and terminates as soon as there are none left. (A small code sketch of both algorithms follows below.)
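
(Here’s an optional Python sketch of both algorithms on this KB. The rule representation and the function names forward_chain and backward_chain are our own, not the course’s code; rules are pairs of a premise set and a consequent.)

KB = [
    ({"RAIN"}, "CLOUDS"),
    ({"CLOUDS", "SNOW"}, "STORM"),
    ({"RAIN"}, "PUDDLES"),
    ({"PUDDLES"}, "HUMID"),
    ({"HUMID"}, "CLOUDS"),
    ({"SUN"}, "DRY"),
    ({"WIND", "SNOW"}, "DRIFTING"),
]

def forward_chain(kb, facts, goal):
    facts = set(facts)
    while goal not in facts:
        # One iteration: fire every conditional whose premises are all
        # among the facts (genMP) and collect the new consequents.
        new = {head for body, head in kb if body <= facts and head not in facts}
        if not new:
            return False  # nothing new derivable, so the goal is unreachable
        facts |= new
    return True

def backward_chain(kb, facts, goal, seen=frozenset()):
    if goal in facts:
        return True
    if goal in seen:
        return False  # don't chase a goal we're already working on
    # Try every conditional with the goal as consequent; its premises
    # become the new subgoals.
    return any(
        all(backward_chain(kb, facts, sub, seen | {goal}) for sub in body)
        for body, head in kb
        if head == goal
    )

facts = {"RAIN", "SNOW"}
print(forward_chain(KB, facts, "STORM"), backward_chain(KB, facts, "STORM"))        # True True
print(forward_chain(KB, facts, "DRIFTING"), backward_chain(KB, facts, "DRIFTING"))  # False False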

Planning

We’ve made things a bit more difficult for IA by introducing a third block into the puzzle:

Adjust our planning solution to accommodate the more complicated setup. That is:

  1. Determine how we need to adjust the language to accommodate the third block.

  2. Which rules do we need to add to our KB to accommodate the third block?

  3. Represent the initial setup state and the goal state in the language.

  4. Find a model that satisfies the KB, as well as the setup and goal state. Then read off a course of action. You don’t need to do this formally—using resolution or chaining—but just find such a model using human intelligence.

Discussion

Check out the Wason selection task on Wikipedia.

Some researchers have argued that the experiment shows that people don’t reason with the material conditional in this case. Do you agree? Why?