
Should I Take The Key With Me?

Abstract 

Starting from a simple example, I argue that there are cases in which two equally plausible and important principles of practical rationality may conflict. The two principles are:

(R1) One should choose the option with the highest expected utility;

(R2) If an option is believed by the actor to be superfluous, then one should choose another option (which differs in this respect).

Since there is no obvious way to rationally decide for one and against the other principle, we have a paradox here. It is similar to, but not identical with, Newcomb’s problem. Our paradox is an important one, and it is hard to see what a solution could look like.

The Example

I have just moved out of my office and handed over the key to Jones. As we leave the building, I wonder whether I “really” did lock the office door. I am quite sure that I did lock the office door but I am not absolutely sure. Since I want to be “really” sure, I decide to go back and check the door. Otherwise, I would keep on worrying about the door for some time. Jones has already stored the key in the bottom part of his backpack. He offers to get the key out of the backpack. I decline: I believe that most probably I did lock the door and I do not want to bother Jones just because of some remote possibility. However, if I should go back without the key and find the door unlocked, I would need to return to Jones, ask him for the key and go back to the door a second time. Jones tells me that I am being irrational: If I take the possibility of not having locked the door seriously, then I should take the key with me in order to be able to lock the door just in case I did not do so before we left the building. Is Jones right? Is my refusal to take the key with me irrational? I will start with an argument for a negative answer and then present an argument for a positive answer. Since both answers are equally good ones, we seem to have a paradox here.

 

I.

Suppose my options are the following ones:

A) I do not go back;
B) I go back without the key (if I find the door locked, I get back to Jones and we continue our walk; if I find the door unlocked, I go back to Jones, get the key and return a second time to the office to finally lock the door, then get back to Jones to continue our walk);
C) I go back with the key (if I find the door locked, I get back to Jones and we continue our walk; if I find the door unlocked, I lock it, get back to Jones to continue our walk).

Suppose further that:  

  • my subjective probability that I did not lock the door is .1;
  • my subjective (dis-)utility of leaving the door unlocked and keeping on worrying about the door for some time is -18;
  • my subjective utility of not going back but keeping on worrying about the door for some time even though it is locked is -3;
  • my subjective utility of making the effort to go back once and of letting Jones wait is -1;
  • my subjective utility of going back to the door twice and of bothering Jones with the key is -7;
  • my subjective utility of making the effort to go back once, of letting Jones wait and of bothering him with the key is -6.
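On the natural reading of this list, the utilities attach to the option-state pairs as in the following payoff matrix (the probabilities .9 and .1 are those of the door being locked and unlocked, respectively):

                               Door locked (p = .9)    Door unlocked (p = .1)
A (do not go back)                      -3                      -18
B (go back without the key)             -1                       -7
C (go back with the key)                -6                       -6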

Given these (not unrealistic) assumptions, the expected utilities of my options are as follows:

A: -4.5;

B: -1.6;

C: -6.
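These values are obtained in the usual way, by weighting the utility of each outcome by its probability:

EU(A) = .9 × (-3) + .1 × (-18) = -2.7 + (-1.8) = -4.5
EU(B) = .9 × (-1) + .1 × (-7) = -0.9 + (-0.7) = -1.6
EU(C) = .9 × (-6) + .1 × (-6) = -6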

Hence, the best thing for me to do would be to go back without the key; going back with the key is even worse than not going back at all. B is better than A and A is better than C (B > A > C). Rationality thus demands that I go back without the key.

 

II.

This, however, contradicts a second argument put forward by Jones (see above). According to this argument, choosing B is blatantly irrational. Jones argues that if I go back, then I should take the key with me; going back and not taking the key with me is even worse than not going back at all. According to Jones, C is better than A and A is better than B. Jones thus argues for the “opposite” ordering of the options. But what exactly is the second argument?

 

According to Jones, plan B is problematic in a special way because its first part[1] (going back without the key) is. There are exactly two circumstances here: Either the door is already locked or it is unlocked. If it is already locked, then my going back (with or without the key) is superfluous. If the door is unlocked, then my going back without the key is superfluous, too, since without the key I cannot lock it. Whatever the circumstances, my action of going back without the key is superfluous. And I know all that. Thus, my going back without the key “does not make sense” insofar as

  • The actor believes that his action is superfluous.

Since it is irrational for an actor to perform an action that he believes to be superfluous, it is irrational to choose B. Hence one should rather choose A or C.[2] Rational actions must fulfil the “condition of making sense”: The actor must not believe that his action is superfluous. To be sure, it does not help to point out that only the first part of plan B suffers from this defect: No other part of the plan gives me a reason to stop regarding the action proposed by the first part as superfluous.

 

III.

But what then? The first argument led to the conclusion that I should rather go back and not take the key with me than go back and take the key with me. The second argument led to the opposite conclusion: that I should rather go back and take the key with me than go back and not take the key with me. The two conclusions contradict each other, but both arguments seem (equally) convincing. Hence, we have a paradox here. How can we resolve it?

 

Let us look at the source of the contradiction. It seems that the two arguments rely upon different and conflicting principles of rationality. The argument for not taking the key with me relies on a principle that takes subjective probabilities into account:

(R1) One should choose the option with the highest expected utility.[3]

The argument for taking the key with me, however, relies on a principle of rationality that does not take subjective probabilities into account: 

(R2) If an option violates the “condition of making sense”, then one should choose another option (which does not violate this condition).[4]

In a way, our problem is similar to Newcomb’s problem: There are cases in which two (equally) plausible principles of rationality conflict. Furthermore, one of the principles is the principle of expected utility, whereas the other principle does not take probabilities into account at all. And as in the case of Newcomb’s problem, it is hard to see how the conflict can be resolved. If rationality is a coherent notion (and we had better hope it is), then there must be a solution. But what is it?

I have no idea what this shows.[5]

Peter Baumann

[1]             The first part of a plan need not be the first of two or more stages in the realization of the plan. If I find the door locked, then there is no further stage in the realization of my plan (there are further stages only if I find the door unlocked). This illustrates the fact that parts of plans are not the same as stages of plans.

[2]             Neither A nor C suffers from the above defect. – One can consider this defect to be a violation of the requirements of instrumental rationality.

[3]             If there are two or more options with the highest expected utility value, the principle must be modified:

(R1′) One should choose an option such that there is no other option with a higher expected utility.

For the sake of simplicity, I disregard this complication here.

[4]             There always is another such option – if only the option of not choosing a given option. – (R2) does not tell us which of several other options to choose but it is sufficient for our purposes here.

[5]             For discussion and comments I am grateful to Abdul Raffert, Thomas Schmidt, Barry Smith, Kyaw Tun, and Truls Wyller.