

Good Books?

Would it be worth having a section in the article on what books are good for such puzzles? For example, I like "aha! Insight" (ISBN: 0-7167-1017-X) and "aha! Gotcha" (ISBN: 0-7167-1361-6) by Martin Gardner, which are both great books with cool cartoons that talk out these problems.

Another excellent book is "What is the title of this book?" by Smullyan. 16:12, 18 February 2009 (UTC)

Merging duplicates

Some puzzles are equivalent (like "Two Beagles" and "Two Dice", or "Ball and Balance" and "16 cubes"). Maybe they should be merged. --Derari 18:56, 20 July 2009 (UTC)

x---------------- My contribution starts here --------------x----------------

Let's keep it simple and start with a set X with n elements. Its power set will have 2^n elements, including the null set and X itself. Now, sets A and B are chosen randomly from the power set of X, and for A to be a subset of B, n(A) (i.e. the number of elements in A) must be less than or equal to n(B) (i.e. n(A) <= n(B)).

Now, there can be (2^n)^2 combinations of A and B from the power set of X. To understand this, consider X = {1,2,3} and P(X) = {phi, {1}, {2}, {3}, {1,2}, {2,3}, {1,3}, {1,2,3}} (phi means the null set). If we are asked to choose A and B from P(X), then we can choose (A,B) as the following pairs: (A,B) = (phi, phi), (phi, {1}), (phi, {2}), (phi, {3}), (phi, {1,2}), (phi, {1,3}), (phi, {2,3}), (phi, {1,2,3}), ... and so on. There can be 64 such pairs ((2^3)^2 = 4^3 = 4^n with n = 3, the number of elements in X). This explains the 4^n denominator of the solutions proposed before me.

Now we need to find the numerator. Let's go step by step. Out of all the elements in the power set of X, if we say A = phi, then A is a subset of B irrespective of the elements in B (yes, even if B = phi, because the null set is a subset of the null set). So B can assume 8 values in the above example (all the possible values in the power set of X). Thus if A = phi, we have 2^n cases where A is a subset of B. Now consider a case with A containing 1 element. In the above example, A can assume any of the following values: A = {1} or {2} or {3}. Now B can certainly not be phi, as B must also have at least 1 element. B can have 1, 2, or 3 elements. With 1 element, there is only 1 possibility: if and only if A and B have the same element can A be a subset of B. And there are n possible ways in which one can choose 1 element from a set X of n elements. Now, if B has 2 elements, B can assume the values {1,2} or {1,3} (for A = {1}). Extrapolating this to the general case of a set X with n elements, we can say that for each such A there are (n-1)C1 = (n-1) (i.e. (n-1)! / ((n-2)! 1!)) possible choices of B with 2 elements where A is a subset of B.

Following the above process step by step, we get the following formula for the probability of A being a subset of B:

P = ( 2^n + sum_{k=1..n}{ nCk * sum_{m=0..n-k}{ (n-k)Cm } } ) / 4^n

where n is the number of elements in X.
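For anyone who wants to verify this, here is a short Python check of my own (not part of the original post) that the numerator above collapses to 3^n, so that P = (3/4)^n, matching the other answers on this page:

```python
from math import comb  # comb(n, k) is the binomial coefficient nCk

# Numerator from the formula above:
#   2^n + sum_{k=1..n} nCk * sum_{m=0..n-k} (n-k)Cm
# The inner sum is just 2^(n-k), so the whole thing is
# sum_{k=0..n} nCk * 2^(n-k) = 3^n by the binomial theorem.
for n in range(1, 10):
    numerator = 2 ** n + sum(
        comb(n, k) * sum(comb(n - k, m) for m in range(n - k + 1))
        for k in range(1, n + 1)
    )
    assert numerator == 3 ** n  # so P = 3^n / 4^n = (3/4)^n
```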

x---------------- My contribution ends here --------------x----------------

One more probability puzzle

I think this is the solution of the problem. Writing Card(S) for the cardinality of a set S, we can say that the probability that A is a subset of B is

[Card(B) / N] * [(Card(B) - 1) / (N - 1)] * ... * [(Card(B) - Card(A)) / (N - Card(A))]

Are we supposed to pick A and B randomly from the subsets of X? If so, it's (3/4)^N. For each element of X, there are equal numbers of subsets of X that contain it and that do not, so the chance of that element being in any given subset is 1/2. For each element, there's a 1/2 chance that it's not in B, and an independent 1/2 chance it's in A, so there's a 3/4 chance that the element does not make A not a subset of B. The elements are independent of each other, so the chance that all of them are okay is (3/4)^N.
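This claim is easy to brute-force for small N; the following Python sketch (mine, not the original poster's) enumerates every pair of subsets:

```python
from itertools import combinations

def subset_probability(n):
    """Probability that a random subset A of an n-element set is a subset
    of an independently chosen random subset B (uniform over all subsets)."""
    elems = range(n)
    subsets = [frozenset(c) for r in range(n + 1) for c in combinations(elems, r)]
    hits = sum(1 for a in subsets for b in subsets if a <= b)  # a <= b tests A ⊆ B
    return hits / len(subsets) ** 2

for n in range(1, 8):
    assert abs(subset_probability(n) - 0.75 ** n) < 1e-12
```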

I don't understand why an equal number of subsets would contain a given element as would not. For example, only two subsets of {1; 2; 3} contain {1; 2} (just {1; 2} and {1; 2; 3}), whereas six do not ({}, {1}, {2}, {3}, {1; 3}, and {2; 3}).
{1; 2} is not an element of {1; 2; 3}. Considering an element a, there obviously is a single identical subset except with a for every subset without a, and vice versa. --Nix
Yeah, I misread your proof. This makes a lot of sense. 01:08, 14 April 2009 (UTC)
Should this really be on this list?

Well let's see. In general, the number of subsets of a set is of course the cardinality of the power set, which is 2^n, and therefore the number of possibilities for A and B is 2^(2n). This is the sample size. A given one of those subsets, which we label B, has 2^b subsets, where b is the number of elements in B. Now, obviously 0 <= b <= n, and the number of subsets with b elements is n!/[b!(n-b)!], so we take the sum from b = 0 to n of n!/[b!(n-b)!], and this is the number of successful combinations of A and B. So the probability is:

sum_{b=0..n}{ n! / (2^(2n) b! (n-b)!) }

This looks pretty awful, though. Is there a simplification I missed? 19:51, 14 March 2009 (UTC)

The sum is missing the 2^b, so you're just counting all possible subset Bs once (ignoring A), and the sum you gave always comes to 1/2^n, although I don't know how to prove that directly from the sum. Completed with the missing 2^b, I still don't know how to simplify it, other than from the other solution. So it should simplify to (3/4)^n. --Nix
I know from looking at n = 1, 2, 3, 4, and 5 that it clearly should be (3/4)^n, though I don't know how to prove that directly. Assuming you're right and my sum should include the 2^b, I get:
(n! / 4^n) * sum_{b=0..n}{ 2^b / (b! (n-b)!) }

But how to simplify this I still have no clue. But you're right, it should equal (3/4)^n, which I feel like I could prove if it weren't 2:30 AM right now. 06:21, 16 March 2009 (UTC)

To simplify it, you use the binomial theorem. -- 08:12, 20 March 2009 (UTC)
Thanks. That transforms it into:
(1/4^n) * sum_{b=0..n}{ 2^b * nCb }

Now we know that:

sum_{b=0..n}{ x^b * y^(n-b) * nCb } = (x + y)^n

So, setting x = 2 and y = 1, we get:

(1/4^n) * sum_{b=0..n}{ 2^b * nCb } = (1/4^n) * (2 + 1)^n = (3/4)^n

as we expected. Therefore the answer indeed is that the probability is (3/4)^n. 20:01, 21 March 2009 (UTC)
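The thread's conclusion is easy to verify numerically; here is a short Python check of my own (math.comb plays the role of nCb):

```python
from math import comb  # comb(n, b) is the binomial coefficient nCb

# sum_{b=0..n} 2^b * nCb = 3^n (binomial theorem with x = 2, y = 1),
# so dividing by 4^n gives (3/4)^n, as derived above.
for n in range(1, 12):
    total = sum(2 ** b * comb(n, b) for b in range(n + 1))
    assert total == 3 ** n
    assert abs(total / 4 ** n - 0.75 ** n) < 1e-12
```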

Simpler solution: There is a 1/4 chance that any given element of X is in A but not B. A is a subset of B if and only if no such elements exist, so the chance of this is (3/4)^n, Q.E.D. Ravi12346 01:59, 27 April 2009 (UTC)

Can you explain a little further? How do you know that there is a 1/4 chance? How is the complement (3/4) raised to the nth power?

For each element of X, there are four possibilities:

1.) it is in A but is not in B,
2.) it is not in A, but is in B,
3.) it is in both A and B, or
4.) it is in neither

As explained above, A is a subset of B if and only if each element satisfies one of the last three possibilities. Since there is a 3/4 chance of this occurring for each element, the probability that all elements satisfy this is (3/4)^n.

I'm not sure how to simplify it but the answer I got was: Sum of (i+1)/(n+1) from 0 to n all divided by n+1. Does this come to the same answer?-- 13:21, 28 May 2014 (EDT)

I have not yet worked out a solution; however, the solution proposed here fails a basic and elementary case: n = 1. When n = 1, A will always be a subset of B: A can only be {1}, and B can only be {1}, therefore A is always a subset of B. This seems to throw a wrench into (3/4)^n, no? --

No, it doesn't. A can be either {1} or the empty set, B can be likewise either {1} or the empty set. A will be a subset of B in 3 out of 4 cases, there's still the fourth case when B is empty and A isn't... -- CrystyB 15:15, 30 June 2010 (UTC)

Dice Rolls

'Just to clear this up: the correct solution is 275/1296, and any other answer is the result of an error in logic or a faulty reading of the problem. Claims that the problem is ambiguous are contradicted by the fact that people can write simulations of the game which solve the problem without running into these ambiguities, and claims that the answer is not 275/1296 are contradicted by the results of these simulations. Anyway, on with discussions!'

Here's a solution that has not yet been posted, and leads to yet a different answer. Oy vey.

Let's name two events:

  • A = Bob wins
  • B = Bob rolls a six on his second turn *and* the game gets far enough for him to do so (mind, this is different from "given")

We wish to calculate neither of these things, but rather the conditional probability P(B|A), the probability of B given A. This can be done by using Bayes' theorem, which states: P(B|A) = P(A|B) P(B) / P(A)

--- Alternatively, we can simply use the definition of conditional probability: P(B|A) = P(B&A) / P(A). In this case, B implies A, so P(B&A) == P(B).

So we have three things to calculate: P(A), P(B), and P(A|B), then we can calculate the answer.

P(A|B) is just 1, because Bob rolling a six on his second turn implies that he wins.

P(B) is the number we know and loathe: (5/6)(5/6)(5/6)(1/6) = 125/1296

P(A) is the probability of any sequence that looks like this: *6, ***6, *****6, ..., where * represents any number other than 6. These sequences are all mutually exclusive, so we can just sum them up:

 sum_{n=0..inf}{ (5/6)^(2*n+1) * (1/6) }
 = sum_{n=0..inf}{ (5/6)^(2*n) * (5/6)(1/6) }
 = sum_{n=0..inf}{ (25/36)^n * (5/6)(1/6) }
 = (5/6)(1/6) sum_(n=0..inf){ (25/36)^n }

The geometric series theorem gives us that sum_{n=0..inf}{ (25/36)^n } = 1/(1-25/36) = 36/11. So P(A) = (5/6)(1/6)(36/11) = 5/11.

And thus, the probability we are looking for, P(B|A) = 1*(125/1296)/(5/11) = 275/1296, which is about 0.212.
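The arithmetic above can be reproduced exactly with Python's fractions module (a sketch of my own, following the Bayes calculation step by step):

```python
from fractions import Fraction as F

P_B = F(5, 6) ** 3 * F(1, 6)               # Sue misses, Bob misses, Sue misses, Bob hits
P_A = F(5, 6) * F(1, 6) / (1 - F(25, 36))  # geometric series for "Bob wins"
P_B_given_A = P_B / P_A                    # P(A|B) = 1, so P(B|A) = P(B) / P(A)

assert P_B == F(125, 1296)
assert P_A == F(5, 11)
assert P_B_given_A == F(275, 1296)         # ~0.212
```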

I made a simulation here: [1]

The answer is not 275/1296. The answer is 125/1296.

Not really sure why this is so difficult...i've been reading posts here talking about the semantics of "oh but what does it really MEAN to say that sue rolls first" and blah blah blah..

listen! it means that SHE ROLLS FIRST!....far out!

sue rolls a 1,2,3,4 or 5 ( 5/6 ) bob rolls a 1,2,3,4 or 5 ( 5/6 ) sue rolls a 1,2,3,4 or 5 ( 5/6 ) bob rolls a 6 ( 1/6 )

probability = (5/6) * (5/6) * (5/6) * (1/6) the game is over... Sue rolled first! ... Bob rolled a six on his second turn... what's so hard about this?

to all the people that made computer simulations i say this... just because you've made a simulation does NOT make you correct... you've misunderstood the problem and implemented an incorrect solution. [Reply: No, you've misunderstood the problem].

if you REALLY are not convinced... forget a computer simulation... get a freaken pencil and paper and write out all the possible combinations of rolls and then count the number of combinations that end up with bob rolling a 6 on his second turn.

--- Yep, Sue rolls first. But you missed the point that it's only the games that Bob wins that matter. That make it 275/1296 ---

Here's an easier way to get P(A):

  • C = Sue wins

Given that Sue rolls 1-5 on the first roll (5/6 chance), Bob's chance of winning is the same that Sue's was at the beginning of the game (by symmetry).


P(A) = (5/6) * P(C)

combined with

P(A) + P(C) = 1

solves to

P(A) = 5/11
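The two equations above solve in one line; as a sanity check (my code, not the original poster's):

```python
from fractions import Fraction as F

# Substitute P(C) = 1 - P(A) into P(A) = (5/6) * P(C):
#   P(A) = (5/6) * (1 - P(A))  =>  P(A) = (5/6) / (1 + 5/6)
P_A = F(5, 6) / (1 + F(5, 6))
assert P_A == F(5, 11)
assert F(5, 6) * (1 - P_A) == P_A  # the original pair of equations holds
```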

We believe the solutions posted so far are incorrect. We claim that the answer is in fact 1/6, and this would apply on *any* of Bob's rolls (second, fifth, one-hundredth). Here's the logic:

Let's first count the total number of events in our event space. In Sue's first roll, there are 5 events that allow the game to continue (a 6 ends the game). Now, for every one of these 5 events, Bob can roll any number, but a 6 would end the game prematurely, so only 5 of Bob's first-roll outcomes continue the game. So far, we have 5*5 events that keep the game going. Now Sue rolls a second time. Again, only 5 out of 6 events allow the game to continue, increasing the total to 5*5*5. Finally, Bob rolls a second time. Since this is the final level which we are looking for, we can accept any of the 6 outcomes (a 6 will result in a win, and anything else would keep the game going, but the question is only up to Bob's second roll). So the total number of events in our event space is 5*5*5*6. This logic can be extended to any roll beyond Bob's second. Let me sum it up as follows:

Sue's 1st roll = 5 events that keep the game going (a 6 makes Sue win)
Bob's 1st roll = 5*5 events (a 6 ends the game prematurely)
Sue's 2nd roll = 5*5*5 events (a 6 makes Sue win)
Bob's 2nd roll = 5*5*5*6 (at this level, we can accept all of Bob's outcomes)

Now, of those 5*5*5*6 events, how many result in Bob winning? In the next to last level of this "tree", for each of the 5*5*5 events present in Sue's second roll, only 1 gives Bob a win: a six (the other five make the game keep going, but we don't want this). Therefore, out of our whole event space, there are 5*5*5*1 events that result in Bob winning. Hence, the answer to the question is:

P(Bob wins in second roll) = (5*5*5*1) / (5*5*5*6) = 1/6

Weird, huh? It's simply the probability of rolling a six on a single die roll. Notice that the answer would be the same for any of Bob's rolls, since we just keep adding 5s in both the numerator and denominator of the fraction.
What do you think of this?
(Posted by Meithan and Tlamatini)

  • I think that answers "What are the odds Bob wins on his second roll, given that Bob makes his second roll?", which was not the question. *ABC*
  • That's only semantics, and as I understand the question, it assumes Bob makes his second roll. This is just a difference in how the question, stated in common language, is interpreted. We would need to ask the creator of the problem what he meant.
  • In fact, we've been thinking it over. The phrase "Bob rolls a 6 before Sue" means that Bob won the game, of course. But this happens only once in the event space. The question is: "what is the probability this happened on Bob's second roll". As we understand it, it means it didn't happen in the first roll, so Bob actually got to roll for a second time. The question you have in mind, if phrased in a better way, would be: "What is the probability Bob wins either in his first or second rolls?" (Meithan and Tlamatini)

For Bob to roll a 6 on his second roll, the following sequence must occur (roll outcome in brackets, probability in parens):
Sue's 1st: [1-5] (5/6)
Bob's 1st: [1-5] (5/6)
Sue's 2nd: [1-5] (5/6)
Bob's 2nd: [6] (1/6)

Multiplying, P(Bob rolls a 6 on his 2nd roll) = (5/6)^3*(1/6) = 125/1296

  • While I don't disagree with your math, I think it's an unfair distinction that the problem states Bob rolled a 6 before Sue. It's stated as a given, but it's then being calculated as a condition of the probability. It's like saying, "what is the chance of a coin coming up tails on THIS flip (and oh, by the way, assume you just flipped heads 100 times in a row)". Either we know it was 1-5 on the prior 3 rolls, or we don't. It's badly stated, much like the conveyor-belt runway. 17:04, 11 February 2009 (UTC)

The question isn't 'What are the odds that Bob wins on his second roll,' the question is 'What are the odds Bob wins on his second roll, given that Bob wins?' I don't think it's poorly stated, just trickier than it looks.

  • I solved it the first way, (5/6)^3*(1/6) [What is the probability Bob rolled the 6 on his second turn? - no more assumptions] and with the assumption that Bob wins [What are the odds Bob wins on his second roll, given that Bob wins?]. I don't know if I got the correct answer but it's not that tricky. What I don't understand is why someone would think the answer to be (5/6)*(1/6). *ABC*
    • To answer ABC's question: The problem states that Bob rolled a 6 before Sue. Therefore, every one of Sue's rolls is between 1 and 5; only Bob can roll a 6 before the game ends. So the problem can be rephrased as "What is the probability that Bob first rolls a 6 on his second roll?" That probability is exactly 5/36. [I'm not saying this argument is correct, just that it's intuitive. Probability is deeply weird.]
      • Thanks! *ABC*


I hate these problems, where an ambiguity in the wording leads to two groups of people convinced they have right and incompatible answers. For what it's worth, if the problem means this:

  1. the trial has already taken place,
  2. we know that Bob won, and
  3. we wonder what the probability was that it happened on his second roll,

then the answer is (5/6)*(1/6) = 5/36. I personally think that that's the most sensible interpretation of the wording of the problem, but since Randall wants that not to be the answer, then I think the problem is poorly stated. [Reply: you have only calculated the probability that Bob wins on his second roll for a game that he could lose. What you are told is, is that Bob has won a game. He could have won it on his first, second, third, ... roll. So you need to find the probability of him actually winning a game, at all. You haven't done that part].

  • First, solve for the odds of Bob winning at all: Well, if he wins on his first roll, it's 5/36. If he wins on his second roll, it's 5/(36^2), on the third it's 5/(36^3)... well, looks like a geometric series. It sums nicely to 1/7.

Then, figure out the odds of Bob winning on his second roll: (5/6)^3*(1/6) = 125/1296. Multiply by 7 (dividing by 1/7) equals 875/1296. Finally can get this problem out of my mind.

[Reply. Your probabilities are wrong. Probs for Bob's first, second, third, ... rolls are: (5/6)(1/6), (5/6)^3 (1/6), (5/6)^5(1/6),.... That lot adds up to 5/11.

It seems to me that the sum for Bob's wins comes to (5/11) not (1/7). Treating as a geometric series, the first term is (5/36) for Bob's winning on his first roll. As stated above, the odds of him winning on his second roll are (125/1296). Remember that Sue has a chance to win between each of Bob's rolls. Thus our common ratio is (25/36) not (5/36).

This is also logical. If Bob only had a 1/7 chance in winning, that means that Sue's chances were 6 times greater, merely by going first. Bob's 5/11 chance seems a lot more reasonable.

Then, if you divide Bob's odds of winning the second roll (125/1296) into his odds of winning at all (5/11), you get a final value of (275/1296). That is to say, given that Bob wins the game, there is approximately a 21.22% chance that it was done on his second roll. - E.Meader (who add this to the wrong comment the first time)

What is the probability Bob rolled the 6 on his second turn? - Doesn't it remain constant? Each time someone picks up the die, it's 1/6.

[Reply: You're right in that each roll has a probability of 1/6 of being a 6. But you've not considered the probability of actually getting to the point at which that roll can be taken. For it to be Bob's second roll, the previous three rolls must not have been a 6; the probability of failing to get a 6 with three rolls is (5/6)^3 = 125/216. Then for the next roll to be a 6, the probability is 1/6, as you stated. So the overall probability is 125/216 * 1/6 = 125/1296. However, you now divide by 5/11 to only include games that Bob wins => 275/1296].

E.Meader's result is correct. The probability that the second roller wins is (1/6)sum((5/6) + (5/6)^3 + ...) = (1/6)(5/6)geometric_series(25/36) = (5/36)/(1 - 25/36) = 5/(36 - 25) = 5/11. - Evan


I hate these problems, where an ambiguity in the wording leads to two groups of people convinced they have right and incompatible answers. For what it's worth, if the problem means this:

  1. the trial has already taken place,
  2. we know that Bob won, and
  3. we wonder what the probability was that it happened on his second roll,

then the answer is (5/6)*(1/6) = 5/36. I personally think that that's the most sensible interpretation of the wording of the problem, but since Randall wants that not to be the answer, then I think the problem is poorly stated.

  • That is the correct interpretation, but your answer is incorrect. The explanation at the top that gives 275/1296 is the correct answer for your interpretation. You can't just ignore Sue's flips once you know she didn't win. -- dazmax

I would agree that the problem is really "What are the odds that Bob rolled a six on his second turn GIVEN THAT Bob rolled a six before Sue?" However, I think your math for computing P(Bob rolls a six first) is incorrect.

  • As noted, the probability of Bob rolling a six before Sue and on his second turn is (5/6)*(5/6)*(5/6)*(1/6) = (5/6)^3 * (1/6).
  • However, the probability of Bob rolling a six before Sue is the series of the following probabilities:
    • Sue (not 6), Bob (6) = (5/6) * (1/6)
    • Sue (not 6), Bob (not 6), Sue (not 6), Bob (6) = (5/6)^3 * (1/6)
    • [Sue (not 6), Bob (not 6)] x 2, Sue (not 6), Bob (6) = (5/6)^5 * (1/6)
    • [Sue (not 6), Bob (not 6)] x 3, Sue (not 6), Bob (6) = (5/6)^7 * (1/6)
  • In short, this becomes sum( (5/6)^(2i-1) * (1/6), i = 1 to inf) = (1/6) * sum( (5/6)^(2i+1), i = 0 to inf) = (1/6) * sum( (5/6)^(2i) * (5/6), i = 0 to inf) = (5/36) * sum( (25/36)^i, i = 0 to inf), which is a simple geometric series. That series has sum equal to a/(1-r), where a is the first term and r is the multiplicative part, so that equals (5/36) / (1 - 25/36) = (5/36) / (11/36) = 5/11.
  • As such, the final probability is [(5/6)^3 * (1/6)] / (5/11) = [5^3 * 11] / [5 * 6^4] = (5^2 * 11)/(6^4) = 275/1296. 18:07, 11 February 2009 (UTC)

Simulation shows that the answer 275/1296 is correct.

  • I can vouch for the fact that simulation shows 275/1296 is correct.
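For completeness, here is one such simulation in Python (mine, not one of the simulations referenced above; the trial count is arbitrary):

```python
import random

def play(rng):
    """Play one game; return the 1-based roll number on which a 6 appeared.
    Odd rolls are Sue's, even rolls are Bob's (Sue goes first)."""
    roll = 0
    while True:
        roll += 1
        if rng.randint(1, 6) == 6:
            return roll

rng = random.Random(0)  # seeded for reproducibility
bob_wins = bob_second = 0
for _ in range(200_000):
    end = play(rng)
    if end % 2 == 0:      # even roll number: Bob rolled the 6
        bob_wins += 1
        if end == 4:      # roll 4 is Bob's second roll
            bob_second += 1

ratio = bob_second / bob_wins
print(ratio)  # should land close to 275/1296 ≈ 0.2122
```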

  • I think I agree that this question is poorly stated. We have, as given information:
    • The rules of the game.
    • The fact that Bob rolled a 6 before Sue. Since this means Bob ended the game, you can restate it as "Sue never rolled a six."
  • Given that, we don't even have to consider the probabilities of Sue's rolls -- we've been told that she never rolled a six. (Consider being asked the question, "I just rolled a die and it didn't come up six. What is the probability that it didn't come up six?") That means the chance that the game ended on any particular one of Bob's turns is just P(he gets to take that turn) * (1/6). But that gets us back to 5/36, which is not supposed to be the answer. I suggest that the wording of the "Bob rolls a 6 before Sue" needs to be adjusted.
    • "I just rolled a die and it didn't come up six. What is the probability that it didn't come up six?" -> 1 in 6!
      • I don't understand this at all. Obviously the probability a fair six-sided die did not come up six is 5/6 (assuming, naturally, exactly one side is labeled "6"). The probability that the die did not come up six given that it did not come up six is obviously 1. I don't know how you get 1 in 6. Also, be careful where you put your exclamation points, as 6! = six factorial = 720. 23:55, 31 March 2009 (UTC)

A more intuitive solution:

Everyone has already solved for "the odds that Bob will win on his second roll" - 125/1296, for "Sue misses, Bob misses, Sue misses, Bob hits" in that order.

The problem is that that answer is the odds that Bob will win on the second turn, out of the set of ALL GAMES. What we want is the odds that Bob will win on his second turn, out of the set of GAMES THAT BOB WON. How do we know how many games Bob will win?

Well, pretend that Sue and Bob roll simultaneously, but that Sue's die is counted first if it is a 6. This gives us 36 possible results. Of those, 6 are wins for Sue, 5 are wins for Bob (he loses the 6-6 tie), and 25 are "roll again". This means that there are 11 possible exit conditions of the loop, all equally likely, and Bob wins 5 of those. Thus, in the set of ALL GAMES, Bob will win 5/11 of them.

Therefore: In all games, Bob wins 125/1296 of them on the second turn. Bob wins 5/11 of all games. Therefore, Bob wins on the second turn in (125/1296) / (5/11) = 275/1296 of all games he wins.

Which is the same answer everyone else is getting. Using a faster method. -John 22:27, 11 February 2009 (UTC)
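John's 36-outcome argument can be checked by direct enumeration (my sketch):

```python
from itertools import product

# Sue and Bob roll simultaneously; Sue's die is counted first.
sue_wins = bob_wins = continues = 0
for s, b in product(range(1, 7), repeat=2):
    if s == 6:
        sue_wins += 1       # includes the 6-6 tie, which Sue wins
    elif b == 6:
        bob_wins += 1
    else:
        continues += 1

assert (sue_wins, bob_wins, continues) == (6, 5, 25)
# 11 equally likely exit conditions, of which Bob wins 5 => P(Bob wins) = 5/11.
```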

^ Thank you for addressing the ambiguity in the problem, which is the only reason anyone's having trouble with it.

The reason I think some people (including me!) get confused: Sue's *first* roll doesn't matter, but her second roll does. The reason the intuitive "Ignore Sue's rolls and just get 5/36 with Bob's!" solution isn't correct is that Sue's *second* roll isn't irrelevant. If you think of the game as "Whenever Sue wins, roll that round again until she doesn't" (which is equivalent to a probability of 1/5 on 1 through 5 and 0 on 6), it should be apparent that of the 36 potential outcomes of Bob's two throws, some of them are more likely than others.

(And it's easy to show that Sue's *first* roll doesn't matter -- you get the same probability if you give Bob the first turn! (When operating under the assumption that Sue loses.))

-- Nicely spotted. You are correct in that the final answer is the same regardless of who rolls first. But the intermediate probabilities are different. If Bob rolls first then the probability of him winning the game is 6/11 (rather than 5/11). The [absolute] probability that he wins a game on his second roll is (5/6)(5/6)(1/6) = 25/216 = 150/1296 (rather than 125/1296). The relative probability of Bob winning on his second roll is now (150/1296)/(6/11) = 275/1296 as before. Clearly it is important that you don't mix the two scenarios midway in the calculations.

I doubt that confused anyone, because I doubt that they even calculated it.

All that I have shown is that it is irrelevant whether Sue rolls first or second, not that Sue is completely irrelevant. Because saying that Sue is completely irrelevant leads to different results (there are several different opinions on this page as to what the answer should be under that assumption), they cannot all be right. To me it is a matter of common sense that mystically declaring Sue irrelevant is almost certainly why they get the wrong answer. I include Sue because Sue really does roll the die in the game. If Sue's rolls were irrelevant, then we'd all get the same answer (assuming no other errors or mistakes are made), whether or not we allowed for her rolls.

An intuitive argument against the (5/6)*(1/6) value:

Consider the first two rolls made by each player. There are 5*6*5*6 = 900 outcomes where Sue doesn't receive a 6, and of those, 5*5*5*1 = 125 where Bob gets a 6 on his second roll, but not his first. This is where the 125/900 = 5/36 figure comes from. However, the value of "900" there both overcounts and undercounts the real sample space. For instance, it doesn't count the state "1, 6, 6, 3" (Sue rolls a 1, Bob rolls a 6, and if they kept going, Sue would roll a 6, and Bob a 3). The extra rolls after the game is finished are included here to keep the probabilities equal (all strings of four rolls have equal probability of occurring... if there's a 6 in there, you simply count how many strings with the same preceding rolls remain in the state space to get the relative probability). Now, this would be ruled out of the state space used by the 5/36 argument... but it counts, as Bob won. On the other side, the state "1, 2, 3, 4" is included in the state space by the 5/36 argument... but it doesn't completely count, as Sue could still win. Specifically, it only counts for a weight of 5/11 - the chance that Bob would go on to win from here (others have already covered this value well enough).

So if we take all those together, our state space, instead of 900 elements in size, is the number of states where Bob wins on the first go, regardless of what Sue would get on her second roll (5*1*6*6=180) plus the number of states where Bob wins on the second go (5*5*5*1=125), plus 5/11 of the states where neither roll a 6 (5*5*5*5*(5/11) = 3125/11). Add these up and we get 6480/11. So our final probability is 125/(6480/11), and not 125/900=5/36. Evaluate that out, and we get 1375/6480 = 275/1296... the same value the Bayesian method gets. Phlip 10:44, 12 February 2009 (UTC)
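Phlip's weighted-state-space arithmetic checks out exactly; here is my verification with Python's fractions module:

```python
from fractions import Fraction as F

bob_first  = 5 * 1 * 6 * 6       # Bob wins on his 1st roll; two "extra" rolls free: 180
bob_second = 5 * 5 * 5 * 1       # Bob wins on his 2nd roll: 125
undecided  = F(5 ** 4) * F(5, 11)  # no 6 in four rolls, weighted by Bob's 5/11: 3125/11

total = bob_first + bob_second + undecided
assert total == F(6480, 11)
assert F(bob_second) / total == F(275, 1296)
```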

A second, intuitive argument to show that Sue's second roll matters:

Imagine if instead of the second player, Bob was the 1001st player. In order for Bob to get a turn, 1000 other people have to NOT roll a 6.

Meaning, in order for Bob to get a SECOND turn after failing to win in his first turn, 1000 people must again not roll a 6.

If you run this long enough, Bob will win a couple - but, of those wins, how many will come up when he beat the (1/6)*(5/6)^1000 odds, and how many will come because he bucked the (1/6)*(5/6)^2001 odds?

This should make it clear that the other players DO affect when Bob's wins will come, even if we take only the subset of games where Bob won. -John 17:27, 12 February 2009 (UTC)

To the 5/36 people:

You are understanding the problem and its wording correctly; the problem with your argument is the false (but reasonable-sounding) assumption that you can ignore Sue's rolls just because she lost. If this doesn't make intuitive sense, consider the following modified problem:

Let's say instead of rolling one die, Sue rolls 100 dice and picks the best roll. Let's also assume that we don't know who won. Since Sue goes first, the odds of Bob winning on the first round are (5/6)^100 * 1/6 = 2.01244558 × 10^-9, while the odds of Bob winning on the second round are (5/6)^200 * 5/6 * 1/6 = 2.0249686 × 10^-17. This means Bob is roughly 100,000,000 times more likely to win on his first roll than his second roll (intuitively, he might have dodged Sue's 100 dice once, but it's pretty inconceivable that he dodged them again). Now, let's assume we know Bob won. This doesn't change the fact that he's still 100,000,000 times more likely to win on his first roll than his second roll. This means that, assuming Bob won, the probability that he won on his second roll is roughly 0.000001%.
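The two figures quoted above for the 100-dice variant can be checked directly (my sketch, using the probabilities as defined in that paragraph):

```python
# Bob's win probabilities in the 100-dice variant described above.
p_first  = (5 / 6) ** 100 * (1 / 6)            # Bob wins on his first roll
p_second = (5 / 6) ** 200 * (5 / 6) * (1 / 6)  # Bob wins on his second roll

# p_first ≈ 2.012e-9 and p_second ≈ 2.025e-17, matching the figures above;
# their ratio is (6/5)^101, on the order of a hundred million.
assert 9.8e7 < p_first / p_second < 1.01e8
```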

Now, since the only difference between my alternative problem at this point and the one XKCD linked to is that Sue gets to roll more dice, it should be clear that either the answer to the XKCD problem is roughly 0.000001%, or, more realistically, Sue's rolls do matter, even when you know that Bob won. (I just read John's argument after typing this and realized that it was pretty similar, but I already typed this all up so I'm going to post it anyway.)

Or if that doesn't convince you, run the simulation for all possible rolls (including Sue's rolls) up to some ridiculous point. Then once this is done, discard all sets that end up with Sue winning. Then calculate how many of the results where Bob wins have him win on his second roll. You'll get the same result as XKCD. 05:44, 13 February 2009 (UTC)
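That suggested procedure can be done exactly rather than by sampling; the sketch below (mine) sums the probability of every game Bob can win, up to a deep cutoff, and then conditions on "Bob won":

```python
from fractions import Fraction as F

# Probability Bob first rolls a 6 on his k-th roll: 2k-1 misses, then a hit.
p_win_on = {k: F(5, 6) ** (2 * k - 1) * F(1, 6) for k in range(1, 200)}

p_bob_wins = sum(p_win_on.values())      # converges to 5/11
conditional = p_win_on[2] / p_bob_wins   # condition on the games Bob wins

assert abs(float(p_bob_wins) - 5 / 11) < 1e-12
assert abs(float(conditional) - 275 / 1296) < 1e-12
```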

Take the odds that the game was won on the second round: 25/36 * 11/36 = 275/1296. Of the games Bob wins, the distribution across rounds will be the same as the distribution of games Sue wins, which will be the same as the distribution of wins overall. 09:20, 14 February 2009 (UTC)


We know that Bob won. This means Sue did not roll a six at any time before Bob rolled a six. It does not matter what Sue rolled in the range of one to five, because the die has no memory and so does not influence Bob's roll. The probability of Bob rolling a six on his second roll is the probability of not rolling a six on the first roll multiplied by the probability of rolling a six on the second roll. 5/6 * 1/6 = 5/36.

This commits like ten errors, but the most important is that you absolutely cannot ignore Sue's rolls. It is true that distinguishing between 1, 2, 3, 4, and 5 is unnecessary, but distinguishing between these and a six is. Very roughly put, even given that Bob wins, it is more likely that he won earlier, because the longer the game goes on, the more likely it is Sue could win, so the luckier Bob would have to get to stay in and eventually win. Now, define P(A) = the probability that Bob wins, and P(B) = the probability that Bob wins on his second roll. The question then asks for the value of P(B|A) = the probability that Bob wins on his second roll, given that Bob wins. There are a number of ways to calculate this, but a convenient one is to use Bayes' theorem: P(B|A) = P(A|B) P(B)/P(A). P(A|B) = 1, obviously, because if we know Bob won on his second roll he certainly won. P(A) = 5/11, which is explained in multiple sections, including the nice argument that out of the 36 possible combinations of initial two rolls, Bob wins 5, loses 6 (since if Sue rolls a 6 and then Bob rolls a 6, Bob still loses; in fact, he doesn't even get the chance to roll the second time), and the other 25 just lead to another set of rolls, so Bob wins 5/11 of the games. P(B) = (5/6)(5/6)(5/6)(1/6) = 125/1296, because in order for Bob to win on his second roll, first Sue must not roll a 6, then Bob must not roll a 6, then Sue must again not roll a 6, then finally Bob must roll a 6. Therefore, going back to our initial calculation, P(B|A) = 1 * (125/1296)/(5/11) = 275/1296. So yes indeed, the answer is 275/1296. 04:47, 31 March 2009 (UTC)


Revised by self: 4-3-09 3:47PM EST

I, the unschooled, must school all you math geeks who are probably at least undergraduates. I'll take you on a journey demonstrating the various flaws in logic that were made, and some other ones that COULD be made, before arriving at the correct solution. Basic calculus and set theory end up prevailing in the end. [Reply: that might have been helpful if you actually had even the smallest clue about probability and infinite processes. Sadly, you are most profoundly lacking in both. I can't understand why you are so unaware of your own ignorance.]

You’re all wrong, because the puzzle is flawed, or intentionally much more deceptive than anyone anywhere seems to get.

Even the people who head down the clever, but incorrect road to arrive at 21.21% haven't taken into account the fact that there are an infinite number of cases in which neither one ever, ever rolls a 6, for all of eternity. [Reply: there are precisely 0 games that don't end with a 6 being rolled. If a 6 hasn't been rolled, then the game is still being played]. Nowhere in the rules was it stated that eventually someone HAS to roll a 6 in *general play* and end the game [Reply: you have to roll a 6 to end the game]. It didn't even state that the 6-sided die had a side denoting '6', nor did it state whether the game could move through an infinite number of iterations before one person finally rolls a 6 [Reply: because Bob rolled a 6 before Sue, we can deduce that the die has a 6 face and that it took a finite number of rolls - otherwise the game would still be being played], but that's just getting into set theory pedantry that's unnecessary. Games can be infinite and never resolve [Reply: wrong, they just haven't finished yet], or infinite AND resolve [Reply: wrong. If it's resolved (aka ended), then a (de)finite number of rolls were taken], and you didn't search "all possible games" since you didn't search or account for infinity [Reply: wrong. The 5/11 has allowed for "infinite" games - that's easy as they have 0 probability]. Nor did you fully explore the question because of this. In this SPECIFIC instance, it was stated that Bob *DID* eventually roll a 6. Sue did not, thus her odds of ever doing so are 0 over the whole of the game, and his are 1, even over an infinite-length game. This tells us that a game can be infinitely long and still resolve itself; the implications of this are outside the scope of what I'm writing here but become important below. Also at the bottom is a more complete justification for excluding Sue, if that wasn't enough for you.

The odds that it was on his second roll that it happened are uncomputable because of this basic fact: we don't know how long the game was [Reply: yes we do, it took 4 rolls of the die - 2 by Sue and 2 by Bob]. Even if the odds of an infinite game happening are infinitely small, it is still possible by the rules [Reply: you are treating infinity as if it were a (finite) number]. It might then be thought that this problem is not possible to solve [Reply: only by an incompetent]. You could say the odds it happened on the 2nd roll are infinitely small [Reply: then you'd be wrong, because the probability is 125/1296 (assuming you meant Bob's second roll)], I suppose. I.e. 0, but this would hold true with any position you ask about by this logic, so it must be wrong, or it presents a paradox, since the puzzle explicitly stated that he did in fact roll a 6, eventually, so every position can't have 0 odds. What it really means is that we can't assign a specific value to any one roll; however, we CAN derive a meaningful answer as I will demonstrate, using correct logic. [Reply: the only paradox I see is that you think that you know what you're talking about and I think that you don't know what you're talking about].

(Any mathematician worth their salt should now intuit the cancellation of infinities I'm about to perform in dumbed down language...infinitely small odds of an infinite game, but also infinitely small odds of him doing it on his second roll in an infinite game...come on now people...) [Reply: You can be even dumber - I'm impressed].

Let's look at it another way. The only question we *can* answer easily is: what is the probability that he rolled *a* 6 on his second turn? We know that, assuming he didn't do it on his first [Reply: you don't need to assume that - Bob cannot have rolled a 6 on his first turn if he won on his second turn], that probability is independently 1/6. This might (erroneously) be said to be equivalent to asking about *the* 6, because it is game-ending; in an infinite number of games in which Sue does not roll a 6 before him, and in which he does eventually roll a 6 and the game terminates, he will roll a 6 on his second turn with a probability of 1/6. No more rolls are made, the sequence is made non-infinite, and thus computable. Thus, the probability that he rolled *the* 6 on his second roll is 1/6, because that's the probability he will roll *a* 6 on *any* roll. Except this isn't the correct answer either.

Delving deeper than that, the answer actually becomes that the odds are "more than or equal to 0, and less than or equal to 5/36," or in plain English, no more than 5/36. He would've had to roll 1-5 on his first turn, but then maybe he didn't. 5/6 * 1/6 = 5/36. The answer is <= 5/36 because it is the result we can define for the infinite series at this point, but we know that if we continue out asking "his 5th roll? His 78th roll? His 100,000th roll?" ad infinitum we will eventually reach "infinitely small," as shown above. That is the reason for the "less than or equal to" part. This answer tells us that there is at MOST a 5/36 probability of this happening, even though practically we can't say for sure that that was the probability that it WAS what happened. We can only say what its upper limit was. In formal language, we write [0, 5/36). 0 is excluded because he eventually rolls a 6, so there has to be SOME possibility, right? [Reply: the probabilities add up to 5/11, not 5/36].

But we're not QUITE there. Up above, we established that a game CAN run infinitely long and still resolve [Reply: you asserted that, you didn't (and cannot) establish it, because it's not true]. To reiterate, the rules stated no bound on the length of the game, and probabilistically speaking, it is possible to never roll a 6 for all of eternity [Reply: probabilistically speaking it is certain that you will roll a 6 (eventually)]. But the problem also stated that Bob DOES roll a 6, so an infinite game can be resolved [Reply: because Bob rolled a 6, the game was finite]. We CAN assign 0 probability to every *discrete* position. So we write the answer as an inclusive (0, 5/36).

People often come to probability through learning what their mistakes were, but they don't integrate it tightly enough [Reply: you are still only in the making-mistakes part of the learning curve]. "The dice has no memory" is an indication of this. That is an argument made either to someone who is ignorant of probability, or for one's own line of reasoning, possibly indicating an improper understanding of the mechanics. But this is seriously basic computability and probability, messed up by attempts to be clever that just aren't clever enough, because of the deceptive nature of the puzzle.

Before you cry "but wait! You CAN'T IGNORE SUE!" - the problem explicitly TELLS YOU TO. It says, SHE NEVER ROLLS A 6. Even hypothetically if she rolls 5000000000000*5000^202000000 dice before Bob gets to roll a single one, she NEVER. ROLLS. A. 6. in this game. [Reply: Calm down - Sue simply doesn't roll a 6 in the games that Bob wins]. Remember that we're talking about a specific game. Her range of rolls is NOT 1-6 with a 1/6 chance of each, it's 1-5 with a 100% chance of one of them each and every time, as indicated by "...rolls a 6 before Sue does." We know that at every possible point in this game (including "at infinity"), her odds of rolling a 6 are 0. The problem states it. Thus we have no need to distinguish between 1-5 and 6, or worry about how her going first gives her an advantage, because she has no effect. If the question was "What are the odds that Bob might win on his second roll, given Sue might roll a 6 twice before him [Reply: if Sue rolls a 6 before Bob, then Sue (not Bob) wins], and the outcome of the game is undetermined" she would be relevant and the answer would be (0, 21.21%). But the outcome IS determined, so that isn't the question. It explicitly states that for this specific game, she doesn't have any chance of rolling a 6 and thus winning, because Bob rolls it and wins. They are mutually exclusive conditions. She is in fact canceled out of the equations if you actually take the time to write them out formally. So she is not relevant, and it becomes a game of "when will Bob eventually roll a 6?" That's a very, VERY basic probability theory mistake...but a very hard one to decide sometimes. [Reply: why don't you go the whole way: in the games that Bob wins on his second roll, it is certain that Sue didn't roll a 6 on her first or second roll, and that Bob didn't roll a 6 on his first roll and he did roll a 6 on his second roll.
Now emphasise that Sue CANNOT roll a 6, Bob CANNOT roll a 6 the first time, and Bob MUST roll a 6 on his second roll. Then the probability of Bob winning on his second roll, out of all the games that Bob won on his second roll, is 1 - he's certain to do it, so you can ignore both Sue's and Bob's rolls.]

This problem is either incorrectly formulated (i.e. the one who posed it didn't understand its implications properly and thus didn't frame it well), or the level of "cleverness" was of a higher order than most seem to have realized. Not knowing its origin I can't hazard a guess as to which is true.

[Reply: Almost every assertion you make is false. You treat infinity/eternity as if it were finite. What you call an infinite game is simply a game that hasn't yet ended. Most comically, you think a game which took 4 rolls is infinite. It is quite clear to me that you don't have any real knowledge of the nature or meaning of probability and infinity].


I cannot even begin to understand your conceptions of infinity or probability except to say that they are mathematically nonstandard and nonrigorous at best and completely wrong at worst. You clearly demonstrate no understanding of an infinitely long game at the point where you ignore the numerous infinite series that others with the correct solution have computed, state that a six can be rolled in an infinitely long game (unless you specify that the game lasted for a specific number (say, omega) rolls, there is no "last" roll), and end up with a confusing, convoluted, and demonstrably incorrect solution. You don't even understand 0 probability, a rather important concept when dealing with infinite sets. In particular, you state that the probability of Bob winning on a given roll should be zero (but then later contradict this) and that it is impossible for the probability of every roll to be zero and still have winning be possible (in fact, there are many situations where infinite possibilities each have zero probability but one of them must occur). I have to go now, but when I come back later I'll try to give specific reasons your result is impossible. 21:50, 3 April 2009 (UTC)
Please re-read the argument. I was modeling the system set up by the problem given, which is not consistent with reality, and the usual analyses are not applicable to it because of this. The problem itself is "broken." My analysis diverges from a "real-world" approach because the problem is not a real-world problem; it is fundamentally flawed, but still partially solvable. I analyzed it by its own rules to provide the answer. Specifically, I (intentionally, for demonstrative purposes) went through a chain of false conclusions, finding what was wrong with them and then solving it to produce a new one, until eventually arriving at what I believe to be the correct one. That's why I directly contradicted myself. I wasn't as explicit as possible, but there's only so much I can type, being disabled. Mathematically speaking, it shows more rigor than most other things I've seen here. Ever hear that old joke about a biologist, a physicist, and a mathematician on a train that see a brown cow? The biologist says, all cows are brown! The physicist says, some cows can be brown. The mathematician says, there was at least one cow, at least one side of which was brown. That's a fairly good standard for mathematical rigor, actually. I can understand how if you don't understand set theory and how it applies to this problem, you might have trouble following the argument. I do look forward to hearing your critiques though :-P I'm certainly not terrifically formally educated, and I realize that I am fallible. The thing is, I do understand all the analysis others applied to it perfectly - and also understand their flaws in this context. They might be perfectly good in most other contexts, but this is a special, broken case, as I'd hoped I'd demonstrated. Finally, you said "in fact, there are many situations where infinite possibilities each have zero probability but one of them must occur," which is only tangentially related, but I think I see what you're getting at.
I thought I demonstrated how this applies? Did you not read it the first place I did so, and then miss the second?
I like to think I'm educated in these things [Reply: you are wrong about that too], but Wikipedia probably isn't the best University around :P. Seriously, though, your comment still makes no sense to me. There is nothing about this puzzle especially non-real world. We could certainly play this game in real life, and while dice may not be perfectly random, we can approximate them to be very close. Your objection seems to be that there is a possibility of a game lasting for an infinite number of turns, but that is in fact the ONLY possibility which has probability zero, and as such we can safely ignore it for our analysis (it is given no weight when calculating probability). The probabilities of winning on a given turn can be arbitrarily small, but must be nonzero, so we must include all of them in an infinite sum (although there are easy ways around this, of course). Anyways, the only real flaw I can understand well enough in your analysis to point out is where you state Sue's rolls can be ignored; this is definitely not true, and several explanations above gave good parallels to help explain this. Imagine an analogous scenario where instead of rolling a six on a six-sided die to win, Sue only had to roll any number except one on a 1,000,000-sided die to win, while Bob still had to roll a six on a six-sided die as usual. Now, we are given that Bob somehow got extremely lucky and Sue did not win on her first roll, because we are given that Bob wins. However, we don't know when. Now, do you expect it to be more likely that Bob won on his first turn, or that he managed to get extraordinarily lucky again and have Sue again roll a one, such that Bob can win on a subsequent roll? It is clear that the first possibility is far more likely, and that the reason for this is that we cannot ignore Sue's rolls. The problem we are trying to solve is completely analogous but with a 1/6 probability replaced with 999,999/1,000,000.
A final note is that dice do not have memory. This isn't a mistake, this is just obvious. I mean, I just tested this by asking some dice what their last roll was and they couldn't tell me; couldn't remember a thing. I don't know what you meant by this statement.
To help me out here, maybe you could try explaining more precisely the error in the reasoning of, say, John. That might help me understand what you think the flaw is and why your answer makes sense (and, honestly, what it is). 01:47, 4 April 2009 (UTC)

--- To the unschooled person before the last poster. There is nothing wrong with the question; there is something very wrong with your ability to deal with it. I have no difficulty whatsoever in seeing that the proposed game can really be played. All that is required is two people, one die, and their willingness to participate. Very roughly, one in half a billion games will require over 100 rolls. Bob and Sue will be dust long before they could play anywhere near enough games to have had a reasonable chance of playing one that long.

Here's a way to do the problem that should make it clear. First consider all possible games, without concerning ourselves about winning and losing etc. The probabilities of a game ending (due to a 6 being rolled) on the first, second, third, ... roll is 1/6, (5/6)(1/6), (5/6)^2(1/6), ... The probability that the game ends on the nth roll is (5/6)^(n-1)(1/6).

Now we note that if n is odd, Sue wins, and if n is even then Bob wins.

Check - those probabilities had better add up to 1, because a 6 must be rolled eventually: (1/6) + (5/6)(1/6) + (5/6)^2(1/6) + (5/6)^3(1/6) + ... = (1/6)(1 + (5/6) + (5/6)^2 + (5/6)^3 + ...) = (1/6)/(1 - (5/6)) = 1, as required. Note there is no upper bound on the number of rolls required. That allows for what you quite incorrectly call infinity.

Now add up the probabilities of the game ending on an odd numbered roll. We get (1/6)(1 + (5/6)^2 + (5/6)^4 + ...) = (1/6)/(1 - (5/6)^2) = 6/11. Similarly we get 5/11 for the even numbered rolls. NB there are several much slicker ways of calculating the 5/11 and 6/11, but I want you to see each roll individually.

The probability that the game ends on the fourth roll is (5/6)^3(1/6). The fraction of games that end on the fourth roll, out of the games that end on an even numbered roll, is (5/6)^3(1/6)/(5/11) = 275/1296. Now we note that the fourth roll is equivalent to Bob's second roll, and that games that end on an even numbered roll correspond precisely to the games that Bob wins. That fraction is the probability that the question is asking for.
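The sums above are easy to check with exact rational arithmetic. A sketch in Python using the standard fractions module (the helper name is my own; partial sums over 2000 rolls stand in for the infinite series, since the tails are negligible):

```python
from fractions import Fraction

def p_end(n):
    """Probability the game ends exactly on roll n: (5/6)^(n-1) * (1/6)."""
    return Fraction(5, 6) ** (n - 1) * Fraction(1, 6)

# Partial sums over the first 2000 rolls:
all_rolls = sum(p_end(n) for n in range(1, 2001))           # -> very near 1
even_rolls = sum(p_end(n) for n in range(2, 2001, 2))        # Bob's wins
print(float(all_rolls), float(even_rolls))                   # ~1, ~5/11

# The fourth roll of the game is Bob's second roll; divide by his
# total winning probability to condition on "Bob wins":
print(p_end(4) / Fraction(5, 11))  # 275/1296 exactly
```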

By ignoring Sue's rolls, you are actually analyzing a game in which the only player is Bob (why couldn't you see that for yourself?), and you are simply finding the probability that Bob rolls a 6 on his second roll in that single-player game of die patience.

The above disposes of the assertion that you can ignore Sue's rolls. If you were right, then you'd get the same answer as me. If Sue rolls a 6, then it simply means we don't include that game when calculating the answer to the posted problem. I'll let you draw your own conclusions as to the technical merit of your post - hint: it's garbage.

I expect that you'll get into the same hopeless mess when asking about the probability of flipping a coin and getting a heads. You'd argue that it could require an infinite number of flips and so it's incalculable. Before you know it, you'll conclude that the problem is poorly written and unrealistic.

PS I hadn't spelled out the obvious. All that we know is that Bob won the game. He must have done that on his first, second, third, ... to infinity roll. We don't know which of his rolls was the winning one. So we are looking for the fraction of games that Bob wins on his second roll, of the games that he wins on his first, second, third, ... to infinity roll.


OK, I do agree with the calculations got using Bayes' Theorem, but I can't work out the flaw in this line of reasoning: Since Sue does not roll a 6 ever, the probability that Bob has a first go is 1. Therefore the probability that he rolls a 6 on his first go is 1/6. Therefore the probability that Bob has a second go must be 5/6 since Sue cannot roll a 6 on her second go either. The probability of rolling a 6 is 1/6. Therefore the probability of rolling a 6 on his second go must be 5/36.

Alternatively, we can use the method of (desirable outcomes/possible outcomes). Sue can roll 1 of any 5 numbers (since she does not roll a 6) and Bob can roll any one of 6 numbers, so the number of possible outcomes is 5*6*5*6=900. We are looking for those outcomes where Bob rolls a 6 on his second go, but NOT the first one. There are 5*5*5*1 of these = 125. 125/900 = 5/36

OK, I misunderstood the puzzle. It says, "Sue rolls first" as an example, and I thought that she HAS to roll first. Anyway, there is a very simple way to arrive at the 275/1296 solution. If Bob rolls first, him getting a six on his second roll would mean that first he has to roll something else (probability 5/6) and then Sue too (5/6) and then Bob gets a six (1/6). So, the probability of this happening is (5/6)*(5/6)*(1/6) = 25/216. If Sue goes first, then it's (5/6)*(5/6)*(5/6)*(1/6) = 125/1296. Since these are the only two possibilities of Bob getting a six on his second roll, and they are mutually exclusive, if you just add the probabilities, it's: 25/216 + 125/1296 = 150/1296 + 125/1296 = 275/1296. No need for anything too complicated. Now, I haven't made a simulation, but if you say that's what you confirmed, I'll trust you.

To see why the logic that 'Sue's rolls don't matter if it is given that she has lost' is faulty, consider the following problem:

Bob rolls a six sided die on his turn and wins if he rolls a six.
Sue rolls a one sided die on her turn and wins if she rolls a one.
Bob rolls first.
Given that Bob wins, what is the probability that he won on his first roll? (Hint: it isn't 1/6) [Ans: 11/36]

If this example seems inapplicable, consider Sue winning with an n-sided die if she rolls lower than n for a large value of n. - Ben H

This is a stupid problem. Where in the problem statement does it claim that we're only supposed to be considering the cases where Bob wins? [Reply: It says, "Bob rolls a 6 before Sue"]. Yes, the nerd in me likes this problem better because it's trickier and more interesting, but I don't really think there's any reason to assume the problem statement is asking for that.

I can see 3 reasonable answers based on parsing of the problem.

1. 1/6 - the problem is asking "What is the probability Bob rolled the 6 on his second turn?". If you are Bob, sitting there taking your second turn, then the probability you will roll a 6 that turn is indeed 1/6.

2. 125/1296 - Out of all the games played, Bob will win on his second turn 125/1296 of the time. I would argue that this is the most reasonable parsing of the question.

3. 275/1296 - Out of all the games played that Bob wins, he wins 275/1296 of them on his second turn.

Why is everyone accepting the #3 as the "true" way to interpret the question? Is it just because that's the answer the XKCD guy gave on his blog?

Poor wording. Simple Math.

Simplified text: Sue and Bob take turns rolling a 6-sided die. Once either person rolls a 6 the game is over. Sue rolls first.

Question: How probable is it for Bob to roll a 6 on his second turn?

Turns: Sue - Bob - Sue - Bob

Possible Rolls: [1,2,3,4,5] - [1,2,3,4,5] - [1,2,3,4,5] - [6]

5*5*5*1 = 125

Number of possible combinations for rolling a 6-sided die 4 times: 6*6*6*6 = 1296

Solution: 125/1296

Person above. You have correctly answered your incorrect interpretation of the question. i.e. you have calculated the probability that Bob wins on his second roll out of all possible games. The question makes it clear that you are supposed to calculate the probability of Bob winning on his second roll but only out of the games that Bob won. i.e. this is a "given that Bob won" type question.

The probability of Bob winning is 5/11. That's the sum of the probabilities of him winning on his first roll, second roll, third roll, ..., infinitieth roll. If you work that out with your method, you'd get the probability of Bob winning is 5/11. But Bob has won, so we need to rescale so that the probability of Bob winning, given that he won, is 1. Do that by multiplying his unconditional probabilities by 11/5. That gives 125/1296 * 11/5 = 275/1296. The rescaling follows by considering all games, then simply discarding the games that Sue won.

To those who say the question was poorly phrased, I think it's because you want the question to be something other than what it is. The statement that Bob rolls a 6 before Sue does is equivalent to "Bob wins" - that's the only bit of the question that's even remotely strange (but it's there to avoid all but spelling out "given that Bob won"). I think that the question was written with unusually good clarity. It contains no ambiguity, vagueness or misdirection.

This is the same person as 3 posts above... I appreciate your explanation above. I still think it's a bit of a lame puzzle, considering that most of the disagreement doesn't come from anything unintuitive in the logic used to arrive at the solution, but from the parsing of the prompt.

For example, let me state another problem as follows:

Sue and Bob take turns rolling a 6-sided die. Once either person rolls a 6 the game is over. Sue rolls first, if she doesn't roll a 6, Bob rolls the die, if he doesn't roll a 6, Sue rolls again. They continue taking turns until one of them rolls a 6. Bob rolls a 6 before Sue. What is the probability of this event?

Would you answer 100%, because it states "Bob rolls a 6 before Sue"? [Reply: "No". The probability of that event is 5/11 = 45.45...%].

Just to address at least one misconception - from the frequentist point of view, the very nature of probability is that infinite numbers of trials are considered. As long as the die has a 6 face, then every game will end with either Sue or Bob rolling a 6 - and that will happen after a finite number of rolls. IMO the question has been written unambiguously and clearly.

The probability that a game will only take one roll is 1/6, the probability that it'll take two rolls is (5/6)(1/6), the probability that it'll take n rolls is (5/6)^(n-1) (1/6). This probability becomes very small for a large number of rolls to have been needed. FWIW, the average number of rolls per game is 6.
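The "average of 6 rolls per game" figure can be checked the same way. A sketch in Python (my own; a partial sum of n*(5/6)^(n-1)*(1/6) stands in for the full expectation, since the tail beyond 500 rolls is astronomically small):

```python
from fractions import Fraction

# Expected number of rolls per game: sum of n * (5/6)^(n-1) * (1/6).
expected = sum(n * Fraction(5, 6) ** (n - 1) * Fraction(1, 6)
               for n in range(1, 501))
print(float(expected))  # 6.0 to float precision
```

This is just the mean of a geometric distribution with success probability 1/6, which is 1/(1/6) = 6.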

It is quite clear that the die is to be taken to be an ordinary, fair, 6-sided die. That Bob wins is given - "Bob rolls a 6 before Sue". That Sue rolls first is given. We are told that Bob has won a particular game (the probability of that happening is 5/11 before the game starts). Bob could have won it on his first roll, second roll, third roll, etc. For each case there is a probability of a particular roll being the winning one. In particular, we are being asked what the probability of it being Bob's second roll.

Here's a few ways of calculating the probability of Bob winning a game. I won't repeat the most direct infinite series version.

For a single roll of the die, let S, s denote the event of Sue rolling a 6 or not, and let B, b denote the event of Bob rolling a 6 or not. P(S) = P(B) = 1/6 and P(s) = P(b) = 5/6. Let p = probability of Bob winning a game.

Bob could win on his first roll. Otherwise he loses on his first roll and as we're back to Sue's turn (i.e. we're back at square one), the probability of Bob winning is [still] p. So p = P(s)P(B) + P(s)P(b)p => p = P(s)P(B)/(1 - P(s)P(b)) = 5/11

Every game must end with S or sB. So p = P(sB)/(P(sB) + P(S)) = P(s)P(B)/(P(s)P(B) + P(S)) = (5/6)(1/6) / ((5/6)(1/6) + (1/6)) = 5/11.

As it's so elegant, I'll repeat a third method that was given above. If Sue fails to roll a 6 on her first go, then the remaining probability for Bob to win is the same as Sue's initial probability of winning, and that's 1 - p. So p = (5/6)(1 - p) => p = 5/11.
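All three routes to 5/11 can be checked mechanically. A Python sketch with exact fractions, following the S/s/B/b notation above:

```python
from fractions import Fraction

S = B = Fraction(1, 6)  # P(Sue rolls a 6), P(Bob rolls a 6)
s = b = Fraction(5, 6)  # P(Sue doesn't),   P(Bob doesn't)

# Method 1: p = P(s)P(B) + P(s)P(b)p  =>  p = P(s)P(B) / (1 - P(s)P(b))
p1 = s * B / (1 - s * b)

# Method 2: every game ends with S or sB
p2 = s * B / (s * B + S)

# Method 3: p = (5/6)(1 - p)  =>  p = (5/6) / (1 + 5/6)
p3 = s / (1 + s)

print(p1, p2, p3)  # all three print 5/11
```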

The absolute probability (i.e. at the outset of a game) of Bob winning on his second roll = P(s)P(b)P(s)P(B) = 125/1296, so the probability of Bob winning on his second roll, given that he won is: (125/1296)/(5/11) = 275/1296 = 0.212... Note, that is the fraction of all the possible games that Bob won on his second roll out of all the games that Bob wins (regardless of which roll was the winning one).

--- Here's another way to understand that 275/1296 is the correct answer. I'm simply trying to make the situation more concrete. The probability that Bob wins a game on his second roll is 125/1296. Only a few posters don't even get that bit - this note isn't for them. The probability that Bob wins a game is 5/11. I believe that (almost) everyone accepts that too.

If Sue and Bob played sets of 1000000 (one million) games, then on average Bob will win 1000000 * 5/11 = 454545 of them. Bob will win 1000000 * 125/1296 = 96451 games on his second roll. (I've rounded the numbers to integers). So the fraction of games that Bob wins on his second roll out of all the games he wins is (approximately) 96451/454545 ≈ 0.212

Now taking out the factor of 1000000, we see that the exact ratio is 275/1296 ≈ 0.212

Maybe condense the first sentence for clarity? From

"Sue and Bob take turns rolling a 6-sided die. Once either person rolls a 6 the game is over. Sue rolls first, if she doesn't roll a 6, Bob rolls the die, if he doesn't roll a 6, Sue rolls again. "


to

"Sue and Bob take turns rolling a 6-sided die, with Sue rolling first. Once either person rolls a 6, they win and the game is over. "

P.S. This puzzle has attracted so much criticism that I have to add - nice one, I really enjoyed it. I don't really understand what everyone else is complaining about - the wording is quite precise. Anyone who is told they have the wrong answer should be able to go back to the puzzle and spot what they're missing within a few careful read-throughs. Which, like, logic puzzle. You were warned.

I have added a note to the original puzzle. The note is particularly relevant for the people who've been arguing above that you can just ignore Susan's rolls, because you know she did not win.

The reason you can't ignore Susan's rolls is that they lower the probability of the game lasting as many turns. If Bob plays by himself, then for him to win on turn N requires N-1 rolls to not be a 6. In the two player game, Bob getting to turn N requires 2N-1 rolls to not be a 6. So having Susan in the game means Bob wins proportionally more games in early turns than later turns. --Jack Mac (talk) 18:17, 5 September 2015 (EDT)
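Jack Mac's point can be made numerically. A sketch (mine) comparing solo-Bob turn probabilities with the conditional two-player ones:

```python
from fractions import Fraction

miss, hit = Fraction(5, 6), Fraction(1, 6)

for n in range(1, 5):
    # Bob alone: his nth roll wins iff his first n-1 rolls missed.
    solo = miss ** (n - 1) * hit
    # Two-player game: Bob's nth roll is roll 2n of the game, needing
    # 2n-1 misses; then condition on Bob winning at all (probability 5/11).
    duo = miss ** (2 * n - 1) * hit / Fraction(5, 11)
    print(n, float(solo), float(duo))
```

For n = 1 the conditional probability is 11/36 ≈ 0.306 versus 1/6 ≈ 0.167 solo, and for n = 2 it is 275/1296 ≈ 0.212 versus 5/36 ≈ 0.139: given that Bob wins, the early turns carry proportionally more weight.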


@Shimavak: The problem cannot be solved recursively, since the ant that you're going to "put" after the longest run of n-1 ants (let's call this time t1) has to be put somewhere at time 0, and until time t1 it is free to collide with (and interfere with) the n-1 ants!

@Andrew: Assuming that you put ants at 0.5, 1.5, ... then only the first collision will occur at the 50 cm mark.

The answer is pretty straightforward: 100. @McGeddon provides the best solution, @Narapas's is also acceptable.


Simply assume they don't collide - as the ants all must start at the same time, move at the same speed, and reverse when ant x's location = ant y's location, we can model the difficult and complex behavior by simply switching the ants. Put them all in a row, going the same direction, and put one on the edge of the ruler. They'll get 100 seconds to move a full meter.
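The switching trick can be sketched in code. This is my own toy model (zero-size ants on a 100 cm ruler at 1 cm/s, with bouncing replaced by passing through, which is equivalent for identical point-ants):

```python
def last_fall_time(positions, directions, length=100.0, speed=1.0):
    """Time until the last ant falls off, via the pass-through equivalence.

    positions: starting positions in cm; directions: +1 (right) or -1 (left).
    Since two identical point-ants bouncing off each other is
    indistinguishable from them walking through each other, each
    'label-free' ant just walks straight to an end of the ruler.
    """
    times = [(length - x) if d > 0 else x
             for x, d in zip(positions, directions)]
    return max(times) / speed

# One ant at an end, facing the far end: the worst case, 100 seconds.
print(last_fall_time([0.0], [+1]))                                 # 100.0
# 100 ants at 0.5, 1.5, ..., 99.5 cm, all facing right: just under 100 s.
print(last_fall_time([i + 0.5 for i in range(100)], [+1] * 100))   # 99.5
```

Whatever the arrangement, no ant has more than 100 cm to walk, so 100 seconds is the hard upper bound.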


I believe this problem is best thought of in reverse. We know that the situation which will take the longest would be to have an ant turn around at the very end of the ruler before it falls off (having started from the other end), so we need to look at what configuration would cause this. We can then add an ant each time to provide the longest run time for that iteration to produce the longest run of the n-1 ants iteration, and simply expand to n=100. The rest is left as an exercise to the reader.


Are we allowed to drop ants onto the stick all willy-nilly whenever we feel like it?


Since the ants are zero size, they can all exist at the same location at the same time. Put them all at 50cm facing the same way. They all constantly collide with each other (at the same point) and so constantly turn around. The ants will stay on the ruler forever. [Reply: the ants at the outside of the group will either walk directly to the end, or bounce off the group and then move directly to the end. Either way that creates a new pair at the outside of the central group. That will all be done in 0 seconds. Then the ants will take 50 seconds to fall off the ruler. So stick the group at one end, and you get 100 seconds].

-- SoftNum

Two zero-size ants bouncing off each other is the same as two ants walking past each other, so this is essentially a question of where to put one ant, which is "at one end, facing the other" giving a time of 100 seconds. [Reply: that's the smart way to do it].

-- McGeddon

Well, the question is asking what arrangement will do this, a simple one would be an ant every cm facing alternating directions, starting with one facing inwards at the very end. The amount of time for all ants to fall off will be the same as the amount of time for one ant to cross the stick, or 100 s.

Now the idea of them all being in one place with the same direction only holds if these "ants" aren't fermionic, though I believe that the fact that they are colliding in the first place means that they must be exerting some force on each other when they are infinitesimally close. If they're all stacked in one place, then they would not be able to deflect each other, so they'd all move to the end of the ruler together, to their doom.

Of course, if the deflection is self initiated, then the ants would constantly perceive themselves to be colliding and choosing to change direction, so they would always switch directions. But the key word here is "collide", and I think that implies that they have to approach each other....

Besides, if they were all in one location, the energy stored in this system would cause the ants to collapse into a micro black hole, which would then evaporate through Hawking radiation (or consume the Earth, if you believe the tabloids). Anyone know how to calculate the amount of time this would take?

Or you could try entangling th- <gets slapped by a physicist>

--SteveMcQwark 18:27, 1 March 2009 (UTC)

Couldn't you think in terms of width rather than length? Line all your ants up along the edge of the ruler, and as soon as they step forward they fall, so the total time is under 1 second.

-- I love this problem. I sat there for 5 minutes trying to figure out the optimum ant arrangement. When I figured out that it doesn't matter, the ants are f*cked in 100 seconds anyway, I had to slap myself silly.

-- What if you put 50 ants 1cm apart on the left side of the stick facing the center, and 50 ants 1cm apart on the right side of the stick facing the center? They would all start bouncing back and forth; the ants toward the edges of the stick would fall off quickly, but the ants in the center would keep bouncing for a while. Longer than 100 seconds? I don't know. But how about placing all the ants at intervals of greater and less than 1cm apart? You might be able to increase the time that way. Seems this would take a whole lot of math to figure out [Reply: no ant will have more than two collisions].


Well, it doesn't take a lot of math to figure out, just some modeling. So, you arrange the ants with one at each centimeter line, starting at the edges of the ruler, with 50 on the left facing inwards and 50 on the right facing inwards. After 1 second, ant 1L hits ant 1R at 50 cm and turns around (here you can focus on just one side of the ruler, since the problem is symmetrical). At 1.5 seconds, ant 1 hits ant 2 at 49.5 cm. At 2 seconds, ant 2 hits ant 3 at 49 cm. This continues until you reach 25.5 seconds, at which point ant 50 hits ant 49 at 25.5 cm, leaving ant 50 with 25.5 seconds of survival left (51 total). At 26 seconds, ant 49 hits ant 48 at 26 cm, leaving it with 26 cm left to go, for a life-span of 52 seconds. This continues back down the line of ant-oscillations until you reach 50.5 seconds, with ant 1R hitting ant 1L at 50 cm, leaving them with 50 cm and seconds left to travel, for a life-span of 100.5 seconds, which is the maximum possible for this problem!

100.5, not 100.


Andrew, ants 1R and 1L will only bounce off each other at integer numbers of seconds, NOT at 50.5 seconds. Think of this whole bouncing off thing as Narpas suggested: "Simply assume they don't collide [...] simply switching the ants" (as if they walked past each other: they only switch paths, it doesn't really matter which particular ant is going down a path as long as the whole system respects the rules!); in such a case, the most one ant can get is start at one end and fall off the other end after 100 seconds. -- CrystyB 08:12, 9 April 2010 (UTC)
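For anyone who doubts the relabeling argument, here is a small event-driven simulation (my own sketch, assuming the puzzle's 100 cm stick and 1 cm/s ants) that models the bounces literally and compares the fall-off times with the pass-through model, including for the "one ant per centimeter, two halves facing inward" arrangement discussed above:

```python
def bounce_fall_times(positions, directions, length=100.0):
    """Event-driven simulation of ants that really do bounce.

    Ants move at 1 cm/s and reverse direction when two of them meet.
    Equal-speed point ants can never pass each other, so the sorted
    order of positions is preserved throughout.
    Returns the sorted times at which ants drop off either end.
    """
    ants = sorted(zip(positions, directions))
    pos = [p for p, _ in ants]
    vel = [float(d) for _, d in ants]
    t = 0.0
    falls = []
    while pos:
        events = []  # (time until event, kind, index); earliest wins
        if vel[0] < 0:
            events.append((pos[0], 'off', 0))
        if vel[-1] > 0:
            events.append((length - pos[-1], 'off', len(pos) - 1))
        for i in range(len(pos) - 1):
            if vel[i] > 0 and vel[i + 1] < 0:  # heading at each other
                events.append(((pos[i + 1] - pos[i]) / 2.0, 'hit', i))
        dt, kind, i = min(events)
        pos = [p + v * dt for p, v in zip(pos, vel)]
        t += dt
        if kind == 'off':
            falls.append(t)
            del pos[i], vel[i]
        else:
            vel[i], vel[i + 1] = -vel[i], -vel[i + 1]
    return sorted(falls)

# The arrangement from the thread: one ant per centimeter mark,
# 50 on the left facing right and 50 on the right facing left.
pos = [float(i) for i in range(50)] + [float(i) for i in range(51, 101)]
dirs = [1] * 50 + [-1] * 50
times = bounce_fall_times(pos, dirs)
passthrough = sorted(100.0 - p if d > 0 else p for p, d in zip(pos, dirs))
assert times == passthrough   # bouncing changes nothing
assert max(times) == 100.0    # last ant falls at 100 s, not 100.5
```

The simulation confirms the relabeling claim: the bouncing model and the pass-through model produce identical multisets of fall-off times, so nothing ever survives past 100 seconds.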

I can't figure out how zero-length points manage to collide. [Reply: because they have width]. - Devon

The solution is extremely easy once you realize that it is irrelevant whether they collide and change direction or just pass each other. So the answer is exactly 100 seconds with any number of ants, as long as you place one at an end.

Blue Eyes Puzzle[edit]

The solution can be found on xkcd: [2]. People who claim it is incorrect are wrong.

Suggestion: 2 days. 199 leave, 50% odds on the final soul (better than only the 100 blue eyes leaving!)

Two people stand side by side; a third walks over and stands between them if their eye colours differ, otherwise goes to the left (or right) side. 197 people then individually do the same without communicating. The final person is the only one uncertain of their eye colour, but after everyone else leaves in the night, confident that their eye colour matches those they stood beside (barring the unknown one), the Guru has no choice at the following noon but to say whether the remaining person has blue eyes. If not, it's still better than the 100 brown-eyed people stranded in the accepted solution.


This is a classic puzzle which has been restated here in a strange way. Usually, the puzzle says that the islanders adhere to a bizarre religion which forbids knowledge or discussion of eye color and, if you find out your own eye color, you have to kill yourself. When the puzzle is stated that way, the solution is ridiculous because (1) it assumes that each person has some incentive for trying to figure out the optimum strategy for deducing one's own eye color, knowing that solving the puzzle leads to suicide, (2) it assumes that nobody makes any mistakes like miscounting the number of blue eyes or forgetting how many days it's been, and (3) it assumes that when day 99 comes and goes with no suicides, you can't think of any alternative explanation but to conclude that you must be the 100th blue-eyed person. As stated here, the puzzle neatly sidesteps the first two of these fatal flaws by changing the consequence from suicide to merely leaving on a ferry (removing the disincentive) and by changing the religious zealots into "perfect logicians" (as if such a thing exists). But I'm still not convinced it overcomes the third fatal flaw. If you, a perfect logician, expect 99 people to leave the island 99 days after day zero, and it fails to happen, why is "I have blue eyes" the only logical conclusion? I can think of several alternative explanations. Are we to assume that these perfect logicians have no imagination? - Ralph

I believe the solution on xkcd is wrong, but not for any of the reasons below. Randall's reasoning is correct, except he fails to mention that the same logic applies to the brown-eyed as well. Assuming there are at least 3 blue-eyed people on the island, each blue-eyed person can see 2 blue-eyed people, and he knows that those 2 can see each other. Thus they can all conclude logically that everyone on the island knows that at least one blue-eyed person exists - the guru's statement imparts no new knowledge. As such, Randall's reasoning for when the blue-eyed people would leave also applies to the brown-eyed. The correct answer is that on the 100th day, everybody but the guru would leave. The brown-eyed people would see that on the 99th day, the 99 brown-eyed people they see failed to leave, indicating that there must be one more brown-eyed person, and the same applies for the blue-eyed.

The solution provided on xkcd is actually incorrect. The solution operates on the reasoning that each of the x number of blue-eyed people see x-1 blue-eyed people, and can assume that they don't have blue-eyes, so they can imagine that each of the x-1 they see individually sees x-2. In other words, if there are 3 blue-eyed people, each sees 2, and can assume that both of the 2 only see 1. Each blue eyed person watches the whole process from the perspective of a brown-eyed person until they collectively realize they are the final blue-eyed person. This works fine exactly up to the point where you developed the solution, but no further. The maximum number of blue-eyed people that this style of reasoning works for is 4.

If there was only 1 blue-eyed person, he would know his eyes were blue, because he sees 0 blue-eyed people; he could leave the first night. If there were only 2 blue-eyed people, each would only see 1 blue-eyed person, who they could assume sees 0 blue-eyed people, and the presence of the other blue-eyed person the next morning would prove that they see 1 blue eyed person also, so each one also must have blue eyes. If there were 3 blue-eyed people, each would only see 2 blue-eyed people, who they would imagine to see only 1 blue-eyed person. On the first morning, they would still see 2 blue-eyed people, and they would realize that if these were the only blue-eyed people, they would both realize it that day and leave that night. When they were still there the next morning, each would realize they were the third blue-eyed person, and they would all leave that night. If there were 4 b.e.p, they would each see only 3, and tentatively think that each of the 3 only saw 2. The first morning they each still see 3 b.e.p, but if each of these 3 only sees 2, then the next morning each of the three would realize they are blue-eyed, and leave the third night. When they don't leave the third night, each realizes they must each see 3 blue-eyed people, and they can conclude that they themselves are the 4th blue eyed person.

The problem comes at five blue-eyed people: 5 see 4 who apparently see only 3. The first morning, they still each see 4, but think each of the 4 only sees three. When each can see at least 4 each morning, they realize there is no way to deduce how many there are. At the very least, each of the 4 known blue-eyed persons must see at least 3 other blue-eyed people. This eliminates the helpfulness of the first day and night; the first morning, they each see 4. If there were only 4, each of the 4 would see 3. If there WERE only 3, each of the 3 would see only 2, and the process could continue as usual. But the blue-eyed people KNOW there are actually at least 4, so their reasoning cannot function based on the assumption that if there were 2 they would leave the second night, and if there were 3 they would leave the third night. Their collective understanding of the fact that there are 4 even though each of the 4 could possibly only be conscious of 3 undoes the reasoning of the solution. Each night effectively becomes the first night, because no new information is gained over time. The solution absolutely hinges on the possibility that each of the blue-eyed people could possibly see ONLY 2 others; this is the crucial first step in the reasoning process. With 5 blue-eyed people, they cannot possibly reduce their reasoning to allow for each to only see 2, so the deduction falls apart.

The true solution is that the guru provides no new information, and no one leaves. The solution provided on the site is enticing, but the assumption that because it works for 1, 2, 3, or 4 blue eyed people, it works for ANY number, is false. In fact, the premises are false in light of the solution that no one makes it off the island. If the guru was perfectly logical as the puzzle states, she would know no one could get off the island given her statement. A perfectly logical person would have simply said "There are 100 blue-eyed people and 100 brown-eyed people here."

Prakhar Goel

  • I came across this puzzle and the xkcd solution only recently but I do agree with the solution presented. Here's the thought experiment I did; feel free to point out any flaws in my logic:
  • Let's make an assumption which (I believe) doesn't change the problem: One of the blue-eyed persons is blind. Under this assumption, his eye color remains the same, he can still hear the guru's statement, and the problem doesn't change from the perspective of any other person on the island (since they can't communicate).
  • The only thing that has changed is that the blind person doesn't know anyone else's eye color. He can, however, follow the same reasoning as the other blue-eyed people. He already knows that there is at least one blue-eyed person on the island. Since no one leaves on the first night, he now knows that there are at least two (he may be blind, but he can still hear or otherwise know that people are leaving). This process continues until night 100, when 99 people begin to board the ferry. He quickly reaches the conclusion that there must be exactly 100 blue-eyed people including him, and joins them on the ferry.
  • Some anticipated criticisms and responses:
    • "Since he's blind he doesn't have the same information as the others and that changes the problem." Given that he's not actually blind and can reason perfectly, he should be able to deduce that if he can come up with the solution with more uncertainty in the information he has, the solution will not change when the uncertainty reduces.
    • "How does he know that it was 99 blue-eyed people that boarded the ferry?" Since he's not actually blind and is just working with less information, he can readily observe the people boarding the ferry.
  • So far as I can figure out, this logic can't be extended to the brown-eyed people or the guru. --Abhijeet

The answer is: "100 blue-eyed people leave in 100 days". How? Let's say there's just one blue-eyed person 'A' on the island, and everyone else has some other eye color. 'A' hears: "at least 1 person here is blue-eyed". 'A' leaves immediately, as he doesn't see any other blue eyes. Peace. Let me call this first event 'alpha'. Now say there are 2 blue-eyed persons, 'A' and 'B'. 'B' should see 'alpha' happen on the first day. If this doesn't happen, then 'B' deduces: Gosh! This means my eyes are blue too, because 'A' was watching me and assuming I would be doing 'alpha'! So they both leave on the 2nd day. Peace. Let's call this event 'beta'.

Now, there are 3 blue-eyed persons, 'A', 'B', 'C'. 'C' assumes he is just a spectator and should see 'beta' happen within 2 days. If this doesn't happen, then 'C' deduces: Gosh! This means my eyes are blue too, because 'A' was assuming that 'B' and I would be doing 'beta' on the 2nd day! 'B' was also thinking along the same lines. All three of them leave on the 3rd day. Peace. Let's call this 'gamma'. Continuing with the same logic, the 4th blue-eyed person 'D' should see 'gamma' happen within 3 days. If this doesn't happen, 'D' leaves too, along with the others, on the 4th day. This logic is recursive: the Nth blue-eyed person should see N-1 people leave in N-1 days. Otherwise the Nth joins in. Peace!! -----Solution ends here-----
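The alpha/beta/gamma recursion above can be condensed into a few lines of code. A sketch (mine, not from the thread): model the Guru's statement as a public lower bound of 1 on the number of blue-eyed people, and let each silent night raise that common lower bound by one.

```python
def departure_night(num_blue):
    """Night on which all blue-eyed islanders leave, per the recursion.

    After the Guru speaks, it is common knowledge that at least one
    islander has blue eyes. Each night that passes with no departures
    raises this common lower bound by one. A blue-eyed islander sees
    num_blue - 1 blue-eyed people, and leaves as soon as the bound
    exceeds what he can see -- the extra blue-eyed person is himself.
    """
    lower_bound = 1          # from the Guru's announcement
    night = 1
    while True:
        seen = num_blue - 1  # blue eyes visible to a blue-eyed islander
        if lower_bound > seen:
            return night     # all blue-eyed islanders leave together
        lower_bound += 1     # a silent night raises the common bound
        night += 1
```

So the 'alpha' case returns 1, 'beta' returns 2, 'gamma' returns 3, and 100 blue-eyed islanders leave on night 100.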


On xkcd: [3] there are 3 additional questions given:

 1. What is the quantified piece of information that the Guru provides that each person did not already have?
 2. Each person knows, from the beginning, that there are no less than 99 blue-eyed people on the island. How, then, is considering the 1 and 2-person cases relevant, if they can all rule them out immediately as possibilities?
 3. Why do they have to wait 99 nights if, on the first 98 or so of these nights, they're simply verifying something that they already know?

Ques 1 is really interesting. Let me rephrase: "Is the above answer applicable even if there is no announcement by the Guru?" My one-line answer is "Only if the number of blue-eyed people is 3 or more".

To understand the logic, let's focus on the 'information' that the Guru provides:

  "Theres at least one blue eyed person on the island". -Information

The above answer will apply ONLY if there's another source of the information. If there is only one blue-eyed person 'A' on the island, he'll have no clue of the information. If there are 2 blue-eyed persons 'A' and 'B' on the island, 'B' will think "I am aware of the information. But poor 'A': if my eyes are not blue, he'll have no clue of the information". But if there are 3 blue-eyed persons 'A', 'B', 'C' on the island, 'C' will think "I am aware of the information, as I can see 2 blue-eyed persons in front of me. 'A' and 'B' are both also aware of the information even if I don't have blue eyes, because they can see each other, which gives them the information". So at least 3 blue-eyed people are needed for them to convey the information amongst each other.

Ques 2: Considering the 1 and 2-person cases is relevant because a person leaves the island only when he is sure of his own eye color, not others'. This color-deducing process ultimately boils down to the 1-person and 2-person cases, as I demonstrated using 'alpha' and 'beta'. In other words, the 3-person case 'gamma' depends upon the 2-person case 'beta', which in turn depends upon the 1-person case 'alpha'.

Ques 3: They wait for 99 'days' because a 'day' is quantized in the problem as the smallest time period in which a decision (case) is implemented. All the people on the island are aware (or must be aware, for the solution to exist) of this quantized time period.


The Guru observes that she spoke, realizes that she is the Guru and thus has green eyes, and leaves. edit: No, this conclusion is nonsense. First of all, why would the Guru not realize she was the Guru until she spoke? And why would knowing she was the Guru inform her that she had green eyes? Nowhere on the island is a pamphlet that says "the guru has green eyes".

They are similar indeed, though in the case of the two examples you supplied the question is the opposite of the Blue Eyes puzzle, that being the number of people leaving instead of who and when. Either way, just providing the answer here (after conflict resolution): All the blue-eyed people leave on the 100th night. - Guest 18:04, 11 February 2009 (UTC)

I'd like some more explanation to this answer. As they all know that there are at least 99 blue eyed people, the statement of the guru adds no new information. Apparently they use the statement as a trigger to start counting days. But that was neither agreed nor the only logical thing to do.-- 20:25, 11 February 2009 (UTC)

  • Consider the case where there are only 2 blue-eyed people (A and B) on an island, surrounded by non-blue-eyed people. The Guru makes that statement that she sees at least one blue-eyed person. Both people with blue eyes see one blue-eyed person, so they can't determine their own eye colors. However, if A were the only blue-eyed person, then A would have to leave that night. So when morning comes and B observes that A has stayed the night, B realizes that A must have seen another blue-eyed person, and that must be B. Person A follows the same logic, and both leave on the second night. Extend this for n blue-eyed people to reach the solution. (Note that because no one leaves until the nth night, the Guru's daily observations become superfluous after the first day.)
    • That would work, but not in this case, because the trigger is wrong. It would work if they were dropped on an island on a certain day, or the ferry started running on a certain day. The Guru's statement holds no information, because everybody can see at least 99 blue-eyed people. The brown-eyed people have the same information: that there is at least 1 brown-eyed person there. If the trigger was that the ferry started running on some day, both the blue- and brown-eyed people would leave on the same day and the guru would spend eternity alone on the island. (So she'd better keep her mouth shut :) -- 23:50, 11 February 2009 (UTC)
      • The guru's statement does impart some new information; after he speaks, everyone on the island knows that everyone else on the island knows that someone on the island has blue eyes. They all already knew there was at least one blue-eyed person. They just didn't know that everyone else knew.
        • It's worth mentioning that there are absurd amounts of discussion of this puzzle (and a few variants also) yonder. Phlip 12:06, 12 February 2009 (UTC)
          • If everybody can see either 99 or 100 blue-eyed people, everybody knows that everyone can see at least one blue-eyed person (actually everyone can see 98 or 99 blue-eyed people), so there's no new information.
            • There is new information. To keep things simple, let's assume *lots* of people on the island, so blue eyes are very rare. Thus each person, including each blue-eyed person, assumes that they do not have blue eyes. Now, if I'm on the island, I know at least one person has blue eyes. And I know *that everyone knows* at least one person has blue eyes. But I do not know that everyone knows that everyone knows that everyone knows (...) that at least one person has blue eyes, to arbitrary depths of recursion. That is, we do not have common knowledge of at least one person having blue eyes.
              • As long as there are more than 3 blue eyed people on the island, everyone knows that everyone has knowledge of at least one blue eyed person's presence on the island. The problem states that everyone knows that everyone can see everyone's eyes in the beginning, so as long as there are at least 3 blue eyed people, each blue eyed person can see two blue eyed people, and knows that they can see each other. The knowledge of at least one blue eyed person is already known to the island, so the guru's statement imparts no new knowledge.
            • Thus, I look out, and see 100 blue-eyed people. I look at one of them, imagine him surveying the island. In my imagination, he looks around, sees 99 blue-eyed people, and imagines one of them surveying the island. In his imagination, he looks around, sees 98 blue-eyed people, and imagines....etc.
            • Eventually, I come to an imaginary person who sees no blue eyes.
            • However, once the guru speaks, *all mental levels* suddenly contain at least one pair of blue eyes. Thus, this really is new information. Maxwell 02:14, 19 February 2009 (UTC)
            • Yeah, people keep saying that... but you're still wrong. Phlip 21:34, 12 February 2009 (UTC)
              • I think I have an idea here, but it's past midnight and I'm in that half-slumber where you have wonderful ideas. If on the first night no one leaves, everyone will know that there are at least 2 persons with blue eyes, based on the deduction that if person X saw only brown-eyed persons and heard the guru, he'd leave instantly. Then, if another day passed, the fact that there are 3 persons with blue eyes would be known (gut feeling here, I can't explain how), and after 99 or 100 days (can't figure out the amount!) everyone would know that there are 100 persons with blue eyes. After basic math, all the blue-eyed would leave at the same time, and their combined weight would sink the ferry. Please work on that idea while I get some well-deserved sleep.
      • Actually, the trigger is correct, the whole thing works by induction. The countdown can't start by the ferry coming to the island (well, it could if they agreed to do some sort of countdown, but without some sort of prior communication, the islanders can't prove anything). In order for the induction to work, it has to be true that even if there was only one blue eyed person on the island, they would still know it. The Guru's observation is important not because it gives new information, but because it gives information that is true independently of how many blue-eyed people are on the island as long as there is a positive number. Yes, everyone already knows there is a blue eyed person, but had there only been one blue-eyed person on the island, they wouldn't know it and the Guru's information would be useful. The induction requires knowing what would have happened had there only been one person on the island and so the Guru's information actually changes what can be reasoned with it. Without the Guru's information there is no base case for the induction. This type of puzzle is probably my favorite because it is just so ridiculously counterintuitive. It's not the Guru that provides new information, it's how the perfectly logical islanders respond to the Guru's information that provides new information, which thus incites a response/non-response which provides even more information.... 06:09, 13 February 2009 (UTC)
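The "levels of imagination" point made in the thread above can be made concrete with a tiny sketch (my own illustration, not from the thread): each level of nested imagining discounts one more possibly-blue-eyed observer, so without the Guru the chain eventually reaches an imagined islander who sees no blue eyes at all.

```python
def imagination_chain(blue_seen):
    """Counts of blue-eyed people visible at each nesting level.

    An islander who sees blue_seen blue-eyed people imagines one of
    them surveying the island; that imagined observer discounts his
    own (possibly blue) eyes, so he sees one fewer, and so on. The
    chain bottoms out at an imagined islander who sees zero blue
    eyes -- the level at which only the Guru's public statement can
    supply the fact that someone has blue eyes.
    """
    counts = []
    while blue_seen >= 0:
        counts.append(blue_seen)
        blue_seen -= 1
    return counts
```

For the puzzle's 100 blue-eyed islanders, a blue-eyed observer sees 99, and imagination_chain(99) only reaches 0 after 99 nestings; that innermost hypothetical islander is why the announcement really is new information.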

The two-blue case does not accurately represent the situation. Assume that there are three blue-eyed people. All of them have the same information: they all know the same things and take identical actions, so we'll follow A. A knows that there are two people, B and C, with blue eyes (he doesn't know about the third, himself). When the oracle makes the pronouncement, he doesn't know his own eye color. If he saw nobody with blue eyes, he would know his eye color, and would leave. Since nobody leaves, everybody sees at least one blue-eyed person. The next day, he knows that everybody sees at least one blue-eyed person (BEP), including everyone with blue eyes. There are therefore at least two BEP. If there were only two BEP on the island, they would each see exactly one BEP. Knowing that there were at least two BEP, they would both leave. Since nobody leaves, everyone now knows that everyone, including the BEP, sees at least two other BEP. Therefore, there are at least 3 BEP. Now A, who sees two BEP, knows that there is a BEP he doesn't see: himself!

Note that the brown-eyed people never get to leave, unless they know that the rest of the people have brown eyes.

Also, the trigger starts either when the ferry comes to the island, or when the pronouncement is made, whichever is later.

You've nicely set up a base case: that for 3 BEP, they all leave on the 3rd night. But no one has made an argument for the inductive step: if n BEP leave on the nth night, then n+1 BEP leave on the (n+1)th night. It's been alluded to by using 2 BEP leaving on the 2nd night as the base case for n=3. So, if there were n+1 BEP, each would see n BEP and expect them to leave on the nth night. Since no one leaves, each realizes that everyone else also sees n BEP, meaning there must be an additional BEP: "the viewer". Each can then conclude that he has blue eyes. N.B. n=2 is the real base case. The use of n=3 as a base case relies on this and does not describe the puzzle more accurately; it just offers a specific case of the solution.
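The inductive argument asked for above can be written out compactly. A sketch in standard induction form (nights numbered from the Guru's announcement):

```latex
\textbf{Claim.} If exactly $n$ islanders have blue eyes, all of them leave on night $n$.

\textbf{Base case} ($n = 1$). The lone blue-eyed islander sees no blue eyes,
so the Guru's statement can only refer to him; he leaves on night $1$.

\textbf{Inductive step.} Assume the claim holds for $n$. With $n+1$ blue-eyed
islanders, each sees $n$ blue-eyed people and reasons: if those $n$ were all of
them, they would leave on night $n$. When night $n$ passes with no departures,
each concludes that the $n$ people he sees must themselves each see $n$
blue-eyed people, so there is an $(n+1)$-th blue-eyed person: himself.
All $n+1$ leave on night $n+1$. \qed
```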

Blue Eyes Puzzle - answer: 200 people leave on the third night after the ferry comes, regardless of whether the guru has spoken yet.

Fact 1 – everyone on the island is a perfect logician. Therefore if one person can logically know something, all will know the same thing logically if they have the same information.

Fact 2 – there are 201 people on the island: 100 blue, 100 brown and 1 green (though the cardinal numbers aren't really important for – at least – all situations in which n > 3 for any grouping). The people do not know the distribution of eye colours.

Fact 3 – if there was 1 person with brown eyes and that person knew there was at least 1 person with brown eyes, they could leave. If there were two, it would take until the second day, because each person would look at the other and say "if they were the only one, and perfectly logical, they would have left on day 1; given that they didn't, I must have brown eyes", and so on -> infinity.

Now, take a brown-eyed person, called Bob (why not...). He looks around and sees 99 other people with brown eyes, but doesn't know if this is because there are 100 people with brown eyes, including himself, or just the 99 he can see. Should it be the case that there are only 99 brown-eyed people (the pessimistic assumption), then Bob can know that AT LEAST the other people can see 98 people with brown eyes (the 99 minus themselves). This leads to:

Fact 4 – everyone can see at least 98 people with brown eyes. There are at least 98 people with brown eyes (on the pessimistic assumption that Bob doesn't have brown eyes, and that each of the 99 brown-eyed people Bob can see cannot know their own eye colour and makes the pessimistic assumption that there are only 98 people with brown eyes). Everyone knows this. Therefore there is at least one person with brown eyes (which is what the guru would have said to them, had she talked about them).

Now we split time into imaginary time (I.T.) and real time (R.T.). IT is the logical deduction of what the state would have been like had RT run through to that point. Therefore everyone can imagine (in IT) that the ferry has been coming for 97 days and no one has left – they couldn't, because there are at least 98 people with brown eyes no matter who looks at it. So IT day 98 = RT day 1. If no one leaves (as is to be expected, because everyone can see 98 people with brown eyes), then everyone can know that there are in fact 99 people with brown eyes, because if there were 98, the 98 would have left at the first opportunity after day 97: each would see 97 other people with brown eyes who did not leave on day 97, and given that such a person could not see anyone else with brown eyes, it must be them who is the missing one, so they could all leave. So on IT day 99 (= RT day 2) everyone looks around and sees that there are 99 other people with brown eyes; if it were the case that there were only 99, then each person with brown eyes would see 98 and know that there were in fact 99 people with brown eyes (because of their actions on RT day 1), and know that they can leave so long as they only see 98 people with brown eyes – because they must be person 99. If no one leaves on that day, then we get to IT day 100, RT day 3, on which everyone knows that there are 100 people with brown eyes (Bob's "optimistic" assumption), but Bob can only see 99 people with brown eyes and knows that there are 100 – so he must be no. 100. Thus everyone can leave. The same is true for those with blue eyes. --Joe

  • People with brown eyes cannot leave the island, because they cannot logically conclude that their eyes are brown instead of grey, purple, red, black, etc. They don't possess the knowledge that there are only blue-, brown-, and green-eyed people on the island. Similarly, the Guru cannot leave the island. --Mark
    • I agree the Guru cannot leave, but I fail to see why the observation that our Bob made (I can see 99 people with brown eyes) cannot lead to the general claim "there are at least 98 people with brown eyes" - which is essentially a higher version of what the guru would say to the blue-eyed people. It must be true - no matter the colour of any individual's eyes - that in the situation where there are 100 people with brown eyes, everyone can know with certainty that there are at least 99 people with brown eyes, and moreover everyone can be sure that everyone else knows that there are at least 98 people with brown eyes. (E.g. the universe of people with brown eyes is, for Bob, 99 people - if he assumes he doesn't have them - and so he knows that everyone in that universe knows for a fact that there are at least 98 people with brown eyes, everyone excluding themselves.) So there is common knowledge there which goes far beyond what the Guru would say - if she felt like talking to brown-eyed people. --joe
      • The point is that, although Bob can see 99 people with brown eyes, Bob can imagine a scenario where Bob has red eyes. Therefore, in Bob's imaginary scenario, the first brown-eyed person to his left (1BEPL) would see 98 people with brown eyes. In Bob's imaginary scenario, 1BEPL could imagine a scenario where 1BEPL has red eyes. Therefore, Bob can imagine a situation where 1BEPL imagines that 1BEPL has red eyes, and so in this imagination of an imagination, the second brown-eyed person to Bob's left (2BEPL) would see 97 people with brown eyes. This continues down the line, until...
      • Bob imagines that 1BEPL imagines that 2BEPL imagines that 3BEPL imagines that 4BEPL imagines that ... that 99BEPL imagines that he has red eyes. Therefore, Bob can conceive of a situation where all the brown-eyed people he sees aren't sure of the color of their own eyes. Without knowing a Guru-style comment (new information which says, essentially, "At the highest repetition of imagining another person's thoughts, it is impossible to imagine that nobody has blue eyes") it is impossible for Bob or any other brown-eyed person to leave the island.
      • Indeed, you say, "[Bob] knows that everyone in that universe knows for a fact that there are at least 98 people with brown eyes," but what this story requires is that he knows that everyone knows that everyone knows that everyone knows that everyone knows...etc.
      • Incidentally, in order for the progression of imaginations to collapse, days have to pass where nobody leaves. That's why the blue-eyed people, armed with the Guru's new information, are able to leave on the 100th day. And that's why, on the 101st day, the brown-eyed people (whose level of imagination went one level deeper) say "Oh, darn! I guess our eyes aren't blue." But they still don't know what color their own eyes are. --Mark

All blue-eyed people leave at midnight on the 100th day after the Guru makes her declaration. All green- and brown-eyed people stay behind. The Guru's declaration contains new information, which is the necessary triggering event. It works as follows:

On Day 0, the Guru states, "There is at least one blue-eyed person on the island." This is an equivalent statement to, "If you can see nobody with blue eyes, you may leave at midnight tonight (on Day 1), knowing your eyes are the blue ones." However, nobody leaves at midnight on Day 1, because they each see at least one other person with blue eyes. We know this'll happen because we can see the eye-color distributions.

Since nobody leaves the island, everyone recognizes that there must indeed be at least two blue-eyed people on the island - the one who would've left on Day 1, and a second blue-eyed person whose presence prevented the first from leaving on Day 1. However, now we have what is essentially a second iteration of the Guru's statement, one which everyone knows. "There are at least two blue-eyed people on the island." This is an equivalent statement to, "If you can see one other person with blue eyes, you may leave at midnight tonight (on Day 2), knowing your eyes are also blue." However, as before, nobody leaves at midnight on Day 2, because they each see at least two other people with blue eyes. Again, we know this'll happen because we see the eye-color distributions.

This continues following the same formula. The next rendition of the Guru's statement becomes "There are at least three blue-eyed people on the island: if you see two other people with blue eyes, you may leave at midnight tonight (on Day 3), knowing your eyes are also blue." Because this can be expanded, we can generalize it to become "There are at least N blue-eyed people on the island: if you see N-1 other people with blue eyes, you may leave at midnight tonight (on Day N), knowing your eyes are also blue."

But what is the maximum limit on N? We know that each person with blue eyes sees 99 other people with blue eyes. This satisfies the predicate in the second part of the Guru's statement written above for N = 100. Therefore, 100 blue-eyed people leave the island on Day 100. The brown-eyed people (and the Guru) are stuck behind, because they never had an initial Guru statement on which they could base their logic for their own eye colors.

Note that this is not induction: each new version of the Guru's statement takes as data the fact that nobody left the island at midnight. --Mark

--Just because no one leaves the island doesn't confirm that there are at least x-amount of blue eyed people. After 100 days a blue eyed person would observe that no people have left the island, but for all he knows he could have brown eyes. ALL the blue eyed people would have to GUESS that they had blue eyes. After 100 days a blue eyed person observes that he can still see 99 blue eyed people on the island, but that does not give him any information about his own eye color. No one has left the island after 100 days because all the blue eyed people logically know that they could have eyes that are brown or green or hazel or black, and as far as I can tell no one ever WILL leave the island.

You're forgetting that the Guru states that she sees at least one person with blue eyes:

1) Suppose there is just one blue-eyed person on the island. That person, seeing no other blue-eyed people, will leave on the first night. Everyone else will know that because he left on that first night, they all don't have blue eyes.
2) Suppose there are just two blue-eyed people on the island. Those people will both see one other blue-eyed person. When that one blue-eyed person doesn't leave on the first night, they know that there isn't just one blue-eyed person on the island (as per scenario 1). They'll both leave on the second night. Everyone else will see two blue-eyed people leave on the second night, and will all know that they do not have blue eyes.
3) Suppose that there are just X blue-eyed people on the island. Those people, seeing X-1 other blue-eyed people, will leave on the Xth night because those X-1 people did not leave on the (X-1)th night. Everyone else, seeing X blue-eyed people leave on the Xth night, will know their eyes are not blue.
You should remember that the experience of blue-eyed people is different from the experience of non-blue-eyed people. Blue-eyed people will see one fewer blue-eyed person in the population than non-blue-eyed people.
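The three scenarios above can be sketched as a tiny simulation. Here's a minimal Python model (my own sketch, not part of the original discussion, with a made-up function name): treat each night that nobody leaves as raising the common-knowledge lower bound on the number of blue-eyed people by one; a blue-eyed islander leaves as soon as that bound exceeds the number of blue-eyed people they can see.

```python
def departure_night_blue(num_blue):
    """Night (counting from the Guru's announcement) on which all
    blue-eyed islanders leave, in the standard analysis of the puzzle."""
    bound = 1   # the Guru's statement: at least one blue-eyed person
    night = 1
    # Each blue-eyed islander sees num_blue - 1 blue-eyed people.
    # If the common-knowledge bound exceeds what they can see, their own
    # eyes must be blue, and they leave at midnight.
    while bound <= num_blue - 1:
        bound += 1   # nobody left, so "at least bound + 1" becomes common knowledge
        night += 1
    return night
```

This reproduces the claim in scenario 3: X blue-eyed people leave on the Xth night.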

The fact that everybody already knows all the rules means this doesn't work. It would only work if there were just one or two blue-eyed people on the island; the logic doesn't chain up. It works for two people because each would expect the other person to leave if they themselves didn't have blue eyes (in that case the Guru's statement would be giving the sole blue-eyed person new information), so the other's failure to leave is itself new information. But if they see many people with blue eyes (and thus know that everybody else also sees many such people), they learn nothing when nobody leaves, and their knowledge on the 98th day is no different from their knowledge on the first. Besides, the Guru isn't giving any new information: since the rules are known, everybody already knows the Guru sees people with blue eyes. Whoever came up with this puzzle didn't think it through, and nobody leaves.

I can try to explain the inductive argument better, but I don't buy it:

The guru announces that he can see at least one person with blue eyes.

1) If there is just one blue-eyed person, learning that there is at least one blue-eyed person means that he now knows the color of his own eyes and is free to go the first night.
2) If there are exactly two blue-eyed people, each of them sees just one person with blue eyes. When the other doesn't leave on the first night, both of them realize that they must also have blue eyes, and they leave on the second night.
3) If there are exactly three blue-eyed people, each of them sees two people with blue eyes. If those two were the only people with blue eyes, this would be situation #2 and they would both leave on the second night. Because that doesn't happen, all three of them now know that they must have blue eyes and leave on the third night.
4) In general, let f(N) be the claim that if N people have blue eyes, they will sort it out within N days and leave (as demonstrated for N = 1, 2, and 3). If there are N+1 people with blue eyes, each blue-eyed person sees N people with blue eyes. By our assumption, N people with blue eyes would have figured it out within N days. Seeing that they don't, each concludes that there must be N+1 blue-eyed people, and they all leave on day N+1. In other words, f(N) implies f(N+1).
5) Because f(1), f(2), and f(3) are true as demonstrated, and since f(N) implies f(N+1) as demonstrated, any number n of blue-eyed islanders can figure out exactly how many of them have blue eyes in n nights and leave. In the case of this example, an island containing 100 blue-eyed islanders will figure this out in 100 days and leave. Sadly for the people who don't have blue eyes, each of them was waiting for the nth night to see whether the situation resolved itself. Because it did, they can conclude that their eyes aren't blue, but they are stuck not knowing the actual color of their own eyes.

Here's why I'm not really buying it: Every islander already knows that no one on the island can suspect that they are the only person with blue eyes. For that matter, everyone on the island knows that everyone else can see at least 98 other people with blue eyes. (If I'm blue-eyed but don't know it, I know that there are at least 99 people with blue eyes. That means each of those people sees at least 98 other people with blue eyes.) The guru's announcement means nothing to any of the islanders; he might just as well have announced that the sky is blue. If they wanted to start "counting down" the induction, they could have done so immediately without the aid of the guru. For that matter, all of this holds true for the brown-eyed people as well. This leads me to believe that something is amiss in thinking of this inductively, and I suspect it's because ultimately each islander has to believe that at least one person on the island may have reason to believe that there is only one blue-eyed islander, and thus there could be a "first night" which either will or won't happen. In other words, learning that someone else on the island has blue eyes has to plausibly be new information for at least one islander, from the perspective of at least one other islander. This is only going to be true when there is a sufficiently small number of blue-eyed islanders (but I'm making myself dizzy trying to figure out exactly how small).

It may be easier to think about it in reverse. There are two populations on the island:

Blue-eyed people - each sees 99 blue-eyed people
Non-blue-eyed people - each sees 100 blue-eyed people

Each blue-eyed person knows that each blue eyed person on the island sees either 98 blue-eyed people (99-1 (the other blue-eyed observer)) or 99 blue-eyed people (99-1 (the other blue-eyed observer) +1 (the original observer)). Each non blue-eyed person knows that each blue eyed person on the island sees either 99 blue-eyed people (99-1 (the other blue-eyed observer)) or 100 blue-eyed people (99-1 (the other blue-eyed observer) + 1 (the original observer)).

All the blue-eyed people thus know that everyone on the island sees a minimum of 98 blue-eyed people, and no more than 100 blue-eyed people. All the non-blue-eyed people know that everyone on the island sees at least 99 blue-eyed people, and at most 101 blue eyed people.

What they DON'T know is what everyone else knows - each of the possible counts above would lead an observer to a different estimate of how many blue-eyed people there are.

Take the case of the (nonexistent, but the blue-eyed people don't know that) person who sees 98 blue-eyed people. This person knows that blue-eyed people see either 97 blue-eyed people or 98 blue-eyed people. Thus they know that no one can believe there are fewer than 97 blue-eyed people on the island, nor more than 99 blue-eyed people on the island (the minimum number observed plus themselves). But this (nonexistent, but unknowably so) person doesn't know whether or not there are people on the island who see only 97 blue-eyed people.

And so on down it goes, because no one knows WHAT ANYONE ELSE KNOWS. While the person who sees 99 blue-eyed people knows there cannot be anyone who sees only 97 blue-eyed people, they don't know that everyone else knows that - there could be someone who sees only 98 blue-eyed people! And because THAT person doesn't know that everyone else observes at least 98 blue-eyed people, THAT person might allow for someone who sees only 97! The 97-person case is known to be impossible to the observer who sees 99 blue-eyed people, but it isn't known to be impossible to the theoretical observer who sees only 98, who can only conclude that no one sees fewer than 97. This continues: the observer who allows for someone seeing only 97 blue-eyed people must in turn allow that this person cannot rule out someone who sees only 96 blue-eyed people. Thus, even though we are already below what anyone could actually observe, they cannot deduce that other people can deduce that this is impossible.

However, by stating that someone has blue eyes, when this chain continues down to the person who sees no blue-eyed people, we KNOW that that person WOULD know that they DO have blue eyes, because they have that information. So, what we don't know is the full nested chain: that the person who sees only 98 blue eyes knows that the person who sees only 97 blue eyes knows that the person who sees only 96 blue eyes knows that ... (and so on, one count at a time, all the way down) ... the person who sees only 1 pair of blue eyes knows that the person who sees no blue eyes doesn't exist.
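The depth of that nested chain can be made concrete with a small model (a hypothetical helper of my own, not from the original post): each level of "A knows that B knows that ..." subtracts one from the fewest blue-eyed people some imagined observer might see, which is why, starting from a real islander who sees 99, the chain only bottoms out 99 nestings down.

```python
def min_seen_at_depth(num_blue, depth):
    """Fewest blue-eyed people an imagined observer can 'see' after
    `depth` levels of nested imagining, starting from a real blue-eyed
    islander (who sees num_blue - 1 blue-eyed people)."""
    return max(num_blue - 1 - depth, 0)

# With 100 blue-eyed islanders, only at nesting depth 99 does the chain
# reach a hypothetical observer who sees nobody with blue eyes.
```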

While we know that everyone lower than 98 cannot exist, we don't know that the people who see only 98 blue eyes don't exist, which leads to the above causal chain. However, because the chain is now ROOTED, we can then follow the logic that:

If someone saw no blue eyes, they could leave on the first day, because they know that someone has blue eyes, and if they saw no one with blue eyes, they would know that they themselves must. Thus, when no one leaves on the first day, everyone knows that EVERYONE ELSE KNOWS that the person who sees no blue eyes doesn't exist.

Thus, on the second day, anyone who saw only one set of blue eyes would be able to leave. While we know this person doesn't exist, we don't know that they don't know (x many times) that that person doesn't exist. So when no one leaves on the second day, everyone knows that EVERYONE ELSE KNOWS that the person who sees only one set of blue eyes doesn't exist.

Thus, on the third day, anyone who saw only two sets of blue eyes would be able to leave. While we know that this person doesn't exist, we don't know that they don't know (x many times) that the person doesn't exist. So when no one leaves on the third day, everyone knows that EVERYONE ELSE KNOWS that the person who sees only two sets of blue eyes doesn't exist.

So on the yth day, anyone who saw only y-1 sets of blue eyes would be able to leave. While we know that no one will leave on many of these days, what we don't know is how deep the chain goes. This is why the process is impossible to short-circuit - the blue-eyed folk know that everyone sees between 98 and 100 blue-eyed folk, and they know that the non blue-eyed folk know that everyone sees between 99 and 101 blue-eyed folk, but they don't know which category they fall into, and know that no one else knows that either. THIS is the cause of the recursion. So while everyone knows it is impossible for anyone to see only 97 blue eyes, they don't know that everyone else knows this.

Thus each day is communicating information, essentially; when no one leaves, they know that everyone else knows that information. On the 98th day, the people who see 99 pairs of blue eyes know that, if anyone sees only 98 pairs of blue eyes, that person would know that people who see only 97 pairs of blue eyes would be able to leave. When no one leaves, they know that the theoretical person who sees only 98 pairs of blue eyes knows that no one sees only 97 pairs of blue eyes. Ergo, on the 99th day, anyone who can see only 98 pairs of blue eyes will be able to leave. When no one leaves on the 99th day, everyone knows that everyone else knows that no one who sees only 98 pairs of blue eyes exists. So at this point, the people who see 99 pairs of blue eyes know that everyone else on the island sees at least 99 blue-eyed folk, the same as themselves. This means there are 100 blue-eyed people on the island, as all other blue-eyed people are accounted for - those 99 blue-eyed people you see must ALSO see 99 blue-eyed folk, which means that you yourself must be blue-eyed. As such, on the 100th day, you can leave.

What the guru tells you is NOT actually that there is someone with blue eyes. What they are telling you is what other people know other theoretical people know. Because they can eliminate the uncertainty about what other people know what theoretical other people know, each passing day provides new information to the people in the form of what other people know. In the end, all the theoretical people who can't exist DO matter because of what you DON'T know other people know.

Had the Guru said "I see at least 100 blue-eyed people", then everyone with blue eyes (who can see only 99 sets of blue eyes, and thus themselves must be the 100th) can leave. Had the guru said "I see at least 99 blue-eyed people", then no one could have left on the first day, but the 100 blue-eyed people could have left on the second day. And so on down it goes. This is yet ANOTHER way of thinking about it.
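That "yet ANOTHER way of thinking about it" generalizes cleanly. A hedged sketch of my own (hypothetical function name, using the bound-counting reading of the puzzle): if the Guru announces "I see at least k blue-eyed people", the announcement sets the common-knowledge lower bound to k, and the 100 blue-eyed islanders leave on night 100 - k + 1.

```python
def departure_night(num_blue, guru_bound):
    """Night on which num_blue blue-eyed islanders leave, if the Guru
    announces seeing at least guru_bound blue-eyed people."""
    bound = guru_bound
    night = 1
    # A blue-eyed islander sees num_blue - 1 others; each night without
    # departures raises the common-knowledge bound by one.
    while bound <= num_blue - 1:
        bound += 1
        night += 1
    return night
```

With guru_bound = 100 everyone with blue eyes leaves on night 1, with 99 on night 2, and with the puzzle's "at least one" on night 100, matching the paragraph above.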

In the end, the non-blue eyed population is misdirection - it is actually irrelevant as to whether or not they exist, it is simply relevant as to whether or not they COULD exist. If there were 100 people with blue eyes on the island, and no one else, the answer would be exactly the same. - The Titanium Dragon

What's wrong with this?[edit]

What if leaving the island is a bad thing? Let's assume it's essentially a death sentence. (The version I heard first was suicide, which led me to make this argument.)

Where does this train of thought break down?

1. I know that everyone else can see at least 95 people with blue eyes.

2. I don't want to die. Neither does anyone else. [This should probably be an axiom.]

3. People only leave the island because they assume that if there had been one less person with blue eyes, they would have left the day before.

4. So long as everyone reaches this conclusion - which I can assume because they're perfectly logical, just like me - everyone can justify not leaving the island since they'll never find out what colour their eyes are.

5. I'm safe! Phew!

[tl;dr: 'If people decide not to leave the island, no-one finds out their eye colour, so no-one needs to leave the island.']

I don't believe this argument invalidates the other piece of work: it is a separate logical conclusion that can equally be derived. If it is true, what is the minimum number of people in part 1 for which it works?

Dragon Dave.

I agree, Dave, but I'd state it slightly differently. We assume there is some optimal strategy for determining exactly how many blue-eyed people there are. The leading theory is that x people leave the island x days after "day zero". So, if I see 99 blue-eyed people and they don't leave on day 99, I can only conclude that my eyes are blue, hence I must leave on day 100. Let's call that strategy "S1".

1. I don't want to die or be exiled from the island, and neither does anyone else.

2. If I figured out strategy S1, I can assume everyone else figured it out too.

3. If I see 99 blue-eyed people, I expect them to all leave on day 99.

4. If nobody leaves on day 99, I can think of several explanations:

Maybe I'm the 100th blue-eyed person.
Maybe I made a mistake counting the number of people.
Maybe I lost count of the number of days.
Maybe everyone else made some kind of mistake.
Maybe there is a better strategy, "S2" which everyone else thought of but I didn't.
Maybe I was wrong in assuming which day would be considered "day zero".
Maybe the 99 people are in denial.
Maybe the 99 people have decided not to obey the rules.

5. Since all those explanations (and more) are technically possible, I can't logically conclude with 100% certainty that I'm the 100th blue-eyed person.

6. I really don't know for sure what my eye color is.

7. Everyone else will reach this same conclusion, hence no one will leave.

I say this strategy (call it "S3") works for ANY number, even just one blue-eyed person. All it takes is for that one person to exercise a tiny bit of imagination in rationalizing why the results were unexpected and then concluding that he/she can't be 100% certain. Never underestimate the power of rationalization, especially when the alternative is death. -Ralph

Here's a consideration from another perspective. This puzzle primarily demonstrates the problem of Common Knowledge in logic. The key to not getting lost is to mind Occam's razor - do not multiply entities unnecessarily.

Consider the situation where you and I think about the information "I am taller." I can conclude that I am taller and you can conclude that I am taller, but the information is not common knowledge if I don't know that you know that I am taller. Perhaps I saw you write down our heights, so then I know you know I'm taller. But can I say that I know you know that I know you know that I am taller? One can expand this I-know-you-know construct endlessly just between two actors.

And this is when one should ask himself - am I multiplying entities unnecessarily?

From certain point you would be.

If I say we both know that we know that I am taller then that automatically implies that we both know that we know that we know that I am taller. Surely you can construct a proof for that statement yourself.

I believe the same problem happens when people consider the 99 blue eyes puzzle. If you make a simple analysis from the point of any of the islanders, you'll find that all islanders know that all islanders know that there are blue eyed people. Considering whether all islanders know that all islanders know that all islanders know that there are blue eyed people is moot.

Ok, to answer the restated version

1) Irrelevant; the thing at issue here is common knowledge. What matters is that "I know that (everyone else knows that)^99 there is at least one blue-eyed person" isn't true beforehand.
2) False, actually, although the question should be reworded to change this: the individuals should all know that all of the individuals on the island are perfect logicians.
3) True (or to leave on day 100 with me).
4) In turn: a) True. b) Not possible within the question - the conclusion that there are 99 visible blue-eyed people is determinable from the available facts, so you know it instantly. c) Debatably possible within the question, but it would be simple to fix this with a small rewording of the question relating to memory. d) True, see 2. e) True, for the same reasons as 2 and 4d. f) Impossible - it is logically deducible which day is day 0, so you couldn't be wrong about it. g) & h) are again fixed by the same sort of rewording as d and e.

And to the post directly above mine: it is clearly shown that you are not multiplying entities unnecessarily, because whether or not "there are blue-eyed people" is truly common knowledge affects what conclusions can be logically drawn. If it were already common knowledge then the answer to the puzzle would be that nobody leaves, but it turns out it's not.

The Monty Hall Problem[edit]

This question has been definitively answered, defended, and answered again. The latest treatment I've seen for it can be found in Michael Shermer's column in the October Scientific American.

Short answer: Yes, change your choice.

The article also has a link to a demonstration on the web you can use to convince yourself. Or, just get three playing cards and a friend, and play the game a few times, keeping score. It's amazing how many people just won't spend a few minutes doing this. --Sidelobe 18:48, 11 February 2009 (UTC)

The answer assumes that the host always shows you another door. On the show, this was not always the case: if you chose correctly, you were more likely to be presented with the choice. If you were incorrect to begin with, Monty would sometimes just open your door and give you the goat. Let's suppose the odds look like this:
Initially incorrect: 50% chance of getting the choice
Initially correct: 100% chance of getting the choice
Then the game has changed. Given that you have the choice, the odds are now stacked in favor of keeping your door. This becomes more obvious if you change the odds to 0%/100%, where you should never change your choice because you're right if you even got to this point. -- 19:46, 11 February 2009 (UTC)
Yes, but the puzzle clearly states that both A) he always opens a door at this point and B) the door he opens has a goat. Period.

Not so short answer: Yes, change your choice. The probability that the car is behind the door you choose the first time is 1/3, the chance that it's behind one of the other two doors is 2/3. Keep in mind that Monty knows where the car is. He now opens one of the remaining doors (with a goat behind it). The probabilities haven't changed though: Initial door: 1/3, the (one!) remaining other door: 2/3. Thus changing your choice yields a 2/3 probability of winning the car. If you don't believe it, try it out!

You can put it into a look-up table. Assume that the prize is behind door #1 and that the host always picks a door that wouldn't win.

Contestant    Host       Change
  1 (win)    2 or 3   3 or 2 (lose)
  2 (lose)     3         1 (win)
  3 (lose)     2         1 (win)

If you don't change, you have a 1/3 chance of winning. If you do change, you've got a 2/3 chance of winning.
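The look-up table can also be checked empirically. A quick Monte Carlo sketch in Python (my own, not from the column referenced above; the function name is made up):

```python
import random

def monty_win_rate(switch, trials=100_000):
    """Estimate the win rate for the stay/switch strategies when the
    host always opens an unpicked door hiding a goat."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)
        pick = random.randrange(3)
        # The host opens a door that is neither the pick nor the car.
        # (When pick == car, either goat door works; the choice between
        # them doesn't affect the stay/switch win rates.)
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

# Staying wins about 1/3 of the time; switching wins about 2/3.
```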

Actually the answer is No. It is neither an advantage nor a disadvantage to change the choice. When Monty opens door #3, the chance of your first choice being right ALSO goes to 2/3. He will always open one of the doors and leave the option between the other two. It creates some kind of psychological illusion that something has changed, when in fact it didn't.

Um, slight problem there. How can the chance of your first choice being right change? When you made the first choice, you had no information about the location of the goat, and thus your chance is simply 1/3. The fact that you know something now can't help you with a decision you made in the past. I especially wonder how it can also be 2/3, given that probabilities are always between 0 and 1. Timotiis - 13:19, 16 February 2009 (UTC)
So? 2/3 is between 0 and 1...
If you pick door A and the host opens door B, the probability it is behind door C is 2/3. Thus the probability it is behind door A cannot also be 2/3, as that totals 4/3, which is greater than 1. Timotiis - 14:09, 11 March 2009 (UTC)

For a fresh take on a classic problem, suppose the host has forgotten which door the car is behind. Nervously (he might be out of a job if he gets this wrong!), he opens one of the other two doors - and fortunately for him, there's a goat behind it! Mopping the sweat from his brow, he asks if you'd like to switch. It turns out that *now*, it's neither advantageous nor disadvantageous for you to do so - the odds are 50-50! -rzh

I shouldn't think so - the important thing is not that the host knew where the goat was, but that you now know where a goat is. Your original chance of picking the car was 1/3, so the chance the car was behind the other two was 2/3. As you now know it isn't behind one of the other doors, and the chance your original pick was correct hasn't changed, you still have a 2/3 chance of winning if you switch.
The literature says that Monty must know what is behind the doors for the probability of 2/3 to be right (you have skipped the case where the host opens the door containing the car accidentally). The other assumptions are needed too. If you are presenting the "Monty Hall" problem, I feel you have to present something fairly close to the original - it does not need to be made more complex, as the beauty of it is its apparent simplicity.

Let's assume for simplicity that you pick door A. There are six scenarios:

Car behind A, host flips B. If you switch you lose.
Car behind A, host flips C. If you switch you lose.
Car behind B, host flips B. The host has flipped the winning door.
Car behind B, host flips C. If you switch you win.
Car behind C, host flips B. If you switch you win.
Car behind C, host flips C. The host has flipped the winning door.

Each of these happens with probability 1/6. So the probabilities

P(host flips losing door & switching is good) = P(host flips losing door & switching is bad) = 1/3.

are equal. Thus if we condition on the host flipping a losing door, switching isn't advantageous.

Contrast this with the original version of Monty Hall. Here we have:

Car behind A, host flips one of the other doors. If you switch you lose.
Car behind B, host flips C. If you switch you win.
Car behind C, host flips B. If you switch you win.

Each of these happens with probability 1/3. So here switching doubles our chances. -rzh

For a different twist on the problem, think about this. You walk into the studio partway through the show. The host has just revealed a goat behind one of the doors, and you are not told which door the contestant picked originally. What is the chance you pick correctly? Timotiis - 13:19, 16 February 2009 (UTC)

I'm not sure if this is helpful, but think of the scenario of one hundred doors. 99 goats. 1 car. You pick a door, and then the host opens 98 doors. Should you switch? The answer is obviously yes. In 99% of cases, the host will have opened every door that is not a car for you. So, in 99% of cases, you would benefit from the switch.

Try it with 20 doors. 19 goats. 1 car. You pick a door, and the host opens 18 doors. Should you switch? In 95% of cases, the host will have opened every door that is not a car for you. So in 95% of cases, you should switch.

In the original problem, it's only 66%. But it's still in your favour to switch -DevinB

My brain can only grasp the problem as a 50/50 success rate. However, running it through 1 million times, I get the following (Note my laziness with shifting the decimal to the right two):

0.333327% Correct Initially.
0.666673% Correct with Switch.
333327 / 1000000 initially Right.
666673 / 1000000 with Switch.

Being that it's 3 AM I may have poorly represented the problem. In code I set up an array of ints. I generated one random number to represent the initial choice. I randomly selected one of the ints ("doors") and set it to 1, representing the correct door. I then changed the remaining doors to 2, representing the losing doors. After that I set one of the doors marked 2 that was not the selected door to 3, to represent the revealed door. If the selected door was 1, I increased the initial-choice counter; if it was 2, I increased the switch-door counter.
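For anyone who wants to reproduce the experiment, here is a minimal re-implementation of the simulation described above (a Python sketch, not the original array-of-ints code):

```python
import random

def trial():
    """One game: returns (stay_won, switch_won) for a single play."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # host opens a door that is neither the player's pick nor the car
    opened = random.choice([d for d in doors if d != pick and d != car])
    # switching means taking the one remaining closed door
    switched = next(d for d in doors if d != pick and d != opened)
    return pick == car, switched == car

N = 100_000
stay = switch = 0
for _ in range(N):
    s, w = trial()
    stay += s
    switch += w
print(stay / N, switch / N)  # roughly 1/3 and 2/3
```

Note that in every game exactly one of "stay" and "switch" wins, which is why the two rates always add to 1.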

I acknowledge that the initial chance of getting the door right is 33%. However, once one of the wrong doors is revealed, you're left with only 2 choices (stay or switch), and 4 outcomes. 1) You stay and get it right. 2) You stay and get it wrong. 3) You switch and get it right. 4) You switch and get it wrong. That's 2 for right, 2 for wrong, 50/50.

An option is eliminated, so its chance of being the right one should be distributed amongst the unknown, correct? I know my logic is wrong, but why is it? Even in the case of 100 doors, the fact remains that you know one thing and one thing only: One of Two doors is correct. The numbers show that I'm wrong, and I trust them more than myself, I'd just like someone to help tell me why, or help me look at it the right way.


A followup, since I kind of get it now. When you initially select one, you acknowledge that "this one has a 66% chance of being wrong", the host will reveal one wrong one, taking it out of the selection. This will leave you with the one you already marked as a 66% chance of being wrong (because those were the odds when you picked it), and another that is now only a 33% chance of being wrong. It is 5:30 AM and this is the only way that I can find peace in the results.
- Orc

Think of it this way: the way he always opens a wrong one other than the one you pick means that if you picked the right one initially, the remaining one that you can switch to will be a wrong one, which happens 1/3 of the time. If you picked a wrong one initially, which happens 2/3 of the time, the one you can switch to will always be the right one. It's not simply switching between one of three doors, but switching between winning and losing, and since you have a 2/3 chance of losing with your first pick, switching will give you a 2/3 chance of winning.

I have heard many explanations, and I have found them all to be terribly confusing, and for the longest time I was convinced it was 50/50 until someone told me it worked in experimentation. Eventually I figured it out, and this is the explanation that works best for me. Suppose you have seen the show before, know Monty's routine, and decide from the get-go that you will stay. Then the winning condition for you is that you choose the car on the first guess: 1 in 3. Suppose instead that you decide to switch from the get-go. Then the winning condition becomes that you land on a goat, EITHER goat, because Monty is going to reveal the other goat, leaving only the car to switch to. The only way to lose if you switch is to have landed on the car to begin with, which was a 1 in 3 chance, leaving a 2 in 3 chance of landing on a goat and then switching to the car. This is essentially what the previous post says, but for me at least I find this way of looking at it easier to understand.

How does the goat being revealed affect the chances of your door having a car? At first, you picked a door knowing there was a 33% chance of a car with any door. Monty eliminated one goat. Now either door has a 50% chance, and the third door has a 0% chance (obviously). This means that the door you picked now has a 50% chance, which makes sense. Why would your door's chance be affected by when in the game you picked it, as long as it was not revealed to have a goat? Eliminating a door increases the chances of all doors with mystery items behind them.

The reason I've seen supporting the opposite conclusion is mostly, "You picked door #2 when it had 33% chances, so it STILL has 33% chances, while the other door has 50% chances!" However, these chances not only don't make sense (door #2's chances are magically suspended because you picked it), but they also don't add up to 100%, like they should, because there are only two doors to pick from now.

[I would appreciate it if somebody could explain to me why I'm wrong, if I'm wrong. Thanks.]

When you eliminate the goat, why should the probability of your initial guess being right/wrong change? (Hint: It doesn't.)
If the probability of the car being behind your door doesn't change from 33%, and the probability of it being behind some door is 1, what's the probability of it being behind the door that isn't yours? (Hint: basic probability theory)

I feel the above explanation and the explanation where you pick a strategy before hand and then consider the win condition are the best at breaking the counter-intuitivity barrier. However, if you still don't see why the odds of winning are 2/3, consider this alternative approach:

Monty isn't revealing one door, he's revealing all but one of the remaining doors, with the promise that he won't reveal the prize. Or, more succinctly, when there are n doors, he opens n-2 of them. When there are three doors total, this means he only opens one door (n-2 = 3-2 = 1). But if there were four doors, he would open two (n-2 = 4-2 = 2). Pretend there were 100 doors. You pick one, with a 1/100 chance of being right. He opens 98 doors and reveals 98 goats. Switching looks pretty good now, doesn't it? --aemmott

The easiest way I've found to express this is as follows. Alternate problem: there are n doors (where n>2); behind 1 door is a fabulous prize, behind each of the rest is a goat. You pick a door. Then the host asks whether you would like to stick with your door, or take the n-1 doors remaining. Obviously you switch, because you win if the car is behind any of n-1 doors rather than 1 door.

The Monty Hall problem is IDENTICAL to the above problem. The host opening doors doesn't change the fact that you are picking between 1 and n-1 doors. All he's doing is showing you that n-1 includes some goats, which you already knew. ---SPACKlick

One of my favourite tricks with this is the recursive Monty Hall problem, which catches out lots of people who know the Monty Hall problem. [answer immediately below]

There are 5 doors, you pick 1, the host opens 1 of the 4 other doors (which he knows has a goat), you can stick or switch your 1. The host opens 1 of the 3 doors (with a goat). You can stick or switch. The host opens 1 of the 2 doors you're not currently on with a goat behind it and you can stick or switch and take the prize behind 1 of the remaining 2 closed doors.

Most people who know of the Monty Hall problem switch every time, because that's better - the Monty Hall problem says so. And that does still give you a better-than-even chance of getting the car (63.3% here). However, there are two strategies better than that. What you have to realise is that the Monty Hall problem works by keeping the probability you have the car (p) to a minimum right up until you switch, because you want the last door to have the maximal 1-p it can. To do that, you stick where you are until the last switch. This tactic works for any value of n. Notice that on average the strategies lead to 50/50 odds of getting the car, but no individual strategy gives 50/50.

S = stick, s = switch, %C = chance of car (bracketed digits repeat):

  • SSS 20%C
  • SSs 80%C
  • SsS 40%C
  • Sss 60%C
  • sSS 26.[6]%C
  • sSs 73.[3]%C
  • ssS 36.[6]%C
  • sss 63.[3]%C
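These figures can be checked by simulation. A Python sketch of the 5-door game ('S' = stick, 's' = switch, one decision after each of the host's three reveals):

```python
import random

def play(strategy, doors=5):
    """One game: the host reveals a goat before each decision in `strategy`."""
    car = random.randrange(doors)
    pick = random.randrange(doors)
    closed = set(range(doors))
    for move in strategy:
        # host opens a goat door that is neither the current pick nor the car
        goat = random.choice([d for d in closed if d not in (pick, car)])
        closed.remove(goat)
        if move == 's':
            pick = random.choice([d for d in closed if d != pick])
    return pick == car

N = 100_000
rates = {s: sum(play(s) for _ in range(N)) / N
         for s in ('SSS', 'SSs', 'sSs', 'sss')}
print(rates)  # about 0.20, 0.80, 0.733 and 0.633 respectively
```

The stick-stick-switch strategy wins whenever the initial pick was wrong, hence 4/5 = 80%.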

Once you have chosen your door it either has a car or a goat behind it. It doesn't matter if Monty open 0, 1, 2 or 3 doors, if you had chosen the car (or goat) it would still be a car (or goat). Probability doesn't enable you to transform goats into cars or cars into goats. That requires other techniques such as cheating behind the scenes.

If you initially chose the car (which you would do 1 in 3 times on average), then swapping would cost you the car. If you initially chose a goat (which you would do 2 in 3 times on average), then swapping would get you the car.

I find it easier to explain if you assume that there are 3,000 doors, and only one contains the car. It is much easier to grasp that you probably didn't pick the right door initially, and switching is advantageous.

Resistor Grid[edit]

if we were to connect a power supply to the grid in a knight's-move pattern (assuming that we connect one negative terminal and one positive terminal), then we could measure the voltage drop and the current to calculate the resistance (we could use an ohmmeter, but this is usually more accurate).

for convenience, let the move be up two and left one.

this can be represented as:

  • up 2 left 1
  • up 1 left 1 up 1
  • left 1 up 2

here, the resistance would be 3 (the number of resistors the current must travel through) divided by 2, and by 2 again. We halve the resistance for each alternate possibility that contains the same number of resistors. This is because electrons can move here twice as fast when they have a second route to take. If we only had a 2x3 grid, then the resistance would be 3/2/2 (i.e. 0.75).

however when the size of the grid is increased to 4x5 (the knight's move in the centre) we have a list of other possible paths:

  • left 2 up 2 right 1
  • up 3 left 1 down 1
  • ...

to a nearly limitless selection (try drawing it, the possibilities are almost endless), but when we introduce the limit of not being able to double back or visit the same point twice, we narrow it down quite a lot (just under a hundred, but my calculations aren't always perfect).

logically, we would divide 3 by however many possible paths there are, but this is not the case. When the path requires 5 resistors to move through, the resistance increases, so dividing by two is no longer accurate. We still need to divide, but by what? To obtain the number we divide by, we use the following algorithm/formula/whatever:

1/(number of resistors) +1

this means that when the path is 5 resistors, we divide by 1.25 (1/2/2 + 1). Please note that the path will always require an odd number of resistors.

using this logic, we can divide three as follows:


this will continue until we run out of paths. also, we can assume that since the number we divide by will always be greater than 1(we add one to it, remember?) that the resistance will never get larger. however, the resistance will always be greater than 0, because we never subtract anything from it. these two rules form together as follows:

the resistance will be above 0 and less than 3, and the resistance will be divided by a number greater than 1 but less than or equal to 2.

this should be able to be simplified by 10^-X, where X is the number of available paths divided by the number of resistors.

this applies to all grids large enough to accommodate the knights move.

since the grid is infinite, X is also equal to infinity (an infinite number divided by a smaller infinite number), so the resistance can be contained as:

 0.00000000000000000(an infinite amount of zeros)00000001

or, slightly over 0.

  • But aren't there an infinite number of paths? The path can go away from the second resistor for any amount of time and then go outside the other paths. I.e. the steps left*n, down*n, right*(2n+2), up*(n+1), left*n will always make a longer path (assume the marked nodes are as in the comic (right right up from one to the other)). -- 06:43, 17 February 2009 (UTC)
  • For a grid, you can write the equation for the voltage at any node (in terms of nearby nodes), and use 2-d discrete Fourier transform methods to move toward an answer. Avoid the "path" method. This method fails horribly for simple circuits, like a Wheatstone bridge -- the middle resistor doesn't appear in the actual resistance equation, but by treating it as a set of parallel paths, it will. 06:15, 18 February 2009 (UTC)
  • Also, "0.00000000000000000(an infinite amount of zeros)00000001" is not slightly over zero. It is equal to zero. It is also an incorrect answer to the problem (The correct answer can be found on the fora). 10:53, 18 February 2009 (UTC)
Do you by any chance have some forum links to support that affirmation (where the answer could be found)? TIA. -- CrystyB 08:49, 9 April 2010 (UTC)
  • I can put two bounds on the answer. There are two paths that use exactly three different resistors (resistors from one path are not contained in the other path). This puts the upper bound at 1.5 ohms. -crms 01:29, 3 March 2010 (UTC)
  • A lower bound can be found by viewing each of the two end-points individually. For each endpoint, there are exactly four paths for electricity to go through to go to/from that point to an adjacent point. If we ignore the rest of the grid and simply replace those connections with 0 Ohm wires, then the total resistance is 1/4+1/4=1/2, our lower bound. -crms 01:29, 3 March 2010 (UTC)
  • Any answer that claims that the result is <1/2 is not taking into account that paths that share a resistor cannot be considered completely separate. -crms 01:29, 3 March 2010 (UTC)
  • The resistance is exactly half. This is a common problem, and it's explained here:


It is not a common problem, it is a variation of a common problem. The solution you posted does not measure the resistance across a knight's move. - Ben H
  • Hello, I don't know shit about resistors or current or what not, but this problem looks suspiciously like the problem of finding the cardinality of the rational numbers. It seems to me like it could be solved using Georg Cantor's method - see Cantor's diagonal argument. It can be shown that this infinite field can be "added up" by an integral whose upper bound is infinity.
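The node-voltage suggestion above can also be tried numerically. The sketch below (assuming numpy is available) builds the graph Laplacian of a finite N x N patch of 1-ohm resistors, injects +1 A and -1 A at two nodes a knight's move apart near the centre, and reads off the resistance as the potential difference. A finite patch slightly overestimates the infinite-grid value (known to be 4/pi - 1/2 ≈ 0.773), and the estimate improves as N grows:

```python
import numpy as np

N = 31  # side of the finite patch approximating the infinite grid

def idx(r, c):
    return r * N + c

# Graph Laplacian of the N x N grid of 1-ohm resistors
L = np.zeros((N * N, N * N))
for r in range(N):
    for c in range(N):
        for r2, c2 in ((r, c + 1), (r + 1, c)):
            if r2 < N and c2 < N:
                u, v = idx(r, c), idx(r2, c2)
                L[u, u] += 1; L[v, v] += 1
                L[u, v] -= 1; L[v, u] -= 1

# inject +1 A / -1 A at two nodes a knight's move apart, near the centre
m = N // 2
a, b = idx(m, m), idx(m + 1, m + 2)
I = np.zeros(N * N)
I[a], I[b] = 1.0, -1.0

# the Laplacian is singular (potentials are only defined up to a constant),
# so pin node 0 to 0 V before solving
L[0, :] = 0.0
L[0, 0] = 1.0
I[0] = 0.0
V = np.linalg.solve(L, I)
R = V[a] - V[b]
print(R)  # a bit above 0.773 ohms for this patch size
```

This stays between the 1/2-ohm lower and 1.5-ohm upper bounds derived in the comments above, and avoids the parallel-paths fallacy entirely.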

The Bridge[edit]

 C+D -> // 2 min
 D   <- // 1 min
 A+B -> // 10 min
 C   <- // 2 min
 C+D -> // 2 min
 total 17 mins.


 C+D -> // 2 min
 C   <- // 2 min
 A+B -> // 10 min
 D   <- // 1 min
 C+D -> // 2 min
 total 17 mins.

A slight variant of this puzzle is found in the Nintendo DS game Professor Layton and the Diabolical Box. The differences are that it is phrased in terms of horses, which you can only take 2 at a time (riding one back), and their crossing times are 6, 4, 2, and 1 hours. However, it is solved exactly the same way.
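That 17 minutes is optimal can be checked by brute force. A Python sketch (the individual times for A and B are not stated above; the classic version of the puzzle uses 10, 5, 2 and 1 minutes for A, B, C and D, which is what is assumed here):

```python
from itertools import combinations

TIMES = {'A': 10, 'B': 5, 'C': 2, 'D': 1}  # assumed classic crossing times

def best_time(near, far, torch_near):
    """Minimum total time to move everyone across: two cross forward at the
    slower one's pace, one returns with the torch."""
    if not near:
        return 0
    best = float('inf')
    if torch_near:
        for pair in combinations(sorted(near), 2):
            t = max(TIMES[p] for p in pair) + best_time(
                near - set(pair), far | set(pair), False)
            best = min(best, t)
    else:
        for p in sorted(far):
            t = TIMES[p] + best_time(near | {p}, far - {p}, True)
            best = min(best, t)
    return best

print(best_time(frozenset('ABCD'), frozenset(), True))  # 17
```

The search recovers both 17-minute schedules listed above; sending A and B across together, bracketed by the two fast C+D trips, is the whole trick.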

Coin Tosses 2[edit]

With a probability 1/2 [tail on the first toss] Bob will pay Sue $0.
With a probability (1/2)^2 [tail on the second toss] Bob will pay Sue $1.
With a probability (1/2)^3 [tail on the third toss] Bob will pay Sue $2.
With a probability (1/2)^{n+1} [tail on the (n+1)-th toss] Bob will pay Sue $n.
Therefore, the expectation value of how much will Bob pay Sue is:
<math>1/2 * 0 + (1/2)^2 * 1 + (1/2)^3 * 2 + ... = \sum_{n=0}^{\infty} n (1/2)^n = 2</math>
So Sue should pay Bob $2 for the game to be fair. *ABC*

  • The result of your sum is 1 and not 2
    • I don't think so, the terms of the sum are: 0, 1/2, 1/2, 3/8, ... The first three add up already to one. (Maybe you are missing the n inside the sum or something.) *ABC*
      • There shouldn't be a n inside the sum, I don't think. The 1/2 probability of getting 1 dollar includes the 1/4 probability of getting two dollars. You have to be successful in each iteration to get the next iteration, so really you just need to add the chance of getting each additional dollar. So value of the game=(.5)($1) + (.5^2)($1) + (.5^3)($1) + ... *Matt*
      • Actually your left term is different from your right term. In the left the terms are 0, 1/4, 1/4 ... In your rhs the terms are 0, 1/2, 1/2, 3/8...
      • It's just a geometric distribution [Reply: No it isn't, it's the derivative of one]. The sum is \sum_{n=1}^\infty n(1/2)^(n+1) = \sum_{n=1}^\infty (1/2)^n. ie. what Matt said, but if you stick the n inside the sum, the exponent must be n+1 (the probability of EXACTLY one head is 1/4, not 1/2). -- bradluen
      • If the exponent is n+1 then the sum starts at n=0 and the first term is (1/2)^(0+1) = (1/2); there is no possible value for n that can make (1/2)^(n+1) equal 0
  • Solution 1: The infinite sum yields 1. (as stated by Matt)
  • Solution 2: We can restate the problem the following way: Bob wants to play until he wins at least one time. Given this precondition we can already pay him in advance for his win. In this case all the coin tosses become independent. As Bob pays $1 for each win, Sue should pay the same to Bob for his (future) win.
  • Solution 3: Recursively, the game gives a 1/2 chance of winning nothing, and a 1/2 chance of winning $1 and getting to play again. So if we call the expected value E, then E = (1/2)*0 + (1/2)($1 + E). This solves to E = $1.
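Solution 3's recursion is easy to sanity-check by direct simulation; a quick sketch of the game as described (Bob pays $1 per head, stopping at the first tail):

```python
import random

def bob_pays_once():
    """Total Bob pays Sue in one game: $1 for each head before the first tail."""
    total = 0
    while random.random() < 0.5:  # heads
        total += 1
    return total

trials = 200_000
avg = sum(bob_pays_once() for _ in range(trials)) / trials
print(avg)  # close to $1, the claimed fair price
```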

The solution cannot be $1 because the payoff will look like this for (Sue, Bob)

Heads: ($0, $0) play continues (essentially Bob gives back Sue her dollar)
Tails: ($1, -$1) Play stops
Heads, heads: (-$1, $1) Play continues
Heads, Tails: ($0, $0) play stops
Heads, Heads, Heads: (-$2, $2) play continues
Heads, Heads, tails: (-$1, $1) Play stops.

So Sue loses only if a tails occurs on the first toss; otherwise she will always break even or win, and vice versa for Bob.

  • You say "only" as if it was a rare occurrence... but that means that Sue loses half the time, breaks even 1/4 of the time, and wins 1/4 of the time... so she loses twice as often as she wins. But, when she loses, she always loses $1, and when she wins, she can win more than $1 (and averages to $2). So it's even (1/2 * $1 == 1/4 * $2), and the cost of $1 is fair. Phlip 11:52, 12 February 2009 (UTC)

  • I think the simplest way to think about this puzzle is to assume that Bob and Sue keep playing the game forever. Then the situation becomes:

Every time a tails comes up (a new game begins), Sue pays Bob $x. Every time a heads comes up, Bob pays Sue $1.

Clearly, for this to be fair, x must be 1. --JDB

erm, I'm just learning probabilities in school, but the solution for the infinite sum, which I found logically to be the solution of this problem (S = 1/2^n x n for n going from 1 to infinity), is exactly 2 (the series seems to be convergent), and I found this computationally, which cannot be disproven that easily, sry ;p (at least as far as 32-bit double floats' precision goes, I mean...)
So that should be a definitive answer? I'm curious and eager to spread this so we need a consensus here ; )
Simple c++ algorithm:
#include <iostream>
using namespace std;
int main() {
    int i, iter = 1;  // ITERations
    double sum, aux;  // and AUXiliary variable
    while (iter != 0) {
        cout << "Insert num of iterations: (or '0' to exit)";
        cin >> iter;
        aux = 1;
        sum = 0;
        for (i = 1; i <= iter; ++i) {
            aux /= 2;
            sum += aux * i;
        }
        cout << "Sum: " << sum << endl;
    }
    return 0;
}

You'll notice this algorithm already converges to 2 at a few iterations. 00:20, 14 February 2009 (UTC)

Actually, the formula you use is incorrect. It should be S = 1/2^(n+1) x n for n going from 1 to infinity, which does actually add up to 1. -- CrystyB 17:17, 9 April 2010 (UTC)

-- Let $E be Bob's expected losses. I'll deal with Sue's contribution later.

If Bob flips a tail, he loses $0 and we're done. Else he flips a head, loses $1 and expects to lose a further $E. Each case happens with probability 1/2, so E = (1/2)*0 + (1/2)*(1 + E) => 2E = 1 + E => E = 1.

So Sue should only put in $1 if the game is to be fair.

As no-one has shown how to do the infinite sum, here's another way to do it (the above is one way in disguise). Bob's expected losses are 0*1/2 + 1*1/2^2 + 2*1/2^3 + ... = 1/4*(1 + 2*1/2 + 3*1/2^2 + 4*1/2^3 + ...). Now consider the well-known identity: 1 + x + x^2 + x^3 + ... = 1/(1-x), where |x| < 1 (which it is in our case). Differentiate wrt x => 1/(1 - x)^2 = 1 + 2x + 3x^2 + 4x^3 + ... (you could also get that by squaring the infinite series; it is a well-known identity). Comparing that with the series for E => x = 1/2. So E = (1/4)/(1 - 1/2)^2 = 1.

If you are skeptical of the very slick math trick, perhaps the following will convince you. Let x be the amount Bob loses each time he rolls a head, and let h be the probability of rolling a head. Then I claim that

E = (1-h)0 + h(x+E) => E(1 - h) = hx => E = xh/(1-h)

Now the non-dubious way (NB the first term is 0, but I show it in full). E = 0x(1-h) + x(1-h)h + 2x(1-h)h^2 + 3x(1-h)h^3 + ... = x(1-h)h(1 + 2h + 3h^2 + 4h^3 + ...) = x(1-h)h/(1-h)^2, where I've used the well-known identity, which I proved above. => E = xh/(1-h) as I claimed.


The problem is that Sue can win an infinite amount. Sue wins $0 1/2 the time, $1 1/4, $2 1/8, $4 1/16, $8 1/32... $2^n 1/(2^(n+2)). Or for each time she plays she wins 1/4 + 1/4 + 1/4 + 1/4 +... = infinity dollars on average. Wikipedia has a page on a similar problem.-- 07:22, 17 February 2009 (UTC)

  • That is another problem (St. Petersburg paradox). I have summoned some courage and added it to the front page as a variation of this game. I think it is more interesting anyway.
    • If you don't want to check the related Wikipedia article, the solution is the same as above: the expected value diverges to infinity, which makes the problem a paradox. The paradox can be explained by the utility of money. Yeah, if you play forever, you end up winning an infinite amount of money. But an infinite amount of money is not infinitely more useful than one dollar. If you could play a game where you pay $1 and have a 1/10^40 chance to win 10^80 dollars, would you pay to play the game? I know I wouldn't.
      • To follow up: The utility function of money is obviously not linear. But that leads us to another question. What if instead we consider the payoffs to double in utility each time? For instance, given a "fairly reasonable" utility function of log($n), the payoffs could be e, e^2, e^4, e^8, ... This leads to an expectation of infinite utility (I would definitely consider this game, since you have a 1/16 chance of getting 8.8 million or more, and a 1/32 chance of getting 79 trillion or more). We are led either to the conclusion that the utility of money must be bounded (which could actually be a reasonable assumption, because you can't actually leverage an infinite amount of things from "infinite" money), or else it would still be rational to wager an infinite amount of money against a suitable payoff table. -- 21:36, 14 June 2015 (EDT)
    • I made a quick Python script that simulates lots of games, and calculates the mean and standard deviation of the pots. Out of 10 million games, the average pot size was $13.52, but that doesn't mean much, because the standard deviation was $6283.88 :) I have no idea how much money I should ask from a friend to play this game :)
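A sketch of such a script, using the payoffs listed a few comments up ($0 on an immediate tail, otherwise $2^(k-1) after k heads). The sample mean bounces around wildly between runs, which is exactly the paradox:

```python
import random
import statistics

def winnings():
    """One game: count heads before the first tail; pay $2^(k-1), or $0 for no heads."""
    k = 0
    while random.random() < 0.5:
        k += 1
    return 0 if k == 0 else 2 ** (k - 1)

games = [winnings() for _ in range(100_000)]
print(statistics.mean(games), statistics.pstdev(games))
```

The standard deviation is dominated by whichever single longest run of heads happened to occur, so it is essentially meaningless as a summary statistic here.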
  • Ok, I haven't read all of your solutions, but I did this with a simple equation. Define x to be the expected value Bob pays Sue (and thus the amount Sue should pay Bob). x = 1/2*0 + 1/2*(1+x) = 0 + 1/2 + 1/2·x. Thus 1/2·x = 1/2, and x = 1. -Emily

Smurfs and Gargamel[edit]

The smurfs agree among themselves that when the game begins, they will all check whether they see an odd or even number of red hats on the other smurfs, and the first to be called on will claim that his hat is red if and only if he sees an odd number of red hats. At this point, each smurf knows the color of his own hat. If the first smurf was correct (his hat matched his claim) then there must be an even number of red hats, and if he was wrong there must be an odd number. If there is an even number, any smurf who sees an even number has a white hat, otherwise he has a red hat, and vice-versa.

-- Evan

  • RE: Evan -- I'm not sure if this would work. They don't have any means of communication. They can see each other, but that's it. Plus, the number of hats turned red is undefined, meaning there is no base number to go off of.
    • I posted the problem, and Evan's solution is the one I was looking for. Maybe the problem is badly stated (in which case we should discuss improvements to the problem description). After the hats have been exchanged the smurfs can see each other. They can hence count the number of red hats (except their own ones). Suppose the first smurf yells 'red' because he has seen an odd number of red hats. Everybody should now know what color he has. If a smurf sees an even number of red hats he must have a red hat. Otherwise he still has a white hat. 19:20, 11 February 2009 (UTC)
      • The problem is that the other smurfs don't KNOW that "Red" means "odd number of red hats", since they can't communicate. 20:16, 11 February 2009 (UTC)
        • They develop this strategy before the game. The problem states that the smurfs accept only "after a short discussion."
          • Objection withdrawn! -John 21:39, 11 February 2009 (UTC)
          • I object to the inclusion of the note "as usual no trick" since you need to make note of the "short discussion" when it's presented as more of a logic problem; much like the "what is the color of bus driver's hair" riddle. -Jordan
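Evan's parity strategy is easy to verify mechanically. A sketch (hat colors random; the first smurf's own survival is the 50/50 part, so the check below only asserts that every *other* smurf deduces his color correctly):

```python
import random

def everyone_else_survives(n=10):
    """True iff smurfs 1..n-1 all deduce their own hat from smurf 0's parity call."""
    hats = [random.choice(['red', 'white']) for _ in range(n)]
    # smurf 0 announces 'red' iff he sees an odd number of red hats on the others
    announced_odd = sum(h == 'red' for h in hats[1:]) % 2 == 1
    for i in range(1, n):
        # smurf i counts red hats among smurfs 1..n-1, excluding himself
        sees_odd = sum(hats[j] == 'red' for j in range(1, n) if j != i) % 2 == 1
        # if his count's parity differs from the announced one, his own hat is red
        guess = 'red' if sees_odd != announced_odd else 'white'
        if guess != hats[i]:
            return False
    return True

assert all(everyone_else_survives() for _ in range(1000))
```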

The smurfs form a queue: they get into the queue one by one, each one standing at the border of red smurfs and white smurfs. If the queue is already a single colour, the next smurf just stands at either end. After this step, every smurf except the two at the border knows his own colour. Either of the two that is called first will get killed, and the other immediately knows his own colour. This way, it's not the first one who is in danger, but the two at the white/red border in the queue. --FarzanehSarafraz 11:48, 5 November 2010 (UTC)

  • Problem Statement says "(order chosen by Gargamel)." -Matthew

Alternative 1[edit]

I think the answer is the first n-1 smurfs would risk their own lives by informing the others of how "many" (odd or even) of a particular colour they see. (Smurf #1 describes colour #1, smurf #2 - colour #2, and so on.) -- CrystyB 18:01, 9 April 2010 (UTC)

  • Actually, I think no more than the first ceiling((n-1)/floor(log_2(n))) smurfs must be put at risk, because each smurf that risks his life can say more than just two colors now. They should encode as many odds/evens as possible in their statement. For example, with 4 colors, these translations can be made:
    • Smurf 1 says Color 1 - There are an odd number of Color 1 and an odd number of Color 2 (other than Smurf 1)
    • Smurf 1 says Color 2 - There are an odd number of Color 1 and an even number of Color 2
    • Smurf 1 says Color 3 - There are an even number of Color 1 and an odd number of Color 2
    • Smurf 1 says Color 4 - There are an even number of Color 1 and an even number of Color 2
    • Smurf 2 says Color 1 - There are an odd number of Color 3 and an odd number of Color 4 (other than Smurfs 1 and 2)
    • Smurf 2 says Color 2 - There are an odd number of Color 3 and an even number of Color 4
    • Smurf 2 says Color 3 - There are an even number of Color 3 and an odd number of Color 4
    • Smurf 2 says Color 4 - There are an even number of Color 3 and an even number of Color 4
  • Using this code, each smurf can encode information about floor(log_2(n)) different colors (1 color parity per bit of the options), and n-1 colors need to be described. There are some numbers such that the last smurf only needs to tell about one color, but this smurf is still at risk, so we must take the ceiling of (n-1)/floor(log_2(n)). --Phoenyx 18:07, 10 February 2011 (EST)

There's a much simpler solution. The first smurf calls out the value of the sum of the hat colors mod n, and then each subsequent smurf has a unique value for his own hat that makes the sum work out properly.
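The sum-mod-n strategy generalises cleanly and is easy to check; a sketch (smurf 0 is the one who risks his life, and every other smurf recovers his color exactly):

```python
import random

def all_but_first_deduce(n_smurfs=20, n_colors=7):
    """Check that smurfs 1..n-1 all deduce their hats from one mod-n announcement."""
    hats = [random.randrange(n_colors) for _ in range(n_smurfs)]
    # smurf 0 announces the sum of everyone else's colors mod n_colors
    announced = sum(hats[1:]) % n_colors
    for i in range(1, n_smurfs):
        seen = sum(hats[j] for j in range(1, n_smurfs) if j != i) % n_colors
        # his own color is the unique value that makes the announced sum work out
        assert (announced - seen) % n_colors == hats[i]
    return True

print(all_but_first_deduce())
```

Note this uses one announcement regardless of the number of colors, which is why it beats the per-color parity encoding above.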

Alternative 2[edit]

RE: Alternative 2. The first smurf calls out the colour of the smurf in front of him; he has a 0.5 chance of living, and smurf 2 now knows his own colour and smurf 3's. Smurf 2 then says his own colour in English if it is the same as smurf 3's. If smurf 3 has a different colour, smurf 2 calls out his own colour in Japanese (or with a terrible Japanese accent if we assume Gargamel responds only to English). Smurf 3 then knows his colour and continues. -Corey

  • This feels like cheating to me, since it adds extra bandwidth to the communication channel. Might as well allow him to pitch his voice to a frequency that encodes the colors of all the others' hats. The odd/even approach still works in this case, though: smurf 1 calls out 'red' when he sees an odd number of red hats; smurf 2 then knows his own hat's color based on whether he sees an odd or even number of red hats; for n > 2, smurf n learns his hat's color by comparing smurf 1's answer to the number of red hats he can see plus the number of red hats announced by smurfs 2 through n-1. --Evan
    • Evan - I know it does seem like it is adding an extra variable to the problem. I merely posted to get the discussion moving to see if anyone else has come up with a solution. However I don't believe your method will work for this version as each smurf can only see the colour of the smurf in front of him (behind him in order of choosing). -Corey
      • Only if you're proposing another variation; The original note2 says each Smurf can only see "the hats of the Smurfs in front of him" not "the hat of the Smurf in front of him." --Evan
        • Oh cool, that makes sense. Cheers.- corey

Another RE: Alternative 2. This is actually very similar to the main problem, but the smurfs need to store a mental variable of 'odd/even'. If the back smurf sees an odd number of red hats, he says red (doesn't matter if he lives or dies). All the other smurfs now know there are an odd number of reds remaining, so initialise var = odd. If smurf 99 can see (var) red, he says white and we proceed. Otherwise he says red, the other smurfs flip the value of var, and we proceed. Similarly if the back smurf saw even, he says white and the other 99 smurfs initialise var = even, and proceed as above. -- TLH

General solution for all of the above[edit]

Since the smurfs know all possible hat colors, and have time to discuss an ordering, they can assign each color an integer in [0, n). I will treat colors and numbers modulo n as equivalent in the following. First, all smurfs remember any number/color; doesn't matter which, as long as they all have the same one. As each smurf is questioned, he will subtract all the colors of smurfs that haven't been questioned yet from the number he remembered, and say that. The remaining smurfs will subtract that answer from their number.

That's it. The first smurf questioned has a survival chance of 1/n, and all the others always survive. It works whether you use alternative 1 and/or 2 or not, and it doesn't need the restriction that there are fewer colors than smurfs. It also happens to recover with only one additional death if a smurf messes up. -- 19:47, 19 May 2011 (EDT)
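A quick simulation of this mod-n strategy (an illustrative Python sketch; names are my own) confirms that everyone but the first smurf questioned always survives:

```python
import random

def simulate_modn(n_smurfs=100, n_colors=5, seed=1):
    """Colors are integers mod n_colors.  Every smurf starts from the
    agreed number 0; when questioned, he says his remembered number minus
    the colors of all not-yet-questioned smurfs (those in front of him),
    and the remaining smurfs subtract that answer from their number."""
    rng = random.Random(seed)
    hats = [rng.randrange(n_colors) for _ in range(n_smurfs)]  # 0 questioned first
    remembered = [0] * n_smurfs  # the agreed-upon starting number
    survivors = 0
    for i in range(n_smurfs):
        unquestioned = sum(hats[i + 1:])          # hats smurf i still sees
        answer = (remembered[i] - unquestioned) % n_colors
        survivors += answer == hats[i]
        for j in range(i + 1, n_smurfs):          # the rest update their number
            remembered[j] = (remembered[j] - answer) % n_colors
    return survivors

print(simulate_modn())  # 99 or 100: everyone but the first always survives
```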

Alternative 3[edit]

Solution to Alternative 3: Call an assignment of hat colors to Smurfs a pattern and call two patterns equivalent if they are the same on all but finitely many hats. This equivalence relation partitions the set of possible patterns into equivalence classes. Note that by observing the hat colors of all the other Smurfs but not his own, each Smurf can tell which equivalence class the pattern belongs to -- his own hat color is irrelevant because patterns which differ on finitely many Smurfs are equivalent. The Smurfs invoke the axiom of choice to select (and agree on) a distinguished representative pattern for each equivalence class (that is where the magic is required). Once the hat colors are assigned, each Smurf determines the equivalence class of the pattern by examining only the hats of the other Smurfs, then says the color of his hat in the preselected distinguished representative of this equivalence class. By the definition of the equivalence relation, only finitely many Smurfs get their hat color wrong.

Actually, I'm not really sure that would work. As far as I can tell, there would be exactly three equivalence classes[note] (if at all -- I'm undecided if the relation as defined above would indeed be an equivalence!), and the three would be: (a) finitely many reds, the rest being white; (b) finitely many whites, the rest being red; and (c) infinitely many reds and whites (most likely scenario, IMO). In the first two cases, the answer is obviously guessing the side with the greater numbers, no need for "magic". But in the third one, even if going with your suggested algorithm, there is no way for any smurf to match what he sees with any representative pattern -- he may have one of the finitely many hats that differ! -- CrystyB 18:01, 9 April 2010 (UTC)
[note] I am only considering the case of a countable number of smurfs; the uncountable case would offer a much greater difficulty. I would be curious how you would approach the problem if the smurfs would be identified by positive real numbers...
The case (c) identified above represents multiple equivalence classes. For example, the cases RWRWRW... and WRWRWR... both have an infinite number of each hat color and differ at every position, so they are not equivalent. I'm fairly sure that this solution works, even if the number of smurfs is uncountable.
It works, if the Smurfs have a probability of exactly 1 of getting all the maths right. However, if some fraction, no matter how small, messes something up, an infinite number of smurfs will die. An equivalent puzzle with a function R->R was posted here -- 10:43, 16 July 2012 (EDT)

The smurfs build a line. It starts with two smurfs. The third smurf sees whether they have the same or a different color. If it is the same, he goes to the left of them; if they have a different color, he goes between them. The following smurfs do the same. At some point there will be two colors in the line, and from that point on every following smurf has to go between the two colors. So he doesn't have to know his hat's color, but the border between red and white hats will be left or right of him. In the end you have a line that is divided into two colors. Every smurf but the last two knows the color of his hat, because it's that of his neighbours.

  • Course, the problem with risking any finite number of Smurfs to find out the code is ... what if Smurfette is in the finite set? If she goes, their entire species is lost. So they have to stipulate that she is asked later (or earlier depending on the version of the problem). - Derrill
  • Allowing the building of the line to convey information could be taken one step further: after all the smurfs are in line, two of the smurfs that know their colour (one of each colour) could split those last two and help them find their own colour too. ;-) The only problem would be if one of the two colours would have either one or no hats at all. ^^ -- CrystyB 18:01, 9 April 2010 (UTC)


If it's the first time a prisoner is being interrogated, then the prisoner should flick the switch REALLY loud so all the prisoners can hear it and keep track. Once the switch has been flipped 99 times then all the prisoners know that everyone has been interrogated.

  • The solution to the problem as stated is to have one prisoner designated the 'counter'. The light is stated to be in the 'off' position to begin with. Any prisoner can turn the light 'on' if they've never flipped it before, but only the counter can turn it off. Once the counter has turned off the light 99 times, then 99 prisoners and the counter have been interrogated.
    • Without knowledge of the light's starting position, on or off, the counter cannot be sure, the first time they turn off the lightbulb, whether they have counted a single prisoner or no prisoners. The way around this is to have everyone turn the light on twice. The 198th time the counter flips the switch off, they will have either counted 99 prisoners who flipped the switch on twice, or 98 prisoners who flipped it on twice and one who only flipped it once (but was nevertheless interrogated).
      • Wouldn't it be more efficient, then, to have the counter wait until they have turned off the light 100 times, just to be sure?
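The designated-counter strategy in this thread can be checked by simulation. This Python sketch (illustrative; it assumes the light starts off, a uniformly random prisoner at each interrogation, and prisoner 0 as the counter) verifies that when the count reaches 99, every prisoner really has been interrogated:

```python
import random

def simulate_counter(n=100, seed=2, max_days=10**6):
    """Any non-counter turns the light on once (ever, and only if it is
    off); only the counter turns it off, incrementing his count.  He
    declares once his count reaches n - 1 = 99.  Returns (whether all
    prisoners had visited at declaration time, days elapsed)."""
    rng = random.Random(seed)
    light = False                 # the light is known to start off
    has_flipped = [False] * n     # prisoner 0 is the counter
    count = 0
    visited = set()
    for day in range(max_days):
        p = rng.randrange(n)
        visited.add(p)
        if p == 0:                # the counter
            if light:
                light = False
                count += 1
                if count == n - 1:
                    return len(visited) == n, day + 1
        elif not light and not has_flipped[p]:
            light = True
            has_flipped[p] = True
    return False, max_days

ok, days = simulate_counter()
print(ok)  # True: when the count reaches 99, everyone has visited
```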

There are multiple solutions; this PDF from a Berkeley student seems to be comprehensive. [4]

  • The referenced PDF contains the solution. However, their initial problem is different: here the time between interrogations is random, whereas in the PDF the time is fixed at one prisoner a day.
    • I added the alternative version (1 prisoner a day) to the front page. 19:08, 11 February 2009 (UTC)
      • It doesn't really make a difference to the standard solution (except that the prisoners know not to speak up until at least 100 days have passed — and their method will require much more than 100 days anyway).
  • This PDF from a Stanford mathematician is more comprehensive, and addresses many variations, including where prisoners do/don't know the current time.

Isn't there a simple solution where everyone counts? I would think that what you need to track is a change in state, not who did what and when. What you do know is that the second time anyone goes in for interrogation, they change their state from new to interrogated. So you just need to count those unique events:

  • If this is your 1st time in the room, leave the light off
  • If this is your 2nd time in the room, leave the light on
  • If this is your 3rd or later time in the room, leave the light off
  • Keep count of the number of times you see the light on,
    • When you get to 100, everyone has been in the room twice.

I can't see how that will work... there will only ever be 100 occasions in all when the light is left on (namely, after each prisoner's second visit), so it's vanishingly unlikely (one in a googol squared, I believe) that one prisoner will see all 100 of those occasions.

Unknown initial state

The papers mentioned give a strategy for an unknown initial state of the switch, but only for the case where one prisoner is taken to the room each day (trivially, the first prisoner turns it off to begin with). Our problem doesn't give prisoners the knowledge of who the first prisoner is. My strategy:

Choose a counter (the others are drones).

The drone operates in 3 stages:

Stage 1: Do nothing. Move to stage two after visiting the room with the light on.

Stage 2: If you visit the room with the light off, turn it on and move to stage 3.

Stage 3: Do nothing.

The counter:

Start an internal counter at 0.

If the light is on when you arrive, and you turned it off last time you were in the room, increase your internal counter by one. On all visits, toggle the light.

Note: Drone stage 1 ensures that if the light began in the on position, the counter knows about it and was able to reset it. The counter always toggling the light allows drones to move both from stage 1-2 and stage 2-3 (and so be counted).
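A simulation of this three-stage strategy (an illustrative Python sketch; prisoner 0 is the counter and a uniformly random prisoner is chosen each day) shows it works for either initial state of the light:

```python
import random

def simulate_unknown_start(n=100, initial_on=True, seed=3, max_days=10**7):
    """The counter toggles the light on every visit and counts a drone
    each time he finds it on after having turned it off himself.  A drone
    waits until it has seen the light on (stage 1), then turns it on once
    when it finds it off (stage 2), then does nothing (stage 3)."""
    rng = random.Random(seed)
    light = initial_on
    stage = [1] * n               # drone stages; index 0 is the counter
    counter_turned_off_last = False
    count = 0
    visited = set()
    for day in range(max_days):
        p = rng.randrange(n)
        visited.add(p)
        if p == 0:                # the counter
            if light and counter_turned_off_last:
                count += 1        # exactly one drone flipped it on
                if count == n - 1:
                    return len(visited) == n
            counter_turned_off_last = light   # toggling on -> off?
            light = not light
        else:                     # a drone
            if stage[p] == 1 and light:
                stage[p] = 2
            elif stage[p] == 2 and not light:
                light = True
                stage[p] = 3
    return False

# Works whatever the initial state of the light:
print(simulate_unknown_start(initial_on=True) and
      simulate_unknown_start(initial_on=False))
```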


A better solution for the case where the initial state is unknown: one leader is chosen. He starts an integer count at 0. Whenever he comes to the room and the bulb is on, he turns it off and increases his count by one, until the count reaches 198. All the others leave the bulb on if it is on, and the first TWO times they come to the room and find the bulb off, they turn it on. This way everyone is counted twice, except that if the initial setting was on, the leader's first switch-off is a phantom; even then, 197 genuine switch-ons at no more than two per prisoner still require at least 99 distinct prisoners, so a count of 198 is safe either way.
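This scheme can be simulated too; the sketch below (illustrative Python) uses 198 counted switch-offs as the stopping threshold, which is reachable and safe whether the bulb starts on or off:

```python
import random

def simulate_twice(n=100, initial_on=True, seed=4, max_days=10**7):
    """Each non-leader turns the light on the first two times he finds it
    off; the leader turns it off and counts.  He declares at 2*(n-1)=198
    switch-offs, which is safe for either initial state of the bulb."""
    rng = random.Random(seed)
    light = initial_on
    flips_left = [2] * n          # index 0 is the leader
    count = 0
    visited = set()
    for day in range(max_days):
        p = rng.randrange(n)
        visited.add(p)
        if p == 0:                # the leader
            if light:
                light = False
                count += 1
                if count == 2 * (n - 1):
                    return len(visited) == n
        elif not light and flips_left[p] > 0:
            light = True
            flips_left[p] -= 1
    return False

print(simulate_twice(initial_on=True) and simulate_twice(initial_on=False))
```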




P1 will propose that all odd-numbered pirates (p1, p3, ..., p99) receive 1 gold piece. Each odd-numbered pirate will vote in favor of this proposal, so it will be accepted.

We can reach this result by induction:

In general, a pirate proposing a division will always vote for his own proposal (Survival). If pirate n is able to propose a division D(n) (worth W(n) to himself) that would be accepted, then his survival is not in doubt. Define M(n, i) as the most that pirate i can expect to receive when there are n pirates left. Pirate i will vote 'no' on any division proposed by pirate n that gives pirate i less than M(n-1, i) (due to greed) or the same (due to bloodthirstiness). He will vote 'yes' on any proposed division that gives him more.

1. D(100) and D(99) exist, since in each case the lead pirate can propose that he keep all 50 gold, and his 'yes' vote will ensure acceptance.

For n < 99, if D(n+1) through D(100) exist, then D(n) exists iff pirate n can 'buy' the votes of 50 - ceil(n/2) other pirates. 'Buying' the vote of pirate i consists of paying him more than M(n-1, i). Due to greed, a pirate will choose the 50 - ceil(n/2) cheapest votes, propose to pay the minimum M(n-1, i) + 1 on each, and keep the rest for himself. At most, pirate n will be able to keep ceil(n/2) gold.

2. D(98) exists, since D(99) and D(100) exist and M(99, 100) = 0 (when only p99 and p100 are left, p99 keeps it all), so p98 can buy p100's vote for 1 gold and keep 49. This makes M(98, 99) = 0 and M(98, 100) = 1.

3. For even n with 1 < n < 97: if M(n+1, i) = 0 for even i and M(n+1, i) > 0 for odd i, then there are exactly 50 - n/2 pirates whose votes can be bought for 1 gold. D(n) will then consist of 1 gold for each other even-numbered pirate and the rest for pirate n. This makes M(n, i) = 1 for even i and M(n, i) = 0 for odd i.

4. For odd n with n < 98: if M(n+1, i) = 1 for even i and M(n+1, i) = 0 for odd i, then there are exactly 50 - ceil(n/2) pirates whose votes can be bought for 1 gold. D(n) will then consist of 1 gold for each other odd-numbered pirate and the rest for pirate n. This makes M(n, i) = 0 for even i and M(n, i) = 1 for odd i.

5. By induction, D(1) consists of 1 gold for each other odd-numbered pirate (49 of them), and the remaining 1 for pirate 1.

Beyond 100

1. With 101 pirates, the first needs 51 votes, and 50 odd-numbered pirates' votes can be bought, if he keeps none for himself. If his proposal failed, there would be 100 pirates left, and those pirates would get nothing.

2. With 102 pirates, the first needs 51 votes, and 51 pirates' votes can be bought! He can buy any 50 and keep nothing.

3. With 103 pirates, the first needs 52 votes, but only has enough gold to buy 50. He dies.

4. With 104 pirates, the first needs 52 votes . . . and pirate 103 doesn't have an acceptable proposal, so he will vote yes for nothing! He dies too if this proposal fails. 50 out of the same 51 pirates' votes from step 2 can be bought, bringing the total to 52.

5. With 100 + n pirates, where 2^i < n < 2^(i+1) for some i > 0, the first needs 50 + ceil(n/2) votes. He can buy 50, and has n - 2^i free votes: his own and the votes of pirates 100 + 2^i + 1 through 100 + n - 1. But from n < 2^(i+1) we get n - 2^i < n/2 <= ceil(n/2) and there aren't enough votes. These pirates die.

6. With 100 + 2^(i+1) pirates for some i > 0, the first needs 50 + 2^i votes. He can buy 50, and has 2^i free votes: his own and the votes of pirates 100 + 2^i + 1 through 100 + 2^(i+1) - 1. This is enough for him to survive and take nothing.

So at 200 starting pirates, they'll die until there are 164 left.


Shouldn't the next "live" after 108 be 116 (50 bribed, plus 109 to 116 don't want to die)? Similarly, shouldn't the live numbers be 100 + 2^n? -- bradluen

Yes, thanks! I've fixed it. --Evan

Won't pirate 103 vote no always? He's not able to survive or make money, so he should go for bloodthirstiness and vote no to kill off 104. --Aegeus

 No, because he *can* survive by voting yes along with 104 and the 50 other pirates that are getting gold. --Evan

There is an implicit assumption in this proof. We have "Pirate i will vote 'no' on any division proposed by pirate n that gives pirate i less than M(n-1, i) (due to greed) or the same (due to bloodthirstiness). He will vote 'yes' on any proposed division that gives him more." where M(n,i) is the *most* he can expect. This assumes that if pirate n-1 could bribe either pirate i or another pirate for, say, 2 gold pieces, pirate i could be bribed for at least 3 gold, but not for less. However, it is ambiguous whether or not this would actually happen. Would pirate i accept a bribe of 1? How about of 2? Rejecting and accepting the pirate n's proposal both do not have defined weights in the pirate's priority list. Accepting guarantees them money, but rejecting gives them the *possibility* of getting more money (and more deaths).

Now, this case never actually arises, so it does not affect the outcome, but I still find this to be a minor problem with the provided solution. Interestingly, this case does arise if a tie did not count as an acceptance. --Ezbez

I think what Ezbez said (or a related concept) also needs to be considered in the case of 104 or more pirates with the problem as stated.

Note: I'm reversing the numbering so that the highest numbered pirate is the one making the proposition. That way if they are killed, everyone keeps their number and the solution is equal to the case with fewer pirates to start with. It was already established that 101 would give a coin to pirates {odd 1..99}, and so 102 needs to give a coin to 50 out of {even 2..100, 101}.

Now 104 proposes an assignment to some 50 of {1..102}. 104 and 103 will vote for the proposal, to stay alive, as will 102 if he's assigned a coin, for greed. Anyone in {1..102} that is not assigned a coin will vote against, for bloodthirst. What should the rest do? They know that if they vote against, 104 and 103 will be killed, and 102 will make a proposition that will be accepted. However, they don't know exactly what 102's proposition would be.

102 can choose any 50 from {even 2..100, 101} only. Obviously anyone chosen by 104 in {odd 1..99} should vote for 104's proposition. The even ones and 101 pose a problem: it isn't clearly defined what they should decide. Would they choose the certain coin from 104, or a 50/51 chance (although 102's probability distribution could be anything) of getting a coin from 102, plus having two guys killed?

If we extend their stated greed to estimated expected value, they should always go for the coin now. That would allow 104 to choose any 50 from {1..102}, 108 from {1..104}, etc.

Without knowledge of them doing this reliably, I guess 104 couldn't take the risk of getting killed. That would mean that 104 can only choose from {odd 1..99, 102}. This too is different from what was previously said, although "50 of the same 51 pirates' votes from step 2" is quite near. Similarly, 108 could choose from {even 2..100, 101, 103, 104} and 116 from {odd 1..99, 102, 105..108}. --Nix

100 pirates: My thoughts are the same as been discussed. p1 offers gold to 49 other pirates, and keeps one for himself. I was thinking "wait, the other pirates will be bloodthirsty and kill p1, since p2 would offer them a similar deal". However, they would all be rational enough to know that this would create a chain reaction that would eventually get them killed (except the lowest few, but they wouldn't be enough to turn over the initial vote).

Actually, now I'm thinking that p50 (and some better ranked) may be able to vote no on the first few resolutions to get more money or kill more pirates without losing their life to the same logic. I'd have to think about it more --Case

100 pirates: I'm thinking the bloodthirstiness would give the following scenario: No matter what p1 proposes, the 51 final pirates would vote no; the same goes for p2-49. When p50 makes a proposal the final 26 pirates will vote no; the same goes for p51-74. Then the final 14 pirates would vote down and kill p75-86. Now the last 8 pirates will band together and murder p87-92. 5 pirates will then vote down p93-95. The next two pirates, p96 and p97, fare no better. p98 is killed by p99 and p100. Now there are only two pirates left and p99 proposes to give himself the entire treasure. Since he has 50% of the votes, he gets the entire treasure.

Being "perfectly intelligent, logical and rational" (and this being common knowledge), wouldn't they see the next step coming and so determine that voting against will only help you get killed, never mind getting no gold? This should be easiest to see in the case of three or four pirates left. Assuming p98 proposed to give p100 some coins (1), why would p100 vote to kill p98 as that would lead to getting none from p99? Now, if you had them not care about how much they get, p98 would get killed but still, p97 wouldn't, since p98 would have to vote yes to save his own life. --Nix

Perhaps I'm misreading/oversimplifying this, but wouldn't the top ranked pirate get all of the coins? If everyone cares about survival first, the top ranked pirate will always vote for himself, and since a pirate can only die by being the top ranked living pirate, every pirate will attempt to keep themselves from being the top ranked living pirate, meaning that they'll vote to follow the top ranked pirate. The top ranked pirate, realizing this, will give all of the gold to himself on the wealth priority, and the bloodthirst priority will never come into play.

Doesn't D(-1) matter? --Mark

"The pirates are perfectly intelligent, logical and rational." I think this is a hilarious proposal in itself, to say nothing of an intellectual discussion on their methods of profit distribution. Par for the course for XKCD, though; I love it. :)

Think of it this way: except for the last one or two pirates, if all the pirates before them are killed, the dilemma for the rest is essentially the same as for the initial head pirate. They would all know that repeatedly voting for deaths would eventually bring death to themselves. Unless the last few pirates are able to collude and make a deal ahead of time, they have no choice but to vote yes to whatever the first pirate says.

Oops, I was mistaken there, I realized earlier. As was said earlier, if it gets to the second-to-last pirate he can keep everything, but you can extrapolate backwards from there. When it gets to pirate 98 he can offer the last pirate one coin, and thus get his vote (as otherwise he would get nothing), and one additional vote is all he needs. Then for pirate 97 he also only needs one additional vote, but he can't make the same offer, as then the bloodthirstiness would come in since pirate 100 would get 1 coin either way but would rather have an extra pirate die. However he doesn't need to offer an extra coin, as he could instead give him nothing and offer one coin to pirate 99. Pirate 96 needs two extra votes, which he can get by offering one coin each to pirates 98 and 100 (it doesn't matter that they would also get that coin from some of the later pirates, because they know pirate 97 has a winning strategy under which they get nothing). Going this far the pattern seems clear: any pirate would have to offer one coin each to every other pirate, starting with two pirates after him. Therefore the solution is: the 1st pirate would keep 51 coins, and give away 49 coins, one each to every other odd-numbered pirate. However this only raises the question of why perfectly intelligent, logical, and rational pirates would agree to this method in the first place, since half of them would always be guaranteed to get nothing, the rest other than the head pirate would get only one piece of loot each, and there would be a bloodbath if they acquired loot consisting of only a small number of coins.

Assuming there are n pirates: Interestingly, a regressive look at the pirates' strategies is the most insightful here. If there are only P(n-1) and P(n) left, P(n-1) will keep the whole pot, and P(n) will get nothing but will survive. In fact P(n) will always survive. P(n-1) will also always survive, and if he is the "lead" pirate, will get all gold. Therefore, P(n-1) will ALWAYS vote no. P(n) will vote no until P(n-2), and then be bribed by P(n-2), because that is his highest payout opportunity. P(n-2) will know that P(n-1) will vote no and take all the gold, if he can. Thus, P(n) will want to maximize his payout, and P(n-2), wanting to survive, will give P(n) gold. Since P(n-2) wants to live, he will be best off offering ALL the gold to P(n). This is due to the strange ordering of priorities. The problem states that a pirate will always choose life over any amount of gold. Thus, P(n-2) would rather forego any and all gold than die. Knowing this, P(n-2) will want everyone except P(n-3) to die, and will accept a bribe from P(n-3), or even just vote yes for the sake of survival. The key insight here, however, is that the deaths of P(n) through P(n-3) do not matter to P(n-2) in terms of decision making, because P(n-2) has a winning strategy when P(n-3) is alive. P(n-3)'s strategy is similar. He will want everyone but P(n-4) to die. He can maximize his gold the most when P(n-4) is alive and bribes him. However, since this essentially leads to a "kill 'em all!" scenario, knowledge of this nature of it must somehow affect the votes of some pirates down the chain. The only pirates truly immune are P(n) and P(n-1), who will always mutiny (except as stated above). Ultimately this is a very tricky one because it's recursive; the optimal strategy of each pirate, if previously foreseen by other pirates, will change the optimal strategy of those pirates, which will change the optimal strategy of the original pirates.


I wrote a program that takes N (number of pirates) and K (number of gold bars) and then prints how many gold bars each pirate will end up with, or -1 if he will die. Basically what I do is look at the case with 1 pirate, then the one with 2 pirates, and so on. Each pirate votes for a proposer to stay alive only if the proposer can offer him more bars than his current offer. Each proposer tries to give gold bars to the half of the pirates with the lowest current offers, and the rest he keeps for himself. If he can't do that, the other pirates' gold bars are not affected and he is killed (his current offer is set to -1). So here is the code:
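The code itself did not survive in this copy of the page. A Python sketch of the program as described (my reconstruction, not the original) builds up from one pirate, letting each new proposer buy the cheapest votes, with -1 marking a pirate who would be killed. Where several votes are equally cheap it deterministically buys the lowest-ranked, which happens to be unambiguous in the cases analysed above:

```python
def pirate_shares(n, k):
    """Return a list where entry i is the gold pirate i+1 ends up with
    under optimal play (pirate 1 proposes first), or -1 if he is killed.
    Built iteratively: start from the lowest-ranked pirate alone and add
    one proposer at a time, who buys the cheapest votes he can."""
    offers = [k]                     # one pirate left: he keeps everything
    for m in range(2, n + 1):        # m pirates remain; a new proposer steps up
        # Cost of a lower pirate's vote: one more than his current outcome;
        # a pirate who would die anyway (-1) votes yes for free.
        order = sorted(range(len(offers)), key=lambda i: offers[i] + 1)
        need = (m + 1) // 2 - 1      # votes needed besides his own (tie passes)
        bought = order[:need]
        total = sum(max(offers[i] + 1, 0) for i in bought)
        if total <= k:
            new = [0] * len(offers)
            for i in bought:
                new[i] = max(offers[i] + 1, 0)
            offers = [k - total] + new
        else:
            offers = [-1] + offers   # no affordable proposal: the proposer dies
    return offers

shares = pirate_shares(100, 50)
print(shares[:6])   # [1, 0, 1, 0, 1, 0]: pirate 1 keeps 1, odd ranks get 1
```

This reproduces the analysis above: pirate_shares(100, 50) gives 1 gold to pirate 1 and every other odd-numbered pirate, pirate 1 dies at 103 but not at 104 or 108, and with 200 starting pirates the first 36 die, leaving 164.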


Cyclic List[edit]

Cyclic List Algorithms[edit]

I'd use a tortoise+hare solution.

In pseudocode:

loop = true
Tortoise = first object in linked list
Hare = first object in linked list
try {
    while (Tortoise != Hare) {
        Tortoise = Tortoise.next
        Hare = Hare.next.next
    }
} catch (NullPointerException) { loop = false }
print "There is a loop: " + loop

Basically, Tortoise moves through the list one item at a time. Hare moves through two items at a time. If there's no loop, Hare will run off the end of the list and throw an NPE after NumEntries/2 iterations. If there is a loop, Hare will enter it and go around it until Tortoise catches up to it or it catches Tortoise.

As for how to find WHICH entry is the first in the loop... I don't suppose I can just cheat and have Tortoise mark each entry as it goes past, eliminate Hare entirely, and just have Tortoise check for its marks? The problem is, that requires changing the objects, which is generally cheating in puzzles like this. -John 20:28, 11 February 2009 (UTC)

Oh, duh, gotta run Tortoise and Hare's advancements once each before the While starts, or you fall out immediately. But still!
  • no. marking is not allowed.
    • yeah, that's to be expected. And I suppose, since it's a proper linked list, you can't cheat and have each entry somehow know where it is in the list?
      • In case you haven't seen the solution (or don't want to see it). No: you can't know the position. The solution is extremely simple and elegant.

Here is the same solution (just for the predicate) in Scheme:

 (define (cyclic? L)
   (define (tortoise/hare t h)
      (or (eq? t h)
          (and (not (null? h))
               (not (null? (cdr h)))
               (tortoise/hare (cdr t) (cddr h)))))
   (if (null? L)
       #f
       (tortoise/hare L (cdr L))))
  • Here is a solution for the second part of the problem (finding the first element where the list starts to cycle). We assume that the list is cyclic (simply run cyclic? before). The idea of the following algorithm is, that we have one pointer p1 starting from the beginning and one, p2, running inside the loop. The difficulty is to "sync" these two pointers so they will meet each other at the first possible list-element.
    Now reusing the 'cyclic?' idea we have a tortoise and hare. When they meet the tortoise has advanced by n steps, whereas the hare must have taken 2*n steps (it is twice as fast). Set p2 to be equal to the tortoise's pointer. (Simply think of p2 as having advanced by n steps). If we advance p2 by another n steps it will have done 2*n steps (similar to the hare). If we start another pointer p1 at the beginning then p1 and p2 will obviously meet. At the very latest p1 and p2 will both advance by another n steps. However, as p1 and p2 advance at the same speed they might meet earlier at the very first element of the cyclic list.
 ;; L must be cyclic.
 ;; returns the list where tortoise and hare meet.
 (define (cyclic L)
    (define (tortoise/hare t h)
       (if (eq? t h)
           t
           (tortoise/hare (cdr t) (cddr h))))
    (tortoise/hare (cdr L) (cddr L)))
 (define (first-cyclic L)
    (define (iter l1 l2)
       (if (eq? l1 l2)
           l1
           (iter (cdr l1) (cdr l2))))
    (iter L (cyclic L)))
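For comparison, here is the same tortoise-and-hare idea as a self-contained, imperative Python sketch (the `Node` class and names are illustrative), returning the first node of the cycle, or None if the list is acyclic:

```python
def find_cycle_start(head):
    """Floyd's tortoise-and-hare.  Nodes are any objects with a `next`
    attribute; returns the first node of the cycle, or None."""
    tortoise = hare = head
    while hare is not None and hare.next is not None:
        tortoise = tortoise.next
        hare = hare.next.next
        if tortoise is hare:            # they met inside the cycle
            # Restart one pointer at the head; stepping both one node at
            # a time, they meet exactly at the cycle's first node.
            p = head
            while p is not tortoise:
                p = p.next
                tortoise = tortoise.next
            return p
    return None                          # hare ran off the end: no cycle

class Node:
    def __init__(self, value):
        self.value, self.next = value, None

# Build 0 -> 1 -> 2 -> 3 -> 4 -> (back to node 2).
nodes = [Node(i) for i in range(5)]
for a, b in zip(nodes, nodes[1:]):
    a.next = b
nodes[4].next = nodes[2]
print(find_cycle_start(nodes[0]).value)  # 2
```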

Two Beagles[edit]

Solution: 1/3, because there are four equal-chance possibilities at first (MM, MF, FM, FF), the shopkeeper narrows it to three (MM, MF, FM), and one of these three is MM. -- dazmax

Further Clarification: When the shopkeeper says that one of them is male, we do not know which one is male, the first or the second in the set. This is why we HAVE to treat these as a pair. The males can be described as MaleA (Ma) or MaleB (Mb). At this point we can establish all possibilities. There is only one possible female, so we cannot account for two:

1. Ma + Mb
2. Mb + Ma
3. Ma + F
4. Mb + F
5. F + Ma
6. F + Mb

There are only two out of these six which are all male, options 1 & 2. Hence 2/6 reduces to 1/3.


Perhaps you should consider that when the shopkeeper asks the guy who's giving them a bath, he will only check both if the first he checks is a female (assuming he's a rational, efficient guy). So it's either male first, then a 50% chance of another male, or Female First, and the next is a male (meaning a 0% chance the other is a male, as we know it's a female). Logically then, there is a 50% chance that the other beagle is a male.

I believe.

  • You believe wrongly. Whether the bather checks one or both puppies is irrelevant - the information he imparts to the shopkeeper is "at least one puppy is male". There are four possible sets of two puppies - MM, MF, FM, FF - and since at least one is male, we know you don't have the FF. That means that you have either MM, MF, or FM. Of those, the "other puppy" is female 2/3 of the time. -John 04:08, 15 February 2009 (UTC)

I think there is some confusion as to combination vs. permutation here. The order of the beagles is irrelevant; MF would be the same as FM (to be further called "one of each"). We know that it's not FF, so that leaves MM and "one of each". Two equally likely choices, 50%.

  • This is actually a good way to see why it's 2/3. Like you say, it's either "both male" or "one of each". But "one of each" comes up twice as often as "both male", precisely because there are two orders of it. (You can try this with flipping pairs of coins, for a nice experiment.)

I'd like to say that the logic of this eludes me. If we have one puppy of an unknown gender, he has a 50% chance of being a particular sex. Now we have two puppies, each with a 50% chance of being a particular sex. It is true that there is a 33% chance of any one configuration (both male, both female, or one of each), but each puppy's sex is independent of the other; they each, individually, still have a 50% chance of being male. Now that we checked one, it doesn't affect the initial probability that any given puppy of unknown gender is male. We essentially eliminated the set. All we have now is one puppy of unknown gender, regardless of what gender the other one was. If I am wrong I don't understand, but would like to know where I went wrong.

  • it is true that there is a 33% chance of any one configuration (both male, both female, or one of each). That is the mistake. They do not have equal chances, because MF and FM each have the same chance as MM and as FF, so "one of each" is twice as likely as either.
    Let's use hard figures: the first puppy will be either Adam (M) or Betty (F), and the 2nd puppy will be either Charlie (M) or Daisy (F). There are 4 equally probable possibilities:
    • Adam & Charlie (MM), 25%
    • Adam & Daisy (MF), 25%
    • Betty & Charlie (FM), 25%
    • Betty & Daisy (FF), 25%
  • There's one situation that yields MM (25%), one that yields FF (25%), and TWO that yield one of each (50%). In our problem FF has been eliminated, so there are three situations: one that yields MM (33%), and two that yield one of each (66%).

Here's a more intuitive way of thinking about it: what the bather has told us, by saying that at least one of them is male, is equivalent to just saying "they are not both female". And what the question is asking, by asking what the probability that the other is male as well, is equivalent to "what is the probability that they are both male". So read the question in full as "what is the probability that both dogs are male, given that they are not both female?" and it is much easier to see why 1/3 is the correct answer. - Adam
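Adam's rephrasing is easy to check empirically. A quick Monte Carlo sketch (illustrative Python, not part of the original discussion) estimates P(both male | not both female):

```python
import random

def beagle_experiment(trials=100_000, seed=5):
    """Among litters of two puppies with at least one male, estimate how
    often both are male.  The ratio should approach 1/3."""
    rng = random.Random(seed)
    both_male = at_least_one_male = 0
    for _ in range(trials):
        pups = [rng.choice("MF"), rng.choice("MF")]
        if "M" in pups:                  # the bather's report: not both female
            at_least_one_male += 1
            if pups == ["M", "M"]:
                both_male += 1
    return both_male / at_least_one_male

print(beagle_experiment())  # close to 1/3
```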

  • So, let's look at the case of a lazy checker. (You can apply this to the N-dog case as well, to make it more tedious but obvious.)

With a 50% chance, the first dog was the first male found; with a 50% chance, the second dog was the first male found (1+ males found as premise). If the first dog was male, there is a 50% chance of a second male. If the second dog was the first male, there is a 0% chance of a second male. So, in 100 cases, 50 would have found the male first and 50 second. Of these cases, 25 would have two males. That is 25 cases with two males, 75 cases without, or 1/3rd of the cases.

An interesting corollary is the situation where the two dogs are completely indistinguishable except for gender; that is, they behave like particles and Heisenberg applies (always a reasonable assumption for dogs, I know). In this case, the probability is indeed 50% because the FM and MF cases are treated as the same. 06:48, 2 March 2009 (UTC) [Reply: you are referring to bosons. Crudely, the difference between fermions (that obey Fermi-Dirac statistics) and bosons (that obey Bose-Einstein statistics) is what distinguishes matter from the forces].

CORRECT SOLUTION: 2/3. The order of the puppies is irrelevant, so the different options are MM, MF and FF. It is obviously not FF. We now have two possibilities: MM and MF. I will now label the two male puppies Ma and Mb. So our options are MaMb or MF. The person bathing the dogs checks to see if one of them is a male. He is either looking at:

  • Ma, and so the other is Mb.
  • Mb, and so the other is Ma.
  • M, and so the other is F.

Thus the probability of it being MaMb (i.e. MM) is 2/3.

Unlike particles, these dogs do not cease being distinct entities because the only thing you know about them is whether they are male or female. MF and FM are always different, though in permutations you count them as two instances of the same category. That doesn't mean you can ever ignore them.
Specifically, the three possibilities are MF, FM, and MM, which we know are all equally likely because we assume each dog initially had a 50% chance of being male or female. Which dog the checker looked at is irrelevant; in fact, why not assume he looked at both and just answered the question honestly (i.e., at least one was male, but he's not telling what the other was)? Regardless, all three possibilities are equally likely, so the solution is: 1/3. 02:16, 5 March 2009 (UTC)

Wouldn't it be a 50/50 chance? The bather has said that one of the dogs is male. Therefore the probability of whether both dogs are male depends only on the second dog, which has a 50/50 chance of being male or female.

Yes, it's 50%. One puppy is male, so the question becomes "What is the chance that the other puppy is also male?" There is ONE puppy of unknown gender. The chance of one unknown puppy being male is 50%.

Think of it this way. You have two boxes, each box has one puppy in it. You open one box and see that the puppy inside is male. What is the chance that the puppy in the other box is male? 50%

A reasonable analogy, except that it fails to consider the possibility that the first box opened contains a female, and the person checking had to proceed to the second box to check that one. Essentially, yes, there is a correlation between the answer received over the phone and the gender of the second puppy. It collapses out the possibility that both puppies are female, but that's all. It does not assure you that the first puppy checked of the two is male. --Mark

Well I think the answer could be 1/4. Look CAREFULLY at the question the shopkeeper asked, and at the gender of the person being asked. One what? One gendered entity in the room? If so, the answer is automatically yes because the person being asked is male. So no information is gleaned about the dogs, and the chance of both dogs being male is 1/4. -- TLH

OK, since this is a "puzzle" and not a real situation, of course the bather isn't going to say "one" or "both" are male. He answers the Boolean question with a Boolean answer. Given that ONE of the dogs is male, what are the chances that the other dog is male? Mind you, the bather could logically have checked BOTH dogs at the same time (which is why the ordering does not matter!). You have two pennies; what are the chances the other one comes up heads if one comes up heads? It's all very cute to claim that there COULD be Betty and Daisy, but in actuality why would there need to be two discrete females? Assume that the first dog (in a one-dog-at-a-time bather lookup) is female. The answer is still the same (yes, at least one is male). Now assume the bather checked the male first (and if so, why would he bother checking the second one since he was going to be a Boolean smart-ass anyhow?)... the answer is still the same. The question is essentially "what are the chances of a single dog being male or female" --- now hear me out! When you talk about genetics or other such statistical things, in every other circumstance, if you say "this thing has a 50% chance; what is the probability that two discrete occurrences have the same outcome?" (i.e. one male and one female, or two males, etc.), in all cases where one outcome is NOT known, the answer would be 1/3. (Funny how some of you are getting that confused here!) And that is why this is a tricky problem: the first instinct is to say 1/3 if you've ever worked on blue/brown eye charts or gender-of-children charts in school.

Here's another thing to bring this home: John has two children. What are the chances that one is a boy and one is a girl? (Answer: 50%.) What are the chances that both are boys? 25%, right? (I was corrected here: the number of outcomes is three, but one is more likely, hence 25%.)

But here, we say "john has two children, at least one of which is a boy. What are the chances he has two boys?" the answer is 50%! --genewitch [Reply: No it's 1/3. The question is equivalent to he doesn't have two girls. So he could have BB, BG, GB (in age order for example). That gives 1 case in 3].

Read up on Bayesian inference if this problem interests you. I don't like this problem because, while it does have a logical answer, it's really just designed to punish people for not understanding Bayesian inference. This is the same gimmick the Monty Hall problem exploits, except that one at least doesn't pull any punches: its problem statement asks you to make a decision, which makes the "expected outcome" aspect of the problem more obvious without giving away the answer.

People who keep insisting that the answer is 50% don't really understand the concept of "independence" in probabilistic reasoning and keep conflating it with cause-effect relationships. I really don't blame them for being wrong, as probability in general is very counterintuitive. Yes, it's true, there is no cause-effect relationship between one dog's sex and the other's, but cause and effect is a different animal entirely from conditional dependence/independence. As stated, this is the same counterintuitive leap required to understand the Monty Hall problem -- the host revealing a door doesn't change the state of the doors, but new information can influence expected values.

Here is another probabilistic model to consider, with a cause and effect relationship built in: each day there is a 30% chance of rain (0.3), which implies a 0.7 chance of no rain. We know that rain has a 0.8 chance of causing bad traffic, while on sunny days there is still a 0.1 chance of bad traffic. With no information, the odds of bad traffic are (0.3 * 0.8) + (0.7 * 0.1) = 0.24 + 0.07 = 0.31. Inversely, there is a (0.3 * 0.2) + (0.7 * 0.9) chance of no bad traffic, which comes out to 0.06 + 0.63 = 0.69.

Now we know that rain causes bad traffic and obviously, bad traffic does not cause rain. However, if someone told you that traffic was really bad today, might you not assume that the odds of rain were a bit higher than usual? We add information (traffic is bad), and so now we ask: what are the odds of rain given bad traffic? Again, clearly bad traffic does not cause rain, but wouldn't you think it was a little more likely that it was raining on a day with bad traffic? Yes? I'll skip the math behind it, but using the numbers I've given, it comes out to about a 77% chance of rain. (If this still doesn't jibe with you, let's make this a decision problem: you hear on the radio that traffic is bad. Do you grab your umbrella?)
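The skipped math is just Bayes' rule; here is a quick sketch in Python using the hypothetical numbers above (0.3 chance of rain, 0.8 and 0.1 traffic rates):

```python
# Hypothetical numbers from the discussion above.
p_rain = 0.3
p_traffic_given_rain = 0.8
p_traffic_given_dry = 0.1

# Total probability of bad traffic (law of total probability).
p_traffic = p_rain * p_traffic_given_rain + (1 - p_rain) * p_traffic_given_dry

# Bayes' rule: P(rain | bad traffic).
p_rain_given_traffic = p_rain * p_traffic_given_rain / p_traffic

print(round(p_traffic, 2))             # 0.31
print(round(p_rain_given_traffic, 2))  # 0.77
```

The same mechanics apply to the dogs: conditioning on "at least one male" shifts the probability of "both male" even though neither dog's sex causes the other's.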

So, what are the odds of two dogs being male given that at least one is male? The MM MF FM FF illustration explains it best, at least in terms of how to derive the value 1/3. You have 4 possibilities and are given one to eliminate (FF), which leaves the desired outcome (MM) as one of three possibilities. If you aren't on board still, imagine an alternative universe where you are told that at least one dog is female. What are the odds that both dogs are male? Hand over fist, no one would argue: 0%. So where's your 50% now? The odds that the "other dog" is male is still 50%, right? Yeah. Yeah it is, but don't you see now that this isn't what the question was asking? There is no "other dog."

Finally, I propose that this and other probability questions where the nature of the problem is "Ha Ha, lay people can't calculate X given Y or even know that is a thing. Burn." be merged with Monty Hall.


As many people have stated before, simply test this experiment with coins. Have your friend watch the coins and reveal to you if "there is at least one head", or, as another so eloquently stated, "there are not two tails". Throw out all experiments where both are tails.
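For anyone without a patient friend, the coin experiment above can be sketched in a few lines of Python (the seed is arbitrary, just for repeatability):

```python
import random

# Monte Carlo version of the coin experiment: flip two coins, discard
# double-tails ("not both female"), and count both-heads ("both male").
random.seed(0)
both_heads = 0
kept = 0
for _ in range(100_000):
    a = random.random() < 0.5  # True = heads ("male")
    b = random.random() < 0.5
    if not a and not b:
        continue  # both tails: the bather would not say "at least one male"
    kept += 1
    if a and b:
        both_heads += 1

print(both_heads / kept)  # close to 1/3
```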

To all the people saying that the beagles' genders are independent: we don't know the gender of a specific beagle. We know that at least one is male. This could mean either dog A or B is male. If you make a little table it works out beautifully:

                Dog A: Male   Dog A: Female
Dog B: Male         MM             FM
Dog B: Female       MF             FF

Each of these quadrants has a 25% chance of being the case with two random beagles. When the shopkeeper says that "at least one is male," all he does is rule out the FF option. He does not tell us the gender of a specific beagle. So, we are left with three options: MM, MF, and FM. 1/3 chance of them both being male.

To include all the factors that people have been talking about I lay out the full 8 scenarios below

Dog1    Dog2    Looked at first   Male?   Look at second dog?   Male?   Report is male
Male    Male    Dog 1             Yes     No                    -       Yes
Male    Male    Dog 2             Yes     No                    -       Yes
Male    Female  Dog 1             Yes     No                    -       Yes
Male    Female  Dog 2             No      Yes                   Yes     Yes
Female  Male    Dog 1             No      Yes                   Yes     Yes
Female  Male    Dog 2             Yes     No                    -       Yes
Female  Female  Dog 1             No      Yes                   No      No
Female  Female  Dog 2             No      Yes                   No      No

The bottom 2 rows don't matter because we know the man reported one of the dogs was male. That leaves 6 equiprobable rows, and we're interested in the odds of 2 of them. 2 out of 6 = 1/3.

You need all 8 rows rather than a Both male, Mixed, Both female set because it's twice as likely that a pair will be mixed than either of the single genders.
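The 8-row table lends itself to a brute-force check; a sketch in Python, enumerating each sex combination together with which dog the bather happens to check first:

```python
from itertools import product

# Each sex combination appears twice (once per dog checked first),
# mirroring the 8 equally likely rows of the table above.
rows = []
for d1, d2, first_checked in product("MF", "MF", (1, 2)):
    dogs = (d1, d2)
    report_is_male = "M" in dogs  # he checks the other dog if the first is female
    rows.append((dogs, report_is_male))

consistent = [dogs for dogs, male in rows if male]  # rows where he reports "male"
both_male = [dogs for dogs in consistent if dogs == ("M", "M")]
print(len(both_male), "out of", len(consistent))  # 2 out of 6
```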

String Burning


You have two ropes that take half an hour each to burn, but burn at a completely variable, unpredictable rate. How can you accurately measure out 45 minutes using these two ropes?

--dunno if this is hard enough, but it's fun

A. Burn one normally. 30 mins.

Then ignite the other at both ends - however the variability goes, the two parts meet after 15 minutes.


Is the question worded wrongly? "Each piece of string takes one hour to burn". --Luke

As stated (with 2 1-hour ropes): 1) Light both ends of rope A, and one end of rope B, simultaneously. 2) After 30 min, when rope A finishes, light the unlit end of rope B. 3) After 15 more min, rope B finishes. --Pabo 00:37, 13 February 2009 (UTC)

You light one string at both ends, and the other at one end. Wait for the first string to burn up, then light the other end of the remaining string.

The first string will take 30 minutes to burn, at which point the second string will have 30 minutes left to go. Lighting it at its other end will reduce this to 15 minutes, for a total of 45 minutes.
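For the skeptical, here is a numerical sketch of why lighting both ends halves the time no matter how unevenly the rope burns (assuming the one-hour wording, and modeling the rope as 1000 segments with random per-segment burn times):

```python
import random

# A 60-minute rope as segments with random burn times (arbitrary profile).
random.seed(1)
n = 1000
times = [random.random() for _ in range(n)]
scale = 60 / sum(times)
times = [t * scale for t in times]  # per-segment minutes, total exactly 60

# Two flames, one from each end; always advance whichever flame is behind
# in elapsed time, which approximates simultaneous burning.
lo, hi = 0, n - 1
t_left = t_right = 0.0
while lo <= hi:
    if t_left <= t_right:
        t_left += times[lo]
        lo += 1
    else:
        t_right += times[hi]
        hi -= 1

print(max(t_left, t_right))  # about 30 minutes, whatever the rate profile
```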


I think this puzzle should be rephrased in the XKCD way[6]:

You have 2 pieces of string of different, unspecified length, and some matches. Each piece of string takes one hour to burn. There's a guard who stabs people who try to fold the strings in half.

Using only the matches and the strings, measure 45 minutes. --Andras

I'll offer the guard two nice pieces of string if he will tell me when 45 minutes is up. (He has a watch to check when his shift ends.)

Another solution: light rope A at both ends and somewhere in the middle. When one pair of burning meets, light one end of rope B; when the other pair meets, light the other end of rope B. When the two burning parts meet on rope B, it's 45 minutes. (First rope gets 15 minutes on average, the second rope gets 30 minutes and takes the average for you.)

Note: the original solution is this one in the limit as "somewhere in the middle" approaches one of the ends of rope A.

Along the same line as the above solution, you can burn rope A at both ends and somewhere in the middle. When one pair meets, light the other pair somewhere in the middle. Keep repeating this until rope A burns up, which would take 15 minutes. Then light rope B at both ends for 30 minutes.

Lay a very large number of matches end-to-end. Light both ends of rope A and one end of the match chain. When rope A finishes, count how many matches have burned. Burn half that number more, and it's been 45 minutes. This method is only guaranteed to be precise within one match-burn. I'm just adding it so there can be a lateral thinking solution along with the quantitatively better ones. --Michael

Wind the two ropes together. Cut off exactly 1/4. Light it at one end. - This solution only works for some instances, and has a big margin of error :) --Balazs

Do the matches have a constant burn rate? If they do, just burn a rope at one end and keep burning matches (always starting the next one once the previous one finishes), and when the rope is done multiply your number of matches by 0.75. --Felix

Magic Watch

Let 1 stand for the proposition that the car is behind door number 1, 2 that it is behind door number 2, and 3 that it is behind door number 3. Let Y stand for the proposition that a yellow flash means yes and B stand for the proposition that a blue flash means yes. Two questions (with non-english syntax) that will let you determine the right door are:

"Is it the case that (1 and Y) or (3 and B)?"

"Is it the case that ((1 or 2) and Y) or ((2 or 3) and B)?"

If the car is behind door number one the yellow light will flash for both questions. For door number 2 the light will flash each color once and for door number 3 the blue light will flash twice. I wish I had been first to post one of the harder ones, but I do what I can. -- NHUP (New Hopefully Unique Pseudonym)

  • What? The two questions you propose don't sound like yes or no questions, and even so would require the above paragraph about what yes and no mean. Since you only told us that we had two questions, an explanation of your questions shouldn't be allowed. (In any case, the solution might be fine if you explain it in simple english.)
    • They're yes or no questions. They're symbolic logic statements that evaluate to either TRUE or FALSE - and he's asking the watch if they're TRUE. -John
    • As John said, they are yes or no questions, just like "Is it the case that the sky is blue?" is a yes or no question. They don't, strictly speaking, require the explanatory paragraph, but they would be significantly longer and harder to make sense of if you wrote them out in totality. That's why I wrote them out as I did. -- NHUP
  • Why would you need "Y" and "B"? They're simply "yes", and being ANDed, and anything AND yes is just the anything. If you write the questions as #1: "is it behind 1 or 3?" and #2: "Is it behind (1 or 2) or (2 or 3)?" you will get the same answer... except that the second question will ALWAYS result in a yellow flash, no matter what, because either 1, 2, or 3 is true, meaning either (1 or 2) is true or (2 or 3) is true, meaning that "(1 or 2) or (2 or 3)" is always true. -John 14:25, 13 February 2009 (UTC)
    • "Y" and "B" aren't simply yes. On days when a yellow flash means yes "Y" is true and "B" is false. On days when a blue flash means yes "B" is true and "Y" is false. Certainly anything being ANDed with a true statement is always itself, and anything ANDed with a false statement is always false. That's the point. The (Proposition 1 and Condition) or (Proposition 2 and not Condition) format allows you to effectively ask a different question depending on whether the Condition is true or false. In our case this effectively allows us to ask different questions on days when blue means yes and days when yellow means yes. -- NHUP
    • Ah ha! I reread what I wrote and it finally penetrated. I wrote "Let Y stand for the proposition that Y means yes and B stand for the proposition that B means yes", which isn't very helpful. My mistake. I'll edit it to say what I actually meant. Sorry about that. -- NHUP
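NHUP's two questions can be checked mechanically over all six cases (three doors times two flash meanings); a Python sketch, where `yellow_means_yes` (my name, not part of the puzzle) encodes which kind of day it is:

```python
from itertools import product

# On a given day the watch flashes yellow exactly when the honest
# yes/no answer matches "yellow means yes".
def flash(answer, yellow_means_yes):
    return "yellow" if answer == yellow_means_yes else "blue"

results = {}
for car, yellow_means_yes in product((1, 2, 3), (True, False)):
    Y, B = yellow_means_yes, not yellow_means_yes
    q1 = (car == 1 and Y) or (car == 3 and B)
    q2 = (car in (1, 2) and Y) or (car in (2, 3) and B)
    results[(car, yellow_means_yes)] = (flash(q1, Y), flash(q2, Y))

# Door 1: two yellows; door 2: one of each color; door 3: two blues.
for (car, _), flashes in results.items():
    print(car, flashes)
```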

Another solution (a little easier I think):

Your two questions:

"If I ask you if behind door number 1 is a car, will you turn blue?"
"If I ask you if behind door number 2 is a car, will you turn blue?"

If it turns blue on the first one, then the car is on door number 1.

If it turns blue on the second one, then the car is on door number 2.

If it turns yellow on both, the car is on door number 3.

-- Andrés

  • This solution is just a generalization of the old one about the liar and truth-teller twins. If you get n questions, you can always phrase them in this way to find out n pieces of information, and (like in this case) can often use process of elimination to get the last one, if there are n+1 pieces of information to be had.
  • Your solution should read "If it turns the same color on both questions, the car is on door number 3." Proof:

Let your first question be labeled 'x' ("If I ask you if behind door number 1 is a car, will you turn blue?") and the second one 'z'.

case 1: car-1, blue=yes x-b z-y

case 2: car-2, blue=yes x-y z-b

case 3: car-3, blue=yes x-y z-y

case 4: car-1, blue=no x-b z-y

case 5: car-2, blue=no x-y z-b

case 6: car-3, blue=no x-b z-b

your logic stands unless the car is behind door 3, and blue means no. therefore, when the same color flashes, the car is behind door 3. --psolms

    • No, you are wrong... it's impossible that it turns blue on both questions, since if it turns blue it means the car is behind the door the question refers to. It's really a lot simpler than the way you put it. You can look at each question separately, so there are only 4 cases (blue means yes/no, car is behind the door or not). In case 6, when you ask question x it would not turn blue: if you ask the question "Is the car behind door 1?" the answer would be no, since it's behind door 3. Because blue = no, it would turn blue. So the answer to the question "If I ask you if behind door number 1 is a car, will you turn blue?" is YES. And yes = 'yellow'. So it would turn yellow, not blue. -- Andrés

This is a good solution because it actually can solve the problem with 4 doors. In general, to solve the problem with 2^n doors requires n questions.

- Dan Loeb
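The nested "will you turn blue?" trick can likewise be traced mechanically; a Python sketch of the double negation (the function name and boolean encoding are mine, not part of the puzzle):

```python
from itertools import product

def final_flash_is_blue(car, door, blue_means_yes):
    inner_yes = (car == door)                     # honest answer to "is the car behind this door?"
    would_flash_blue = (inner_yes == blue_means_yes)  # what the watch would flash for that answer
    outer_yes = would_flash_blue                  # honest answer to the outer question
    return outer_yes == blue_means_yes            # the final flash is blue iff this holds

for car, blue_means_yes in product((1, 2, 3), (True, False)):
    blue1 = final_flash_is_blue(car, 1, blue_means_yes)
    blue2 = final_flash_is_blue(car, 2, blue_means_yes)
    # Blue on Q1 -> door 1, blue on Q2 -> door 2, yellow on both -> door 3.
    guess = 1 if blue1 else (2 if blue2 else 3)
    assert guess == car
print("all 6 cases identify the right door")
```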

Another solution I thought of that plays around with the puzzle's interpretation:

  1. First ask the watch: "Is the car behind an odd-numbered door?"
  2. Then ask the watch: "If the car is indeed behind an odd-numbered door, is the car behind the first door?"

Explanation: The first question results in either a blue or yellow flash; let's call it color A.

The second question results in either a blue, yellow or no flash: the question is either a yes/no question or not a yes/no question (actually: not a question at all), depending on whether the car is behind an odd-numbered door or not. As the watch cannot answer a non yes/no question, it will not flash.

(You might consider this a question within a question, but stating "If I ask you if behind door number 1 is a car, will you turn blue?" in one of the earlier solutions is also two questions in one).

If the watch doesn't flash the second time, the car is behind door #2.


If it flashes color A, the car is behind door #1.

If it flashes the other color, the car is behind door #3.

-Koos G.

The second question is still a yes/no question, just an if-then one. I know you phrased it "If X, is Y true?" but the only way I can think of how to interpret that is "Is it the case that 'If X then Y'?" In that question, there will always be a flash. Unfortunately, Y does not depend on X here, and in fact X can be determined to be true or false. Since it is false, the if-then statement is always true.

So if the car is behind DOOR 1: yes, yes. DOOR 2: no, yes. DOOR 3: yes, no.

Therefore if it flashes the same color twice, it is behind door one. Unfortunately, if it flashes two different colors, there is no way to tell whether it is behind door 2 or door 3.

  • Wait... We don't know if Blue=Yes or if Yellow=yes. This doesn't matter if the car is behind door #1, because it'll be the same colour twice, but what if this happened:

"Is the car behind an odd-numbered door?": Yellow "If the car is indeed behind an odd-numbered door, is the car behind the first door?": Blue

What do you do then? -Arca

Since it wasn't stated that the game show had a time limit, on Day 1 ask the following two questions:

Is a car behind doors 2 or 3? Is a car behind doors 1 or 2?

If the light flashes the same color for both questions, the car must be behind door 2; if it flashes different colors, it cannot be behind door 2 (see below).

On Day 2, ask the following questions:

Is a car behind doors 1 or 3? Is a car behind doors 1 or 2?

Once again, if the light flashes the same color after both questions, the car must be behind door 1. If it flashes different colors, the car cannot be behind door 1 or 2 (see above), and so it must be behind door 3.
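This four-question scheme is easy to verify by brute force, since on any single day the color mapping is fixed, so "same color for both answers" just means "same yes/no answer for both questions"; a Python sketch:

```python
# On one day the color mapping is constant, so comparing colors is
# the same as comparing the honest yes/no answers.
def same_color(set1, set2, car):
    return (car in set1) == (car in set2)

guesses = {}
for car in (1, 2, 3):
    day1_same = same_color({2, 3}, {1, 2}, car)  # same color on day 1 => door 2
    day2_same = same_color({1, 3}, {1, 2}, car)  # same color on day 2 => door 1
    guesses[car] = 2 if day1_same else (1 if day2_same else 3)

print(guesses)  # {1: 1, 2: 2, 3: 3}
```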

Game show host receives compensation for his patience.

One car richer. But if you really wanted this car so bad, couldn't you just sell your magic watch?


Sure, the question was somewhat misstated, but even as it is, wouldn't you be happier getting the car right away rather than waiting a day? To do that, you'll have to use one of the two-question solutions. Alternatively, you could ask two questions around 11:59 PM and when you are able to ask more at midnight, ask the other two. That way you only have to wait a very short period of time.
But I still prefer the two-question solutions. 00:46, 10 April 2009 (UTC)


This can actually be solved with just one question, assuming that when given a question the watch can't answer, it either explodes or does nothing:

"Is (door1 = car AND blue = true) OR (door2 = car AND yellow = true) 
 OR (door3 = car AND you flash false to this question)"

If (door1 = car AND blue = true), or (door2 = car AND yellow = true), the watch will obviously flash blue for door 1 or yellow for door 2. If both of those parameters are false and the car is behind either door 1 or 2 (door1 = car but yellow = true, or vice versa), then the next parameter (door3 = car AND you flash false to this question) is found false as well, because the car is not behind door 3. You would still get blue for door 1, and yellow for door 2. If the car IS behind door 3, then the first 2 parameters are obviously false. The third parameter is now a contradiction, and the watch cannot flash true or false. I hypothesize that since the watch is 100% correct, it will wait until the question has a possible answer, and will flash false immediately after you drive your car out from behind door 3.


My solution involves the exclusive or (XOR) operator, essentially asking if either of two possibilities is true, but not both.

1. Does blue mean yes XOR is the car behind door #1? (in more english terms: Does blue mean yes OR is the car behind door #1, but not both?)

2. Does blue mean yes XOR is the car behind door #2? (Does blue mean yes OR is the car behind door #2, but not both?)

Since they're both the same question for different doors, the same logic can be applied to the answer for both questions.

So the answer to question 1 can be either yellow or blue, and either yellow or blue can mean yes or no.

Case 1: Answer - Yellow, Means - Yes Since Yellow means yes, that tells us Blue means no, which tells us the first part of the question is no, and since the overall question evaluates to yes, that means the car is behind door #1

Case 2: Answer - Yellow, Means - No Since Yellow means no, that tells us that Blue means yes, which tells us the first part of the question is yes, and since the overall question evaluates to no, that means the car is behind door #1

Case 3: Answer - Blue, Means - Yes Since Blue means yes, that means the car is definitely not behind door #1

Case 4: Answer - Blue, Means - No Since Blue means no, that means the car is definitely not behind door #1

The same logic applies to the second question.

Essentially, Yellow on the first question means door #1, Yellow on the second question means door #2, and Blue on both questions means door #3. It is not possible to get yellow to both questions. - Rahul
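Rahul's XOR questions can be checked over all six cases too; a Python sketch (encoding the day as a boolean `blue_means_yes`, my name):

```python
from itertools import product

# A yellow flash on question d means the car is behind door d,
# whatever blue happens to mean that day.
def flash_yellow(car, door, blue_means_yes):
    answer = blue_means_yes ^ (car == door)  # "does blue mean yes XOR is the car behind door d?"
    return answer != blue_means_yes          # yellow iff the answer disagrees with blue's meaning

for car, blue_means_yes in product((1, 2, 3), (True, False)):
    y1 = flash_yellow(car, 1, blue_means_yes)
    y2 = flash_yellow(car, 2, blue_means_yes)
    guess = 1 if y1 else (2 if y2 else 3)
    assert guess == car
    assert not (y1 and y2)  # never yellow on both, as noted above
print("XOR strategy identifies the door in all 6 cases")
```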

I think it would be trivial to use this XOR solution to solve the problem even with four doors, since you could split them up into groups of two for the first question, then ask which of the two in the second. Note that this makes use of all four possible combinations of answers, whereas your solution only makes use of three. Is there a good reason to leave it at three doors instead of four? We could even generalize it to 2^n doors with n questions, although that is hardly necessary. 22:00, 7 May 2009 (UTC)

Light Bulbs

All square-numbered bulbs will be turned on. A bulb will be on if it has been flipped an odd number of times, and a particular bulb is flipped once for each number it is an integral multiple of. So all bulbs which have an odd number of factors will be turned on, and only square numbers have an odd number of factors. -- Ramachandran R

Only prime numbers and perfect squares will be OFF. Person 1 turns all the lights on, person 2 will turn 2 off, and nobody will turn it back on. The same holds true for all primes. For numbers which have factors that are not square roots, each factor that is not a square root will have a complementary factor which will turn it back on. In the special case of 64, the only 6th power in the set, 2 and 4 complement 16 and 32, while 8 changes the state to off. -- someone

  • First sentence is terribly wrong, but the approach is correct and is basically the proof of Ramachandran's last statement about square numbers. The stuff about prime numbers is true but redundant. -- TLH

Full reasoning:

Light number N is toggled once for every person numbered with a factor of N. A light will be on iff it is toggled an odd number of times. Factors X of N occur in pairs (X, N/X) except for the case where X = N/X, which implies that N = X^2, a square number. Therefore only square-numbered lights are toggled an odd number of times, and the result follows. -- TLH

when run through an algorithm, you create the following task: let A = person number, let B = bulb being toggled

for A = 1 to 100
 for B = 1 to 100
  if B mod A = 0 then toggle B

this yields the following output:

1 is on, 4 is on, 9 is on, 16 is on, 25 is on, 36 is on, 49 is on, 64 is on, 68 is on, 70 is on, 72 is on, 74 is on, 76 is on, 80 is on, 82 is on, 84 is on, 85 is on, 86 is on, 87 is on, 91 is on, 93 is on, 94 is on, 95 is on, 96 is on, 98 is on, 100 is on.

-- Jakerman999

Scratch that -- I made the mistake of starting at zero for one of my counters, which made it accurate up to 64 but fail after that. After fixing the problem, I'm left with all perfect squares on (1, 4, 9, 16, 25, 36, 49, 64, 81, and 100). Sorry about the mistake.

-- Jakerman999

Wait, what? There must be a problem with my algorithm, as I get all perfect squares as being "on".

Its modus operandi is a rather brute-force one, so I don't see where it fails (though I do use a bit array that could be messing up or something).

In sorta-pseudo code:

boolean s[100];
for (i = 1; i <= 100; i++)
  for (j = 1; (i*j) <= 100; j++)
    toggle s[i*j];


On my linux box, I just used grep to filter the output of my program (written in C; not the above pseudo-code), and I get 1, 4, 9, 16, 25, 36, 49, 64, 81, and 100 as being "on".

-- 01:22, 25 February 2009 (UTC)

Another way to think about it -- so we don't need a computer: lights that remain ON after the 100th person must be numbers with an ODD number of factors. E.g. factors of 1: 1 turns it on (remains on); factors of 4: 1 turns on, 2 turns off, 4 turns on (remains on); etc. Square numbers are the only numbers that can have an odd number of factors, because one of the factors is matched by itself. (For 4: 1 matches 4, 2 matches 2; but for 20: 1 matches 20, 2 matches 10, 4 matches 5.) -- Vikas R
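If you do want a computer check anyway, the whole puzzle is a short brute force; a Python sketch:

```python
# Toggle bulb B once for every person A whose number divides B,
# then list the bulbs that end up on.
bulbs = [False] * 101  # index 0 unused
for person in range(1, 101):
    for bulb in range(person, 101, person):  # multiples of this person's number
        bulbs[bulb] = not bulbs[bulb]

on = [b for b in range(1, 101) if bulbs[b]]
print(on)  # [1, 4, 9, 16, 25, 36, 49, 64, 81, 100]
```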

The Lake Monster

Partial solution: assume the lake radius is 1 m. Obviously, if you try to row away in the direction opposite the initial position of the monster, you would row 1 m while he runs <math>\pi</math> m; since he is 4 times faster, he gets there first.

I figured out a limit situation where I row in a smaller circle, keeping an angle of <math>\pi</math> between me and the monster while he runs in the same direction that I do. In that case, my smaller circle's radius must be <math>\frac{1}{4}</math>, because he is 4 times faster than me. From that position, I can instantly try to escape in the opposite direction, and I only have to row <math>1 - \frac{1}{4} = \frac{3}{4}</math>. Therefore, I would get there in <math>\frac{\frac{3}{4}}{\frac{\pi}{4}} = \frac{3}{\pi}</math> of the time he would, so I could escape him.

Of course, to attain this limit, I should row in a spiral and get a relative angle to him as close to <math>\pi</math> as possible.

But there I assume the 1-second rule is false. The time difference between me and him must be something like <math>\frac{\pi-3}{4} \frac{r}{s}</math>, r being the radius and s my speed, so we can't predict it in seconds.

With my solution, the monster would need to be <math>F=\pi + 1</math> times faster than me to defeat my strategy, if I'm correct.

oh, and if you don't read LaTeX, then ask your great master to support the math thingie. Batchyx 10:17, 22 March 2009 (UTC)

Don't have a solution to the escape portion though I did find the true identity of this "monster". It is quite obviously a raptor. Why has no one else realized this? You people should do less math and more reading of XKCD. - TheBigBadWolf

Just a logical solution, no math: let's say the monster starts at the south so you start moving north. To get where you'll be, he has to move [pi r], while you have to move [r]. He can go either west or east, and your responses will mirror his choice here so it doesn't matter; let's say he goes west. Once he's moved (pi r)/4 and is at the southwest "corner" of the lake, you start rowing towards the northeast corner, an extra (pi r)/4 from where he was headed; now he still has to move [pi r], if he wants to be waiting for you, while you have to move [r-x1] to get there. Once he's moved (pi r)/4 again and is at the west side of the lake, you change your destination to the east side of the lake; now he has to move [pi r] to get where you'll be, and you have to move [r-x1-x2].

I won't go through the geometry, but when the [x1, x2, ...] series' sum gets big enough, your trip to the shore will be short enough that you can do it faster than the monster can run [pi r]. - Otter

What Batchyx said, reworded and expanded:

If your boat travels in a small circle around the center of the lake you can complete a full rotation quicker than the monster. If your boat travels a larger circle near the perimeter of the lake, the monster will keep up with you. Somewhere in between there is a break even circle where you can make one rotation around in the same time it takes the monster to make one loop around the lake. This circle has a circumference 1/4th of the circumference of the lake. If the radius of the lake is R, the radius of this break even circle is R/4. Inside this circle you can go fast enough to keep the center of the lake between you and the monster and you can spiral outwards, remaining opposite of the monster. Once you reach the break even circle you will be (3/4)R away from the shore, while the monster is πR away from that point. In the time it takes you to travel (3/4)R, the monster can only travel 3R which is less than πR, allowing you to escape. Note: It's not actually possible to reach the break even circle with this method but you can get asymptotically closer the longer you spiral.

To know how close you need to get you can calculate the minimum size circle you need to spiral to. The monster will need to travel πR, in which time you can travel (π/4)R. This means you need to spiral greater than R-(π/4)R or (1-π/4)R away from the center of the lake before you make a beeline for the shore.

To calculate the speed the monster needs to travel to make escape impossible assume you travel at a velocity of v, and the monster travels at vx. The radius of the break even circle in this case will be R/x. This means you will need to travel a distance of R-R/x to get to shore. The amount of time this takes is (R-R/x)/v (time = distance / speed). The monster must cover πR in this same amount of time. The amount of time it takes the monster to get there is πR/vx. Setting these equal we get:

(R-R/x)/v = πR/vx

R-R/x = πR/x

1-1/x = π/x

x-1 = π

x = π + 1

The monster must travel π+1 times faster than you to make escape impossible. No calc needed. Further analysis will show that if the monster is traveling 4 times as fast as you, you will need to make almost one full loop of the spiral before being able to safely head to shore. -Hannibal
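
Hannibal's break-even algebra checks out numerically; here's a quick sketch taking R = v = 1 (the variable names are mine):

```python
import math

R, v = 1.0, 1.0
x = math.pi + 1  # claimed critical ratio of monster speed to rowing speed

t_rower = (R - R / x) / v          # you: from the break-even circle to shore
t_monster = math.pi * R / (v * x)  # monster: half the circumference

# At x = pi + 1 the two times coincide exactly, so any slower monster
# arrives too late and you escape.
```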

The above analysis seems perfect, but once you leave the "safe" circle, going straight to the nearest point of shore is not the best option.

The monster moves at x times our speed. Let's represent the lake by the unit circle, say we're at (1/x,0) ready to leave the safe circle, the monster is at (-1,0) and will start moving counterclockwise. Let the angle from the origin towards (1,0) be 0 and increase counterclockwise (as usual), so the monster starts at -π moving towards positive. All angles are measured from the origin, not our position. The distance from our start position to shore at angle α is √((cos α - 1/x)² + (sin α)²) = D(α). Assuming we don't go back into the "safe" circle, the monster will never change direction. By the time we've moved D(α) to reach the shore at angle α, the monster will be at angle -π + x D(α).

For example with x=4, if α=0, the monster is at -.1416 (π-3 away from us as expected). If α=π/8, it's at -.0418. Since we're at π/8 then, we're a lot farther from the monster: π/8≈.393, the angle difference .435. Actually, the difference increases all the way up to .587 for α=arccos .25 which is the maximum angle we can use without cutting into the safe circle.

Fixing α to arccos (1/x), D(α) is simplified to sin arccos (1/x) = √(1-1/x²). The maximum monster speed is just below when -π + x√(1-1/x²) = -π + √(x²-1) = arccos (1/x). x≈4.603338848751700 and α=arccos (1/x)≈1.352.

Now, I'm not saying even this is the optimal evasion, I don't know. Anyone care to try to improve with a dynamic rowing direction? --Nix
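
Nix's critical ratio is easy to reproduce with a few lines of bisection on the stated equation √(x²-1) − arccos(1/x) = π (a sketch; the bracketing interval [4, 5] is my assumption):

```python
import math

def gap(x):
    # monster's angular travel minus (pi + landing angle alpha),
    # with alpha = arccos(1/x) and D(alpha) = sqrt(1 - 1/x**2)
    return math.sqrt(x * x - 1) - math.acos(1.0 / x) - math.pi

lo, hi = 4.0, 5.0  # gap(4) < 0 < gap(5)
for _ in range(100):
    mid = (lo + hi) / 2.0
    if gap(mid) < 0:
        lo = mid
    else:
        hi = mid
x_max = (lo + hi) / 2.0  # about 4.6033, matching the value above
```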

You actually don't need all this complex math. All you have to do is start at the center and row in the opposite direction of the monster's initial position. This will work unless the monster can run at 2π times your speed. Oops, nevermind. I accidentally thought the monster would have to travel the circumference, rather than half that.

So basically, you first row in a circle (the circle is small enough that you're completing a full lap faster than the monster) until there's a straight line: shore - you - center - monster. At that point, you're close enough to stop circling and go straight for the shore.


It's provable with calc that the optimum path is a straight line tangent to the 1/4 safe circle. You've already figured out the optimum rowing strategy. One thing that you have to do is check at the very beginning that the monster isn't heading towards your destination by the shorter path rather than the longer path. It's a 50/50 chance. If the monster is heading towards you, you'll have to restart by heading back into the safe circle. I'll see if I can get a more exact number for the max x (ratio of monster speed to rowing speed). -- TanGeng

Ball and Balance[edit]

I can't see a heading for this one. My solution is to number each coin and make the following weighings.

1, 2, 3, 4, 5, 6 vs 7, 8, 9, 10, 11, 12

1, 3, 5, 7, 9, 11 vs 2, 4, 6, 8, 10, 12

1, 4, 5, 8, 9, 12 vs 2, 3, 6, 7, 10, 11

As no coin is in the same group as another three times, only one coin will end up on the heavy or light side three times, and that coin will be the counterfeit. If it is on the heavy side, it is heavier, and vice versa. -Azazyel

This has the problem that you don't know whether the counterfeit is heavier or lighter than the others, and more. For example, if the left side is heavier in every weighing, you don't know if 1 or 5 is heavier than the rest, or 10 lighter than the rest. To make it in three weighings, you'll need to introduce the possibility of equal weights by leaving out some coins. Proof: If every weighing only introduces 1 bit of information, you can't possibly distinguish between 24 states (counterfeit one of 12 coins, and lighter/heavier) in three weighings. An optimal first weighing could be 1, 2, 3, 4 vs 5, 6, 7, 8 and what you weigh next would depend on the result. Didn't think this to the end though. --Nix

Here's a step in the direction of an answer. Suppose the answer relies on eliminating some coins with every weighing and once we've eliminated each coin we're done with it. Then the optimal first weighing must be coins 1, 2, and 3 versus coins 4, 5, and 6. If the pans don't balance, you know the counterfeit must be one of coins 1-6; if they do, you know that the counterfeit is one of coins 7-12. Either way, you are down to six coins. There is no better first weighing because any other weighing would in some cases eliminate fewer than six coins, which will in the worst case leave you with more coins to distinguish among, and we want to be able to get the answer 100% of the time. With six coins we cannot do better than weigh 1 against 2 or, equivalently, weigh 1 and 2 against 3 and 4; either way, in the worst case we eliminate two coins and have four left. With four coins we cannot do better than weigh 1 against 2, and in the worst case, where 1 and 2 balance, we still have 3 and 4 left to distinguish between. We have failed. Therefore, either the answer does not rely on eliminating coins with each weighing, or the answer relies on somehow re-weighing coins we have already eliminated to distinguish the counterfeit. --satyreyes

You always have to use four coins per side, and can never eliminate a coin as a possibility until you have done all the weighings. -Professor Z
I guess this falls under not relying (only) on eliminating coins with each weighing, but 1, 2, 3, 4 vs 5, 6, 7, 8 is better than 1, 2, 3 vs 4, 5, 6. If the sides are equal, you only have 4 unknown coins to sort through. It's true you can't sort six unknowns in two weighings, because there's 12 states to distinguish and the max you can hope to separate is 9 (3 possible results in both weighings). If the sides are not equal, you have an additional bit of knowledge to help sort the 8: you know which side was heavier. That's 8 possible states left to separate regardless of which way the first weighing went, sounds doable with two weighings. --Nix

Nix is correct. Capital letters stand for measurements: A first, B second, C third. The program starts with measuring A as 1, 2, 3, 4 v 5, 6, 7 8. If A equal, the oddball is in 9, 10, 11, 12. You would then measure B as 9, 10 v. 1, 2. If B equal, the oddball is in 11, 12. You would then measure C as 11 v 1. If C equal, the oddball is 12. If C not equal, the oddball is 11. If B not equal, the oddball is in 9, 10. You would then measure C' as 9 v 1. If C' equal, the oddball is 10. If C' not equal, the oddball is 9.

If A not equal, then measure B' as 1, 2, 5, 6 v 3, 7, 9, 10. If B' equal, then oddball is in 4, 8. Measure C'' as 4 v 11. If C'' equal, the oddball is 8. If C'' not equal, the oddball is 4. If B' not equal, then oddball is in 1, 2, 3, 5, 6, 7.

If in A 1, 2, 3, 4 were lighter than 5, 6, 7, 8, and if in B' 1, 2, 5, 6 were lighter than 3, 7, 9, 10, then the oddball is a light 1 or 2 or a heavy 7. Measure C''' as 2, 7 v 11, 12. If C''' equal, then the oddball is 1. If C''' heavy, the oddball is 7. If C''' light, the oddball is 2.

If in A 1, 2, 3, 4 were lighter than 5, 6, 7, 8 and if in B' 1, 2, 5, 6 were heavier than 3, 7, 9, 10, then the oddball is a light 5 or 6 or a heavy 3. Measure C'''' as 3, 5 v 11, 12. If C'''' equal, then the oddball is 6. If C'''' heavy, the oddball is 3. If C'''' light, the oddball is 5.

If you've been able to follow my notation so far, you should be able to extrapolate this situation for the other two cases of lightness/heaviness. Each ball is uniquely identified in this schema. However, this fails to determine if 12 is heavier or lighter than other balls if it is indeed the oddball. --Mark

I didn't try to follow or verify all of your thought, but doesn't this fix your 12: if A equal, B = 9, 10 vs 11, 1. If B equal, C = 12 vs 1. If B not equal, C' = 9 vs 10. If C' equal, 11 is the oddball, see B for light/heavy. If C' not equal, see B for light/heavy and use that knowledge to determine oddball from C'. --Nix

This is impossible. There are three weighings. Each can give you one of two results. This gives a total of three bits of information. Just enough to find one coin in eight, or to find one in four and see if the counterfeit is lighter or heavier. Finding one of twelve coins and telling if the counterfeit is lighter or heavier requires at least five weighings. That is just a lower limit. You might need more. --DanielLC

Each weighing can give one of 3 possible results (right side heavier, equal weight, left side heavier). I think that means that you have more than 3 bits of information.

I see. That would give you 27 possible measurements, when there are 24 possible answers. I'm still not sure this is possible. -- DanielLC

A relatively simple explanation: Weigh 4 vs 4. If one side is heavier, weigh 3 from the heavy side + 2 from the light side vs 1 heavy and the 4 remaining 'normal' coins. If the 'heavy side' is still heavier, the counterfeit is heavier and it's of the 3 'heavies' on that side, which can be determined in one weigh. If the 'heavy side' is now lighter, it's either one of the 2 'light' coins on that side or the 'heavy' coin that switched sides, which can also be determined in one weigh. If the sides were even, it's one of the 2 'light' coins set aside after the first weigh. The remaining scenario is that the original weigh was even. In that case, weigh 3 of the remaining coins against 3 'normal' coins from the first weigh. If the balance is uneven, you know if the counterfeit is one of those 3 and if it's heavier or lighter, and can be determined in one weigh. Otherwise it's the final unused coin, and you weigh it against any normal coin to determine. The trick here is to break coins into groups of three where the weights are known, or 'groups' of 1 where the weight is not.

I figured out a way. It isn't simple, but I doubt there is a simple way. I'll explain in pseudocode. The coins will be referred to as A through L. + Means the counterfeit is lighter, - means it's heavier. -- DanielLC


Edit: I figured out a simple way. Basically, if there are n coins, you compare coins 1 through n/3 to coins n/3+1 to 2n/3. If they're the same, you have n/3 coins left to compare, and you need to figure out if it's lighter or heavier. If they aren't, you pair coin x with coin x+n/3 for x <= n/3, and you have n/3 pairs of coins to sort through, and still need to figure out if the counterfeit is lighter or heavier. Once you know, you can tell which coin in the pair is the counterfeit. For example, if the lower numbered coins are lighter, and the counterfeit is lighter, the counterfeit must be one of the lower-numbered coins. --DanielLC

Solution for Twelve Coins (Ball and Balance)

Which determines both counterfeit and if it is lighter or heavier without the use of a Lucky Coin explained in plain English:
    • First weighing: 3 8 11 12 vs 2 4 7 9
    • Second weighing: 1 5 7 9 vs 2 3 4 10
    • Third weighing: 1 2 10 11 vs 3 6 7 8
      • L stands for left side is heavier, R for right side is heavier, and B for breaks even.
B L L Coin 1 is heavier
B R R Coin 1 is lighter
R R L Coin 2 is heavier
L L R Coin 2 is lighter
L R R Coin 3 is heavier
R L L Coin 3 is lighter
R R B Coin 4 is heavier
L L B Coin 4 is lighter
B L B Coin 5 is heavier
B R B Coin 5 is lighter
B B R Coin 6 is heavier
B B L Coin 6 is lighter
R L R Coin 7 is heavier
L R L Coin 7 is lighter
L B R Coin 8 is heavier
R B L Coin 8 is lighter
R L B Coin 9 is heavier
L R B Coin 9 is lighter
B R L Coin 10 is heavier
B L R Coin 10 is lighter
L B L Coin 11 is heavier
R B R Coin 11 is lighter
L B B Coin 12 is heavier
R B B Coin 12 is lighter
No events are duplicated and no other events are possible. I gave this question as a bonus to my engineering students on their final exam. They did not do well.
-Professor Z
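
Professor Z's schedule can be verified mechanically: simulate all 24 possibilities (12 coins × heavier/lighter) and check that every one produces a distinct result string. A sketch (the function and variable names are mine):

```python
# Professor Z's three weighings: (left pan, right pan)
weighings = [
    ({3, 8, 11, 12}, {2, 4, 7, 9}),
    ({1, 5, 7, 9}, {2, 3, 4, 10}),
    ({1, 2, 10, 11}, {3, 6, 7, 8}),
]

def outcome(coin, heavy):
    """Result string (L/R/B per weighing) when `coin` is the counterfeit."""
    res = []
    for left, right in weighings:
        tilt = 0
        if coin in left:
            tilt = 1 if heavy else -1
        elif coin in right:
            tilt = -1 if heavy else 1
        res.append({1: "L", -1: "R", 0: "B"}[tilt])
    return "".join(res)

results = {outcome(c, h) for c in range(1, 13) for h in (True, False)}
# 24 distinct strings means every case is uniquely identified.
```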

My solution on this one is linked, using coins 1 to 12, marked as follows: A (nothing known), A= (known good coin), A+ (possibly heavy), A- (possibly light), AC (known counterfeit, direction unknown). It displays what knowledge you get at each step, and I believe it is the solution which puts the minimal number of coins on the balance (at most 16 coins placed on the balance in total, average 15 1/3). In text, that method is below.

Step 1: weigh (1-4) against (5-8). If balanced, (A) weigh (9&10) against (11&1); else (B) weigh (1,2&5) against (3,4&6). In case (A): if balanced, weigh (12) against (1); else weigh (9) against (10). In case (B): if balanced, weigh (7) against (8); else, if the imbalance is in the same direction as in step 1, weigh 1 v 2, otherwise weigh 3 v 4. B = balanced, L = left heavy, R = right heavy (assuming the first of each pair is on the left of the balance).

BBB - Impossible
BBL - 12H
BBR - 12L
BLB - 11L
BLL - 9H
BLR - 10H
BRB - 11H
BRL - 10L
BRR - 9L
LBB - Impossible
LBL - 8L
LBR - 7L
LLB - 6L
LLL - 1H
LLR - 2H
LRB - 5L
LRL - 3H
LRR - 4H
RBB - Impossible
RBL - 7H
RBR - 8H
RLB - 5H
RLL - 4L
RLR - 3L
RRB - 6H
RRL - 2L
RRR - 1L

To explain why this works: the coins are three-state data (true coin, counterfeit light, or counterfeit heavy). But they are also dependent; there is only one counterfeit (reducing the 531441 states to 24).

So the first weighing of 4 v 4 will tell you either 4 are true coin, 4 are not Heavy and 4 are not light OR 8 are true coin Either way you've removed 16 unknowns.

At step 2(A), we have 4 coins with 3 possible states each. Weighing 2 v 2 of the unknowns will tell you that 2 are not heavy or that 2 are not light, and that can't be distinguished in one weighing. Which is why I weigh 2 against (1 unknown and 1 known). If the 2 unknowns are heavier: I learn 1 coin is a true coin, 1 coin is not heavy, and 2 coins are not light (5 pieces of information), for a total of 21, leaving 3 to be determined in one weighing. Easy. Vice versa for lighter. If it balances, I learn all three are true coins and the remaining coin is not a true coin: 6 pieces of information.

At steps 2(B) and 2(C), my results can tell me either that all 6 coins weighed are true coins, which is 6 pieces of information, leaving 2 unknowns; or that three of the coins weighed and the two not weighed are true coins, 5 pieces of information, leaving 3 to distinguish in the final weighing.

I'm not sure if 13 coins is possible, despite the fact that there are 26 states and 27 units of information gained. But I haven't looked hard.

I am not sure, but I think my father made me do one variation of this kind of puzzle when I was a toddler (he liked teasing me with puzzles, I have quite a few to share). Anyway I think the answer is simple. You just divide them into 3 groups of 4 (A, B, C). After weighing A and B, in case they are the same, we take C and divide it into 2 by 2 pairs (F & G). You weigh the F pair. If they are the same, you just switch one of the coins of F with one of G. If they don't balance anymore, the coin added is the counterfeit. Otherwise it is the one left out. PS. I remember my father having told me this as 13 coins, and one of them being your own. You just had to take out your own coin, as it is not needed.

Tally Game[edit]

According to the group sizes, I'll denote for example III IIIII II III with 2335. It doesn't matter in which order the groups are on the board.

I first tried to find a clean separation between winning and losing states on paper, but didn't get as far as I wanted.

Clearly, of the states that only have groups of one, an odd amount of them means the player to move will lose.

A set of paired groups (e.g. 11112255) loses unless they are all ones. The player to move first must break a pair. Their move can be matched by removing an identical portion of the pairing group (leading to a smaller set of paired groups), or if that would lead to only groups of one, removing one more or less stick so that it ends up an odd amount of ones.

Obviously any state that leads to one of the known losing states by one move is a winning one. States that only lead to known winning states are losing. This way, new rules can be inferred, one by one, but that isn't really fun as a puzzle. The simplest new losing state is 123. No losing state can be reached from it, but any move will have a follow-up that reaches a losing state. Now, states like 12x, 13x, 23x with x>3 are winning since they can be reduced to 123 in one move. This leaves 145 as the next losing state, also 246.

At this point, I was bored enough to let the computer calculate up to the initial state of 357. It seems all the losing states can be combined with each other and paired groups added or removed at will, although I don't have a proof. For example 123 + 145 – 11 = 2345. Maybe this could lead to a general way to determine the status of a state without any recursion or pretabulation of states.

Anyway, the computer told me that 357 is a winning state. In addition to my first two rules (ones and pairs), the smallest set of losing states you need to force victory is as small as {123, 145, 246, 347}. Whatever situation you end up in, you can always find a move that results in one of these states, as long as you make those moves.
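
Nix's computer check is easy to reproduce with a short memoized search, assuming a move crosses out any number of adjacent sticks from one group (possibly splitting it) and that whoever takes the last stick loses. The names are mine:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def mover_wins(state):
    """True if the player to move wins; `state` is a sorted tuple of group sizes."""
    if not state:
        return True  # opponent just took the last stick and lost
    for i, g in enumerate(state):
        rest = state[:i] + state[i + 1:]
        for k in range(1, g + 1):          # number of sticks crossed out
            for a in range(0, g - k + 1):  # size of the left remnant
                parts = tuple(p for p in (a, g - k - a) if p)
                if not mover_wins(tuple(sorted(rest + parts))):
                    return True
    return False
```

This confirms that 357 is a win for the first player and that 123, 145, 246 and 347 are losses for the player to move.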


This puzzle is a game called Nim, with the interesting extension of being able to split stacks instead of just take from the end. Being able to split stacks doesn't affect the outcome though; the Nim strategy still works.

The trick here is to write the game state in binary and consider the XOR function. Here, 3 5 7 would be 11 101 111 and the XOR of this is 001. First thing to note is that the game ends when the XOR is 000.

The key fact to use is that someone presented with a non-zero XOR can always zero it with the right move, and someone presented with a zero XOR will always disrupt that (with ANY move) and make it non-zero. You can always find a zeroing move by taking from the end of a stack (usually the largest, but not always).

Now the game rules say that the person who takes the last stick loses, but it's easier to first consider last-stick-wins. To win, all you have to do is present your opponent with a board XOR of 0 at each turn. They are forced to disrupt that to non-zero, and you can again respond by zeroing it. Since sticks are finite and are only being removed, you will eventually present them with a zero board and therefore win. This strategy doesn't need any look-ahead whatsoever, just take each stack in turn and flip the relevant bits; if the stack reduces in size, make that your move (there will be at least one that reduces).

As for last-stick-loses, you need to control the game by doing the same as above, but change the method at the last minute in such a way that your opponent is forced to present you with an XOR of 0. You need to alter the method when presented with one stack greater than 1 and the rest equal to 1. You need to either take all but 1 from the big stack or take all of the big stack, whichever results in an XOR of 1 (basically leave an odd number of 1-stacks). You win!

-- TLH
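
TLH's recipe can be written out as a move-picking function; a sketch of the strategy as I read it (the name `nim_move` and the (index, new size) convention are mine):

```python
from functools import reduce
from operator import xor

def nim_move(heaps):
    """Return (heap index, new size) for a winning move in last-stick-loses
    play, or None if the position is lost against best play."""
    big = [i for i, h in enumerate(heaps) if h > 1]
    if len(big) <= 1:
        # Endgame: leave an odd number of 1-stacks for the opponent.
        ones = sum(1 for h in heaps if h == 1)
        if not big:
            return None if ones % 2 == 1 else (heaps.index(1), 0)
        i = big[0]
        return (i, 0) if ones % 2 == 1 else (i, 1)
    # Otherwise play normal Nim: present the opponent with XOR 0.
    x = reduce(xor, heaps)
    if x == 0:
        return None
    for i, h in enumerate(heaps):
        if h ^ x < h:
            return (i, h ^ x)
```

For example, from 3 5 7 (XOR = 1) it reduces the 3-stack to 2, leaving XOR 0; from 1 1 1 7 it takes all 7; from 1 1 1 1 7 it takes 6.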

I didn't exactly follow the above logic, but if you take all but 1 from the big stack, your opponent takes two from the end of the stack of 5, and now has achieved symmetry and can beat you without breaking a sweat. So your conclusion is wrong. I'm pretty sure Nix has it right.


You are correct, you did not follow :P

'The big stack' in that paragraph refers to a late game state where all but one remaining stacks are of size 1, and the other one is of size greater than 1 (the big stack). At this point you abandon the XOR 0 recipe and take from that big stack, to leave an odd number of 1-stacks. This will either mean taking the whole stack or turning it into a 1-stack.

For example, if presented with 1 1 1 7, you need to take all 7. If presented with 1 1 1 1 7, you need to take 6. The point is that, to get to that stage, you need to follow the XOR 0 recipe then abandon it in favour of leaving an odd number of 1-stacks.

Read that paragraph again, it does not refer to starting out. -- TLH

After playing a number of games out in my head, I can't find a winning strategy for player 1, which the above analysis seems to indicate should exist. Could you elaborate on the strategy, or perhaps play out a game as player 1? -Xen

My thought on this was as follows: at the simplest level, if you ended up with a block of 2 vs a block of 2, you want to go second, as whatever your opponent does, you can control the board so that he goes last. If you have a block of 3 vs a block of 3, the same argument works - and similarly for 4 vs 4 and, more generally, n vs n: you would always want to go second, as you can then completely determine whether you or your opponent finishes first. For proof of this, simply mirror your opponent's moves and create multiple smaller 2 vs 2 and 3 vs 3 blocks whatever they do.

I then imagine the 3-5-7 game above as instead being split into blocks - pairing off the 3 on the top row together with three from the row of 5, and the remaining 2 from the row of 5 with a further 2 from the row of 7. If I group these pairs off in my mind and play them separately, I should have complete control over who finishes first or last in each of them, and can manipulate that to my advantage. Hence I'd choose to play first here, and immediately cross off 5 in a row from the line of 7. Does this logic work? -Ed

Dumbass, M.D.[edit]

Add another A pill to your hand, then grind the four pills with a mortar and pestle. Divide the powder into two piles; take one pile today and the other tomorrow. ~Stephen

Well, my first thought would be to count all the remaining pills, so I know if I've got two A or two B in my hand.

My second thought would be to set aside all three, take one pill out at a time from my remaining supply, and use the time I have to sue my doctor for malpractice. However, I suspect that's not the solution you want. -John 23:39, 11 February 2009 (UTC)

Get out another A pill then cut all the pills in half and then take one half of each pill that day and the other half the next day. -dw

But then you might still take the wrong halves. My solution would be to split the 2*A and 2*B up infinitely, i.e. grind them into a powder, mix it properly, and cut that in half. Then you'd have roughly 2*AB. I don't know if the problem statement requires an integer solution and the pills have to be taken as a whole, can anyone clear that up? --Steve
I think the half solution is fine - after chopping you have (a,a) (b,b) (b,b). If you remove just the left halves, you know you have a full B pill and half an A pill in your hand. Then, you get another A pill, cut it in half, and add that to your hand. You now have a full A and a full B pill in your hand, with the remaining half pills being what you should take tomorrow (another full A and a full B). From then on, you return to your original routine, possibly after pouring food colouring into one bottle ;) - Tim J 01:17, 12 February 2009 (UTC)
You can actually make sure that you take the right halves. Take out another A pill, so you have 2 As and 2 Bs; put them in a row and cut them all in half. You don't know which is which, but you *do* know that each half is the same as the opposite half; they are in a row (a 4x2 matrix of halves now), so shift one row to the right by 3. Up to symmetries and rotations, the only options for arrangement are 1212 and 1122 (with 1 mapping to either A or B); try it out on paper and see what you get. You'll see that shifting one row of halves to the right by three (with wrap-around) in every case gives opposite pill types at the same position in each row. So you have 4 pairs of half-pills; take two pairs for each of the next two days, and each pair will be 1/2 A and 1/2 B. The 'grind it all up into a powder and take half the powder' solution is smart too; I don't know how precisely you can measure (or cut) for the sake of answering the riddle, but it seems to me that some cutting must be involved. Wsa 21:19, 13 February 2009 (UTC)
ED - On closer inspection, Tim J has it just as certainly as I do. As usual I'm overthinking things. Wsa 21:23, 13 February 2009 (UTC)

-_- don't we have 2 hands? A in one hand and B in another?

I'm fairly sure I have a guaranteed answer, as long as we're allowed to cut the pills in half (like the people above me are saying). I would take out another A pill, and line up the four pills on my table. At this point I have no idea which pills are which, but that doesn't matter. I then, very carefully, cut each pill laterally in half, so that instead of worrying about left and right halves, you now have four top and four bottom halves. But no matter how the pills are arranged in their line, if you then consume (say) all of the top halves, you must have two A-halves and two B-halves. Tomorrow, you take all the bottom halves as you finish your application to sue the doctor who gave you identical, ten million dollar, life-or-death pills, and breathe a small sigh of relief. Does that sound about right to anyone else? -Azukar

why does it matter if you cut them laterally? 1/2 of a pill is 1/2 of a pill. if you just add an A to the line, cut them all in half and take half, you will have the right amount, no matter which way you slice it. --psolms

The lateral solution has the advantage that it's also practical - one cut, and a sweep, and you've got the right amount. I don't think it's logically any different. Tim J 05:24, 18 February 2009 (UTC)
Azukar: I was really just reacting to @ED's solution, which seemed unnecessarily complicated. @psolms You're right, cutting laterally or whatever the opposite of laterally is (literally?) doesn't have any logical advantage, but @Tim J has it right: it's more practical and has less chance of screwing yourself over even further by mixing up halves somehow. -Azukar
That said, can we get some kind of confirmation from the originator of this puzzle, though? Randall or whoever first uploaded it? -Azukar

I came up with the same answer: add another A pill and cut them all in half, eat, then live another precarious day. I know this is the answer, but technically speaking, one is not supposed to cut pills in half willy-nilly (though I'm sure if the riddle said "identical pills right down to the dividing line in the center" the jig would be up). If there is a line dividing the pill (some do, some don't), and you have consulted with a doctor or the manufacturer, they may say it's ok. One must check because the manufacturer may not guarantee that there is exactly 50% of the "medicinal" components in either half of the pill. The same goes for nicotine patches, gel pills, especially slow release pills, etc. Pills can be made of very low actual amounts of medicinal ingredients and high amounts of filler, so the distribution of said medicinal ingredients may not be uniform. I was impressed that at least one person can conceive of something more expensive than printer ink. --Durak March 20, 2009 17:09 UTC

Add another A pill to equalise your remaining pills, then sell those 4 pills for $40,000,000. Doesn't matter how you label them as they're indistinguishable and it's not you who'll die from it. Use the money to buy new, labelled pills from your doctor. This avoids cutting pills in half. -- TLH

Cutting or grinding the pills works fine, but there is a problem if the pills have a liquid centre instead of a solid one. In this scenario what would you do? Simple answer -> die.

So long as we're interpreting the rules to our own advantage (e.g. liquid centers, precision cutting and so forth), I propose that so long as 1 of each type is taken (not necessarily ingested) each calendar day, you live. Therefore, add one more A pill and take them all at midnight.

Set the A pill and the 2 B pills someplace safe. Go back to taking your meds like normal. Your problem will either be fixed before your meds run out or you can renew it 2 days early. If you have to risk death, do it as late as possible.

Easier (in my opinion) solution: Cut each pill in 3 equal parts instead of 2. Make 3 groups of parts containing one part from each pill. You are now sure to have 3 groups each formed by 1/3 of A pill and 2/3 of B pill. Now grab 2 A pills and 1 B pill. Cut them in 3 parts each and add to the groups you formed before 2/3 of A pill and 1/3 of B pill. Now you have in each group 3/3 of A pills and 3/3 of B pills. Take one group a day.
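
The thirds arithmetic above can be checked with exact fractions (a trivial sketch; the dict layout is mine):

```python
from fractions import Fraction

third = Fraction(1, 3)
# Each of the 3 groups gets one third of each unknown pill in hand
# (one A and two B), then 2/3 of a fresh A and 1/3 of a fresh B.
group = {"A": third * 1, "B": third * 2}
group["A"] += 2 * third
group["B"] += 1 * third
# Each group now contains exactly one full A dose and one full B dose.
```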

Plenty of other properties to help identify: ~Jason
Place all three pills into separate glasses of water - two may behave differently (change color, fizz, etc.). Surely you're allowed to take the pills with water. If not, consider vomiting into 3 glasses and seeing if the pills behave differently. I'd be tempted to try non-destructive methods before grinding to dust, in case the pills need to be time-released throughout the day.
Mass - weigh each. Find someone with a mass spectrometer; you've got time. You can set these aside while you take another batch.
Use a microscope, micrometer, and a profilometer. There may be subtle differences in the molds. Like an old typewriter, the lettering - if any - may not be perfect; mold parting lines may not be perfect; mold texture and exact sizes of molded parts tend to be consistent per mold, but not from one mold to the next. Each pill in one bottle may measure the same to within a micron.
Electrical properties - check capacitance, resistance, dielectric strength. Magnetic properties - reluctance. X-ray, MRI.
Thermal emissivity - pills from one bottle may "look" 5 deg colder on infrared.
Ultrasonic - check the speed of sound across the pills.

PEARLS! White Pearls and Black Pearls[edit]


Yragle the pirate has 100 white pearls and 100 black pearls. The white pearls are worthless, the black pearls are priceless. He will let you arrange the pearls in two sacks, and then after he mixes up the pearls in each bag and shuffles the bags he will let you pick a pearl from one of the two bags. How should you distribute the pearls between the two sacks to maximize your odds of getting a black pearl?

--again dunno if this is hard enough, but its fun

A single black pearl in one bag and all the rest in the other? Probability slightly less than 75%. I think I saw a similar problem in a Martin Gardner book. John Fouhy. 22:53, 11 February 2009 (UTC)

That's the correct answer in the puzzle's context. Realistically, though, you'll be able to tell which bag is holding nothing and which is bulging with pearls, so your odds are more probably 100% if you're paying any attention. (You could argue that he's giving you a random bag, but if he's the one choosing the bags, he'll give you the one with the white pearls.) -- 00:41, 12 February 2009 (UTC)

The bag is chosen by coinflip 06:56, 16 February 2009 (UTC)

My thought process was as follows: black/all: 50/100 and 50/100 in each bag, giving 50%; then 100/100 and 0/100, giving 50%. Then, looking at the talk page, I initially understood the solution as written to be 1/100 and 99/100, which didn't make sense, before realising 1/1 and 99/199 gives almost 75%. I miss maths. Design has left me dumb.

The probability is 74.87%, which makes sense. After all, the one bag with only black pearls gives you a black 100% of the time, and you choose it half the time, which means that even if the other bag had no blacks, you'd have a 50% chance of getting a black. The almost-even chance of getting a black in the other bag is gravy at that point.
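
The 1-and-the-rest split really is optimal; a brute force over every possible division confirms it (the function name is mine):

```python
from fractions import Fraction

def p_black(b1, w1, B=100, W=100):
    """P(black) with b1 black and w1 white pearls in bag 1, the rest in
    bag 2, and a fair coin choosing the bag."""
    b2, w2 = B - b1, W - w1
    p1 = Fraction(b1, b1 + w1)
    p2 = Fraction(b2, b2 + w2)
    return (p1 + p2) / 2

# Consider every split that leaves both bags non-empty.
best = max(p_black(b, w) for b in range(101) for w in range(101)
           if 0 < b + w < 200)
# best == 149/199, about 74.87%, achieved by 1 black pearl alone in one bag.
```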

It doesn't matter: the probability of getting a black pearl is always 1/2. Suppose you have P balls in bag 1 and 100 - P in bag 2; the probability of getting a black pearl is calculated this way:

P1 = Pr( pearl = black | coin = heads ) = P/100

P2 = Pr( pearl = black | coin = tails ) = 1- P/100

Then since the coin is fair:

Pr ( pearl = black ) = 1/2 P1 + 1/2 P2 = 1/2

So this was, in a way, a tricky question. Bunder 00:35, 14 April 2009 (UTC)

Your formulae don't make any sense (to me anyway). If P1 is supposed to be the probability of getting a black pearl from bag 1 given that you're selecting from bag 1, then that would be (number of black pearls in bag 1) / (number of pearls in bag 1). You don't have a variable for "number of black pearls in bag 1"; and why do you still have a number 100 floating around in connection with bag 1? Or do you mean that P is the number of black pearls in bag 1? - If so, it looks like you're assuming that each bag must contain 100 pearls total, which isn't specified in the question. 13:53, 4 May 2009 (UTC)

Bit Algorithm[edit]

 int count(int n) {
   int c = 0;
   while (n) {
     n = n & (n - 1);  /* clears the lowest set bit */
     c++;
   }
   return c;
 }

I don't know who submitted the above, but there are far better—just look at hamming weight (aka population count) on wikipedia for some examples. Heck, there's even an entire article in "Beautiful Code" about "the quest for a faster population count" (or something to that effect). -- 01:27, 25 February 2009 (UTC)

Yes, there are better, but this is the best answer given that it's both intuitive and correct.

n ^= n & -n;

also works. (n & -n) is useful if you want the least significant bit for some reason. -- 17:04, 13 March 2009 (UTC)

Wait -- which of these meets this criterion from the puzzle:
"Your algorithm [...] should be in O(i) where i is the number of 1-bits inside n."
All of these seem to be O(log(n)) -- if you add a ton of 0s, the algorithm takes longer. Or are we assuming that & is an atomic operation taking constant time no matter the bit size?

<<< This is the sort of "I know a trick and I want to know if you know it too" bullshit interview question that makes me ask for 25% more salary before I go work for them. "Propose an algorithm" is just weasel words. There's no "algorithm" to be debated - you either know the answer and can write a correct solution down in seconds, or you don't and (having asked a lot of experienced coders if they know the trick of doing a clear-lowest-bit in the past) you will probably never figure it out short of brute-forcing a bunch of arbitrary operations together to see what works. Logical deduction is not involved, except the bit of your brain that says "sod this - I'm off to google for fast popcount tricks."

I think this page should have a separate section for "not interesting questions - do not ask these in interviews", and this is the poster-child. It would save us all some time. >>> - TomF

I don't recognise the coding language you guys are using (I know very few languages). Could you explain the steps of your algorithm in English, and also the meaning of O(i)? SPACKlick
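For SPACKlick: the snippet above is C. The trick is that subtracting 1 from n flips its lowest 1-bit and every 0 below it, so n & (n-1) is n with its lowest set bit cleared. The loop therefore executes once per 1-bit, which is what O(i) means here: the running time is proportional to the number i of 1-bits, not to the total width of the number. The same idea transcribed into Python:

```python
def popcount(n):
    """Count 1-bits by repeatedly clearing the lowest set bit.

    n - 1 flips the lowest 1-bit of n and every 0 below it, so
    n & (n - 1) is n with its lowest 1-bit cleared. The loop body
    runs once per 1-bit, giving O(i) iterations for i set bits.
    """
    count = 0
    while n:
        n &= n - 1   # clear the lowest set bit
        count += 1
    return count

print(popcount(0b101101))  # → 4  (bits 0, 2, 3, 5 are set)
```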

One-lane Highway[edit]

The number of clumps is a discrete random variable, so the expected value is the probability-weighted sum of the possible number of clumps. Call the expected value E(N).

If N=1, the number of clumps is always zero: E(1) = 0. If N=2, there are two possibilities: the car in front is faster or slower, resulting in 0 or 1 clumps. These are equally likely, so E(2) = 0·(1/2) + 1·(1/2) = 1/2.

If N>2, examine the three cars at the tail of the line. Each is either faster than the car in front of it or slower. Presumably the probability of each case is the same. If it is faster, it will clump with that car. If it is slower, it will fall back. Now remove the last car; the number of clumps is reduced (by one) only if this car is faster than the car in front of it, and that car is slower than the next car. If this car was slower, it wasn't part of a clump, and if the car in front of it was faster than the next one, they still form a now-smaller clump.

So for N>2, E(N) = (3/4)E(N-1) + (1/4)(E(N-1) + 1) = E(N-1) + 1/4 = (N-2)/4 + E(2) = N/4


I don't believe this is the case... consider the speeds (in whatever units you prefer): 1, 3, 2, 4. Now, these will form one clump, travelling at a speed of 1... and removing the last car wouldn't remove a clump. However, your argument claims it *would* (4 is faster than 2, 2 is slower than 3). A more accurate criterion: the last car will fall back and become its own clump iff it's the slowest car of the mob... simply put, the slowest car of the mob will always be the head of the last clump. It's reasonable to assume that the last car will be the slowest with probability 1/N. So E(N) = E(N-1) + 1/N... that is, E(N) is the sum of the first N terms of the harmonic series. Also, a minor point: E(1) = 1, not 0 (one car will necessarily form one clump). Phlip 11:04, 12 February 2009 (UTC)
Wait... rereading... does a car on its own count as a clump or not? I guess not... that makes things trickier. But my "1, 3, 2, 4" objection still holds regardless. Phlip 11:05, 12 February 2009 (UTC)

Yes, my analysis was wrong. Counting a car on its own as a clump would certainly simplify things, though!

I'll have to re-think this. --Evan

As I see it, this problem reduces to 'Estimate the number of cars travelling slower than all cars in front of it'. For the first car, the probability of this will be 1, for the second 1/2, for the 3rd 1/4 etc. For the nth it would be 2^(1-n). So it seems to me that the estimated value for n cars would just be the sum of these from 1 to n, which is just 2-2^(1-n). It's interesting how if this is right, even as n approaches infinity the estimated number of clumps is never going to surpass 2. Not entirely sure about my second assertion though :s --Luke

I am not an expert at probability, but it seems to me that the probability for the third car is 1/3. If you have three numbers, the probability of the first being the largest is 1/3, not 1/4. In that case, the answer becomes the sum of the reciprocals of the integers from 1 to 100. This assumes that a single car by itself is a clump. -Tiax

  • This one problem seems... specially... problematic to me. Look: to start solving the problem you need to determine the chance of the velocity of random car 'A' being higher or lower than the velocity of car 'B'. One would assume it is '50-50', but it doesn't look like so... for example, what's the probability of car 'A' being 10mph slower than 'B', which can go from, say, 1 to 50mph? If car 'B' is at 15 mph the chance is 0. My point is, the chances of one car being faster than the other are higher than the chances of it being slower, because cars can't be at 0 speed and can't have negative speed (can't go in reverse, since it's a one-way highway). If this is '(Another interview question)' I hope it is for a traffic engineer job application test interview ;) 00:35, 14 February 2009 (UTC)

  • I worked out the solution for n=1,2,3,4 and the sequence goes 0,1/2,5/6,13/12, or the sum of 1/i for all i from 2 to n. It seems self-evident to me that the sequence will continue in the same manner, although I would be hard-pressed to prove it; this agrees with what Tiax said. -Paul
  • Phlip - my take on it yields the same recursion yours does; only difference is that E(0) and E(1) are both 0. My take on it is at Chasing Cars for the truly bored... DukeEgr93 01:10, 15 February 2009 (UTC)

I think I've found the easiest way to prove the harmonic series result. In the 1/n of the arrangements of n cars in which the last car is the slowest, it falls behind, so the number of clumps is the same as if the last car were not there. In the other (n-1)/n arrangements, the last car is part of a clump containing the slowest car and zero or more other cars. If there are other cars in the clump, it would exist without the final car. Thus the presence of the last car creates a clump only in the 1/(n-1) of these arrangements in which the clump consists of the last car and the slowest car. So a clump is added in ((n-1)/n)(1/(n-1)) = 1/n of the possible arrangements, and E(n) = E(n-1) + 1/n. Since E(1) = 0 we have E(n) = 1/2 + 1/3 + 1/4 + ... + 1/n for n>1. -- Evan

  • Ohhhh - that's good. The only way adding a new car to the end will form a new clump is if (a) the new car is not the slowest and (b) the penultimate car was the slowest of the n-1 cars. As an aside - how does XKCDB not have math tags? DukeEgr93 15:14, 15 February 2009 (UTC)

Here's my take. We can consider the relative speeds of the N cars as integers from 1 to N. Thus the situation is represented as a permutation of {1,2,...,N}. For each i, 1 <= i <= N, let X_i be a random variable which is 1 if car with speed i is at the head of a clump, and 0 otherwise. In our permutation world, this means X_i is 1 iff the numbers 1,2,...,i-1 come before i in our permutation. The probability of this happening is 1/i (since each of the numbers 1,2,...,i has an equal chance of appearing last relative to the others), so E(X_i)=1/i. By the linearity of expectation, we have E(N) = E(X_1) + E(X_2) + ... + E(X_N) = 1 + 1/2 + ... + 1/N, which approaches log N as N -> infinity. -rzh

Yes, computer simulation confirms that the expected number of clumps is definitely always the sum of the harmonic series to the nth term. Even more interesting, though, is that I've come across a shocking realisation as to why this is the case. It turns out that for any clump of size x, the expected number of clumps of that size is always 1/x, so long as x <= n. This explains why for n cars the expected number of clumps of any size will be 1 + 1/2 + ... + 1/n, and it inadvertently solves the problem of what happens if you don't regard a lone car as a clump: the expected number of clumps would then just be the harmonic series sum minus one (yes, DukeEgr93, math tags certainly would be useful). I've manually worked this out for all n<5, and shown by simulation that it holds for all n<20, but I lack both an elegant proof and the capacity to think of one at the moment, so there you go. At least now we know the answer. --Luke

Another approach that gets you the harmonic series (which is definitely the right answer) is to consider adding the cars to the highway at random in order of speed, from slowest to fastest. The slowest car, alone, is a clump. The next slowest is either ahead of or behind it, so it forms a new clump with probability 1/2. (And expected value 1 * 1/2 = 1/2.) The next slowest car has three possible positions (in front, behind the front car, last), and forms a new clump only in one, so that's 1/3. And so on -- car N only forms a new clump if it's placed at the front of the pack, with probability 1/N. --dj
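For anyone who wants to reproduce the simulations mentioned above, here is one possible sketch. The modelling choices (distinct random speeds, each car stuck behind the slowest car ahead of it, and a "clump" meaning two or more cars) are my reading of the puzzle, and the function name is mine:

```python
import random

def expected_clumps(n, trials=100_000):
    """Monte Carlo estimate of the expected number of clumps
    (groups of 2+ cars) when n cars with distinct random speeds
    travel a one-lane road; each car is stuck behind the slowest
    car ahead of it, so group leaders are prefix minima."""
    total = 0
    for _ in range(trials):
        speeds = random.sample(range(n), n)  # front of the line first
        clumps = 0
        size = 1
        slowest = speeds[0]                  # current group leader's speed
        for s in speeds[1:]:
            if s > slowest:                  # catches the group ahead
                size += 1
            else:                            # slower: leads a new group
                if size >= 2:
                    clumps += 1
                size = 1
                slowest = s
        if size >= 2:
            clumps += 1
        total += clumps
    return total / trials

target = sum(1 / k for k in range(2, 11))    # 1/2 + ... + 1/10 ≈ 1.929
print(expected_clumps(10), target)           # the two should nearly agree
```

Counting a lone car as a clump amounts to dropping the size >= 2 checks, and the estimate then approaches the full harmonic number 1 + 1/2 + ... + 1/n instead.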

I just thought of a solution that at first seemed the exact opposite of the consensus, but is actually pretty close once you do the math. In my case, I assume that one car is indeed a clump.

Let's call the number of clumps for N cars C(N). We start by looking at the slowest car. It, and all the other cars behind it form a clump. The expected position of the slowest car amongst the N is (N+1)/2, so the expectancy of the number of cars in front of it is (N-1)/2. All the cars in front will clump around independently of the "slow clump", so they have their own expectancy C((N-1)/2) of clumps. Now, we have a recursive relation: C(N)=1+C((N-1)/2).

For example, we have 15 cars. The expected placement of the slowest car will be eighth, so cars 8-15 will clump together. Now we are left with the front 7. They have their own slowest car, whose expected placement is 4th, clumping cars 4-7 together. The next expected clump is 2-3, and then 1 alone. While this "expected placing" doesn't seem like a rigorous mathematical proof, it has been known to work remarkably well for such problems.

Solving the recursion equation, or just looking at the example, leads to an answer of C(N)=log2(N+1). What bothers me is that the harmonic sum diverges like a natural logarithm and not a log of base two; more specifically, Sum(1/N) ~ ln(N) + 0.577..., which would mean that for large numbers I'm off by a constant factor of ln(2). Why this is so I'm not sure. Perhaps the true "expected" position should be 1/e places from the end to even out the true probability.--Yashkaf 19:38, 23 March 2009 (UTC)

Yashkaf, I can see two problems with the way you set up your recursion. First, you sometimes need to know the expected number of clumps for a non-integer number of cars, which doesn't make sense. Second, you are commuting expectations around in a way that I am not sure is justified (e.g. using the expected position of the slowest car in place of its actual position.)

The way I think of this problem is pretty similar to what people have already said. Consider the starting positions of the cars as a permutation of N, with 1 representing the slowest car, 2 the next slowest, and so on. Then the number of clumps is just the number of right-to-left minima in this permutation. These are counted by signless Stirling numbers of the first kind. That is, c(N,k) is equal to the number of permutations of N with k right-to-left minima. These are known to satisfy the recurrence c(N+1,k) = Nc(N,k) + c(N,k-1), which can be proven in a number of ways, including ones that are equivalent to some of the proofs above. Then the expected number of clumps is 1/N! * sum_k c(N,k)*k. This can be expanded using the recurrence to 1/N! * sum_k (kN+1)c(N-1,k) = 1/N! * sum_k c(N-1,k) + 1/(N-1)! sum_k k*c(N-1,k) = 1/N + 1/(N-1)! * sum_k k*c(N-1,k). So by induction the expected number of clumps is 1/N + 1/(N-1) + ... + 1/2 + 1. -- 01:52, 26 March 2009 (UTC)

We have N cars; the expected location of the slowest car is in the middle, thus forming one group from the back up to the mid point (we have at least 1 group). Consider all the cars ahead of this group as the new subset M, perform the same thing, and we'll have another group that goes up to the 3/4 N mark from the back, for 2 groups. We can do this as long as the next subset X is at least 1 car long. So for N starting cars we'd expect X clumps where 2^X = N, giving X = lnN/ln2. (This makes more sense for N being a large number and doesn't make sense at all for N = 2, as in that situation we'd expect 1.5 groups whereas my formula would give 1.) -- EDIT: just saw someone already posted a similar answer ... damn ... (Another problem with this method is that it tends to overestimate the number of clumps, as verified by simulation. The reason, as far as I can work out, is that the average number of clumps is different from the number of clumps resulting from the average positions of the cars ... that is, the only way to get N clumps is if the cars were lined up in order of speed, but there are (N-1)! ways to get 1 clump.) - Dominic Leung 31 Aug 2009

2 envelope exchange problem[edit]

  • So, assuming m and 2m are in the envelopes, the expected value when picking one at random would be 0.5 m + 0.5 2m = 1.5 m. Once you pick an envelope, there is some value x in the envelope. The expected value of a swap would be 0.5 (x/2) + 0.5 (2x) = 1.25 x.
Hmm... Only thing I can think of is that there are two independent variables each with two states - which envelope is picked and whether they are swapped. Each has an equal probability (0.5 per envelope and 0.5 chance of swapping) so
E = p(pick m & ~swap) m + p(pick 2m & ~swap) 2m + p(pick m & swap) 2m + p(pick 2m & swap) m
E = 0.25 (6m) = 1.5m
E_~swap = (p(pick m & ~swap) m + p(pick 2m & ~swap) 2m)/p(~swap) = (0.25 * 3m) / (0.5) = 1.5m
E_swap = (p(pick m & swap) 2m + p(pick 2m & swap) m)/p( swap) = (0.25 * 3m) / (0.5) = 1.5m
Maybe? I feel like swapping should not matter since, were there two players, there's no way it is preferable for both to swap... DukeEgr93 16:08, 15 February 2009 (UTC)

So what's the probability distribution? If you tell me it's uniform and finite, there's an easy solution. If you know what's in the envelope, swap if it's below half the maximum value. If you don't, well, it doesn't matter. If you tell me the distribution is such that the density decreases as m goes to infinity, I rather suspect that it will work out that it doesn't matter whether you swap or not (as there's more than a 50% chance you'll get the lower value if you swap). If you tell me that the distribution is uniform and infinite, I haven't the faintest bloody idea what you're talking about, and I know perfectly well you can't tell me which real number is the probability that m is between 1 and 100. Thornley 03:01, 18 February 2009 (UTC)

Unless you know the probability distribution, this is just like The Necktie Paradox below. You either picked m or 2m. If you picked m, you gain m from a swap, otherwise you lose m, for an expected value of m/2 - m/2=0. If you know the probability distribution p(x) (the integral of p(x) from a to b is the probability that a <= m <=b) and you can look in your envelope, then you can do better. You can come up with a strategy and use it to define a function S(x), such that when your envelope contains x, S(x)=1 if you will swap, otherwise S(x)=0. The expected return of your strategy for a given m is [2mS(m) + m(1-S(m))]/2 + [mS(2m) + 2m(1-S(2m))]/2 = m(3 + S(m) - S(2m))/2. Since the expected return of either never switching or always switching is 3m/2, the expected advantage of your strategy for a given m is (S(m) - S(2m))m/2. Thus the expected advantage of your strategy in general is integral(xp(x)(S(x) - S(2x))/2)

For example, if p(x) = 1/n for 0 < x < n and zero otherwise (uniform, finite) then the expected advantage is integral(x(S(x) - S(2x))/2n) from 0 to n) = sum(b^2 - a^2)/4n over all ranges (a,b) where S(x) > S(2x) minus sum(b^2 - a^2)/4n over all ranges (a,b) where S(x) < S(2x). The optimal strategy is S(x) = {1:x<n}, for which S(x) > S(2x) from n/2 to n and S(x) < S(2x) nowhere, giving an expected advantage of (n^2 - n^2/4)/4n = 3n/16.
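That uniform-finite case is easy to sanity-check numerically. In this rough sketch (function and variable names are mine), m is drawn uniformly from (0, n), the envelopes hold m and 2m, and the strategy "swap iff the observed value is below n" is compared against never swapping; the measured advantage should come out near 3n/16:

```python
import random

def average_winnings(n, swap_below, trials=200_000):
    """m is uniform on (0, n); envelopes hold m and 2m. We open one
    at random and swap iff its value is below the threshold."""
    total = 0.0
    for _ in range(trials):
        m = random.uniform(0, n)
        held, other = (m, 2 * m) if random.random() < 0.5 else (2 * m, m)
        total += other if held < swap_below else held
    return total / trials

n = 1.0
always_keep = average_winnings(n, swap_below=0)   # expected 3n/4
smart = average_winnings(n, swap_below=n)         # expected 3n/4 + 3n/16
print(smart - always_keep)                        # ≈ 3/16 = 0.1875
```

Setting swap_below to anything at or above 2n makes you always swap, and the advantage collapses back to zero, in line with the "always switching gains nothing" argument above.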


All this math is silly. The paradox implies an unknown distribution. The paradox arises when you assign a dollar value to just one envelope and you don't know if it's the greater or the lesser amount. No one has posted a resolution yet. -Ari

  • I certainly have, but since it was the least interesting aspect of the problem, I only spent two sentences on it. Here it is again: "You either picked m or 2m. If you picked m, you gain m from a swap, otherwise you lose m, for an expected value of m/2 - m/2=0." As I said under The Necktie Paradox, if you don't know the probability distribution, then knowing the dollar value of just one envelope is useless. Trying to incorporate it into your analysis gets you tangled up in the (unknown) probability that your known value is the lesser value. --Evan
  • If you think that all the math I did for the case when the probability distribution is known "is silly," then you are incorrect. I've successfully tested, via software simulation, my solution for the case in which the distribution is uniform and finite. --Evan

For any reader who is sure that I must be wrong, based on an analysis such as the one found at Wikipedia, I should clarify that my strategy-based analysis fails for the discrete distribution given there, and probably for the exact same set of other possible infinite distributions described on that page for which P(X=a) > 0.5P(X=a/2). This does not, however, invalidate either my approach or the Wikipedia page's conditional probability approach for the infinitely large and often interesting set of distributions for which that peculiar property does not hold.

In fact, distributions for which the "paradox" arises are very weird distributions indeed. I think that there cannot be a physical system that approximates these distributions without losing the paradoxical property due to having a finite upper bound. For example, if you set k=0.998 in the distribution given by Wikipedia, then there is a better-than-half probability that the envelope contains more money than there are particles in the universe. Set it near its lower bound of 0.5, and there is a small (but not vanishingly small) probability that the envelope contains more than a quadrillion dollars.



Again, it becomes a paradox when you plug in a number; until then it's not a paradox. We're assuming each envelope is 50/50 to be greater than the other. Plug in a number and do the arithmetic (it's quite simple math) and you'll see that you haven't solved the paradox.


"assuming m and 2m are in the envelopes, the expected value when picking one at random would be 0.5 m + 0.5 2m = 1.5 m"

Extend that statement to cover both envelopes; the expected value of each envelope is 1.5m. Now go through the situation - you pick an envelope. It has expected value 1.5m. You are offered to switch to an envelope that also has expected value 1.5m. Hence, there is no advantage or disadvantage to switching.

Furthermore, consider that assuming the value of yours is x leads to the expected value of the other being 1.25x. Then you could flip your selection for an expected 25% rise in payout. But then you could flip again for another expected 25% rise in payout. This is clearly absurd, so the error must be in the initial assumption. That is, fixing the value of one and working with an expected value for the other in terms of the first was an invalid start, since the first is dependent on the same random variable. We have to do as above and consider them both as expected values.

Finally, feel free to add dollar values. Say $10 and $20. The expected value of each is $15 and that's all there is to it. As a point of interest, you can also generalise the problem to having two envelopes each independently containing $10 or $20 - the expected value of each is still $15. The problem just rules out the cases where they both have $10 or both have $20.

To summarise, I'm saying that the 'switching is better' argument contains an initial error in fixing a value that is still considered random, so there ought to be no paradox. -- TLH

TLH - Initially you treated both envelopes as unknowns, then skipped to treating them both as knowns. Neither of these is paradoxical, or really relevant. The issue arises when you look inside the first envelope you chose and find, say, $20. You then calculate the expected value of the other envelope: 40×.5 + 10×.5 = $25. This is higher than 20 and causes you to switch. This logic will apply to **any** initial value, so you don't even need to open the envelope. You avoided this by assuming you know neither or both envelopes, which doesn't answer the paradox. -Xen

The problem is not paradoxical at all. People just don't see the invalid assumption they make when solving it.

Say I fill five **pairs** of envelopes in the following manner: In four pairs I put $5 in one and $10 in the other. In the fifth pair, I put $10 in one and $20 in the other. I pick one of these pairs at random, and give it to you to play this game.

When you pick an envelope, you have a 50% chance to have the lower envelope, and a 50% chance to have the higher envelope.

You also have a 40% chance to have $5, a 50% chance to have $10, and a 10% chance to have $20.

Finally, if you were to open your envelope and see $5, there is a 0% chance you have the higher envelope, and you should switch. There is a 100% chance you have the higher envelope if you see $20, and you should not switch. But if you see $10, there is an 80% chance that you have the higher envelope. The expected value of the other envelope is .8*$5+.2*$20=$8, so you should not switch.

The point is that the 50% chance that you are higher or lower applies only if you do not attach a value to the number. If you do, even if it is the unknown "m", you also need to know the chances that the pair was filled with (m/2,m) compared to (m,2m). And you can't calculate an expectation if you don't know all of the possibilities. - JDJ
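JDJ's five-pair setup is small enough to check by exhaustive enumeration rather than argument. In this sketch (the enumeration scheme is mine), every (pair, envelope) combination is equally likely, so conditioning on seeing $10 is just filtering those outcomes:

```python
from fractions import Fraction

# JDJ's setup: four pairs of ($5, $10) and one pair of ($10, $20).
pairs = [(5, 10)] * 4 + [(10, 20)]

# Every equally likely (pair, envelope) outcome: (amount held, amount in the other).
outcomes = [(held, other)
            for lo, hi in pairs
            for held, other in ((lo, hi), (hi, lo))]

# Condition on opening your envelope and seeing $10.
seen_10 = [(held, other) for held, other in outcomes if held == 10]
p_higher = Fraction(sum(1 for h, o in seen_10 if h > o), len(seen_10))
e_other = Fraction(sum(o for _, o in seen_10), len(seen_10))

print(p_higher)  # 4/5
print(e_other)   # 8
```

This reproduces the 80% figure and the $8 expectation above: with this filling procedure, seeing $10 means you should keep it.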

Manhole covers[edit]

I think I've seen this question already somewhere... My logical guess is that, as a circle minimizes the perimeter for a given area, you'd waste less iron for the cover, which is very expensive. Manufacturing costs should also be lower for round holes, since they can be drilled easily and are more stable structurally because they don't have edges (they are uniform, cylindrical). Should I also mention traditional values?... 00:42, 14 February 2009 (UTC)

Actually, I believe it's due to the fact that any other (feasible) shape would be able to fall through its appropriate hole when positioned at the correct angle.

  • I think it is a combination of being easier to roll and not falling into their hole. Could also make them general Reuleaux polygons but...that would be hard. And finding shafts with general Reuleaux-polygonal cross-sections would be a good trick. DukeEgr93 01:17, 15 February 2009 (UTC)

I've heard that it's because of 1. Rolling capacity 2. Impossibility of falling through the hole and 3. It doesn't matter which way you put the cover back on, since it has circular symmetry.

  • in fact, some manholes in the UK are square, and some are made up of two triangular sections (therefore rectangular). so really, this problem is rather silly. -- 06:24, 16 February 2009 (UTC)

If we only wanted to make sure the manhole doesn't fall through the hole, any shape of constant width would work. Since we're casting the cover and the piece of metal that we'll embed in concrete for the manhole to rest on, I don't think any convex shape is much harder to manufacture. We have a theorem that states that the Reuleaux triangle has the least area of any curve of constant width. This is about 10% less area than a circle with the same width, so we'd use approximately that much less steel if we chose this shape (assuming we'd use the same width). But then a few of the other good properties of a circle that are mentioned above would be missing. In reality, I've seen a 7-sided Reuleaux polygon being used (I think this was on someone's weird manhole cover picture blog :). I think that was very artistic and beautiful, for a sewer cover -- max

Because rotational symmetry allows them to be rotated in order to lock / screw them into place. - Taisto

So they don't fall into the hole. -rriker

Because manholes are round. 07:23, 18 February 2009 (UTC)

Manholes aren't "drilled," they're cast from iron or steel or whatever the manufacturer pleases—you'd need one hell of a hole saw to drill a manhole, and then you'd still have to engrave the logo/emblem somehow. Oh, and so long as you've got something to create molds with, you can create any shape you might desire. At that rate, the premise of the question—that other shapes are, in fact, easier to manufacture—is quite silly.-- 01:32, 25 February 2009 (UTC)

Why not equiangular triangles? --?

Suppose you had an equilateral triangle cover that measured 1 meter on each side and was .05 meters deep. The smallest cross-section of this would be a rectangle measuring .866 meters (the triangle's height) by .05 meters. This rectangle easily fits through the equilateral triangle hole, within .116 meters of any of the sides, assuming no lip. Therefore, the reason they don't use equilateral (equiangular) triangles is that the cover can still fall through the hole. --Mark

I've spent a couple of years in the infrastructure trade, and yes, there are other types and sizes. As a general rule, the rectangular ones are used when a larger access hole is required. Say you need a length of 2m but don't need a big width; then they use a triangular-section cover, which is the interlocking rectangle/square shape discussed above. It's easy to lift and replace, and lighter than a massive circular manhole cover would be. Circular ones are used on road surfaces for a few reasons. If you put the cover back incorrectly and a car drives over it, the cover won't spin/fall, and no traffic accident. The cover is circular so it won't fall down the manhole when you are putting it back; road-surface manholes are notoriously deep (2m+) and it's a pain dragging a cover back up. The third reason is safety: if you leave the manhole open and the cover falls shut, it won't come straight down the hole and smack you on the head. (This is the main reason, I've been informed.)

You can make a triangular cover that doesn't fall through. Bulge the sides out, drawing each side with a compass centred at the opposite corner, so that the shape has constant width at every orientation. It's not a circle - try it.

You're describing a Reuleaux triangle, which is an example of a shape of constant width. There are infinitely more such shapes, but none are as easy to manufacture as a circle. Even if this is not the case, no other shape rolls as easily, and all must be oriented before placing back in the hole. 20:11, 21 March 2009 (UTC) has answers. - oxy

Many shapes that prevent the cover from falling into the manhole are possible. But only a circle allows the cover to be easily rolled away, and rolled back, without having to align the shape of the (heavy) cover with the shape of the hole. Anybody who has tried to put the cover on the hole will appreciate why this is an important factor. -JDJ

Traffic Jams 2[edit]

  • I think the center of mass of the jam moves backwards. While the accident is in place, the cars in the jam are moving at speed 0 while cars behind them are accumulating; the center of mass of the jam at this point is moving backwards with each car that piles in. Once the accident clears, assuming there are enough cars in place that they all don't just gun it, the front of the jam loses mass as cars peel away while the back of the jam still adds cars, meaning the center of mass of the jam actually moves backwards faster. The jam will clear only if the people in the front are so frustrated that they start leaving faster and with smaller delays than the cars getting clogged. DukeEgr93 15:44, 15 February 2009 (UTC)
    • On some satellite images you can actually observe this happening -- the traffic jam crawls backward along the highway like a living thing. If the flow of traffic remains roughly constant (so that cars going out are replaced by cars coming in) it can last forever!
    • Another really interesting visual of this can be found here Shockwave traffic jam recreated for first time
  • It creates a standing wave that moves backwards. There has been a fair amount of research on this type of jam. The wave eventually shrinks through on and off ramps.
  • The center of mass moves backwards if the cars have length; otherwise it's fixed.
  • The center of mass actually moves forward at 65 mph, if all drivers have perfect reaction time and acceleration. Reaction time and car length control the speed; acceleration controls dissipation (and it always dissipates backwards).

Delicious Cake[edit]

Delicious Pi[edit]

Are we allowed to use capital punishment? Enforces cake discipline and possibly leads to two parties getting half a cake each.

Person cutting picks last.

The first person cuts the cake into two portions. Each subsequent person selects any portion, and divides it into two portions. The last person does not cut, but selects a portion, out of all available portions. They then select their portions in reverse order (see below, this is incorrect) , but no one may select a piece not descended from their cutting.

The first person will cut out the correct size for him, leaving the majority out. The second will cut the same size out of the majority piece, and so on, until the second-to-last cuts the remainder in half. No one will make an unfair cut, since that would make some pieces bigger and some smaller; those who cut after him would get the bigger pieces, leaving him with the smaller.

Anyone who makes a mistake gets a smaller share, with those cutting after him getting the larger share; error punishes only the erroneous.

3-person example: A cuts first, slicing out 1/3 of the cake. B takes the 2/3 piece and divides it evenly. Perfect distribution. If A were to cut the cake in half, B would cut a small slice out of one of the halves. C would take the untouched half, B would get half, minus a sliver, and A would be left with the sliver. If A makes his cut fairly, and B divides his off-center, C takes the largest piece, B is left with the smaller piece he cut, and A gets the fair piece he cut.

If different portions of the cake have different relative value, then assume perfect knowledge, logic, and skill in cutting. The same rules apply.

  • Actually, there's a mistake here. Assuming the cutting order goes A then B, we know that A has an incentive to cut the cake into a 1/3 piece and a 2/3 piece to ensure that she would get an equal piece; however, there is no reason B should cut the 2/3 piece into equal pieces. Say B liked person C more. Then, B could cut the 2/3 up unevenly. Then, C would pick the biggest piece, B would pick the 1/3 piece that A originally picked, and A would be left with the small piece. The way to remedy this situation is to actually have A choose the cake second. Now, if B wants to ensure that she gets an equal piece, she has to cut the cake evenly.
  • So, if cutting is in the order A then B, then choosing happens in the order C then A.
  • Not if the person couldn't choose a piece that they didn't cut (Except for person C, of course)


Alternative solution:

Assume there are N people.

The first person divides the cake into N pieces. The other N-1 people select (but do not take) the piece they would want, in any order. The cutter takes and eats the piece left over. Now recursively run the solution again with the remaining N-1 people and the remaining cake. Recurse until you get down to two people, then use the "I cut, you choose" method.

It's always in the cutter's interest to divide the cake as equally as possible; if there's a "smallest" piece, the cutter will be left with it.

This solution is, in my view, easier to understand, but the necessity of reassembling N-1 pieces into a completed cake for redivision makes it less practical in the real world.

- Graeme

  • Perhaps score the cake instead of completely cutting it? Then you just smooth the icing over for each subsequent step.
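Graeme's round can be sketched in code. This is a simplified model, assuming everyone values cake purely by size, so each chooser's "favorite piece" is simply the largest remaining one:

```python
def cutter_share(pieces):
    """Return the piece the cutter is left with after the others choose.

    The other len(pieces) - 1 people reserve pieces greedily, largest
    first; the cutter takes whatever no one wanted.
    """
    remaining = sorted(pieces, reverse=True)   # others grab largest first
    for _ in range(len(pieces) - 1):           # N-1 choosers
        remaining.pop(0)
    return remaining[0]                        # the leftover piece

# Equal division: the cutter keeps exactly 1/N.
print(cutter_share([0.25, 0.25, 0.25, 0.25]))  # 0.25

# Any unequal division: the cutter is stuck with the smallest piece.
print(cutter_share([0.4, 0.25, 0.25, 0.10]))   # 0.1
```

This illustrates the incentive claim above: under a size-only valuation, any deviation from an equal cut can only shrink the cutter's own share.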


Alternative solution:

You let each person cut a piece out of the cake. Each cut piece is shown to the people who don't have cake yet (except the cutter). If someone wants the piece, it goes to that person. If no one wants it, the piece goes to the cutter. Then the next person gets to cut.

This will even work with more favorable pieces of cake. If you want a piece that you really like, you will have to cut it smaller or risk losing it to someone else.

This isn’t a completely specified answer: when more than one of the waiting persons want the cut piece, who gets it? Note that if the order is pre-determined, I might strike a back-room deal with the guy who has priority; I’ll cut a huge (greater than 2/N) piece, if he promises to share with me (perhaps using the N=2 method). If it’s random, I might strike similar deals with all others before cutting. (Though this supposes that deals can be made, since the participants already agreed to follow a procedure.)


Another Solution:

Take a knife and move it slowly over the cake. Whoever (including the cutter) thinks the piece is as large as or larger than a third of the cake yells "stop", at which point the piece is cut and given to the caller. This works for any number of people, even if they have different priorities. The one who is last and doesn't yell stop is also guaranteed to get a piece he thinks is larger than 1/3 of the cake.


A solution for the case of two people with different opinions of what makes the cake valuable (A likes frosting, B hates it):

Each person divides the cake into two halves that they feel are equal. A would have a small half with lots of frosting and a large half with less frosting, while B would have a large half with more frosting and a small half with less frosting. At this point, by definition, each person would be equally happy with either half of their own division, so they each take their smaller half. This leaves a small portion of the cake in the middle. The process is repeated ad infinitum until the remaining cake is insignificant. Note that this doesn't work with dishonest people, and is not the most efficient division. A more efficient division would start with a finer division than halves (such as each person dividing into three pieces and taking the smallest; the best solution would use an infinite number of pieces).


There is a very interesting article on this problem in the Notices of the American Mathematical Society ("Better Ways to Cut a Cake" by Steven J. Brams, Michael A. Jones and Christian Klamler, Notices vol. 53 num. 11 p. 1314, Dec. 2006). They define a number of different desirable qualities that an algorithm may have. These are:

Envy Freeness - Each person thinks he got the best-or-equal deal.

Efficiency - No other allocation is better for anyone without being worse for someone else

Equitability - Each person gets the same value (according to their own valuation-function)

Strategy-Proofness - A person maximizes their worst-case outcome via honesty (so lying always hurts the overall solution, but if your algorithm has this property it also hurts the individual that lied).

For two players the normal Cut-and-Choose method is Envy Free, Efficient and Strategy-Proof but not equitable (Playing optimally, the person that cuts can't ever get more than 50% under their valuation, so usually the one that chooses does better). They provide an alternate method that provides this third property (which is basically the above solution except that it involves an impartial judge that the players tell their valuation functions to, instead of an infinite sequence of new 50/50 points).

In the case of three players they have an example that implies that Equitability and Envy-Freeness are incompatible (for their assumed limitations on how many cuts you get). There is a 3-person 2-cut envy-free method and some envy-free methods for 4 people, but no known envy-free method with a bounded number of cuts for more players. Brams, Jones and Klamler do give an Equitable procedure for any number of players, though. They also show it is strategy-proof (a truthful player gets at least 1/n of the cake-value no matter what the other players do, and a lying player may get less).


In a policy analysis class, I had professor who (rather satirically) used cake division as a way of demonstrating a dozen different versions of equity. As with everything in the class, the intended message got across (there are many different and often opposing ways of viewing fairness, so be aware) but most people left with a more cynical message in mind (fairness can and will be twisted to whatever supports the desired result).

In this case, I'd say have all three people hold a knife over the cake, with the knives meeting in the center. They are free to move their knife to whatever position they choose, and will get the piece to the left of their blade, but may not actually cut until all three agree. Since any move can be easily matched, and no one gets anything until there is agreement, they will either reach a fair deal or no one gets cake (or there will be a knife fight, but in that case they have bigger issues).


You cut, we choose. One person is selected to cut the cake. The remaining two flip a coin to decide who chooses a piece first and who second after the cuts have been made. The cutter is left with the final piece. The only strategy for the cutter to maximize his result is to cut equal pieces. The same is true for any number of people (assume some other method to randomize the choosing order). The important thing is that each of the people who do not cut will have a chance to choose, and the order is selected at random after the cutting, which avoids any coalition between the cutter and a single chooser. I suggest that the cutter be the one with the steadiest hand or the best spatial awareness, so that they have a shot at making equal cuts.

A Special Place[edit]

The first, obvious place is the North Pole. Walk a mile south, then however far you like to the east or west, and the pole is still one mile to your north.

The infinite collection of places are near the South pole. Think of the circle one mile in circumference that lies at nearly 90 degrees south latitude. From any point one mile north of the line, you would walk to the line, circle the Earth once, then walk back to your starting point. Now think of the nearby circle with a circumference of one-half mile: one mile east or west on that line circles the Earth twice, leaving you back where you started.

In general, for each integer n > 0, define L(n) as the set of points on the line of latitude, south of the equator, of length 1/n miles; define S(n) as the set of points one mile north of the points in L(n). Starting at a point in S(n), we walk south 1 mile to a point in L(n), then circle the Earth n times, returning to the same point, then walk back north. There are a countably infinite number of sets S(n), each containing uncountably many points.
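The distance of each starting circle S(n) from the South Pole can be computed explicitly: on a sphere of radius R, the latitude circle of circumference 1/n lies an arc distance R*asin(1/(2*pi*n*R)) from the pole. A quick sketch (the Earth radius R = 3959 miles is an assumption):

```python
import math

R = 3959.0   # assumed mean Earth radius in miles

def start_distance(n):
    """Arc distance (miles) from the South Pole to the circle S(n),
    i.e. 1 mile north of the latitude circle of circumference 1/n."""
    d = R * math.asin(1.0 / (2 * math.pi * n * R))   # pole to L(n)
    return 1.0 + d                                   # plus the mile walked south

for n in (1, 2, 3):
    print(n, round(start_distance(n), 4))
```

For n = 1 this gives about 1 + 1/(2*pi) ≈ 1.159 miles, the same figure that shows up in the polar-bear discussion further down.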


What about a globe whose circumference is four miles, i.e. a radius of 2/pi miles?

Since the globe's circumference is four miles, each mile-long leg is a quarter of a great circle, so any combination of three mile-long lines and two 90-degree turns projected onto the surface will return you to the same spot, provided the turns are both clockwise or both anti-clockwise.

To summarize, there is an infinite collection of latitudinal circles near each pole for which every point on the circle satisfies the problem. In addition, the North Pole satisfies the problem. Finally, the South Pole and the portion of the globe less than or equal to one mile from the South Pole actually satisfies the wording of the problem, because the question is phrased "if you walk south 1 mile . . . ," which is an impossible premise at those locations, thus making the if-then statement true there. 06:26, 16 March 2009 (UTC)

I may be taking this completely the wrong way, but we have a perfectly smooth earth (so rivers, valleys, etc. are gone) and a perfectly spherical earth, which removes the slight bulge in radius around the equator. So wouldn't any point on the sphere have the same geography as the North (or South) Pole, and thus any point on the earth fit the criteria? (This is more a question than an answer, so please confirm or deny how accurate I am.)

Every point on the sphere is geometrically identical, but we defined a set of coordinates on it such that a given (arbitrary) point is "North." That is sufficient, I think, to distinguish every point on the sphere, assuming we take clockwise and counterclockwise ("East" and "West") from above to be distinct. Besides, think about it: Obviously if you are standing at the equator, walk one mile south, one mile east, and one mile north, you will no longer be at the point where you started; in fact, you will be approximately a mile east of that point. 18:53, 17 March 2009 (UTC)

The sphericity (a real word?) of the globe is irrelevant in determining the poles. The poles are defined by the rotational axis. Granted, rotation is the reason for the equatorial bulge, but this is an example of common cause, not cause and effect.
The axis of rotation is the way we define the poles on the Earth, but the reason is not important. The fact that we define poles at all implies that each point on the Earth is geographically distinct. 20:13, 21 March 2009 (UTC)
Oh, and obviously you need a Prime Meridian, too. I sort of forgot to mention that. 00:58, 14 April 2009 (UTC)

One of my favorites. I always puzzle my friends with it, but slightly changed: instead of asking where they are, they get attacked by a bear and have to tell its color :)

That's a classic version, although typically people give the erroneous answer that the way the problem is stated implies one must start at the North Pole, when in fact there are infinitely many circles upon which one could be, half of which are in Antarctica, where there are obviously no polar bears. But admittedly, if one satisfies the conditions of the problem and is then attacked by a bear, it is probably a polar bear, and they are probably within 1.159 miles of the North Pole, because they certainly aren't at the South Pole. 00:57, 14 April 2009 (UTC)

The Necktie Paradox[edit]

There certainly is a problem: two different meanings of "the value of my necktie" are used interchangeably. In the faulty analysis, "I lose the value of my necktie" means "I lose the value of the more expensive tie" while "I win more than the value of my necktie" means "I win more than the value of the cheaper necktie", or more accurately "I win the value of the more expensive tie."

We have two ties, T1 and T2, worth t1 and t2 dollars respectively, where t1 < t2. There is a 50% chance that you have either one. If you have T1, then you will win t2, but if you have T2 you will lose t2. Now the expected value of taking the bet is t2/2 - t2/2 = 0.


This doesn't resolve the paradox. If we assign a dollar value to one tie but don't know if it's the more expensive one, we still have the paradox. To avoid confusion, let's label the tie in unknown state (we don't know if it's the pricier one) TA. If TA has a price of $10 then there are two cases. If TA is T1 (the cheaper tie), then you gain more than $10. If TA is T2 (the more expensive tie) you lose $10. If the probability is 50/50, you do indeed have an advantage. This is the same as the envelope problem. Evan's analysis is correct until you assign a dollar value to one tie.


  • Ari, I don't understand your objection. What does your "If we assign ..." have to do with this problem, as stated? You seem to be saying that I am correct until we change the problem into a different problem. Even then, your analysis is incorrect; it can be paraphrased as "if my tie is cheaper, I will win the value of the more expensive tie, but if my tie is the more expensive tie, I will lose the value of the more expensive tie." You are confusing yourself by changing the value of "the more expensive tie" between the two cases. Just as in the envelope problem, unless you know the probability distribution for the values of the ties, knowing that the value of one of them is $10 does not help you at all. --Evan
  Evan, this is a paradox because depending on how we approach the problem, we arrive at two different conclusions.  Your analysis
  is correct and leads to the conclusion that there is no benefit to the game.  My analysis is also correct and leads to the
  conclusion that there is a benefit to a player of the game, hence the paradox.
  My approach is quite simple and does not involve changing the value of "the more expensive tie."  The only assumption required
  is that whatever the price of Tie 1, Tie 2 is 50/50 to be more expensive or cheaper, in other words, it assumes an infinite
  uniform distribution, which is quite reasonable based on the question.  Just try it out.  Pick any value you want for the first
  tie.  Then do the arithmetic.  
  If you still don't believe this approach is valid, consider this.  Let's say you and I were actually playing this game and a
  third party read the game to us.  In other words, the situation is identical to the problem.  I happen to turn my tie over and
  realize the price tag is still on the tie.  I haven't gained any new information about whether my tie is more or less expensive,
  and yet the situation now fits into my analysis (i.e. if the price tag says $10, I'm 50% to lose $10 and 50% to win more than
  $10).  If you want to argue that the discovery of the price tag fundamentally changes the game then you're in trouble; if we
  agree that both ties have an infinite uniform distribution of possible prices, then finding the price tag on one tie gives you
  absolutely no new information, since literally any price on that price tag leads to the same analysis.


  • Ari, the only way your assumption could be true would be in the case of a uniform price distribution infinite in both directions, i.e. the prices of the neckties could be negative. --Djak

What's the problem here? Each person has a 50% chance of winning, and 50% + 50% is 100%. --anonymous

This is a special case of the two envelope problem in which they cannot see the contents of their envelope. I suggest we just merge it with that one. -- DanielLC

  • Absolutely, the expected value that each man walks away with is the same; (tie 1 + tie 2)/2 just as with the envelopes. -- TLH
The statement "I have a 50% chance to win" is wrong. And the seeming paradox vanishes when you compare "I will win a tie worth more than my tie" to "I will lose a tie worth more than the other tie"...
The envelope riddle in its current form doesn't make any sense - if you think they're both the same, it's the one that should be removed.

No matter how much the neckties cost, they could cost the same, in which case neither man would win. Therefore the probability of winning is less than 50% (assuming a finite set of possible prices for the ties).

Re: above - this is true, the ties could cost the same. I think people are simplifying the argument by just admitting that the odds of winning are equal to the odds of losing. The odds of a draw are incalculable with the given data, but we could say it has some x% chance, and winning and losing each have a ((100-x)/2)% chance.

Further, I think most of you have the right idea, but the first explanation I think is the closest (from Evan, t2/2 - t2/2 = 0). I think everyone is a little guilty of overthinking the problem. The flaw is in how the men calculate their expected return: in either case they place no value on the lesser tie. If my tie is worth $10 and yours is worth $5, I don't lose $10, I only lose $5. Conversely, you don't win $10, you only win $5, because you lost your $5 tie in the process.

Using Evan's variables, we have two ties, T1 and T2 with respective values t1 and t2 and WLOG we assume that t2 >= t1. The winner gains (t2-t1) while the loser loses (t2-t1). Since the odds are equal, (say 0.5 each), the expected return is (t2-t1)/2 - (t2-t1)/2 = 0. This definition acknowledges that the ties might be of equal value, in which case t2-t1 = 0, and so we get 0/2 - 0/2 = 0, which is the same as before. --aemmott
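aemmott's zero-expectation conclusion is easy to spot-check with a Monte Carlo sketch. The bounded uniform price distribution here is an assumption, introduced only to make the game concrete:

```python
import random

def expected_gain(trials=100_000, seed=1):
    """Average net gain per game for one player; tie prices drawn i.i.d.
    from an assumed uniform distribution on [0, 100]."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        mine, yours = rng.uniform(0, 100), rng.uniform(0, 100)
        if mine < yours:
            total += yours - mine   # I win the pricier tie: net gain t2 - t1
        elif mine > yours:
            total -= mine - yours   # I lose my tie: net loss t2 - t1
    return total / trials

print(round(expected_gain(), 3))   # hovers near 0
```

By symmetry the two branches cancel on average, matching the (t2-t1)/2 - (t2-t1)/2 = 0 calculation above.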

A Night at the Opera[edit]

Anyone who sits in their intended seat (other than the last person) can be ignored, since no one sat in their seat before they got in; they might as well have been in it all along.

The first person has a 1/N chance of picking their own seat (and then obviously the last person gets their seat). They also have a 1/N chance of picking the last person's seat (when they obviously don't). In any other case, someone else has to randomly pick a seat with a 1/N' chance of picking the first person's real seat (last person gets own seat), a 1/N' chance of picking the last person's seat (last person gets wrong seat), and in any other case someone else has to randomly pick a seat, go to the start of this sentence.

If this isn't sorted out before the penultimate person, they have a 50% chance of picking the 1st person's seat and a 50% chance of picking the last person's seat, and the branching probabilities finally end.

At any stage, the chance of a successful exit is the same as the chance of an unsuccessful exit, so the final chance after everything is cancelled out for the last person is 50%.

(Unless of course, there's only one person and only one seat, in which case they have a 100% chance of getting it right)

The obvious extension of the problem: given N seats, what are the chances that the Kth person in line gets the right seat?

    • I think the probability that the kth person gets their own seat, out of N, is 1/N if k=1 and (N-k+1)/(N-k+2) if k>1 DukeEgr93 20:21, 17 February 2009 (UTC)
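DukeEgr93's formula is easy to spot-check by simulation (a sketch; patrons and seats are numbered 0 to N-1, and the first patron picks uniformly at random):

```python
import random

def seat_probabilities(n, trials=20_000, seed=0):
    """Estimate P(patron k sits in own seat) for each k by simulation."""
    rng = random.Random(seed)
    hits = [0] * n
    for _ in range(trials):
        free = set(range(n))
        for k in range(n):
            if k == 0:
                choice = rng.choice(sorted(free))   # first patron: random seat
            elif k in free:
                choice = k                          # own seat still available
            else:
                choice = rng.choice(sorted(free))   # displaced: random seat
            free.remove(choice)
            hits[k] += (choice == k)
    return [h / trials for h in hits]

probs = seat_probabilities(6)
print([round(p, 2) for p in probs])
# roughly [1/6, 5/6, 4/5, 3/4, 2/3, 1/2] -- i.e. 1/N for the first patron
# and (N-k+1)/(N-k+2) for the k-th (1-indexed), as claimed
```

In particular the last entry sits near 1/2, agreeing with the branching argument above.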


I propose a variation; If this happened every night for an infinite number of nights, what percentage of patrons will have been sitting in their assigned seat (assuming each patron visits the opera only once)? Thoughts?

Three Men and a Bellboy[edit]

  • "Now, the three men have each paid §9, a total of §27. With the bellboy's §2 this only amounts to §29, where did the other § go from their §30?" - The stimulus? DukeEgr93 18:38, 17 February 2009 (UTC)
For purposes of the puzzle, let's say that each man enters the hotel with §10. They each pay §10 for their room, so now the landlord has §30 and the men are all broke. Then the landlord remembers the special offer, and gives the bellboy §5 to take to the men - at this stage, the landlord has §25, the bellboy has §5, and the men have nothing. The bellboy keeps §2 and gives the rest to the men, so the location of all the money is:
§25 with the landlord
§2 with the bellboy
§3 with the men
So, of course, all §30 is accounted for. The 'disappearance' of one § is due to an error in the puzzle's logic. We have to look at the whole puzzle as one exchange, not a series of exchanges. The puzzle assumes that all §30 involved was the total amount of money that changed hands; but ultimately, §3 went back to its original owners. Thus, only §27 was displaced; §25 went from the men to the landlord, and §2 went from the men to the bellboy. -- 20:07, 17 February 2009 (UTC)
I found the above explanation a little confusing, but perhaps I'm saying the same thing. When the puzzle says "the three men have each paid §9, a total of §27", that §27 is the §25 for the room AND the §2 the bellboy received. So the part of the question stating "With the bellboy's §2 this only amounts to §29" is gibberish; the bellboy's §2 is already in the §27.

Right, the explanation of the puzzle contains a misleading factual error. The $2 is added to the $27 when it should actually be subtracted.

Actually, it should be ignored, not subtracted, otherwise you would be 2 dollars short. (27-2+3=28)
Well, it should actually be subtracted from the $27 the men have paid to equal the $25 the manager has. This problem is all about signs, and is basically the root of double-entry bookkeeping in accounting.
Why add another 3$? Stop after the subtraction: 27-2=25 -- the price of the room. -- 13:24, 24 January 2010 (UTC)

I feel the following may be relevant to this puzzle:

I don't think this is a particularly "counterintuitive" or "tricky" puzzle. The only tricky part is that you deliberately mislead people with the $29. Which is fine - but I don't think it fits the spirit of this page.

I think it is tricky and counterintuitive and well worth having on this page, because people use tricks like this to steal money from people. Yes, it's obvious when you sit down and write it out properly:

M = Man 1, m = Man 2, n = Man 3, B = Bellboy, H = Hotel/Landlord, T = Transaction

 M | m | n | B | H | T
$10|$10|$10| $0| $0| Start
 $0| $0| $0| $0|$30| Men pay $10 each to hotel M-$10, H+$30
 $0| $0| $0| $5|$25| Landlord gives $5 to BellBoy as change
 $1| $1| $1| $2|$25| BB gives $1 to each man. M+$1 BB-$3

Correct sums

30 = Start(M+m+n) = Paid(H) = Return(B+H) = End(M+m+n+B+H)

27 = Start(M+m+n)-End(M+m+n) = End(B+H)

25 = Return(H) = Start(M+m+n)-End(M+m+n+B)
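The ledger above can be replayed as a quick conservation check (amounts in §):

```python
# Replaying the table row by row: start, room payment, change, tip.
men, bellboy, landlord = 30, 0, 0               # the three men pool §30

men, landlord = men - 30, landlord + 30         # men pay §10 each for the room
landlord, bellboy = landlord - 5, bellboy + 5   # landlord sends §5 back
bellboy, men = bellboy - 3, men + 3             # bellboy keeps §2, returns §3

print(men, bellboy, landlord)    # 3 2 25
print(men + bellboy + landlord)  # 30 -- every § accounted for
print(30 - men)                  # 27 -- what the men paid: landlord's 25 + bellboy's 2
```

The "missing" § never existed: §27 paid equals §25 kept by the landlord plus §2 kept by the bellboy.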

The Four Numbers[edit]

Consider a*b*c*d. We can obtain this product from our partial products in three ways:

(a*b)*(c*d), (a*c)*(b*d), (a*d)*(b*c)

Two of these pairs exist among our five, and one is missing. We therefore find two pairs of numbers among our five which have the same product; in this case, it's 2*6=12 and 3*4=12. The remaining pair must also have product 12, so the missing number is 12/5, since 5*(12/5)=12. -Tiax
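Tiax's argument can be checked mechanically: among the five given products, search for two disjoint pairs with equal products, and divide that common product by the leftover number to get the missing one (a quick sketch):

```python
from itertools import combinations

given = [2, 3, 4, 5, 6]
missing = None
pairs = list(combinations(given, 2))
for (w, x), (y, z) in combinations(pairs, 2):
    if w * x == y * z and {w, x}.isdisjoint({y, z}):
        # w*x (= y*z) is the common product P = a*b*c*d; the leftover
        # number's partner must be P / leftover.
        leftover = (set(given) - {w, x, y, z}).pop()
        missing = w * x / leftover
print(missing)   # 2.4, i.e. 12/5
```

Only one such pairing exists here (2*6 = 3*4 = 12), so the sixth product is forced.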

No, just because you found 12 doesn't mean that's the only possibility. The problem forgot to state that they are positive integers (otherwise there is no unique solution). Taking that into account, see that you are given 2, 3, and 5 as products. These primes indicate that there must be a 1 in the set of four. The six products are therefore: 2, 3, 4, 5, 6, 15. According to my solution, the above uses (1*2)*(2*3)=12 and (1*3)*(1*4)=12, doubling up factors in both cases. -Matt

2, 3, 4, 5, 6, 15 doesn't make sense to me. As I said above, you need to have a constant value for a*b*c*d, but I don't see any way to get that with the 15. Suppose that we call 15 a*b. We have five options for c*d. These yield the values for a*b*c*d of 30, 45, 60, 75 and 90. In none of these cases can we pair off the remaining four to obtain the same product.

Also, are you sure it's supposed to be only integers?


There is no solution for integers. The primes indicate that, necessarily, 1, 2, 3, and 5 are a, b, c and d. However, none of these multiply to 4. --Mark

The four numbers are definitely *not* integers. There's no solution for integers. In particular, the missing number isn't integer, either. --Quiss

Matt's solution can't be right -- he is mixing up the products (the numbers given in the problem) with the original a, b, c, and d. Mark is also correct -- if the solution were limited to integers only, the existence of three primes among the products would force the fourth original number to be 1, and make the existence of the 4 among the products impossible. So by contradiction, the original a, b, c, d cannot all be integers. The more I think about it, the more I think Tiax is right, the sixth product must be (12/5). So now... can we come up with four numbers a, b, c, and d such that their pair products are 2, 3, 4, 5, 6, and 12/5?

Before I go on, I want you to note that it doesn't matter which products (ab, ac, ad, bc, bd, cd) we assign to 2, 3, 4, and 6 as long as we keep the fact that ab*cd = 12, ac*bd = 12, and ad*bc = 12 true. It's only going to affect the final distribution of the names 'a', 'b', 'c', and 'd' among the four numbers we eventually find. (You can prove this, if you doubt it, by messing around with variable substitution for a while, but I won't prove it here, because that's long and I'm tired. Suffice it to say, it's true.)

Assume for the minute that ab = 6 and therefore cd = 2. Thus ab/cd = either ac, ad, bc, or bd, because one of those other multiples must be 3. Exactly which is irrelevant, as I mentioned above. Let's pick 3 = ac. Then 4 = bd. Now a whole bunch of relations are possible. 2ac = ab, thus b = 2c. 1 + cd = ca, thus a = d + 1/c. I'm too tired right now to come up with a solvable system, but I think that from ab = 6, cd = 2, ac = 3 and bd = 4 we should be able to either:

a) find a, b, c, and d, and prove that either ad or bc = 5 (which one this is DOES matter because it adds another, non-symmetric constraint to the distribution of names, and I don't know whether it'll be ad or bc just from glancing), or

b) find a, b, c, and d, and prove that neither ad nor bc = 5 (in which case the premise of the question is false), or

c) show that there are no such a, b, c, and d that satisfy the system I began above -- although I doubt this to be true. -Riker

Thinking about what I said above again, you could also add constraints like bd + ab / cd = {either ad or bc} to bring the 5 into it. Or get simpler and write bd + 1 = {either ad or bc}, or ab - 1 = {either ad or bc}. But you might have to write up two different systems (one assuming that ad = 5 and one assuming that bc = 5) and try to solve both of them separately. -Riker

You had to go ahead and do that, didn't you? I don't know if it's the only solution, but I got (in arbitrary order) a = 4(3/10)^.5, b = 5(3/10)^.5, c = (3/5)(10/3)^.5, d = (10/3)^.5. Thus ab = 6, ac = 12/5, ad = 4, bc = 3, bd = 5, and cd = 2.

For people who prefer decimal representations, a ~ 2.1909, b ~ 2.7386, c ~ 1.0954, d ~ 1.8257.

The way I solved this was by a stroke of luck: I made a 2x2 "magic square" of products (the entries in the squares are, clockwise from upper left, a, d, b, and c), with expected products of columns, rows and diagonals written in around the exterior of the square. I chose the columns to be (left to right) the pair 12/5 and 5, and the rows (top to bottom) to be 4 and 3. I figured that since 12/5 and 4 were related by an almost-factor of 4, we could put 4X in their shared box, and (3/5X) in the box beneath it.

From here I went to the other column, which was easy: the upper one had to be (1/X), while the lower one had to be 5X. Diagonally, these multiplied to 20X^2 and (3/5)X^-2. Knowing that these had to equal 6 and 2 (respectively), and more specifically that one was three times the other, I solved for the equation 20x^2 = (9/5)x^-2. That's where the factor of (10/3)^.5 comes from.

I hope you've enjoyed this presentation. I'm going to go sleep. --Mark

OK, the way I approached this problem was as follows:

you have four numbers: a,b,c,d

Assume the product cd is not listed.

That means you can say that 2*3*4*5*6 = 720 is the product of 2 cubes and 2 squares

Thus, (a^3)(b^3)(c^2)(d^2) = 720

There are no integer solutions to this problem, so that's one way of proving there are no integer solutions. --Nathan
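Nathan's equation can be brute-forced to confirm there is no positive-integer solution (the search bounds follow from a^3 <= 720 and c^2 <= 720):

```python
# a^3 <= 720 forces a <= 8; c^2 <= 720 forces c <= 26 (same for b and d).
sols = [(a, b, c, d)
        for a in range(1, 9) for b in range(1, 9)
        for c in range(1, 27) for d in range(1, 27)
        if a**3 * b**3 * c**2 * d**2 == 720]
print(sols)   # [] -- no positive-integer solution exists
```

Equivalently, writing m = ab and k = cd, the equation m^3 * k^2 = 720 fails for every m from 1 to 8.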

I can confirm that Tiax is correct and that Mark's solution is unique. Once you have 12/5 as the final product you can determine what the value of each pair is. You can then use these numbers to find b, c, and d in terms of a. Multiply these 4 terms together and set it equal to 12. Solve for a, then solve for the rest of the numbers. - Hannibal

Let the pair to be found be cd. An obvious statement to make is cd = (ac*bd)/ab; by the same logic, cd = (bc*ad)/ab. Equating these, we obtain (ac*bd)/ab = (bc*ad)/ab, which gives ac*bd = bc*ad. The only way to satisfy that with the numbers given is (3*4) = (6*2) = 12 with ab = 5, thus giving cd = 12/5. -Donnie

I just wanted to add that using the approaches outlined above you actually do arrive at a (unique) solution for a<=b<=c<=d, which in my opinion is a=2*sqrt(2/5); b=sqrt(5/2); c=6/sqrt(10); d=sqrt(10). If you play around with them you should find that they work out nicely. -Firionel

Mark is nearly right, but Hannibal is wrong that the solution is unique (unless he means the unique solution for the 6th product). There are 2 solutions for a < b < c < d (if any two were equal, you would have 2 pairs of repeated products, which we don't have)

As has been previously established, the six products are 2, 12/5, 3, 4, 5, 6. We know that ab = 2, ac = 12/5, bd = 5 and cd = 6, but there are 2 solutions, derived from bc = 3 and bc = 4, with ad = 4 and ad = 3 respectively. We also know that bd/cd = b/c = 5/6, thus b = 5c/6, b^2 = 5bc/6 and b = (5bc/6)^0.5.

Case 1: bc = 3. [Note: this is Firionel's solution] define {e,f,g,h} as {a,b,c,d}*(10^0.5), thus ef = 20, eg = 24, fg = 30, eh = 40, fh = 50, gh = 60. We still know f = (5fg/6)^0.5 = (5*30/6)^0.5 = (25)^0.5 = 5, so we can solve e = 4, f = 5, g = 6, h = 10. Substitute back for a = 2*(10)^0.5/5, b = (10)^0.5/2, c = 3*(10)^0.5/5, d = 10^0.5.

Case 2: bc = 4. [Note: this is Mark's solution] define {i,j,k,l} as {a,b,c,d}*(30^0.5), thus ij = 60, ik = 72, il = 90, jk = 120, jl = 150, kl = 180. We still know j = (5jk/6)^0.5 = (5*120/6)^0.5 = (100)^0.5 = 10, so we can solve i = 6, j = 10, k = 12, l = 15. Substitute back for a = 30^0.5/5, b = 30^0.5/3, c = 2*(30)^0.5/5, d = 30^0.5/2. -Pervach
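Both quadruples can be verified numerically: each should reproduce the pairwise products {2, 12/5, 3, 4, 5, 6} (a quick sketch):

```python
import math
from itertools import combinations

def pair_products(nums):
    """All six pairwise products, sorted."""
    return sorted(x * y for x, y in combinations(nums, 2))

s10, s30 = math.sqrt(10), math.sqrt(30)
case1 = [2*s10/5, s10/2, 3*s10/5, s10]   # Firionel's / Pervach's case 1
case2 = [s30/5, s30/3, 2*s30/5, s30/2]   # Mark's / Pervach's case 2

target = sorted([2, 12/5, 3, 4, 5, 6])
for case in (case1, case2):
    ok = all(abs(p - t) < 1e-9 for p, t in zip(pair_products(case), target))
    print(ok)   # True for both
```

So the sixth product 12/5 is unique, but the quadruple itself is not: the two cases are genuinely different sets of numbers.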

Here is the elegant solution, though all credit goes to a guy named Ashish who I got this answer from: let a*b*c*d = P. Then (ab)(cd) = (ac)(bd) = (ad)(bc) = P; those are the only three ways of multiplying the six possible products (ab, ac, ad, bc, bd, cd) together in pairs to get P. Since we are given five of the products, at least two of those three pairings can be formed from them, and both must have the same product (equal to P).

In our case we find that 2*6 = 3*4 = 12, so P = 12, and the third pairing is 5*x = 12, giving x = 12/5. -Devin -- 03:20, 20 January 2012 (EST)

--- Here the six numbers are (a*b, a*c, a*d, b*c, b*d, c*d). Multiplying them all together gives a^3 * b^3 * c^3 * d^3, i.e. (a*b*c*d)^3. The five given numbers are 2, 3, 4, 5, 6; let the sixth number be x. Multiplying everything, we get 2*2*2*2*3*3*5*x = 720x, and this is indeed a perfect cube for x = 12/5, since 720 * (12/5) = 1728 = 12^3. (x = 300 would also give a cube, 216000 = 60^3, but it cannot be paired with any of the other five to match the pairings' common product, so it is ruled out.) -Da3m0n

The original numbers are sqrt(1.6), sqrt(2.5), sqrt(3.6) and sqrt(10). To find them, you just take logarithms of the equations and get log(a)+log(b) = log(a*b) = log(2), log(c)+log(d) = log(c*d) = log(6), and so on. Then you can use ordinary linear algebra to solve the system. --Etoplay
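Etoplay's log trick, worked through for one assumed assignment of the products (ab = 2, ac = 12/5, bc = 3, cd = 6):

```python
import math

# Adding the log-equations for ab and ac and subtracting the one for bc
# isolates log a:  log a = (log(ab) + log(ac) - log(bc)) / 2.
log_a = (math.log(2) + math.log(12/5) - math.log(3)) / 2
a = math.exp(log_a)     # sqrt(1.6)
b = 2 / a               # from ab = 2     -> sqrt(2.5)
c = (12/5) / a          # from ac = 12/5  -> sqrt(3.6)
d = 6 / c               # from cd = 6     -> sqrt(10)
print([round(x * x, 4) for x in (a, b, c, d)])   # [1.6, 2.5, 3.6, 10.0]
```

This recovers exactly the quadruple Etoplay states, which is Case 1 above.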


Is it possible to tell if you have been poisoned or cured? If the knight drinks from well 1 and the dragon gives him well 2, can the knight tell if he is still poisoned and should look for a cure, or is he uncertain until he dies? I'm assuming the latter case.

One way for the dragon to always survive is to drink nothing beforehand, but after drinking the knight's poison, drink twice from well 5 and then drink from well 6. If the knight offers any poison but well 5, it will cure it, and the second drink will be cured by well 6. If the knight offers well 5, the first 2 drinks will do nothing and well 6 will cure it. If the knight offers clean water, the drinks from wells 5 and 6 will cancel out. I'm not sure if there's a similar strategy for the knight, because he doesn't have the cure-all of well 6.

I don't believe it's possible to guarantee a win. Anything the other side offers can be canceled out by choosing some appropriate drinks either before or afterward. Therefore, if there was a winning choice of poison, one player would know that the other would always pick that strategy and would be able to pick the drinks before or afterward that would cure the winning strategy. -- 18:47, 18 February 2009 (UTC)

In a similar vein, the knight should show up having already drunk from well 1. Then afterward he drinks twice from 4 and finally from 5. If the dragon gives him 2-6, it will cure 1; the knight then gets sick from 4 and cured by 5. If the dragon gives him lake water, then 4 will cure 1, he gets sick from 4 again, and is cured by 5. It appears there is a foolproof strategy to live, but none to make the opponent die.

  • This solution assumes that if you drink two glasses of poison, one glass of antidote will save you (i.e. quantity does not matter). It should be made clear, whether or not this is the case.
    • If quantity doesn't matter, the puzzle is dumb, since both the dragon and the knight will conclude that they can not poison each other, and decide the duel in a game of backgammon instead.
    • If quantity does matter, the puzzle is dumb, since it is equivalent to rock-paper-scissors.
      • Anyway, the problem statement sucks, because the hint tells you the "solution" :(
        • I agree, it's the only thing counter-intuitive about the puzzle. Anyone care to remove it? -- 06:44, 20 February 2009 (UTC)
          • Strategically speaking showing up with water from the lake is the best option. It is not poisonous, yet as the dragon you'd go to well #6 to clear the poison, and die from the strongest poison.
            • No, it wouldn't. The dragon doesn't just go to well #6, he first drinks from well #5 twice, and then once from well #6, curing himself in any situation, as explained above. Also, I agree this is a vague puzzle.
              • Took the liberty to remove the hint and also redefine the question to "is it possible to survive this duel". -Quiss

The guaranteed winning sequence for either character is 1,X,1,5. -- Hairy Phil

Along the same lines, and to minimize ingested poison values, 1,X,1,2 does the job just as well. I would also call it the guaranteed surviving sequence rather than winning. -JG

Not just so. The dragon can easily survive with X, 1, 6: he need not start by drinking a poisonous draught. -Jon Werts

If the knight gives the dragon lake water, any well the dragon then drinks from will poison him.

The dragon can guarantee the knight dies by giving him well 6 water. If the knight gives the dragon any well water, he can cure it by drinking from well 6, but if it was lake water that would kill him unless he drinks from another well first. However, the confusion comes from what happens if he first drinks multiple poisons that do not cure each other (what if someone was given well 5, but they drink 2 before they drink 6?). Unless I'm misinterpreting the rules, the second post is the correct solution.

  • The dragon does not have that guarantee because the knight can drink from well 1 before the duel begins, in which case the well 6 water would cure him. The dragon can counter this by giving him fresh water or well 1 water, so the knight must add on the safeguard of drinking from well 1 and then well 2 after the match, which will nullify everything. (same works for the dragon, just imagine well 6 doesn't exist)

- the dragon has another guarantee. X,5,5,6. After drinking whatever the knight gave him, he could drink from well 5 twice then drink from well 6. If the knight gave him lake water, drinking from well 5 would poison him and then drinking from 6 would cure anything. If the knight gave him 1-4 then 5 would cure it the first time and he would be poisoned again from the second drink, then well 6 would cure that poison. If the knight gave him 5 then the 2 drinks would simply do nothing and well 6 would cure him again.
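Under the rules as this thread interprets them (a drink from well n poisons a healthy drinker, cures any weaker poison, and does nothing against an equal or stronger one; lake water, written 0, does nothing), the X,5,5,6 sequence can be checked mechanically. The rule encoding below is my reading of the discussion, not an official statement of the puzzle:

```python
def survives(pre, post, offered):
    """Simulate one duelist's drinks: `pre` before the duel, the
    `offered` drink, then `post`.  0 is lake water, 1-6 the wells.
    Returns True if the drinker ends the day unpoisoned."""
    state = 0  # current poison level; 0 means healthy
    for drink in list(pre) + [offered] + list(post):
        if drink == 0:
            continue           # lake water neither poisons nor cures
        elif state == 0:
            state = drink      # drinking while healthy poisons you
        elif drink > state:
            state = 0          # a stronger well cures a weaker poison
        # drink <= state: no effect
    return state == 0

# Dragon's X,5,5,6 survives anything the knight can offer (lake or wells 1-5).
assert all(survives([], [5, 5, 6], offer) for offer in range(0, 6))
# The knight's 1,X,1,2 likewise survives every possible offer (lake or wells 1-6).
assert all(survives([1], [1, 2], offer) for offer in range(0, 7))
```

Under this reading, both parties have guaranteed survival sequences, consistent with the conclusion later in the thread that neither side can force a win.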

The dragon assumes that if he gives the Knight water from the 6th well, there is no antidote. But the Knight anticipating this, drinks water from Well 1 first and then drinks the water the dragon gave him. So the knight lives. The dragon also assumes water from well 6 can cure any poison. The Knight anticipating this too, gives him fresh water from the lake. The dragon drinks water from well 6 and dies. Of course the above solution assumes that the Knight is more intelligent. Actually there is no way to ensure one's survival because the outcome of one's decision depends upon the decision of the other making this a case of Game Theory.

Using the assumption that the quantity of poison/antidote does not matter, there is one clear win situation for the dragon - 5 X 6. He drinks from well 5 before the duel. Regardless of what the knight gives the dragon, the dragon will ALWAYS still be poisoned after the duel (5,Lake; 5,1; 5,2; 5,3; 5,4; 5,5 all result in the dragon having a poison of level 5 in its body). Then, after the duel, the dragon drinks from well 6 to cure the poison that he gave himself before the duel. The knight, on the other hand, has no such solution because there is no way to guarantee his state after the duel.

  • Not true, see the 1,X,1,2 solution above

Unfortunately, JG, under a strict interpretation of the rules, 1,X,1,2 is not a viable survival solution. The rules say that drinking the same poison twice in a row is the same as drinking it once. However, drinking Poison 1, then drinking Lake Water (I'll call it "0"), and then drinking more Poison 1 will kill you (1,0,1,2). This makes 1,X,4,4,5 a much better solution under the circumstances that the rules are interpreted this strictly. -Matthew

Under a strict interpretation of the rules, 1x445 is indistinguishable from 1x45, meaning if you're given 1, 11445 turns into 145, which ends at poisoned 5. The second 4 doesn't re-poison you if you are cured.

Survival strategies for the man are 1x12, 1x13, 1x14, 1x15, 2x13, 2x14, 2x15, 2x23, 2x24, 2x25, 3x14, 3x15, 3x24, 3x25, 3x34, 3x35, 4x15, 4x25, 4x35, 4x45.

To further this: if twice in a row is identical to once, then follow the logic (the bracketed pair collapses to a single drink at each step):
[11]11111 collapses to 111111
[11]1111 collapses to 11111
[11]111 collapses to 1111
[11]11 collapses to 111
[11]1 collapses to 11
[11] collapses to 1
Therefore, if two in a row has the same effect as drinking one, more than two in a row necessarily has the same effect as drinking one, even without the added hint.

Survival strategies for the dragon are x16, 1x12, 1x13, 1x14, 1x15, 1x16, 2x13, 2x14, 2x15, 2x16, 2x23, 2x24, 2x25, 2x26, 3x14, 3x15, 3x16, 3x24, 3x25, 3x26, 3x34, 3x35, 3x36, 4x15, 4x16, 4x25, 4x26, 4x35, 4x36, 4x45, 4x46, 5x16, 5x26, 5x36, 5x46, 5x56, 5x6.

Given both parties have multiple guaranteed survival strategies, neither party can win. If we assume that neither party can pre-drink, then the dragon wins using x16, while the man uses x15 and loses whenever the dragon gives him well 5 or 6 water.
The real trick to winning the duel would be to not pre-drink and faff for an hour hoping your opponent pre-drank poison.
Ultimately I feel we've ended up misinterpreting the question somewhere because this isn't a puzzle...

The 3 houses and 3 supply stations[edit]

I don't think this one is in the spirit of the other questions; a "trick" of some sort is required (e.g., running the gas lines through/under a house, etc). In the spirit of the question (at least for mathematicians!) the answer is that it is impossible: the graph <math>K_{3,3}</math> is not planar, i.e., it cannot be drawn on a plane with no intersecting lines. I vote for deletion, or at least rewording it to ask "Is it possible..." --JoelG 07:20, 20 February 2009 (UTC)

  • I support this cause.-- 07:25, 20 February 2009 (UTC)
  • Agree --Luke

reworded --author

The problem with the problem is that the definition of "crossing" is so vague. If the problem takes place on a plane, it seems logical that "crossing" must mean "intersecting" (unless one allows for solutions that involve circumscribing one wire within another wire, or wiring the houses so that the three carry the power plant's electricity as a series circuit — solutions that seem kind of bs), whereas in the real world it's not as easy to define, since there are some three-dimensional figures for which I believe this would work. Assume that we can find some sort of naturally occurring torus on Earth, such as a naturally forming land bridge (which certainly exist, and since we are given no restriction on the length of the wires/pipes we can certainly get to one from the house). Is it considered "crossing" to, for instance, lead one wire over the arch of said land bridge, then lay the water pipe through the bridge's opening (or whatever combination)?

In other words, are we allowed to take the sheet of paper that we draw this problem on and attach some kind of tube at two separate points, forming an arch over the paper, through which a pipe may be fed? It all depends on what it means for these gas/water/electricity lines to "cross"... they don't "intersect" per se, but if we send a paper airplane flying over a street, we do, in common parlance, say that it has "crossed" the road. - Rothul

Yeah, it is possible, but only on a 3D surface. I've seen a solution on a donut shape, where one runs through the hole. It's not possible in 2D - trust me, I've used tons of paper to try to solve it. -Catherine

Two Janitors, One snowstorm, two carts[edit]

Two snow-based factors influence the speed of the janitors. One is the contribution of lateral momentum to the snow which accumulates (even momentarily) on their carts. The other is the fact that their path is becoming covered in snow. The first does not cause any discrepancy in speed between the janitors, as both lose an identical quantity of speed in bestowing momentum to the snow. Therefore, the discrepancy (if any) lies in the second factor.

Which method better allows for retention of cart speed? The more massive cart will (generally speaking) be less affected by snow on the ground, as it will have retained its forward momentum; the less massive cart (where the janitor was removing snow) will have lost momentum to the snow scooped onto the side of the path.

Therefore, the lazy janitor is following the correct strategy.

One complication to this problem involves compaction of snow on the ground, an effect similar to that of friction. As the more massive cart will necessarily compact the snow to a greater degree than the less massive cart, it will experience more friction on the surface of its tires, as well as a greater amount of energy lost in the compaction of the snow. At a glance, this will have less effect on the speed of the carts than the earlier-described second factor, and so is ignored in this treatment. --Mark

-- I feel in the spirit of the question that the answer is "no difference". Once you start considering the effect of snow accumulating on the path, it's got to be inappropriate not to consider friction - both are "real world" phenomena, and without external knowledge it's not obvious to me that the friction effect will be smaller than the pushing-snow-on-the-path effect. JoelG 23:47, 22 February 2009 (UTC)

-- I'd like to offer another perspective. Assume that all snow falls vertically, so that if 1kg of snow falls on a cart of mass 10kg travelling at 10m/s (numbers chosen to exaggerate effect) then the final velocity would be 9.09m/s. The momentum of the system is still 100kgm/s. The tidy janitor removes the 1kg of snow quickly, keeping a velocity of 9.09m/s and reducing the mass of the system (the cart + janitor + any unremoved snow) back to 10kg and thus the momentum of the system to 90.9kgm/s. Another 1kg of snow falling on the cart would reduce its speed to 8.26m/s. The lazy janitor does not remove the snow, so that its mass is now 11kg and the system's momentum is 100kgm/s. Another 1kg of snow falling on the cart would only reduce its speed to 8.33m/s. Therefore the difference is that the tidy janitor discards accelerated snow, reducing the momentum of his system. By using the increasing weight of his system to his advantage, the lazy janitor will lose speed less rapidly. So sad to see hard work go to waste...--TKG 21:13, 23 February 2009 (UTC)
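TKG's numbers can be reproduced directly from momentum conservation — a sketch under the same idealized assumptions (vertical snowfall, no friction, 10 kg cart, 1 kg of snow per step):

```python
def next_speed(mass, speed, snow):
    """Momentum conservation when `snow` kg of vertically falling
    snow lands on a cart of `mass` kg moving at `speed` m/s."""
    return mass * speed / (mass + snow)

# Tidy janitor: removes each 1 kg of snow, so mass resets to 10 kg each time.
v = 10.0
for _ in range(2):
    v = next_speed(10.0, v, 1.0)
print(round(v, 2))  # 8.26 m/s after two snowfalls

# Lazy janitor: snow stays aboard, so the moving mass grows.
v, m = 10.0, 10.0
for _ in range(2):
    v = next_speed(m, v, 1.0)
    m += 1.0
print(round(v, 2))  # 8.33 m/s after two snowfalls
```

The lazy cart keeps more speed because the tidy janitor throws away snow that has already been accelerated, discarding momentum along with it.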

  • Could a smart janitor speed himself up by sweeping snow backwards? Or would it be better to keep it onboard to take advantage of the extra weight?
  • The lazy worker slows down first, assuming finite energy. (Neither, if we assume infinite energy.) In a frictionless system, momentum and ground conditions are meaningless, absent collision events. Assume 1kg/s snowfall, 10kg starting weight, 10m/s starting velocity. One second passes. The lazy worker spends 50 joules to maintain velocity. The diligent worker spends 1 joule to move 1kg of snow 1 meter to push it off the side of his cart. The diligent worker is constantly clearing the snow as it falls, thus spending 1 joule per second to keep his cart at its initial weight, thus requiring no additional energy to continue the forward motion (no friction = momentum conserved). The lazy worker must spend 50 joules per second to maintain momentum. Ta! Some random internet guy 22:46, 24 February 2009 (UTC)

A Very Good Predictor (Newcomb's Paradox)[edit]

The answer to this question remains disputed. One side argues that it is always logical to also open Envelope B, because your choice of action does not affect the contents of Envelope A. The other side argues that the type of person who would only open Envelope A will benefit a thousandfold, and that it is categorically impossible to alter one's own tendencies while engaged in the game.

Essentially, this "puzzle" challenges peoples' notions of free will. I'm sorry if it is inappropriate for this page (not having a set-in-stone solution), but it provoked many a fun argument over breakfast. It's a paraphrased version of Newcomb's Paradox--Mark

Open both envelopes. Your choice itself should not affect the outcome. If it turns out that "A Very Good Predictor" truly is precognizant, then in the words of a Chinese Zen Master, "Wu" ("Mu" in Japanese). The question itself is flawed because it assumes the possibility of "true precognition" (a physical impossibility). ~John

I think it's fair to throw away any thoughts about free will (and many other things) when you see that the computer has predicted your every move over the five years. I'd feel very confident opening only Envelope A. By opening both I'd be essentially doubting the predicting ability of the computer by choosing to win the million only when the so far infallible computer is wrong, just to secure $1k. Apart from being stupid from an empirical sort-of viewpoint, I can't think of a single reason why this decision wouldn't be just as predictable as everything I have done up to this point, and therefore an almost certain return of only the $1k. Maybe I'm missing something or just stuck in one way of thinking? This doesn't feel like a paradox to me at all. -Nix

I'd open both. Then I get either $1,001,000 or $1,000 - win/win. Who gives a toss what the computer predicted?

The strategy that wins is to open only A. I'm not so attached to rationality that I would not abandon it when I'm in a case where it's a losing strategy. Then again, the outcome of this hypothetical (assuming an omniscient computer) is predetermined anyways, so I guess I'm lucky that I'm one of those that gets $1000000 :)

It's not a classic paradox, because it assumes that supposedly fundamental properties of the universe (that the future is not knowable, and/or that the present cannot affect the past) are false. In the hypothetical universe presented by this question, it seems that, for all practical purposes, what you do now can influence what the computer chose in the past; therefore, opening only envelope A ensures that the computer predicted you would do so, and opening only the one envelope is correct.

What if the computer was only 99.99999% accurate? While it'd be difficult with present technology, it wouldn't necessarily contradict any universal laws.
It’s a bit more complex in that case, depending on your reading of the problem. The usual (read: in my experience) meaning of “100%” is that the computer is always right (as opposed to simply all predictions we tested were accurate). In other words, the question is about what to do given the certainty of the prediction, not evaluating how accurate the prediction is given past predictions. The almost-accurate case is different because the meaning of the percentage isn’t as clear. If, e.g., the computer is always right, but any answer it gives has a uniform chance p (=1E-7 in your case) of being displayed wrongly (the chance being independent of the prediction asked), then it’s relatively simple: the expected value of one-boxing is p×0 + (1-p)×1E6 $, that of two-boxing is p×1.001E6 + (1-p)×1E3 $. If my calculations are correct, the expected value of one-boxing is better than that of two-boxing for p < 0.4995.
However, if the mechanism for error is different (e.g., not specified), then it’s no longer implicit in the terms of the problem that the causality is violable; there remains the possibility that the 1E-7 errors arise in cases where there is circularity in predictions (e.g., predicting the actions of actors aware of the prediction’s result). The essence of my comment, I guess, is that the answer depends on whether prediction errors depend on the kind of prediction or not: if they don’t, see above; if they do, you need to know how this affects this particular prediction. --bogdanb
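The display-error model described above is easy to tabulate. A sketch of that model: the $1,000 term is the envelope-B payout a two-boxer keeps when the prediction is displayed correctly:

```python
def one_box(p):
    """Expected value of opening only A when the displayed prediction
    is wrong with independent probability p."""
    return (1 - p) * 1_000_000          # $1M unless the display erred

def two_box(p):
    """Expected value of opening both envelopes: $1,001,000 when the
    display erred (it showed a one-box prediction), else just $1,000."""
    return p * 1_001_000 + (1 - p) * 1_000

# One-boxing beats two-boxing for error rates below the crossover point:
p_cross = 999_000 / 2_000_000           # solves one_box(p) == two_box(p)
print(p_cross)  # 0.4995

# At p = 1E-7, one-boxing is overwhelmingly better.
assert one_box(1e-7) > two_box(1e-7)
```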

Come on, this one is easy. I would first open only A. If and only if it contains $1,000,000 I would open B, and take the other $1000. The computer cannot make any mistakes. But in this case, if it had predicted I would open only A, because of that I would open both envelopes, but if it had predicted I would open both of them, I'd only open A. This would mean the computer cannot predict my actions, it would be a paradox for the computer, not for me, and the computer would go in an infinite loop or crash or something like that. This means that the contents of envelope A would be in superposition, or more probable: the game would never have been initiated and you wouldn't have to choose, so the paradox doesn't happen and well, then the game could be played so the paradox DOES happen and... this brings us into the realm of time and causality paradoxes.

Another weird thing: This paradox thing won't happen when, after you find out A contains the note, you get angry because you got $0 and angrily open B to get the other $1000...

As Cptn Janeway said: Time travel paradoxes - it all gives me a headache.

A valid point. Altering the question to prevent such gaming of the system! --Mark

So the way I look at this problem: the predictor isn't perfect (it is stated in the original description of the paradox that the predictor can make a mistake), so it has some accuracy (we may not know what it is, but it's something that could be measured). Take the following cases.

The very bad predictor: The contestant in the game doesn't know it, but we were on a budget and couldn't afford a future-predicting supercomputer, so instead we flip a coin: heads we put the money in envelope A, tails we don't. In this case your expected payout is 1000000 × 0.5 = 500000 if you pick only A, and 0.5 × 1001000 + 0.5 × 1000 = 501000 if you pick A and B (you always get B's $1000). It's better to pick A and B.

The not-so-good predictor: The last game was kind of lame, because the predictor was right only half the time, so picking both A and B was the best choice. We still don't want to spend a lot of money, so before the contestant is posed the big question, I go online and look up all his comments in Newcomb's Paradox discussions, and use these to decide whether or not to put the money in the envelope (for the sake of argument the contestant is unaware of my method and can't use it to his advantage). It's not a great predictor (people rarely act the way they say they will), but it gives us a slight edge, and we can now predict with 60% accuracy. I don't think such a predictor is unrealistic (a team of FBI profilers, or even the average response of 1000 people, could probably do this well). In this case the payout for picking A only is 1000000 × 0.6 = 600000, and the payout for picking both is 0.4 × 1001000 + 0.6 × 1000 = 401000. Even with this fairly poor predictor, because the money in envelope B is so small, it works out to your advantage to pick just A.
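Both scenarios can be tabulated in a few lines; the two-envelope figures here include the guaranteed $1,000 from envelope B (the function and its percent-based signature are mine, for illustration):

```python
def expected_payout(accuracy_pct, take_both):
    """Expected winnings (dollars) against a predictor that is right
    accuracy_pct percent of the time."""
    if take_both:
        # $1,001,000 when the predictor wrongly guessed a one-box,
        # plus envelope B's $1,000 when it correctly guessed a two-box.
        return ((100 - accuracy_pct) * 1_001_000 + accuracy_pct * 1_000) // 100
    # Only A: $1,000,000 whenever the predictor guessed right.
    return accuracy_pct * 1_000_000 // 100

# Coin-flip "predictor" (50%): two-boxing is better.
print(expected_payout(50, take_both=False))  # 500000
print(expected_payout(50, take_both=True))   # 501000
# A modest 60% predictor already flips the choice.
print(expected_payout(60, take_both=False))  # 600000
print(expected_payout(60, take_both=True))   # 401000
```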

Just my thoughts on the issue.


I'd open only B. Predict that, Mr. Fancy-Pants Predictor! -Tiax

     You can't, the rules state it's either A or A and B, you can't pick B by itself. -- 08:15, 4 March 2009 (UTC)

If the computer is 100% accurate, then the answer is that you can't choose one way or another; the problem assumes you don't have free will. If the computer is just 99% accurate, then there is assurance you have free will, and the computer is just really good at guessing. In that case you may as well take both envelopes, because the contents of envelope A aren't going to change after the fact.

--- If you assume that 100% accurate prediction precludes free will, then (I think) free will becomes impossible. Any decision either has a reason behind it which you follow every time (making 100% accurate predictions possible, even if very difficult to make), or is decided arbitrarily or could go either way, perhaps due to competing influences; that is not easy to predict accurately, but it makes your actions random chance. This reduces all actions to either certain or random (I would quite like someone to point out the flaw in this argument, it depresses me). If, however, you accept that decisions can be both 100% predictable and done by choice, the problem disappears. Anyway, relating this to the puzzle: we know the computer can predict the outcome of anything random in advance, because otherwise the computer would fail as soon as your life is affected by anything random (or there is nothing entirely random in the universe, in which case none of your actions are truly random either, and you will always act the same way in the same situation). Therefore, if your uncle is telling the truth, which is entirely possible, you pick box A and gain £1M. If he is not telling the truth, then either he put the money in box A, you pick A, and you get £1M, or he didn't, you pick A, and you can be insufferably smug towards your uncle about tricking the computer for a mere £1000 (serves him right). Therefore, as long as you value proving the prediction wrong at between £1000 and £1M, the correct answer is always A. In my case, I do, so I would pick A.

The problem's assumptions about free will are tangential: its explicit concern is reverse causality. That is to say, the problem does not assert that the Predictor's prediction for this game must be correct, it merely asserts that all previous predictions made by the Predictor have turned out to be true. Therefore, there must be no reverse causality in play.
-If you're the type of person who would take both envelopes because the contents of Envelope A aren't going to change based upon this, you'll almost certainly find that the Predictor guessed as such, and you'll have to make do with $1,000.
-If you're the type of person who would take just the one envelope, you'll also almost certainly find that the Predictor guesses as such, and you'll have to make do with $1,000,000.
Now, that's not saying you have no free will: your free will is in the decision of what type of person you are. If you wish to apply this directly to the game, consider what would happen if you deliberately acted contrary to your normal belief structure, and try and bear in mind the difficulty of that task.
-You're a one-enveloper, but inside the room you find a way to spontaneously change your life philosophy. You question your deepest, darkest views of the nature of existence. You take both envelopes, and make out with $1,001,000. That's a net increase of .1% over your previous winnings. Not very rewarding, in the grand scheme of things.
-You're a two-enveloper, but inside the room you find a way to spontaneously change your life philosophy. You question your deepest, darkest views of the nature of existence. You take one envelope, and make out with $0. That's a thousand dollars you just lost, out-of-pocket. Not rewarding at all.
While you can exercise your touted free will in this game, please be aware that doing so won't be a rewarding endeavor. --Mark

I'm not that good at explaining things, and the philosophical nature of this discussion doesn't help, so I'll just say the same thing several times in different ways and hope you can understand it.

As an Eternalist I figure if you say that you might as well pick A and B because the computer's choice is already certain, then you might as well say that it doesn't matter, as how much money you get is already certain. Causation is a simplification. It's correlation that really matters. Before I make my choice, I don't know what the computer predicted. My choice gives evidence of what the computer predicted. As such, I should make the choice that gives the most evidence that the computer predicted that I'd only choose A.

You people seem to be acting like there are three states: Uncertain, Unknown, and Known. Anything Uncertain, you can change. Anything Unknown, you can't change, but you don't know. Known is self-explanatory. It makes no sense to me. There is only one future. It's every bit as real and unchangeable as the past. There is only Unknown and Known. When you make a decision, you learn what you decided. That information allows you to predict other things. If you decide to drop a ball, you know it will fall. If you decide against it, you know it won't. Make the decision that tells you what you want to know.

The probability that the computer predicted that you'd only choose A given that you only chose A is higher than the probability that it predicted that you'd only choose A given that you chose both, therefore, you should only choose A. --DanielLC

I have to throw a wrench into the works. This prediction would be different from all previous predictions for two reasons. 1. You have never previously known about the predictor, and the knowledge that your actions are being predicted may change your decision/your decision-making process. 2. Even more, you are making a decision based on what is being predicted. Thus it is even more likely that either you or the predictor would come to a different decision/prediction. Now, the predictor may be able to cope - but we don't have evidence for that - the previous 100% accuracy does not tell us whether the predictor will likely be accurate here. Even if the predictor is accurate under such circumstances, you might get into a decision-making loop. Normally, you would choose only A (or A and B), but you know that the predictor knows that, so you can safely pick A and B. But you know that the predictor would know that you might change your mind, so you'd better not. But you know it'd know (etc etc etc). And maybe you'd know when you'd stop the cycle, but maybe you'd end up falling asleep and accidentally grabbing one or the other of the envelopes. - Stephen

No matter whether you know about the predictor, if it is 100% accurate surely no matter what second-guessing you attempt it will still predict correctly? In which case take A because you get more monies. I don't see this as removing free will, however - the computer knowing your every future move does not remove your ability to decide what to do - you still chose to do it, and had the option to choose the alternative. If the computer is not 100% accurate, it depends if you're a gamblin' man (or woman) or not. If you pick A/B you're guaranteed the 1000, but if you pick A, you then have a (accuracy %age) chance of getting the larger sum. Are you feeling lucky? - Simon

Re: Free will - The Heisenberg Uncertainty Principle means that, until you choose which envelopes to take, envelope A contains both the million dollars and the note, thus your decision is not only about whether to open envelope B, but also whether to collapse the wavefunction to the million dollars or to collapse it to the note. Note that the "cat" in this situation includes the contents of the envelope, the computer, and your Uncle. I'm still considering whether it also includes parts of yourself that you have not yet observed. - Pervach

Probability, and a barrel of balls[edit]

I posted this problem a couple of days ago... perhaps it doesn't flow with the mojo of this site... feel free to delete it. Although, since posting it, I've come up with (what I think is) a nifty recursive solution, but I can't quite see / explain the logic of it, yet. I'm antsy to see if someone can do better. If this problem still exists in another couple days, I'll post my weak-ish recursive solution.

I think I have a non-recursive (but with a summation) general solution for the probability of having seen exactly X balls after K takes. Obviously 0 if K < X or X > N, otherwise N! / (N-X)! / N^K × { sum A=1..X: [ (-1)^(X+A) × A^K / A! / (X-A)! ] }. For having seen all the balls, just replace X with N. That eliminates one term but I can't see an opportunity for any real simplification. I wrote an almost complete proof using induction as well as verified that the formula works in some basic cases, but I've been known to make mistakes... All that is too much to copy here and frankly, a mess. I'd love to see an elegant way of getting the solution (even just for X=N), or even how to simplify this one further.

Anyway, I'd consider removing this problem from here. I wouldn't call it a puzzle as much as a math problem, and quite heavy at that unless I missed a serious shortcut or one settles for the most basic recursive solution. Or maybe add a note in the problem description? --Nix
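For small N and K, the closed form above can be compared against brute-force enumeration of all N^K equally likely draw sequences — a quick check, not a proof:

```python
from itertools import product
from math import factorial

def p_exact_formula(N, K, X):
    """The closed form above: probability of seeing exactly X distinct
    balls in K draws (with replacement) from a barrel of N balls."""
    if K < X or X > N:
        return 0.0
    s = sum((-1) ** (X + A) * A ** K / (factorial(A) * factorial(X - A))
            for A in range(1, X + 1))
    return factorial(N) / factorial(N - X) / N ** K * s

def p_exact_brute(N, K, X):
    """Enumerate all N^K equally likely draw sequences directly."""
    hits = sum(1 for seq in product(range(N), repeat=K)
               if len(set(seq)) == X)
    return hits / N ** K

for N in (2, 3, 4):
    for K in range(1, 6):
        for X in range(1, N + 1):
            assert abs(p_exact_formula(N, K, X) - p_exact_brute(N, K, X)) < 1e-9
```

The agreement over these small cases supports the formula; setting X = N gives the probability of having seen every ball.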

It seems to me like the problem shouldn't be too difficult, but I am having problems with it. The sample space has N^K permutations (for each pick there are N possibilities and this is repeated K times), but I now question my initial conclusion that it therefore has N^K / K! combinations, since that depends on how many of the balls you picked were the same and how many were different. The number of successful combinations is 0 for K < N (obviously), and I believe N^(K-N) otherwise. If this is true, my initial guess for combinations must be false, because the probability for K > N would be K! / N^N, which often gives results greater than one. Any ideas? 21:16, 12 March 2009 (UTC)

I think I've figured it out: for K >= N fixed, the outcomes that don't show all the symbols are N(N-1)^K (N times the number of outcomes that don't contain a fixed symbol), so the outcomes that contain all the symbols are N^K - N(N-1)^K, and thus the probability is 1 - N((N-1)/N)^K (which goes to one as K goes to infinity, which is good, because in an infinite trial eventually all N balls will appear). Bunder 00:01, 14 April 2009 (UTC)

The solution I found is P(K,N) = (K choose N)/(N+K choose N). This distribution has the correct asymptotic conditions.

Derivation: If you imagine a set of N urns, each distinguishable by number, and K indistinguishable balls, then the total number of ways to draw a set of K ping-pong balls is equivalent to the number of ways to place the K indistinguishable balls in the N urns (the number of balls in each urn corresponds to the number of times you draw the ping-pong ball with that specific number). The number of ways to do this is (N+K choose N), therefore the total sample space is (N+K choose N).

Finding the number of outcomes with all balls drawn at least once is equivalent to the above, with the additional requirement that all urns have at least one ball. Thus, we can pre-allocate one ball per urn, then treat the problem as allocating the remainder of the balls as we did earlier, with no constraints. There are K-N balls left to place in N urns after pre-allocation, thus the event space is (K choose N).


Here is a plot for 10 balls:


--Preetum Nakkiran

I think Preetum's logic is correct. The number of ways to place K indistinguishable balls in N urns is (N+K-1 choose K). When you need at least one ball in each urn, K becomes K-N. Therefore, the correct answer is


This formula gives the correct answer when N = 1. P(K,1)=1 (as expected).


The bartender[edit]

The daughters are 2, 6, and 6. Knowing the product and sum of their ages doesn't solve the problem for the mathematician, which means that there must be at least two possible ways of distributing the factors of 72 which lead to identical sums. These are 3, 3, 8 and 2, 6, 6. The last bit of information tells you that there is a "youngest" daughter, which implies that the 3, 3, 8 combination can be ruled out. Of course, this is also debatable - even when twins are born, one is older, one younger. --Mark
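Mark's uniqueness claim can be checked mechanically. This sketch (my own check, not part of the original discussion) enumerates all sorted triples with product 72 and groups them by sum; only one sum occurs more than once.

```python
from collections import defaultdict

def ambiguous_sums(product_target):
    """Group all age triples a <= b <= c with a*b*c == product_target
    by their sum, and keep only the sums shared by more than one triple."""
    by_sum = defaultdict(list)
    for a in range(1, product_target + 1):
        if product_target % a:
            continue
        for b in range(a, product_target // a + 1):
            if (product_target // a) % b:
                continue
            c = product_target // (a * b)
            if c >= b:
                by_sum[a + b + c].append((a, b, c))
    return {s: ts for s, ts in by_sum.items() if len(ts) > 1}

amb = ambiguous_sums(72)
print(amb)  # {14: [(2, 6, 6), (3, 3, 8)]}
```

So 14 is the only sum the mathematician could know and still be stuck on, exactly as Mark says.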

That’s why I’ve written that age must be thought of as an integer in this problem. As some people pointed out to me, parents often make a clearer distinction about who’s older when they have twins, so this is a weak point of the problem. I still like it though. --OP

In the puzzle, which I've seen before, it says that the sum of their ages equals something. In this case it would be 14. But in the version of the puzzle posted on the page, there's nothing saying that it must be 14. Why can't their ages be 1, 8 and 9? All the conditions are still met: multiplying gives 72, there is a youngest (which in this case doesn't matter), and you can still add their ages up to an integer, which is the only constraint on the house number (I don't know of any negative house numbers). I'm pretty sure the puzzle is supposed to include the fact that the house number is 14. Does anybody see a problem with my logic? --Mat

Wrong talk section: fixed. *coughs* We assume the mathematician knows where the bartender lives. If the mathematician knew they lived in a house with house number 18, he would reach your same conclusion: that their ages are 1, 8 and 9. However, we know from the statement of the problem that the mathematician is unable to determine their ages from the house number and the product. This means that there must be at least two sets of three ages which sum to the same number "A" and multiply together to yield 72. The only number A which satisfies this condition is 14, and the two sets of three ages are 3, 3, 8 and 2, 6, 6. The bartender's last statement allows the mathematician to determine which of these is the correct set, based upon the existence of a "youngest child." --Mark

It's foolish to "assume" that the mathematician knows where the bartender lives. (If he knew that, it's likely he'd already know that the bartender had daughters, and their ages.) The puzzle should mention that the bartender gives the mathematician his address, but that even with this information the mathematician can't figure it out. So the puzzle doesn't have to include the fact that the house number is 14, but it DOES need to include that the house number is known by the mathematician. Something like this should never just be left to assumption, especially since the puzzle involves two people who are apparently not very familiar with each other. --Kyle

I definitely agree with Kyle here. The missing statement that the mathematician somehow knew the address of the bartender is the only thing that kept me from getting the answer. If it were worded "Then I should tell you that the sum of their ages is equal to the street number of this bar," then the mathematician could step outside and observe the street number. This would allow an elegant way for the mathematician to learn the sum of the ages without the puzzle solver knowing. --Greg

I agree with Kyle & Greg. If there's one more person, I think he/she should delete these 3 comments and change it. --Michael

Actually I originally meant it the way Greg stated it, but somehow I made the assumption that the family lives in the house of the bar, which is not very likely if you think about it. I’ve changed the puzzle to the sentence Greg suggested. If someone feels like cleaning up this discussion section, he/she can delete these comments, as the problem should be solved now. --OP

Having solved this, I can't help but feel a little bit short-changed, because the final insight which allows the mathematician in the story to solve the problem (i.e. the existence of a 'youngest' daughter) is the same one that we use ourselves. This means that the final sentence "Now the mathematician knows their ages" is redundant, when it could potentially be used for an extra twist: if there had been more than one sum that had multiple solutions, but only one of those sums led to a unique solution on hearing that there is a youngest daughter, then the fact that the mathematician was able to solve it becomes critical information for us. So, here's a meta-puzzle for you (which I haven't solved): is there a way of re-posing the puzzle with a different product and/or number of daughters so that it satisfies this petty unreasonable whim of mine? 03:05, 16 August 2009 (UTC)

  • How about this? The product of the three daughters' ages is 360. Then (assuming that my 3:30am pencil-and-paper calculations are correct) there are five pairs with equal sums: 36/5/2 and 30/12/1 (sum of 43), 20/6/3 and 15/12/2 (29), 15/6/4 and 12/10/3 (25), 12/6/5 and 10/9/4 (23), and 10/6/6 and 9/8/5 (22). The mathematician knows that the sum is in fact 22, so he deduces from the youngest daughter comment that the daughters are 9, 8, and 5 years old rather than 10, 6, and 6. None of the other pairs feature twin daughters, so this should be the unique solution. Ravi12346 08:43, 29 August 2009 (UTC)
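Ravi12346's pencil-and-paper list can be double-checked with the same kind of enumeration (again my own check; the helper below is mine, not from the wiki). For a product of 360, the sum 22 is ambiguous between 10/6/6 and 9/8/5, and only the former contains twins.

```python
from collections import defaultdict

def triples_by_sum(product_target):
    """All triples a <= b <= c with the given product, grouped by a+b+c."""
    by_sum = defaultdict(list)
    for a in range(1, product_target + 1):
        if product_target % a:
            continue
        for b in range(a, product_target // a + 1):
            if (product_target // a) % b:
                continue
            c = product_target // (a * b)
            if c >= b:
                by_sum[a + b + c].append((a, b, c))
    return by_sum

by_sum = triples_by_sum(360)
# Each of Ravi's five sums really is ambiguous...
for s in (43, 29, 25, 23, 22):
    assert len(by_sum[s]) >= 2
# ...and at sum 22, only one of the two triples has twins.
assert sorted(by_sum[22]) == [(5, 8, 9), (6, 6, 10)]
twins = [t for t in by_sum[22] if len(set(t)) < 3]
assert twins == [(6, 6, 10)]
print("Ravi's list checks out")
```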

The father[edit]

Assuming the typical human gestation period of nine months, and assuming that a fertilized embryo counts as a child (debatable), and assuming that age can be considered a negative quantity (debatable), and assuming that a genetic link is enough to establish "fatherhood," the father is standing naked in the restroom, wondering if he should tell the girl that his condom broke. --Mark

Since the daughter will be 5 years, 3 months old in six years, the mother is just now starting her second trimester and is 20 years, 3 months old. So no, she was legal and (arguably, I suppose) not a "girl". And I think the problem should be rephrased to "In six years, a mother's age will be exactly five times the age of her child, and she will be 21 years older than her child. Where is the father?" That phrasing will avoid the current problem where the daughter's age seems to be defined as negative, which isn't exactly an ordinary definition.
I don't understand the "Where is the father?" question, though. What is the implication here? 05:56, 14 March 2009 (UTC)
Either "having sex with the mother" because the child's age is -3/4 years (nine months), or "in prison" because the child's age is constant --J0eCool
Yeah, I somehow added wrong. That's the correct solution. 20:08, 14 March 2009 (UTC)

This puzzle is dumb.

I thought this was a fun detour from what you initially expect when you first read the question, regardless of some of the debatable discrepancies. --Greg

Just thought I'd point out that the question is flawed in that the fact that the daughter is being conceived does not imply any particular, nontrivial location for the father. --Ryan

Another problem: the 9 months is counted from when Mom had her last period, not from date of conception. Further, there is significant variation around the expected birth date: some kids come months early. I was 4 weeks late myself. => Dad could be anywhere. I would also quibble that if you're going to pretend we know the exact day from the answer, we ought to be informed that Mom is 21.00 years older. Boo. --DW

Gestation time is 40 weeks not nine months, so the father is at the pub, blissfully unaware that he knocked some chick up last week. Also, someone needs to look up what father means. --SirEel

The mutilated chessboard and dominoes[edit]

Each domino would have to cover one black square and one white square - so in total they should cover 31 white and 31 black squares. But the mutilated chessboard has 32 white and 30 black squares (or the other way around) so they can't be used to cover it.

Now imagine that we colour the squares of the chessboard red, green and blue, in the following arrangement:









Any 3 by 1 tile must cover one square of each colour, so in total the tiles cover 21 red, 21 green and 21 blue squares. There are 21 red, 22 green, and 21 blue squares in total, so the leftover square must be green. That must remain true even if we rotate the pattern above by ninety degrees, so the only places where the leftover square can be are those marked with *s in the following pattern:









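The colour counts above are easy to verify: assuming the natural 3-colouring where square (row, col) gets colour (row + col) mod 3, a quick count over the 8 by 8 board gives 21/22/21.

```python
# Count squares of each colour under the colouring (row + col) mod 3
# on a standard 8x8 board.
counts = [0, 0, 0]
for row in range(8):
    for col in range(8):
        counts[(row + col) % 3] += 1

print(counts)  # one colour appears 22 times, the other two 21 times each
assert sorted(counts) == [21, 21, 22]
```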
Two Dice[edit]

The answer is 1/11. The question can be formulated equivalently as follows: two six-sided dice are rolled. Given that at least one of them turns out to be a six, what is the probability that both turn out to be sixes? There are exactly 11 equiprobable rolls that yield at least one six: (1,6), (2,6), (3,6), (4,6), (5,6), (6,1), (6,2), (6,3), (6,4), (6,5), (6,6). One of these yields two sixes. --Tennenrishin 15:33, 3 March 2009 (UTC)

P(A|B) = P(A&B)/P(B) = (1/36)/(11/36) = 1/11
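The conditional probability can also be confirmed by enumerating all 36 equally likely rolls, as a small exactness check:

```python
from fractions import Fraction
from itertools import product

rolls = list(product(range(1, 7), repeat=2))     # all 36 outcomes
at_least_one_six = [r for r in rolls if 6 in r]  # the conditioning event
both_sixes = [r for r in at_least_one_six if r == (6, 6)]

p = Fraction(len(both_sixes), len(at_least_one_six))
print(p)  # 1/11
assert p == Fraction(1, 11)
```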

This problem is not tricky in terms of the math, but the wording appears to be intentionally misleading. I'm not sure that's fair. A better wording would be "I roll one dice and it is a 6; if I roll a second dice, what is the probability that they are both six?" This isn't a hard puzzle but more of a math story problem. I could imagine someone reading this and thinking it was assumed that the first dice was a six and therefore did not need to be incorporated into the probability assessment. 02:04, 11 March 2009 (UTC)

I am not a grammar nazi, but please, the singular is "die" and the plural, "dice."
Actually, no. That would be an even worse wording. With your wording, the probability would actually be 1/6. The reason it is not 1/6 in the original wording is because EITHER of the dice can be a 6. If I roll one dice and it's already a 6, the second one has a 1/6 chance. (I'm tired of arguing so if you don't believe me run a simulation).

This is a slight variation of the Two Beagles problem. I suggest deleting it. -- DanielLC

A Single Toss[edit]

I made this one up myself - hope you like it. Let me know if you need any clarification. Harfatum 22:28, 4 March 2009 (UTC) (Luke is the first one to get the first part, nice job)

I made a few technical clarifications, and added the discrete question. You can solve the continuous question without integrals if you view it as a limit of discrete questions and use summation identities. Harfatum 19:33, 17 March 2009 (UTC)

It sounds like you are stating it is possible to transform an integral into the limit of a Riemann sum, which is hardly surprising. I still think something needs to be added stating some method of determining its edge. I mean, is the edge normally distributed? I maintain that without further qualifications, I would not play for any amount of money, because I would assume that with such a bent-up coin the man is probably intentionally trying to rip me off. 19:39, 18 March 2009 (UTC)
What I am getting at is that if you write down the discrete probabilities, then you can get a rational polynomial function of n for the expected value (for discrete p-values in 0/n, 1/n ... n/n), which you can eyeball and say goes to 2/3 when n is large. The edge? What exactly do you mean? I changed this guy from shady to confused, to emphasize that we have no idea if he's trying to rip us off. I could reframe the problem if it's causing confusion. Harfatum 20:46, 18 March 2009 (UTC)

$0, of course. Call the likelihood of it landing on heads "1/N". Just because it landed on heads for me doesn't mean it didn't land on tails for the N-1 other people he has propositioned so far. Therefore, since N can be arbitrarily large, any amount paid to this person can result in an arbitrarily large net deficit. Therefore, this game is not worth playing for any price, other than "free," unless a cap is set on "N." This is true for any set of witnessed flips, even if it lands heads-up 100 times in a row (although such may lead you to expect "N" to be rather small). --Mark

I'm going to say 2/3 of a dollar: Attempt to find the probability that in any given toss the coin will land heads. Given that we know that after one trial it did land heads, I think it's fair to bend things a bit and assume that we're going to get a linear distribution - that is, the chances of it being a fair coin are double the chances of it being only 25% weighted towards heads, and half the chances of it being fixed for heads. Because all probabilities have to sum to one, we get a continuous probability distribution (of probabilities) f(x)=2x for 0<x<1, and so the expected value for the probability of heads coming up will be the integral of xf(x)dx between these limits, which is 2/3. If you get paid 1$ for each success, and the probability of success is 2/3, then the expected payoff will be $2/3. --Luke
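Luke's integral can be checked numerically. The sketch below assumes, as Luke does, the linear prior f(p) = 2p after one observed head (which is exactly the posterior you get from a uniform prior and one head), and approximates E[p] = ∫ p·2p dp with a midpoint Riemann sum:

```python
# Numerically approximate E[p] under the density f(p) = 2p on [0, 1],
# i.e. the posterior after one head given a uniform prior on p.
n = 1_000_000
dp = 1.0 / n
expected = sum(((i + 0.5) * dp) * 2 * ((i + 0.5) * dp) * dp
               for i in range(n))

print(expected)  # close to 2/3
assert abs(expected - 2 / 3) < 1e-6
```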

Eh, that's a decent way of looking at it, except that it seems reasonable to claim the man is intentionally trying to rip you off. It really is more likely the coin is fixed for tails than fair.
And Mark is right to some extent in that without knowing the distribution, we can't answer this. We also don't know, for example, if the coin has a memory, which would really complicate things. Since it's so bent, it might even bend more each flip, completely randomizing things! Obviously in the real world, it is best never to bet on such a coin, since anybody trying to get you to bet on it is way too shady to trust. 06:09, 14 March 2009 (UTC)

Outside of theory, just some food for thought: You Can Load a Die, But You Can't Bias a Coin. -- Andrew

I've worked this out myself before. There is a slight error in the problem. If you can assume nothing about the coin, you cannot ever get a probability. What I did was assume the probability of the coin landing on heads had a constant probability distribution. -- Daniel Carrier

I agree. The problem seems to be asking us to choose an uninformative prior probability, but there seems to be disagreement about how to choose such distributions. (wikipedia link) It isn't obvious to me that the uniform distribution is especially reasonable. --Allen 02:54, 8 May 2009 (UTC)
Maybe we can rephrase the puzzle to be about a coin found on the street: pick it up, play the game with a friend. I would not choose a flat prior probability, but it is a bit more reasonable to choose something close to it. -- 17:41, 16 July 2012 (EDT)
After informally discussing it with a few friends, I think one does need to specify something about priors to make this problem tractable at all. Assuming a linear a priori distribution is not obvious, and different assumptions lead to different answers -- specifying the a priori distribution in the problem doesn't just make it easier, it's necessary. Also, it's simpler if you do away with the guy offering the game -- he has every incentive to cheat -- and say that you find a bent coin on the street, flip it once, and it lands heads up. That avoids people speculating on the shady guy's strategy (e.g. only offering the game for certain probability ranges, only offering it if the first toss really does show heads, and so on).

Bananas and a Hungry Camel[edit]

Just found this problem today. Think I've got a good answer but not too sure about the proof. I'm thinking about running a genetic algorithm on it tomorrow to see if the computer finds anything better. --Luke

I got a result of 533 bananas. I'm pretty sure this is optimal but have no formal proof. While there are more than 2 000 bananas, it takes 5 trips to move all of them: move a full load, go back, move another full load, go back, move the rest. That's 5 bananas per mile, therefore 200 miles consume the first thousand bananas. It doesn't matter if this is done in steps of one mile, one banana (1/5 mi), or all 200 miles at once. From here on, only 3 trips are needed, therefore the farmer gets 333 miles (and one third) with the next thousand bananas and is at 533 miles with 1 000 bananas left. The rest goes with one trip which uses up 467 bananas, leaving 533 bananas to sell (534 if we count the fractional mile and the camel only needs to eat after moving).

The farmer is also left with one hungry camel with no food for the trip back home. If he would also like to get the camel back home, I think he'd need about 3 667 bananas to start with (leaving nothing to sell) or more. This is with leaving bananas along the way for the return trip, and is obviously cheaper than going 2 000 miles straight (which would take 7 673 bananas). --Nix

What are you all talking about? The puzzle says that the market is 1000 miles away. The camel can carry 1000 bananas at once and it eats one banana for each mile it walks. So at the end of the 1000 miles, there wouldn't be a single banana left, because the camel would have eaten them all. Therefore the answer is 0. --Brusko

You're depositing bananas out in the desert, Brusko. Via Nix's solution, which I agree is best, you take 1000 bananas out 200 miles, drop off 600 of them, head back 200 miles, reload, take 1000 bananas out 200 miles, drop off 600 of them, head back 200 miles, reload, take 1000 bananas out 200 miles, load 200 of the 1200 you've dropped there, etc., etc. --Mark

The problem as it is currently written doesn't say when the camel has to eat the bananas, only that it has to eat 1 per mile it goes. That means that I could get 1000 bananas to the market, return, and then feed the camel 2000 bananas when we get home. I assume that the question is mis-worded, though, and that it should be fixed to read "but needs to eat a banana to refuel immediately after every mile he walks" or some such. --Stephen

It's more than clear, Stephen. If you say that you need one bread per day for food, it's not OK to give you 365*10 breads after 10 years, is it? Or to stuff 7 breads down your throat every Monday or so. --Stalker

I think that the maximal number that can be delivered to the market is 400. Here's how you do it: 1. Take 1000 bananas from home (leaving 2000 behind). 2. Carry them 400 miles into the desert (600 bananas left after this trip). 3. Leave 200 bananas behind. 4. Return home (eating the remaining 400). 5. Repeat 1-4. Now we have 1000 bananas at home and 400 in the middle of the desert 6. Take the remaining 1000 bananas. 7. Walk 400 miles to the temporary storage point. At this point the camel carries 600 bananas. 8. Load the 400 bananas on the camel (which is now fully loaded with 1000 bananas) and head to the market (600 miles). 9. The camel eats 600 bananas during this trip, leaving 400 the farmer can sell.


I independently arrived at Nix's answer, 533 bananas. I'm convinced that strategy is optimal and Nix's explanation is correct. This is a good illustration of the teapot principle of mathematics. How do you make tea if your teapot is on the floor? Put it on the stove. What do you do if it's on the counter? Put it on the floor, because you already know the solution for that problem ;-) If we had 1,000 bananas at a distance of x miles, the solution is obvious. So, how do we get from 3,000 bananas at 1,000 miles to 1,000 bananas at x miles? We need an intermediate step of 2,000 bananas at a distance of y miles. Nix correctly pointed out that transporting more than 2,000 bananas requires five one-way trips, so we lose 5 bananas per mile, so point "y" is 200 from the start, 800 from the finish. Now the problem reads "You have 2,000 bananas 800 miles from the market." That's an easier problem. Transporting more than 1,000 bananas requires three one-way trips per banana, which gets us another 333 miles. Now we are 467 miles from the market and we have 1,001 bananas. Leave one behind and you can get 533 to market. That's my best attempt at a formal proof. --Ralph
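Nix's and Ralph's argument boils down to a short computation: while more than 2000 bananas remain you burn 5 per mile, while more than 1000 remain you burn 3, and after that 1 per mile. A sketch of my own (fractional miles allowed):

```python
# Burn rate: 5 bananas/mile above 2000 bananas, 3 above 1000, then 1.
bananas, distance_left = 3000.0, 1000.0

for threshold, rate in ((2000.0, 5), (1000.0, 3)):
    miles = (bananas - threshold) / rate   # miles until the next threshold
    bananas, distance_left = threshold, distance_left - miles

bananas -= distance_left                   # final leg at 1 banana/mile
print(bananas)  # 533.33... bananas left to sell
assert abs(bananas - 1600 / 3) < 1e-9
```

With whole bananas only, that rounds down to the 533 quoted above.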

The general form of solution for this sort of puzzle is as follows.

For each mile, the minimum number of bananas to cross it is simply 2*RoundUp(Bananas/Capacity, 0) - 1. However, this doesn't take into account things like: with 1001 bananas you're better off leaving 1 behind and doing 1 banana/mile for 1000 miles than 3 bananas/mile for 1 mile and then 1 banana/mile for 998. Essentially, if you've left 1 or 2 bananas behind, going back for them isn't worth it.

To take account of the times when it's more efficient to waste bananas you need a test

Bananas = 3000
Capacity = 1000
Distance = 0
Waste = 0
NoBananas = False

Do Until NoBananas

   pRate = 2 * Round_Up(Bananas / Capacity) - 1
   RateLow = (Bananas - 1) Mod Capacity
   RateTest = RateLow < 2 And RateLow < pRate

   If RateTest Then
       ' Cheaper to abandon the stragglers than make extra trips for them
       Rate = pRate - 2
       Bananas = Bananas - (Rate + RateLow + 1)
       Waste = Waste + RateLow + 1
   Else
       Rate = pRate
       Bananas = Bananas - Rate
   End If

   Distance = Distance + 1
   NoBananas = (Bananas = 1)
Loop
Distance = Distance + 1
Print Distance

The above doesn't work for 1 banana, because that's a special case, but after that it works fine and calculates how many bananas you leave behind. Note this won't always give you the most efficient route to the end distance; it'll give you the guaranteed furthest you can get. For instance, around the 1,000,000-banana mark, everything from 998,294 to 1,000,294 bananas will get you 4432 miles. However, at 998,294 bananas you've eaten 998,294 bananas, but at 1,000,294 you've eaten 1,000,288. Clearly you've unnecessarily used 1994 bananas and necessarily wasted 6. I've only gone 4456 miles (1,047,432 bananas) and it's probably 2100 or so bananas for the next mile.

I don't know how to do the maths on a sum like that for a simple formulaic expression; it'll probably need to be some form of rounded/floored logarithm.

Would love to see some logic on return journeys, which effectively allows you to leave bananas at D+(D-d) miles into the journey. - SPACKlick


This problem seems to be a special case of a related problem above, "Probability, and a barrel of balls". A general solution, although unverified, has been offered there.

The answer is 23. If you want to work it out, the easiest way is to answer the reverse problem, i.e. what is the probability that, in a group of n people, no 2 people share the same birthday.

The calculation is (ignoring leap years) 1 * (364/365) * (363/365) * ... * ((365-n+1)/365). When n = 23 this number is less than 0.5, so there is less than 1/2 chance of 2 people NOT having the same birthday, or conversely there is a greater than 50% chance that 2 people WILL share the same birthday.
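The crossing at 23 is straightforward to reproduce (this is the standard birthday-problem computation, not anything specific to this page):

```python
def p_no_shared_birthday(n):
    """Probability that n people all have distinct birthdays (365-day year)."""
    p = 1.0
    for i in range(n):
        p *= (365 - i) / 365
    return p

# 23 is the smallest group where a shared birthday is more likely than not.
assert p_no_shared_birthday(22) > 0.5
assert p_no_shared_birthday(23) < 0.5
print(1 - p_no_shared_birthday(23))  # about 0.507
```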

"How many people do we need to have a probability of 1/2 that each day of the year is a birthday ?" Can be solved using standart counting methods of combinatorics. We are distributing indistinguishable balls (people) among distinguishable urns (days), to get all the possibilities distributing people among the days in complete randomness. If we are dividing the number of possibilities doing this in a surjective (each day is hit AT LEAST once) way by the first number we get the probability (assuming humans are born equally often on all days of the year; also neglecting leap years). Let n be the number of people, then the whole equation will be Obviously there are no solutions if there are less then 365 people (which shouldn't surprise us, as you can't get 365 birthdays in the year with only 364 people).

((n-1) chose (365-1)) / ((365 + n - 1) chose n) >= 0.5

If you care, solve for n. If you could prove the numerator grows more slowly than the denominator for all n > 365, there would be no solution. Otherwise somebody could go ahead and write a program solving that thing ;)

Update: after some fiddling with Java I got a result that with 191677 people, we have a probability of over 50% that each day in the year is a birthday, which I find quite amusing.^^ -- Mel

Something is wrong with your formula. Using simple simulations of 1000 random trials each, I got the results that with 2500 people it's already 66%, with 3000 it's 90%, and with 2000 only 20%. Also, shouldn't the case of n=365 yield 365!/365^365, which it doesn't?
To get an exact number I used my formula from "Probability, and a barrel of balls". Barring any problems from floating point arithmetic (I know, could be avoided), the exact point of 50% is between 2286 (49.941%) and 2287 (50.037%) people. I'm still interested to see that simpler formula for this. --Nix
Yeah, I was wrong. I was assuming that putting all birthdays on one day has the same probability as, for example, putting half of them on day #1 and half of them on day #2, which is of course incorrect. After treating all the people as individuals I got the same number as you did, via (365! * S(n,365))/365^n, where S(n,k) are the Stirling numbers of the second kind. --Mel
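Nix's exact crossing point can also be reproduced with inclusion-exclusion rather than Stirling numbers: P(all 365 days covered by n people) = sum over j of (-1)^j C(365,j) ((365-j)/365)^n. The snippet below is my own check of the numbers quoted above; it confirms the 50% point falls between 2286 and 2287 people.

```python
from math import comb

def p_all_days_covered(n, days=365):
    """Inclusion-exclusion: probability n uniform birthdays hit every day."""
    return sum((-1) ** j * comb(days, j) * ((days - j) / days) ** n
               for j in range(days + 1))

# The quoted exact crossing: ~49.94% at 2286 people, ~50.04% at 2287.
assert p_all_days_covered(2286) < 0.5 < p_all_days_covered(2287)
print(p_all_days_covered(2286), p_all_days_covered(2287))
```

The alternating terms here shrink very quickly, so the floating-point sum is well-behaved despite the huge binomial coefficients.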

I think the problem can be more easily viewed as: 'what is the average number of times you must roll a 365-sided die before you see all 365 numbers come up?' which is [sum from (i=0 to 364) of 365/(365-i)] ≈ 2364.64. -Peter
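Peter's sum is the standard coupon-collector expectation, 365 times the 365th harmonic number; evaluating it directly:

```python
# Expected number of rolls of a fair 365-sided die to see every face:
# the sum over the 365 "collection stages" of the expected wait for a
# not-yet-seen face. This equals 365 * H_365.
expected = sum(365 / (365 - i) for i in range(365))

print(expected)  # roughly 2364.6, matching Peter's figure
assert 2364.6 < expected < 2364.7
```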

Is this a leap-year? If so it would make the number of times significantly higher. -Ian

The Bee Problem[edit]

Easiest way to solve this is to recognize that the cyclist travels at 2 m/s, for 20 m. Therefore, the cyclist arrives in 10 s. If the speed of the bee is 5 m/s, and it also travels for 10 s, then the bee travels 50 m, even though its total displacement remains 20 m. --Mark

Bee travels 2.5 times as fast as the bike, so for the bike to go 20m the bee goes 50. --Luke
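The "hard way" (summing the bee's individual legs) converges to the same 50 m. A sketch of my own, assuming the bee shuttles between the rider (2 m/s) and the fixed wall 20 m away at 5 m/s:

```python
# Sum the bee's back-and-forth legs instead of using the 10 s shortcut.
bike, wall = 0.0, 20.0
bike_v, bee_v = 2.0, 5.0
bee = bike
total = 0.0

for _ in range(200):                 # the legs shrink geometrically
    # Leg out: bee flies from the rider to the wall.
    t = (wall - bee) / bee_v
    total += bee_v * t
    bike += bike_v * t
    bee = wall
    # Leg back: bee flies toward the approaching rider.
    t = (bee - bike) / (bee_v + bike_v)
    total += bee_v * t
    bike += bike_v * t
    bee = bike

print(total)  # converges to 50.0 m
assert abs(total - 50.0) < 1e-6
```

Each round trip shrinks the remaining gap by a constant factor, so Zeno's infinitely many legs still sum to a finite 50 m.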

You're both wrong- the answer is obviously infinite. --Zeno

I hope the person on the bike was wearing a helmet.

Math Puzzles[edit]

Would it be appropriate to create a new page or a new section of this page devoted to math puzzles, problems, and proofs, rather than logical ones? I think these don't really belong mixed in with the logic puzzles, but there are plenty of good ones, several of which I can think of off hand that are difficult at many levels of math. For example, can you find an exact value (in terms only of natural numbers, +, -, *, /, and ^) of the sine of one degree? What is this value? Can you prove it? 06:19, 14 March 2009 (UTC)

Cereal Box Game[edit]

Every time I see this game on the box I try to determine the probability of winning, and I just don't know how to approach it. Combinations and permutations are pretty easy, but the twist to this puzzle is that you may pick out a color more than once.

For example, on row 3, you need green, blue, and yellow to advance. You may pick a green, a blue, another green, another blue, yet another blue, and so on. You continue until you have at least one of each color shown, or you pick one of any color not shown. You could pick a hundred greens and blues, but a single red, orange, or purple ends the game.

I hope this is a worthy puzzle for the page. -Taren

I think the probability is 1/6 * 1/15 * 1/20 * 1/15 * 1/6 = 1/162000, where each of the probabilities is the chance of winning one level. The probability of winning a level is equal to the probability that the allowed colors are the colors to appear first. This probability is of course the same for all combinations of colors, and therefore it is 1 divided by the number of ways to select that many colors out of 6 possible colors. For example, for level 4 this would be the number of ways to select 2 out of 6 colors, which is equal to 15. This way you can safely ignore the twist of picking out the same color more than once. - K

Reverse the problem. For each stage, what is the chance of losing? You can guarantee that you have a 1/6 chance of picking any color at any time, so you don't have to keep track - therefore the probability of losing the first round is 1/6. Second round: 2/6. Winning the game = 1-(1/6 * 2/6 * 3/6 * 4/6 * 5/6)

I got 1/162000 as well. A simple mathematical explanation:

To pull the first color of the first row, you have a 5/6 chance to get it and a 1/6 chance to lose. The next color has a 4/6 chance to be drawn, a 1/6 chance of a draw you can ignore (drawing the previous color), and a 1/6 chance to lose; this can be simplified to a 4/5 chance. The next color is done the same way, but with a 2/6 chance of an ignorable draw, and is simplified to 3/4. The pattern continues. The odds for the first row: 5/6 * 4/5 * 3/4 * 2/3 * 1/2 = 1/6.

The second row has a 4/6 chance to get the first color, then a 3/6 chance to get the second color, a 1/6 chance of an ignorable draw, and a 2/6 chance to lose. A similar pattern: 4/6 * 3/5 * 2/4 * 1/3 = 1/15

third row: 3/6 * 2/5 * 1/4 = 1/20

fourth: 2/6 * 1/5 = 1/15

fifth: 1/6

multiplied together = 1/162000
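K's shortcut and the sequential computation above agree; here is an exact-arithmetic check of my own, assuming each row k requires a fixed set of m distinct "good" colors (m = 5, 4, 3, 2, 1) to appear before any of the other 6 - m colors:

```python
from fractions import Fraction
from math import comb

# K's argument: the probability of winning a row that allows m of the
# 6 colors is 1 / C(6, m), since repeats of allowed colors change nothing.
rows = [5, 4, 3, 2, 1]
p_win = Fraction(1)
for m in rows:
    p_win *= Fraction(1, comb(6, m))

print(p_win)  # 1/162000
assert p_win == Fraction(1, 162000)
```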

I think you are wrong! You are calculating the probability that you win the game by drawing every color only once per stage! - protos_drone

No, they account for it. It doesn't show up in the calculations because it has no effect on the game. If you pull a yellow in the first trial and then pull another yellow literally nothing has changed in the problem. For this reason you can simply ignore it.


(all trigonometric calculations are in radians)
0 = 0
cos(0) = 1
tan(cos(0)) ~= 1.5574....
sqrt(tan(cos(0))) ~= 1.2480...
tan(sqrt(tan(cos(0)))) ~= 2.9892...
sqrt(tan(sqrt(tan(cos(0))))) ~= 1.7290...
tan(sqrt(tan(sqrt(tan(cos(0)))))) ~= -6.2710...
abs(tan(sqrt(tan(sqrt(tan(cos(0))))))) ~= 6.2710...
floor(abs(tan(sqrt(tan(sqrt(tan(cos(0)))))))) = 6
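The chain above is easy to verify numerically (radians throughout):

```python
from math import cos, tan, sqrt, floor

# Build 6 from 0 using only cos, tan, sqrt, abs and floor, in radians.
x = cos(0)          # 1
x = tan(x)          # ~1.5574
x = sqrt(x)         # ~1.2480
x = tan(x)          # ~2.9892
x = sqrt(x)         # ~1.7290
x = tan(x)          # ~-6.2710 (the argument is past pi/2, hence negative)
x = floor(abs(x))   # 6

print(x)  # 6
assert x == 6
```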

Square roots sound like cheating to me, since you aren't allowed to use any twos. I guess I would feel a bit more comfortable with ln, though, since that can be defined by an integral rather than just meaning log base e. Regardless, I feel fairly certain the easiest way to solve this is indeed to start with cos 0 and work from there. 18:44, 17 March 2009 (UTC)

Another option is exp(cos(0)) = 2.7..., ceil(exp(cos(0))) = 3, and (ceil(exp(cos(0))))! = 6. This avoids the square root, but uses the exponential function. I wonder whether there is a solution without resorting to the ceil() or floor() functions. - K

You can get any number of the form sqrt(p/q) just using, for example, sec, arccos, and arctan: let F(x) = sec(atan(x)), G(x) = sec(acos(x)). Then F(sqrt(x)) = sqrt(x+1), G(sqrt(x)) = sqrt(1/x). So if p/q has the continued fraction expansion [a_0; a_1, a_2, .., a_n], then sqrt(p/q) = F^(a_0) G F^(a_1) G ... G F^(a_n) (0). --Eigenray
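Eigenray's construction can be tested directly. A sketch (helper names are mine): F(x) = sec(arctan(x)) = sqrt(1 + x^2) and G(x) = sec(arccos(x)) = 1/x, applied along the continued fraction of p/q. Note that for x > 1 the real arccos is undefined, so G is evaluated through complex arithmetic, which lands back on the real value 1/x.

```python
import cmath
from math import atan, cos, sqrt

def F(x):
    # sec(arctan(x)) = sqrt(1 + x^2)
    return 1.0 / cos(atan(x))

def G(x):
    # sec(arccos(x)) = 1/x; for x > 1 arccos is complex, but sec maps it
    # back to a real value, so evaluate via cmath and keep the real part.
    return (1.0 / cmath.cos(cmath.acos(x))).real

def cf(p, q):
    """Continued fraction expansion [a_0; a_1, ..., a_n] of p/q."""
    terms = []
    while q:
        terms.append(p // q)
        p, q = q, p % q
    return terms

def sqrt_ratio(p, q):
    """Eigenray's claim: sqrt(p/q) = F^(a_0) G F^(a_1) G ... G F^(a_n) (0)."""
    x = 0.0
    for i, a in enumerate(reversed(cf(p, q))):
        if i:
            x = G(x)
        for _ in range(a):
            x = F(x)
    return x

for p, q in [(2, 1), (3, 2), (7, 3), (10, 7)]:
    assert abs(sqrt_ratio(p, q) - sqrt(p / q)) < 1e-9
print("construction works")
```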

I just realized that arcsin doesn't need to be restricted to its principal branch. One of the values of arcsin 0 is 2pi, so floor(arcsin 0) is a solution, kind of. I would also like to add that the problem needs more constraints on which functions are allowed. Obviously the easiest way to solve this as stated is simply to use the successor function six times. 19:51, 18 March 2009 (UTC)

If square roots are off the board, then the successor function is definitely off the board. Agreed that the set of available functions should be more clearly declared. I propose the set of functions that cannot be trivially defined using the aforementioned forbidden numbers, such as sqrt(x) = x^(1/2) and S(x) = x + 1. On the other hand, someone here can probably drum up an infinite sum or integral definition of just about every function available that we normally don't think of as straight arithmetic.
Well, "the square root of x" is simply a notational alternative to "the second root of x," whereas "the successor of x" is completely different from "x + 1." Besides, you have it backwards; addition is defined recursively by the successor function, not the other way around. The successor function is far more fundamental. This is obvious when defining "one." One is typically defined as the successor of zero, not zero plus one, which would be totally circular. 03:36, 26 March 2009 (UTC)

Here's a roundabout solution involving only ceiling, trig, and ln: ceiling(ln(tan(tan(tan(tan(tan(tan(tan(tan(ln(arcsin(cos 0))))))))))))! That is, the factorial of the ceiling of the natural logarithm of the eighth tangent (the tangent iterated eight times) of the natural logarithm of the arcsine of the cosine of zero equals six. I'm pretty sure there is a brief solution using a sum or integral from n=0 to infinity, but I haven't found a good function to use for that yet. 20:10, 18 March 2009 (UTC)
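That one also checks out numerically (a quick sketch):

```python
import math

x = math.log(math.asin(math.cos(0)))  # ln(arcsin(1)) = ln(pi/2)
for _ in range(8):                    # the tangent iterated eight times
    x = math.tan(x)
result = math.factorial(math.ceil(math.log(x)))
```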

The Peano axiom successor function S


Nerd65536 02:03, 19 March 2009 (UTC)

Nerd, I already pointed that out. Clearly the implication is that the successor function is not allowed, although this isn't stated in the problem.
And by the way, that is actually the definition of six, interestingly enough. 6 := S(5), 5 := S(4), 4 := S(3), 3 := S(2), 2 := S(1), 1 := S(0). 19:38, 21 March 2009 (UTC)


without using anything that could be written in a form which would include numbers. I highly doubt that you can reach exactly 6 without using succ(x) or ceil(x) --Mel

Yeah, that's much nicer than the trig one I found: floor(tan(tan(arcsin(tan(sin(arctan(sin(arctan(ceiling(tan(cos 0)))))))))))! = 6. Still, I'm not entirely satisfied using the ceiling function. While it can be defined as the least integer greater than the argument, I generally see it defined as the greatest integer (aka floor) function plus one. Either way, I still feel the ideal solution would be a sum from zero to infinity. 19:38, 21 March 2009 (UTC)


log*(x) indicates the iterated logarithm. log*(arccos(0)) does it nicely, as arccos(0) is a multi-valued function which evaluates to 2πn + π/2. For a large enough n (which my calculator can't calculate), log*(arccos(0)) = 6. --Mark

You're all trying way too hard on this one. You can solve it with pre-algebra. 0^0=1, right? So 0^0+0^0+0^0+0^0+0^0+0^0=6 --Greg

Greg, 0^0 is generally undefined, not equal to one. But more importantly, you can only use a single zero, whereas you used twelve. Otherwise cos(0) + cos(0) + cos(0) + cos(0) + cos(0) + cos(0) = 6. 19:38, 21 March 2009 (UTC)

--- I think this is the shortest solution so far:


using the main branch of the arccos [0,pi].

  - Yakov, Rehovot.
I disagree with the above solution since it uses an implied -1 multiplied to the cos(0). Here's a short one using just trig functions and a floor.
This solution is shorter: (0!+0!+0!)! --Dan

I think this is the longest solution so far: sec arctan sec arctan sec arctan sec arctan sec arctan sec arctan sec arctan sec arctan sec arctan sec arctan sec arctan sec arctan sec arctan sec arctan sec arctan sec arctan sec arctan sec arctan sec arctan sec arctan sec arctan sec arctan sec arctan sec arctan sec arctan sec arctan sec arctan sec arctan sec arctan sec arctan sec arctan sec arctan sec arctan sec arctan sec arctan sec arctan (0). But at least it is exact :) --Eigenray

Eigenray, that is a peculiar result, but definitely my favorite so far. It seems that by iterating sec(arctan(x)) you can obtain any integer. If we let f(x) = sec(arctan(x)), and let f^n(x) be a functional power (simply the function iterated n times), then f^(n²)(0) = n. I will try to keep this in mind.
Noting this, I propose we change the puzzle to "Using only trigonometric functions and a single instance of the number zero, derive a formula to calculate any number n as a function of n. For added challenge, prove this by induction." 01:20, 23 March 2009 (UTC)
I really like the above solution because of the way it works:
sec arctan(0) = sqrt(1)
sec arctan sec arctan(0) = sqrt(2)
sec arctan sec arctan sec arctan(0) = sqrt(3)

Repeat this pattern to the 36th level and you get sqrt(36) or 6.
This can be defined more simply as a sequence:
a(0) = 0
a(N+1) = sec(arctan(a(N)))

Thus a(36) = 6.
This also implies you can get any positive integer from 0 using this sequence. Cool, huh?
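Numerically the sequence behaves exactly as described; here is a small sketch checking a(N) = sqrt(N) all the way up to a(36) = 6:

```python
import math

a = 0.0
for n in range(1, 37):
    a = 1 / math.cos(math.atan(a))      # sec(arctan(a)): sqrt(n-1) -> sqrt(n)
    assert abs(a - math.sqrt(n)) < 1e-9
```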
Kabo is back with your induction proof!
Given: a(0) = 0. a(n+1) = sec arctan(a(n)).
Prove: a(n) = sqrt(n).
Case n = 0 :
a(1) = sec arctan(a(0)).
a(1) = sec arctan(0).
a(1) = sec arccos(1). *
a(1) = sec arcsec(1).
a(1) = 1
a(1) = sqrt(1).
Case n = x :
a(x) = sec arctan(a(x - 1)).
a(x) = sec arctan(sqrt(x - 1)) by induction.
a(x) = sec arccos(1/sqrt(x)). *
a(x) = sec arcsec(sqrt(x)).
a(x) = sqrt(x).
Thus a(n) = sqrt(n); QED.
* The jump from arctan to arccos (and arcsec) requires knowledge of their relationship to each other (rather than pressing that silly key on your calculator =P). If we have a right triangle ABC with sides (opposite their angles) a, b, and c, the hypotenuse being c and the angle we care about being A, then arctan deals with the ratio a/b. If we let a = sqrt(x - 1), then b must be 1. Because a^2 + b^2 = c^2, it follows that c = sqrt(x). arccos deals with the ratio b/c (1/sqrt(x)); thus by definition arcsec deals with c/b (sqrt(x)/1).
Another way to think about it is to start with a right triangle with sides a, b, and c. a = 1, b = 1, c = sqrt(2). Then as the sequence progresses, c becomes the new a, b remains 1, c is sqrt(a^2 + b^2) over and over again. (You could start with a = 0, b = 1, and c = 1, but where's the fun in having a triangle with 0 as a side?)

The only problem I - personally - have with using sec, is that sec(x) basically just means 1/cos(x) --Mel

But the puzzle explicitly states that trigonometric functions are allowed. Strictly speaking, even cosine is defined using an infinite sum that involves several integers. Hell, even addition, multiplication, and exponentiation for natural numbers (which definitions are necessary to define the operators for rationals, and thence for reals) are defined recursively using a zero, which would break the "one zero" rule. Obviously at some point we have to ignore the "definition" of a function and focus on the actual notation. "sec x" is not shorthand for "1/cos x", it is simply (usually) defined as such. Furthermore, it is often defined separately as a quotient of side lengths of a right triangle or an infinite series, not based on cosine. 03:30, 26 March 2009 (UTC)


Hello! Maybe I get this wrong, but for me the simplest solution, which should work everywhere, is that everything with 0 in the exponent is 1, even 0:

0^0 + 0^0 + 0^0 + 0^0 + 0^0 + 0^0 = 6

^ means "exponent", not a C operator ;-)

--Stsz 11:12, 2 April 2009 (UTC)

There are two reasons this solution is invalid. The first is that 0^0 is not generally defined as 1, as I stated above, although this is sometimes the case. Most of the time, 0^0 is an indeterminate form since convincing arguments could be made for it to equal any real number, much like 0/0. The second reason is far more important: The question asked for how to get 6 with only a single zero and no other numbers. You, however, used 12 zeroes. Otherwise, cos 0 + cos 0 + cos 0 + cos 0 + cos 0 + cos 0 = 6. 04:36, 3 April 2009 (UTC)


I'm just trying to think outside of the box here. log(x*x*x*x*x*x) base x for all x such that x>cos(0) is 6. It's almost cheating because of the x*x*x*x*x*x for x^6, the set notation, and the use of variables, but the directions were nowhere near explicit. Also, dx*x*x*x*x*x / dx evaluated at x=cos(0) would give 6. Once more, the use of a variable is probably prohibited. -Luonnos

The first one seems a bit iffy, only because you define x using a zero, then repeat x six times. I could simply instead say 6 = x + x + x + x + x + x, where x = cos(0).
I like the second one, though, because here x isn't simply a substitute for a number, it's actually a variable in a function. You only need one number at which to evaluate your whole derivative, so it seems a bit more legitimate. I personally prefer (sec(arctan(sec(arctan(sec(arctan(sec(arctan(sec(arctan(sec(arctan(sec(arctan(sec(arctan(sec(0))))))))))))))))))!, though. 21:58, 7 May 2009 (UTC)
Oh, and for clarity, your notation should look more like d/dx|x = cos 0 (x * x * x * x * x * x). Right now it looks like dx/dx|x = cos 0 (x * x * x * x * x), which isn't even a derivative and is equal to x^5. 05:02, 14 May 2009 (UTC)

Can't we just add 0! (factorial) 6 times?
*sigh* Again, the point of the puzzle is to obtain a result of six using ONLY A SINGLE ZERO, and no other digits. 0! + 0! + 0! + 0! + 0! + 0! = 6 requires six zeroes, not one. 04:56, 14 May 2009 (UTC)

Let S be the set of complex numbers whose norm is 0!, and let L be this set's projection onto the reals. Then 6 = n!, where n is the (sigma) sum of natural numbers up to the Lebesgue measure of L.

If the sigma sum is not allowed (I use it because it is regarded symbolically as not requiring the intervening numbers even though they are implicitly computed), then factorial should not be either (for the same reason). On second thought, factorial can be regarded as independent of the computation n!=n(n-1)... and instead as the order of the permutation group on n letters, which liberates it of dependence on all the smaller numbers. If there is an equivalent characterization for a cumulative sum, then the above solution should suffice. Gosualite 17:17, 31 July 2009 (UTC)

Here's a solution without using a zero: Let A be the class of finite nonabelian groups, and let N be the set of all orders of elements (nonabelian groups) in A. Then 6 = inf N. Gosualite 06:01, 1 August 2009 (UTC)

If arbitrary undefined variables were allowed, a solution would be


Of course I'm guessing this is not the case. 22:30, 15 October 2009 (UTC)

I found this one, which is as short as I think is possible with no negatives and only trig+ln: ceil(cosh(sinh(acos(0)))). Of course, depending on what you want to allow, you can also get ceil(sinh(acos(0)))! I found the above with python+mpmath, using the following functions: sin, cos, tan, csc, sec, cot, asin, acos, atan, acsc, asec, acot, sinh, cosh, tanh, csch, sech, coth, asinh, acosh, atanh, acsch, asech, acoth, ln, floor, ceiling. A couple of others:
0 : 0
1 : cos(0)
2 : ceil(acos(0))
3 : ceil(sinh(acos(0)))
4 : ceil(sinh(sec(cos(0))))
5 : ceil(sinh(sinh(acos(0))))
6 : ceil(cosh(sinh(acos(0))))
7 : ceil(sinh(cosh(acos(0))))
8 : floor(tan(coth(sin(cos(0)))))
9 : ceil(tan(coth(sin(cos(0)))))
Given the operations above, these SHOULD be the most succinct possible, even including negating as a function.
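The table is easy to verify with the standard library alone (a sketch; sec and coth are written out, since Python's math module doesn't provide them):

```python
import math

sec = lambda x: 1 / math.cos(x)
coth = lambda x: 1 / math.tanh(x)

values = [
    0,                                                  # 0
    math.cos(0),                                        # 1
    math.ceil(math.acos(0)),                            # 2
    math.ceil(math.sinh(math.acos(0))),                 # 3
    math.ceil(math.sinh(sec(math.cos(0)))),             # 4
    math.ceil(math.sinh(math.sinh(math.acos(0)))),      # 5
    math.ceil(math.cosh(math.sinh(math.acos(0)))),      # 6
    math.ceil(math.sinh(math.cosh(math.acos(0)))),      # 7
    math.floor(math.tan(coth(math.sin(math.cos(0))))),  # 8
    math.ceil(math.tan(coth(math.sin(math.cos(0))))),   # 9
]
```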

the hardest interview puzzle question ever[edit]

There is no actual problem statement. It just asks "What is the solution to this problem?" Without defining what is meant by "solution," there is no correct answer. 18:47, 17 March 2009 (UTC)

This seems to be a joke, so I'd recommend its deletion.

It comes from here. Anyone who reads IWC would instantly know that the answer to this one is the Allosaurus.

I don't think you should delete this puzzle. I think it's important for people to be able to identify that they don't actually have a problem that requires a solution. - Baz

Fair enough, but there are probably more clever examples than this one of questions with misleading preambles. This one recycles all the other problems on the page in such a way as to make it obvious it's not a 'real' problem. -P

Since the question is nonexistent, the answer defaults to 42. -- 18:17, 18 March 2009 (UTC)

Agreed, the answer is 42. --Greg

I suspect none of you know how to take a joke! I thought it was funny, and definitely worth keeping. And I'd have to agree - the solution has got to be 42. --T.

Upon reading this problem, I made a bet with myself -- that someone would say "42" in the discussion. We're too predictable. -- Qris

The answer is obviously 1 * Sh , where Sh is the Shimmler constant and equals the value one must multiply the found result with in order to have the correct result. Alternatively it can be the text string "This is the answer to that problem." -- Eroen

"What" is the solution. When the problem is read out loud, you can't tell if there's a question mark-so it's just a statement. --Satvik

How about this one: There are 99 prisoners in a prison, each wearing a red or white hat, and they can see all of the prisoners' hats except their own, and they each have 13 coins, one of which is counterfeit and does not weigh the same. The duties of warden are shared between a chicken and fox who cannot be left alone with each other or the chicken will eat the fox, and a woman who does not know the color of her own eyes. One of them lies, one always tells the truth, and one alternates true and false statements. If any prisoner knows the color of their own hat and which coin is counterfeit and if it is too heavy or too light they will be set free. The woman says the guard who alternates true and false statements is in charge of the prisoners with red hats, the fox tells the prisoners with white hats that he will allow them to use a balance scale on the coins three times each, but if they use it more times they will be put to death, then says that the woman will put to death any red hatted prisoner who uses it. The chicken says that every third time a prisoner uses the scale a new prisoner will be admitted and if the population reaches 200 they will all be put to death, then says that any prisoner who uses the scale without the permission of 40% of the other prisoners will be put to death. The woman then addresses one specific prisoner and says to him, in the hearing of the other prisoners "there are exactly 49 other prisoners wearing the same color hat as you." What is the solution to this puzzle?

Assuming that the pirates are at sea, the most prevalent solution in the problem is salt water. -- Alex

I really think it should be deleted. It does not describe any problem to be solved so it is clearly not a puzzle. It is, at best, a joke, which, though it appears to be "clever" to some people, still has no place in a page that lists actual puzzles. --Daryl

The Spinning Log[edit]

This answer is a bit lengthy, likely longer than necessary, but hopefully it's thorough enough and clear enough to understand. Here it goes, brace yourself.

Let us assume that the log is treated with some sort of perfectly effective water sealant, so it absorbs no water, and let us assume that there is no friction of any kind in this system. That means no friction between the log and pins, pins and walls, log and seals, log and water, or log and air. This presents, as far as I can tell, an optimal version of this scenario.

For the sake of argument we'll forget that we already know perpetual motion machines are impossible because of the law of conservation of energy. We'll suspend our disbelief.

For this answer let us assume we are looking at the end of the log and that, from our position of observation, the right side is the side with the water, and the left side is the side without water.

The idea behind this perpetual motion machine is that the water will continuously push up on the right side of the log, since it's lighter than the water and is trying to float, thus causing the log to spin counterclockwise.

To understand why this would not work, we need to look at how buoyancy itself works. When a lighter-than-water object (let's say a hollow ball) is submerged, it is pushed up by the water, apparently in spite of gravity. In fact the opposite is true: it is gravity pulling down on the water that pushes the ball up. The kinetic energy to lift the ball comes from an equal volume of water falling to occupy the space where the ball just was. As the ball moves up, the evacuated area is filled with water from above it; therefore, this water is falling. When the water was above the ball it had potential energy that is exchanged for kinetic energy as it falls to fill the void left by the ball, thus providing energy to lift the ball. The opposite is true for a heavier-than-water object (let's say a brick), but it's still the same principle. As the brick falls through the water, it is filling space that was once filled by water. As the brick falls, it is providing energy to lift the water to fill the space it just evacuated.

Now back to our spinning log example. If the log were to spin, it would not be evacuating any space for the water to fill. Even though a different part of the log would be filling that space, it would still be the same space it was filling before. There would be no downward movement (falling) of water to convert potential energy into kinetic energy, so there is no energy to cause the log to spin.

Unrelated Gee-Whiz Info: The water will be imparting a constant force to the log, but it will be pushing on the entire "wet" surface with equal force (a property of fluids), so the net force will be directly left, straight toward and through the pivot axis. It's not really relevant to this problem, but it might be interesting to note. --Greg (Written 2300 Tokyo Standard Time, 21MAR2009)

I may be wrong since I haven't done any serious physics in a while, but the water doesn't push the entire wet surface with equal force like you said. If it did, consider if the log was free in the water, what would push it up if not the water pressure? I'll avoid using conservation of energy at all and just look at the forces. The force from water pressure at each surface point points directly against the surface. For a cylinder, that is always towards the axis. That can't cause rotation, even if the magnitude of the force varies. The net force also has an upwards component, but the force is still through the axis. If the object wasn't a cylinder, the forces could actually spin it a little until it found a balanced position in which the net force is through the axis. Without friction or other extraction of energy it would still continue moving, but that's beside the point. Proving that energy couldn't be extracted from such a system continuously might be difficult using just the forces, but the case of the log is easy. --Nix
You're right about the difference of forces across the surface of the log. They all go through the axis, though, like you stated. That's what I meant to say, but it was late. Thanks for the correction. --Greg
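Nix's force argument can also be checked numerically. Here is a sketch under simplifying assumptions (unit radius, waterline at axis height, water on the right): summing the pressure forces over the wetted arc gives a net force pointing up and toward the dry side, but exactly zero torque about the axis, since each elementary force is parallel to its own radius.

```python
import math

R, rho_g, N = 1.0, 1.0, 100000     # radius, water weight density, arc steps
torque = fx = fy = 0.0
for k in range(N):
    # wetted arc: from straight down (-pi/2) up to the waterline (0)
    theta = -math.pi / 2 + (k + 0.5) * (math.pi / 2) / N
    depth = -R * math.sin(theta)           # distance below the waterline
    p = rho_g * depth                      # hydrostatic pressure
    dA = R * (math.pi / 2) / N             # arc element (per unit length of log)
    fx += -p * math.cos(theta) * dA        # force acts along the inward normal
    fy += -p * math.sin(theta) * dA
    # torque about the axis: r x F, with F parallel to r, is identically zero
    torque += R * math.cos(theta) * (-p * math.sin(theta) * dA) \
            - R * math.sin(theta) * (-p * math.cos(theta) * dA)
```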

Coaster Game[edit]

I suppose I could just place one coaster in the center of the table. Wherever you put a coaster, I have one answer spot, which lies directly opposite the one you just placed. More work needs to be put into the question of what is necessary for the table so that this solution works. (One or more symmetry axes? Square? Rectangle? What about an isosceles triangle?)

I think that for this solution the table needs to have a point-symmetric shape. If you want to use a symmetry axis, you can’t cover it all with your first coaster, so your opponent has the possibility of placing a coaster on the axis, for which you have no answer spot. So I think square and rectangle would work, even an ellipse should work, but the isosceles triangle probably wouldn’t work with this method.

- The table is circular. - DH

If coasters can't overlap, then I will assume that overlapping coasters ends the game. So I will choose to go second and place my coaster on top of theirs, thus ending the game. Since I placed my coaster last, I will win. -TC

Truth, lies and switching[edit]

The easiest solution I could find goes as follows: I call the three men #1, #2 and #3. First I ask #1 what #2 would answer if I asked him for the direction that #3 would give me if I asked him for the way to my destination. (After sitting down at a table and scribbling on a piece of paper,) #1 will point in one direction, which does not depend on whom I asked (whomever I asked, I would always get that direction). The only thing it depends on is whether the switcher starts with lying or telling the truth. (If he starts with lying, the direction they gave me IS the path I want to choose; if he starts with telling the truth, I should take the other route, if I plan on continuing my existence.) To find that out, I will now ask #2 what #3 would answer if I asked him whether #1 was lying. If that answer is yes, I need to take the other path; if it is no, I am taking the path #1 told me to take.

Maybe question 2's wording isn't the best as English isn't my native tongue, but I think it's understandable? --Mel

I personally think it is somewhat easier if the questions are reversed:
1. Ask number one: "What would number two answer if I asked him whether number three were lying?"
2. Ask number three: "What would number one answer if I asked him what number two would answer if I asked him which path led to my destination?"

If the answer to the first question is "no," I should take the path indicated in the answer to the second question. Otherwise, I should take the other path. 04:23, 26 March 2009 (UTC)

The first poster actually has it right; the trick is to keep the switcher from switching, which is done by asking the second question in the past tense. The reversed-order technique used by the second poster can also work, but the second question would have to be modified to: Ask number three: "What would number one have answered if, before I asked my previous question, I had asked him what number two would answer if I asked him which path led to my destination?"

This leads to the second question being a bit clunky, so I prefer the first order.


I believe this can be done in only 1 question. As pointed out above, if you were to ask "Would you answer yes if asked 'Is path A my destination?'", truth and lies would both answer the same, revealing the destination: yes if path A is the destination, no if it is not. Switcher would, of course, be useless. So you add a conditional to the question: "...answering with the same personality you are now". Since switcher assumes the personality of either truth or lies, if he were forced to act as if he always answered that way, he will either answer EXACTLY like truth or EXACTLY like lies. Hence, the question:

"Would you answer yes if asked, answering with the same personality you are now, 'Is path A my destination?'"

  • It seems I was confused by how the switcher works, although it doesn't affect my answer. If he were to answer truthfully the first round, does that guarantee he answers falsely the second round? And do truth and lies know how he would answer on any given round?
  • If the switcher answers randomly then you need two questions; if the switcher behaves randomly (as in the question) then yes, you only need one - either wrap the question with a future tense question as you've done, or go for "Is it the case that (this path is my destination) xor (you are lying)?".
If the switcher were to answer randomly regardless of question, you'd need an initial question to weed the switcher out before using the question above. Approach the first man, point to the second man, and ask "Is it the case that (he is the switcher) xor (you are the liar)?". If the answer is yes, the switcher must be either man 1 or man 2. If no, the switcher is either man 1 or man 3. Either way you identify someone who isn't the switcher. -- TLH

--- The simplest method I find is to ask one of the three "which path would at least one of the others suggest to you to take?"; both the truth teller and liar would indicate the dangerous path, and the switcher would have to suggest either or neither, I would think, as indicating one would be a part truth. So two questions, directed at two people, should suffice, one if you're lucky. Is this correct? cjm

One question: If instead of this question, I asked "Do I want to go down path A?" would you say yes? This question is answered with the truthful answer to "Do I want to go down path A?" -Phoenyx
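For the truth-teller and the liar (setting the switcher aside), the double negation in Phoenyx's embedded question can be modeled in a few lines (a sketch):

```python
# The liar lies about his own (lying) answer, so the two lies cancel.
def reply(is_liar, fact):
    inner = fact if not is_liar else not fact    # answer to "Do I want path A?"
    return inner if not is_liar else not inner   # answer to "would you say yes?"

for fact in (True, False):
    assert reply(False, fact) == fact   # truth-teller reports the fact
    assert reply(True, fact) == fact    # liar: double negation, same report
```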


1) ask #1: "does #3 tell more lies than #2?"
if the answer is yes, choose #3 for the next question.
if the answer is no, select #2 for the next question.

now you know that #x, the one you selected, is not the switcher.

2) ask #x: "is path A the wrong one according to the other man who is not the switcher?"
if the answer is yes, choose A.
if the answer is no, choose B.

More prisoners[edit]

The way to do it that actually follows the rules: The first person opens up the first fifty. He has a 50% chance of getting it correct. Then the second person assumes that the first person got it correct. Because if he is right then he knows that one of the ones in the first fifty is definitely not his, because it is the first guy's. And if the first guy got it wrong, then it doesn't matter if he too is wrong. So he opens boxes 51-100. He has a 50/99 chance of getting it right, given that the first guy got it right by guessing in the first fifty. Then the third guy comes in and guesses the first fifty, with a 1/2 chance of getting it right. And the fourth guy then guesses the last fifty with a 49/97 chance of getting it right, etc. SUMMED UP: every pair of prisoners guesses all 100, so that each box is guessed as equally as possible at all times.


No, it's not as good as .5^50. Prisoner 1 has a 50% chance by opening 1-50. Given that they survive, prisoner 2 opening 1-50 has only a 49/99 chance because one of the boxes they're opening is definitely wrong. Prisoner 50 only has a 1/51 chance. The actual probability is (50!*50!)/100!
Odd prisoners select 1-50, even prisoners select 51-100 gives the same chance by a different route. I can't see a way to improve on (N!N!/2N!) for picking N of 2N boxes. -- cim

You can get a probability of at least 1 - log(2) == 31.6%. Each prisoner opens their own number, then the number they drew from that box, then the number they drew from that box and so on... Obviously if they reach an open box they have already drawn their own number. The probability of everyone surviving is exactly the probability that there is no cycle of length at least N+1 in the permutation of the 2N boxes. The number of K-cycles (K > N) is P(2N, K)/K; the total number of permutations containing a K-cycle is therefore P(2N, K) * (2N - K)!/K = (2N)!/K, so the total number of permutations with a cycle of length at least N+1 is (2N)! * (1/(N+1) + 1/(N+2) + ... + 1/(2N)) = (2N)! * (H_(2N) - H_N). H_k are the harmonic numbers. The probability of success is therefore 1 - (H_(2N) - H_N). The limit of H_(2N) - H_N is quite easy to determine. Rewrite as sum{K=1...N, 1/(2K) + 1/(2K-1) - 1/K} = sum{K=1...N, 1/(2K-1) - 1/(2K)} = 1/1 - 1/2 + 1/3 - 1/4 + ... . Integrate 1/(1 - x) = 1 + x + x^2 + ... and substitute in -1 to get H_(2N) - H_N --> log(2). Therefore the probability of everyone surviving tends (decreasingly) to 1 - log(2). -- Chard
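Chard's formula is simple to evaluate exactly (a sketch). For the 100-box puzzle (N = 50) it gives about 31.2%, slightly above the limiting value 1 - ln(2), which is about 30.7%:

```python
import math

def success(N):
    # 1 - (H_2N - H_N): probability that a random permutation of 2N boxes
    # has no cycle longer than N
    return 1 - sum(1.0 / k for k in range(N + 1, 2 * N + 1))

p100 = success(50)        # ~0.3118 for the 100-box puzzle
limit = 1 - math.log(2)   # ~0.3069, the N -> infinity limit
```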

Chard's solution is the one the original puzzle designer intended, I believe. The fascinating thing about this solution is that it's so un-intuitive - why should picking the box number written on the scrap of paper you've got be "better" than a blind guess? After all, the scraps of paper are themselves random.
As it turns out, in the sense of being a predictor for an individual prisoner, the scrap isn't better - it's only better for the group, in the aggregate. I wrote a Monte Carlo simulator to test both strategies (random and methodical) and ran 10,000 trials. For the group, the number of successful outcomes was indeed around 3,160 or 31.6%, much better than the random system (which didn't result in success in any of the trials I ran). But the number of individual "hits" (in the sense of an individual prisoner discovering his/her own box) was nearly identical, well within random distribution. The discrepancy comes from the idea that in the methodical answer, if there is a cycle of length N+1 in the permutation then a higher than average number of prisoners will never find their own box. In the particular form of the puzzle, that's a good tradeoff to make; failures are failures, whether spectacular or slight, so making all failures spectacular ones in order to boost the probability of success is a good strategy. (I'd be willing to post the code somewhere if anyone is interested.) -- Kris
The other nice property of this method is that it guarantees a lower bound for the probability of survival even with 100 thousand boxes! Rechecking the numbers though 1 - log(2) is 30.6% not 31.6%. Is it possible to do better? Is it possible to prove that you can't? -- Chard.
It isn't possible to do better, and we can prove it. The easiest way to think about this is if we start off with no paper in any of the envelopes, but we have the power to magically put the numbered pieces of paper into any empty envelopes. We'll do this for each envelope just before it is looked into for the first time (in such a case, we'll say that a prisoner is looking in a fresh envelope). Suppose, however, that we completely waste this power by always just choosing at random among the numbers we have not yet put in any of the other envelopes.
We may as well allow each prisoner to search until they find the envelope containing their number, though we'll still say that they fail if they have to look in more than 50 envelopes.
Note that since the prisoners can't communicate, it doesn't matter which order the prisoners go into the room, so we may as well call them in in any order we like. Let's proceed in two phases:
Phase 1: As long as there is an empty envelope, we always make sure to call in a prisoner whose number we have not yet put in any envelope.
Phase 2: Once all the envelopes are full, we call in all the other prisoners in any order we like.
If we run out of prisoners before filling all the envelopes, we just fill the remaining envelopes any old how.
During phase 1, we construct a permutation p of the numbers from 1 up to 100 like this: if the first fresh envelope prisoner number k opens contains the number n, we set p(k) = n. After that, if he opens a fresh envelope containing n and then the next fresh envelope he opens contains m, we set p(m) = n.
Since we just assign numbers at random, each arrangement of numbers is equally likely. For the same reason, we are equally likely to construct any permutation. But if the permutation we construct contains a cycle of length more than 50 then the prisoners fail. So the prisoners' chance of success is at most the chance that a randomly chosen permutation of the numbers from 1 to 100 contains no cycle of length more than 50, as we hoped. --Nathan
So there can only be at most one cycle of length more than 50. The janitor then just has to split the cycle into two smaller ones, right?


If the prisoners know the order in which they will enter, then a very effective strategy would be to time the exit to match up with the location of the next prisoner's box, if discovered. For example, prisoner 1 goes in knowing he will search all the odd boxes, but rather than exiting upon finding paper 1 (if he doesn't, nothing matters beyond this point anyway) he searches for paper 2. If he finds it, he times his exit to be on an odd-numbered minute (if timepieces are unavailable, they can keep track of time and simultaneously remain synchronized by going through a song in their head, exiting at specific verses depending on the results of the search), and as such the following prisoner will know whether to search all the evens or all the odds, depending on whether the timing of the first prisoner's exit indicates success or failure. So long as the first prisoner finds his paper (a 50/50 chance), the rest are guaranteed to pass the test, and they all go free. Of course, while this doesn't directly violate the no-communication rule, as the next prisoner is simply observing the entrance and exit of the previous prisoner, as would happen anyway and would give at least some small amount of information with or without a planned system, it is certainly pushing it. And it is dependent on the prisoners being able to observe each other entering and exiting, as well as being allowed at least some small amount of control over the time they have, and knowing the order in which they will be going.

In three ways[edit]

Original rules[edit]

  • Change "i--" to "n--" --Afarnen 18:00, 29 March 2009 (UTC)
  • Change "i < n" to "-i < n" -- Chard
  • Change "i < n" to "i + n" -- Chard

Sure, it's a cop-out, but have you noticed that the program will print 20 dashes as-is? (It won't print exactly 20 dashes, but that wasn't specified. *g*)

I fixed this, as thinking of this almost caused me to check the answers before I had thought of the actual third solution. Personman 01:54, 13 July 2009 (UTC)

21 Dashes[edit]

  • Change "i < n" to "i & n" -- Chard
That doesn't work. 0 & 20 == 0 so it doesn't print anything. "~i < n" works on most platforms. --Nix
Explanation: ~0 = -1, ~(-1) = 0, ~(-2) = 1, ..., in general: ~i = (-i) - 1, for i <= 0. -- CrystyB 05:40, 9 April 2010 (UTC)

(Can you also change to "int i, n = -20;"?)

No; i has to be less than n for the loop to execute. Player 03 01:40, 8 April 2009 (UTC)
  • change "n = 20" to "n = 21" and "(i = 0; i < n; i--)" to "(i = 0; -i < n; i--)". The problem states "change or add", as opposed to "change xor add" a character. --dttri (

1 Dash[edit]

To make a start, this could be done with 2 characters by changing 'for' to '//for'. This comments out the for loop and executes the bracketed code just once. Is there a 1-character way to skip this line? -- TLH

Here's a 1 character solution. Change the first line to "uint i, n = 20;" -Tom

Yes, if you define uint to mean unsigned int. Is there a compiler or standard header that does that automatically? The intended solution is probably putting a ; between the ) and {, although it's platform dependent as well. Even if int wraps from negative to positive like it does on common platforms, if int happens to be 64-bit it will take a long long (!) time to execute the loop. --Nix

Another 1 character solution: Add “;” after the closing parenthesis of the for. Depending on the platform, it might take a while before actually printing. (With 64-bit ints, it might take a year; I’m not aware of any compiler that makes “int” more than 64 bits, but for 128 bit ints this would exceed the current lifetime of the universe.)

I know: Change i to a pointer (because pointers are unsigned). There will be compiler warnings, though: int *i, n = 20; ...

Infinitely many dashes[edit]

Replace “<” with “=”. -- bogdanb

Replace "i<n" with "!i<n" -- 11:12, 14 June 2011 (EDT)

0 dashes[edit]

This was mentioned above by Nix, in the "21 Dashes" subsection: "i & n" will bypass the loop entirely. -- CrystyB 05:40, 9 April 2010 (UTC)

Preceding code[edit]

These can be done by redefining operator--. Changing operator-- to mimic operator++ will print 20 dashes, and changing operator-- to assign 20 will print one dash.

I believe one dash can be done rather easier by preceding the code with the single word unsigned; the i-- should then wrap i from 0 to 2^b-1 on a b-bit system. Since n=20 is possible, the system must have at least 6 bits (int is signed), so i should become at least 63 (when unsigned) and therefore break the loop after one execution.

-- TLH

printf("--------------------");exit(0); Personman 02:00, 13 July 2009 (UTC)

Defective Clock[edit]

The answers, I believe, are 4, 6, 6, and 6. I'm less sure about the first one, but here's my reasoning: Only 4 numbers use the lower-left bar, and only one of them (2) has less than five other bars. Thus, if you only see that bar lit and know that exactly 4 bars are broken, the last number has to be a 2. For the others, since 2 is the only number that doesn't use the lower-right bar, if you know that bar works and it isn't lit, the last number has to be a 2.

For c), all 7 can be broken since, if Tom notices the tens digit change during the 60-second wait, he will know the units digit is on 0. This would also be true of d), except that it would be impossible for him not to know which bars were broken if he knew all of them were. JD


The answer above answers the problem for a given specific minute. An interesting variation of the problem would be to solve the questions such that the person looking at the clock can know the right time independently of the time that he walks into the room (considering that the answer could be zero for some of the questions).

I agree with a. but my working was as follows.

7 broken: all lights off for every digit, indistinguishable. 6 broken: 0 or 1 lights on; every digit except 8 can display no lights, and each single lit bar can be produced on every digit that uses it. 5 broken: you can have 0, 1 or 2 bars illuminated; all combinations of 2 exist on 8 and on either 6 or 9, the 1s are all variously makeable [can provide list on request], and the 0s are trivial for 1, 2, 3, 4, 5 & 7 as each has at most that many bars. 4 broken: the rarest light is the bottom left; with that on only, 2 is the only bottom-left digit with 5 lights.

b) Agreed on 6, 2 is the only number with bottom right not illuminated, so if the clock is blank and you know the bottom right one works, 2 is the unit.

c & d have the same solutions: 6 bars could be broken. If only the top right bar is working, it only turns off going from 4 to 5 minutes past the ten and only turns on going from 6 to 7 minutes past. Similarly, the bottom right only turns off going from 1 to 2 and only turns on going from 2 to 3.

There are more solutions to c). SPACKlick 16/4/14

Pizza Paradox Puzzle[edit]

1/4; the round isn't relevant. What is the point of this riddle?

Assume all the coin tosses were done ahead of time - shouldn't matter, right? Now choose the people corresponding to each round. More than half of these people are in the final round. So if you are called, there's a more than 50% chance you're a winner. On the other hand, let's say you find out what round you're in. Then your odds of winning are (number in your round)/(expected number chosen). Since the expected number chosen is infinite, you have zero chance of winning.

I think:

  1. "If you are selected but don’t win, you can’t be selected again"
  2. "You are sitting at home when you get a call – you have been selected to play the game. What is the chance that you will get a free pizza?"

The games after this one don't matter, because if you won, you have your pizza and the game is over, if you didn't win, you can't be selected again. (#1)

The question doesn't ask for what the chance that you get a free pizza is without any assumption. The question asks, what the probability is that you win, given that you HAVE been told that you were chosen for THIS round. (#2)

--> chance is that THIS dice throw comes down as HH, not HT TH or TT, so chance is 1/4. (a little more elaborate than the first answer)

Okay, if somebody calls me up, and I go to the mystery place of the game and watch two coins being tossed, the chance of winning is 1/4 irrespective of what has already happened, or what will happen in the future. The only reason the chance seems to be greater than 50% is because the game is designed to increase the number of people playing it after each round, whereas most probability problems generally have a defined number of players. It's a little bit like going to a casino and betting money in a 50/50 game of chance, then doubling your bet if you lose, and so on and so forth. Assuming an infinite amount of money, you will eventually win back the amount you first bet, but the chance of winning any particular round is still 50%.

I'm not clear on the rules here. If some but not all of the people in a round get two heads, do those people get pizzas even though the game goes on? Or does no one get a pizza until the final round, when everyone in the round gets pizza? 22:41, 19 April 2009 (UTC)

There is a single two-coin flip that decides for all people in a given round. Harfatum 09:22, 20 April 2009 (UTC)

As mentioned, the game is designed a lot like the Martingale betting system. It is guaranteed to produce more winners than losers (given an infinite population), which means that if you are called up after the game ends there would indeed be an over 50% chance that you are a winner. The chance of winning a particular round is irrelevant then; it is only when you are called up mid-game that it matters. -- Fnursk

This is a MUCH more clever and difficult puzzle than it appears to be - it is similar to the coin toss problem. Imagine the question rephrased as follows: if you were to play one game in a random 'slot' in the game - what is the probability you win? Or, simpler, if there were an infinite number of games, what percentage of the players win? If you use an infinite series, you will get an answer of 66.6%.

How can this not be 25%? I can buy that 67% win and still say that you have a 25% probability of winning. You are faced with a single round in which there is a 25% probability of you getting pizza (along with everybody else in that round) and a 75% probability of losing and the round going to one with twice as many people.
That, on average, 2/3 of the people win is plausible, but that happens because in each successive round there are twice as many people, like doubling a bet. But also like doubling a bet, each individual bet is still the same probability as it always was. I am virtually certain 25% is the correct answer. 04:40, 26 April 2009 (UTC)
"You are sitting at home when you get a call – you have been selected to play the game. What is the chance that you will get a free pizza?"
Think about that question carefully.
Ah, I see. When you get the call, the game has not yet begun; you're selected to be in some round of the game that may or may not occur. The question makes it sound like you're being selected for the nth round and the previous (n-1)th rounds have already happened. This could be worded better.

I believe the answer is 25%, the same probability as a HH coin flip sequence. That's because the outcome of the game (i.e. which round will win) is unknown at the time you are selected, so the only relevant question is: "What is the chance of _MY_ round winning?" Even though you know that >50% of the entire pool of eventual players will get pizza, there's a 75% chance the winners will have played in subsequent (larger) rounds to your (known finite) round. On the other hand, if someone tells you: "The pizza game was played to its conclusion in a finite number of rounds; and you were somewhere on the list of players", then there is a >50% chance you were in the last round (because it was larger than all the others combined) and thus won a pizza. The confounding detail is that the expected number of participants in a single game is actually infinite.

This puzzle shares something in common with the Dirac Delta Function[7], in that one cannot simply "sweep under the rug" the scenario where the coins simply never come up heads-heads, because of the unimaginably vast (infinite) number of players who are affected by that scenario. The product of the infinitesimal probability that the game never ends, with the infinite number of pizzaless players that will be affected by that case, actually yields the same percentage of players as all other cases combined. So if you have been notified only that you were part of the game, without the information that the game ended in a finite number of rounds, there's actually a 50/50 chance that you were part of the scenario where the free pizza was _never_ awarded. The other half of the time, when the game ends in a finite number of rounds, your chances are 50%, which balances out to 25% overall, (not coincidentally) the same probability as flipping a fair coin heads-heads.

Not coincidentally is right... my method was a lot easier :P. I really like this explanation, although I disagree that it's all that much like the Dirac delta "function". While you don't actually need to consider the zero probability result which affects infinite people in your calculations, that such a result is possible is obviously important. People who think this system of choosing contestants improves an individual's probability of winning beyond 25% should by the same logic think a Martingale betting system improves your expected income in gambling.
As for the fraction of the people playing who won, I think the calculation is the weighted average of all possibilities, which is:
 lim_{n->inf}  1/(n+1) * sum_{i=0}^{n} (3^i / 4^(i+1)) * (2^i / (2^(i+1) - 1))
    =  lim_{n->inf}  1/(4(n+1)) * sum_{i=0}^{n} (3/2)^i / (2^(i+1) - 1)

Which can't be right, because it seems to sum to 0. Well, I'll correct it later. Either way, the probability that YOU win is 25%, as stated. 04:08, 2 May 2009 (UTC) You should be multiplying outside of the sum by 1/4. Harfatum 07:00, 3 May 2009 (UTC)

I feel bad for all those people who don't get any pizza. 20:29, 10 July 2009 (UTC)

I read the question differently. The way I read it, you have been asked to be a part of the 'pool' of possible people for the game. In that case, I see the chance of you getting a pizza to be the sum of the probabilities of being selected for a given round AND the probability that that round wins pizza. I agree that any round gives 1/4 odds of winning a pizza, but since the pool is defined as being almost infinite, the odds of being selected for the round is functionally zero in every case. In this situation, you have no chance for pizza.

The round that wins will always account for slightly more than half the population. Therefore, if you participate, you have a slightly better than 50% chance of getting a pizza. Confusion arises because you naively want to analyze the puzzle as: "I get a pizza if the coin lands on heads twice. Therefore, there is a 25% chance that I will get a pizza." This ignores the fact that each coin flip is witnessed by a different-sized population, so the chance of your *observing* the coin landing heads twice is in fact a little better than 1/2. The correct way to analyze the situation is "what is the chance that I will *witness* a coin landing on heads twice, and therefore subsequently receive a pizza?" Since the game is arranged so that there will always be a slightly larger number in the winning round than in all previous rounds combined, the answer is obviously 1/2. If you want to look at it mathematically: each round n contains 2^(n-1) participants, and by the nth round, inclusive, ((2^n)-1) people will have played. When everyone plays the game, and round n wins, the pizza king has to give away 2^(n-1) pizzas. Suppose, though, that he gave you a choice: you could either get a pizza if your round won two coin flips (seemingly a 1/4 chance), or you could get a guaranteed 3/8 of a pizza right now, seemingly a better deal. If everyone chose the guaranteed 3/8, the pizza king would only have to give away (3/8)*((2^n)-1) pizzas, which is to say, ((3*(2^(n-3)))-3/8) pizzas. The second expression is always smaller than the first, and as n becomes large the ratio of the second to the first approaches 3/4. Therefore, it is absurd to say you have a 1/4 chance of getting a pizza: you must have a 1/2 chance.

One last way of thinking of it: suppose the pizza king made videos of himself flipping coins twice until in one video he got two heads. Then he got a whole bunch of people together. Half of the people were shown the video of him getting the two heads, and then got a pizza. People in the other half were broken into smaller groups, each of which was shown one of the videos where he got less than two heads. Everyone in this second group was then sent home without pizza. This is obviously analogous to the situation here presented, and each player obviously has a 50% chance of seeing two heads and getting a pizza. By changing the number of observers so that half will see a success, half will succeed. The coin-flipping is a red herring.

I believe a clue that the intended answer is 50/50 is that the king used 2 coins instead of 1, as using 1 coin would have caused both explanations to return the same answer. The second coin was likely added to prevent this.

This got me really confused. One thing that came up right away is that I think the puzzle question is about conditional probability: "What is the probability that you win, conditioned on your having been selected to play?", or P(W|S) if W="win" and S="selected to play". (If I'm wrong about the meaning of the question then the rest of my math is beside the point.) The reliable way I know to answer questions like that is to set up the joint distribution on W,S and then compute.

To make it easier for myself, I started with a 1-round game: same rules as usual, but if the first round isn't a winner, the game stops. Also, I assume there is some population of potential players, and players are selected in each round uniformly from the people who have not played yet. For the 1-round game, we can assume that there is only 1 potential player WLOG (if we are interested only in P(W|S)). In that case, P(S) = 1 because you are always selected, and P(W) = 1/4 from the toss. P(W,S) = P(W) always because you can only win if you play. So P(W|S) = P(W,S) / P(S) = P(W) / P(S) = 1/4 / 1 = 1/4, which also happens to be the obvious answer.

Then I did a 2-round game, with a population of 3 (to guarantee enough players). For this version, I compute the probability based on how many rounds the game takes, 1 or 2 (R=1 or R=2). So the probability of getting picked is P(S) = P(S|R=1)P(R=1) + P(S|R=2)P(R=2). There is a 1/4 chance of R=1, and in that case, you have a 1/3 chance of being picked. Otherwise, there is a 3/4 chance of R=2 and in that case you are always picked (because all 3 players are picked). So P(S) = 1/3*1/4 + 3/4 = 5/6. If the game goes 1 round, if you are picked, you win (because the game has to be won in round 1 to have a 1-round game). If the game goes two rounds and you are picked, to win you must be not picked for the first round (p=2/3) and the second round must be a winner (p=1/4) so P(W) = P(W|R=1)P(R=1) + P(W|R=2)P(R=2) = 1/3 * 1/4 + 2/3 * 1/4 * 3/4 = 1/12 + 1/8 = 5/24. So P(W|S) = 5/24 / 5/6 = 1/4, the obvious answer again.

I'm pretty sure that for any finite game of N rounds, the result is the same, and I did also get 1/4 using Monte Carlo for several game sizes. (I also wrote down the full joint for the 2-round game as a table and counted off cells to make sure I was doing my conditional probability right.) So if the game is at all well-behaved, I'd expect the same result for a game with unlimited rounds, especially because P(game goes on forever) = 0, so the set (game goes on forever) should be ignorable for all probability calculations that don't directly refer to that set by itself (or a subset of that set).

But there is a problem, which I think means the analysis above cannot really be extended to the unlimited case, which is that for the unlimited case, there has to be an infinite number of potential players. And there is no uniform distribution over a countably infinite set. So I think that either there is something missing from the problem (the distribution of how players are selected) or that the problem is ill-founded but doesn't want you to easily see that (by implying that players are selected uniformly when that is not possible).

It is possible to ask a version of the question by saying "what is the probability that a player chosen uniformly at random (from among those invited to play) has won?" In that case, for a 1-round game, the answer is 1/4. For a 2-round game, 3/8 (1/4 chance of a 1-round game, in which case the chosen player has won = 1/4*1 = 1/4, plus 3/4 chance of a 2-round game, in which there is a 1/4 chance of a win, with a 2/3 chance of selecting a winner (2 winners of 3 entrants) = 3/4*1/4*2/3 = 1/8). I believe that for an n-round game it is 1/2 - 1/2**(n+1), so the limit as n->+inf is 1/2. I think the "game goes on forever" doesn't cause a problem in this case, because that has probability zero, and zero winners, so when we include it in the probability calculation, it shows up as 0*0.

Note that for both the finite games and limit cases, "the probability that a player chosen uniformly (among people selected to play) at random has won" is not equal to "the probability you will win if selected to play the game". This is very strange to me, but it really does come out to 1/4 and 3/8 for the 2-round game no matter how I calculate it. About the most I can figure out about it is that "selected to play" is just a different event from "chosen randomly from selectees", so they are just different. :-)

The case "game goes on forever" _does_ cause a problem as formulated. If you assume that the game terminates ("Assume that Head Head will happen eventually"), I believe 1/2 is the only right answer, though it seems hard to formalize that case without distorting the math. (One way to do so is by having the host flip all coins ahead of time, and only start calling people once he hits Head Head. This guarantees that nobody is called if Head Head never happens, which resolves the problem without affecting our question: That anyone was called implies that this didn't happen, so it isn't a possibility for our puzzle.)

Otherwise, there are multiple interpretations to the game that give you different results (from anywhere between 1/6 to 1/2) based on what exactly you do in the case that heads heads never happens. If you just say that in that case "nobody wins", then your chance of winning is arguably 1/6th, as can be shown if you consider this variation of the game:

[1] Before round N: Game works as stated
[2] At round N: Everybody in this round is called, but nobody receives a pizza.

If you take the limit of N -> infinity, you'll see that 1/6th of the calls are made to players that receive a pizza. If you instead change the rule [2] to

[2a] At round N: Nobody is called for this round, nobody receives a pizza.

then you get 1/4 (if my memory serves right -- it's not hard to do the calculation). If you instead change the rule [2] to

[2b] At round N: Everybody in this round is called and receives a pizza.

Then you get 1/2.

The fact that this outcome happens with probability zero (n->infinity) does _not_ mean that you can ignore it; the impact of this round grows infinitely large just as its probability becomes infinitely small.

(Bonus question: If the game ends no later than round N, what's the chance that a person who picks up a call has been called in round N for N->infinity? i.e. "called in round infinity" -- it's a nice result.)

The Sam And Polly Problem[edit]

3 <= x <= y <= 97. Sam knows only x + y; Polly knows only x * y.

Sam (to Polly): "You can't know what x and y are."
Polly (to Sam): "That was true, but now I do."
Sam (to Polly): "Now I do too."

The problem can be narrowed down by logical steps, but I don't think it can be solved completely without some brute force. To get started, observe that if x and y were prime, Polly would be able to deduce x and y from their product. So the sum cannot be an even number. Similarly, the sum cannot exceed 99, because if y were 97, then Polly would be able to deduce x and y from the product. And so on.

I will post the final solution for x and y if enough people ask, but I'm curious how far you all will get with this.

I think I know how to work it out, but I don't have the time. Perhaps I'll work it out in a few days if nobody gets it in that time. --Michael

I realized that whatever the sum is, it cannot be 3, 4, or 5 higher than any prime: a prime number plus 5/4/3 means Polly would be able to know the answer. So then I devised a simple brute-force method: choose a sum that is odd and meets the above criteria. Then look down the list of prime numbers that could add to it. Find the other factors for the product of those primes, which is easy: double the prime and halve the associated even number. The sum of the other factors cannot meet the above criteria, or Sam's statement wouldn't have helped Polly.

19 is the first sum that comes up. 13 plus 6 are the first numbers to check. The other factors for the product are 26 and 3, but 29 meets the same criteria as 19 and is not helpful. The next numbers to check are 11 and 8. 4 and 22 are the other factors, and 26 is the associated sum. Polly could know if 26 was the sum. 8 and 11 indeed seem to be the answers.

I'm afraid it isn't that simple, although we've been working along similar tracks. Counterexample: 14 + 5 = 19, 14 * 5 = 70, other factors are 7 & 10, which sum to 17...which does not meet your given criteria. Since the "solution" which allows Polly to know the numbers is not unique, Sam couldn't also figure out the numbers from her statement (as per Sam's second statement). --Mark

Incidentally, the first sum that matches your criteria is 13, not 19.
I got 13 too for that first bit. I thought I'd solved it, then I realised how much more complicated it is... brute-forcing such a big problem doesn't sound like much fun --Michael

(corrected and updated) With a little brute force I found a possible solution by hand. First, to find the sums that don't lead to unique products:

- even numbers can be written as the sum of two primes, so they don't qualify
- a prime + 4 can also be removed
- any number from 53+3 to 53+97 can be removed, as it could contain a term 53
- any higher number can be removed, as it could contain a term 97
- any product of three and a prime can also be removed, as it could be the sum of a prime and the double of that prime

This leaves us with: 13, 19, 25, 29, 31, 37, 43, 49, 53, 55. We can assume that Sam and Polly both know this. For each of these, the number of possible pairs is limited, and it is easiest to start with the smaller ones. If we can find two pairs with a given sum whose products lead to no other of the sums listed above, then we can discard that sum as the source for a solution, because then Sam could not find the solution in the last step of the riddle.

For 13: 3*10=30 and 5*8=40 lead to no other of the sums above.
For 19: 5*14=70 and 8*11=88
For 25: 3*22=66 and 6*19=114
For 29: Amazingly, only 13*16=208=4*52=8*26 does not lead to any other of the sums above; the other 11 possible pairs do, for example 3*26=78=6*13, 6+13=19, etc... This makes 13 and 16 a possible solution.

I am guessing 31 to 55 don't give more solutions, since the number of possible pairs becomes larger for every sum.

I just wonder why the riddle is limited to x and y under 97. The analysis above holds at least also for x, y <= 105. An interesting question is whether any higher limit would yield more possible solutions. I doubt whether it would be possible to find other solutions (among all odd numbers that are not a prime + 4), but proving it is another matter of course. The problem would be even more impressive if it could be formulated as just x, y >= 3. -- K

Nice job, 13 and 16 is indeed the correct answer. I can't help but wonder how this puzzle originated in the first place!

Just a couple of additions to K's solution.

- I didn't understand the "prime times three" argument at first (and indeed I think it's wrong for 5 and 7). But for primes whose square is larger than 97 it is correct, so 39 actually does not belong to the list.

- I have found a fast way to rule out some of the elements on the list. If an element can be split as prime + 2^n (with n >= 3), then together with Sam's first phrase this leads to a unique solution for Polly, namely prime * 2^n, because all other factorizations are of the form (prime*2^m) * 2^(n-m) and their sums are even.
Any number that has two or more decompositions like that cannot be the solution.

This rules out:

- 19 (11 + 8 and 3 + 16)
- 37 (29 + 8 and 5 + 32)
- 49 (41 + 8 and 17 + 32)
- 55 (47 + 8 and 23 + 32)

and leaves six more numbers to be checked by brute force...

-- Roberto

Actually, you need an upper bound to get rid of even numbers, because if there is no upper limit Sam has to use the Goldbach conjecture! Maybe they are perfect logicians and then know the answer to that conjecture, but maybe the answer is no :) so that's why you can't just require x, y > 2! -- protos_drone

We wrote a computer program to go through all of the possible combinations automatically, and we indeed found 13 and 16 as a solution. However, to address K's conjecture that it is the only solution for x, y >= 3, we found no further solutions for x, y <= 250, so so far the conjecture holds true. - James & Adam

Should the question be changed to say x and y are integers? - nkdz

In answer to K's conjecture, since I was too lazy to work this (the original) problem out by hand, I wrote a bit of software to do it for me... a quick edit let me put the limit up to 1000 without too much effort. A second solution does appear at this point: 16, 73. This might seem a bit strange initially (as both numbers are actually in the 3 to 97 range) but, of course, the reason for this is that when the limit is lower you will reject the idea of multiplying 73 by anything.

Given that Sam knows that Polly cannot know the answer, we know that Sam has one of the following sums: 13, 19, 25, 29, 31, 37, 43, 49, 53, 55, 59, 61, 67, 73, 79, 81, 85, 89, 91, 95, 97, 99, 103, 109, 115, 119, 121, 125, 127, 133, 137, 139, 145, 147, 149, 151, 157, 163, 165, 169, 173, 175, 179, 181, 187, 189, 191, 193, 199, 205, 207, 209, 211, 217, 221, 223, 225, 229, 235, 239, 241, 247, 251, 253, 257, 259, 263, 265, 269, 271, 277, 279, 283, 289, 293, 295, 299, 301, 305, 307, 313, 319, 323, 325, 329, 331, 333, 337, 343, 345, 347, 349, 355, 359, 361, 365, 367, 369, 373, 375, 379, 385, 389, 391, 395, 397, 399, 403, 407, 409, 415, 419, 421, 427, 429, 431, 433, 439, 441, 445, 449, 451, 455, 457, 459, 463, 469, 473, 475, 477, 479, 481, 485, 487, 493, 497, 499, 505.

I thought that I'd put the (awful) code up since I might have made an error (I'm certainly not going to work it out by hand):

Actually, putting the limit up to 4000 reveals that there are 5 solutions by this point: (13, 16), (16, 73), (16, 133), (64, 127), (123, 128). I find it interesting that one of the numbers is always a power of 2... is this true for all solutions as the upper limit is raised? - Ossa

401 Circles[edit]

To fit 401 unit-diameter circles in a 2-by-200 rectangle, the construction is as follows: First set up the standard triangular packing with 399 circles: 200 along the bottom of the rectangle, and 199 resting in the gaps. Then, from the left, group the circles in sets of three. Observe that every second triad can be moved upward a small amount; go ahead and move each second triad upward until they just touch the top of the rectangle. This opens up small gaps between adjacent triads. Finally, compress all the triads horizontally to close these gaps. The compression is about 0.6%, which leaves just enough room at the edges of the rectangle to add circles 400 and 401. (The smallest rectangle for which this works is about 2 by 165, but the problem statement is cleaner with 2 by 200.)

The Hardest Logic Puzzle Ever[edit]

If you're stumped on the solution to this, Wikipedia has a page by the same name. This problem is different from the original in three ways:

  • The gods were set in front of three paths and the objective was changed from determining the identity of the gods to the identity of one of the paths.
  • Random behaves in such a way as to make one of the shortcut solutions invalid (This analysis exists on the Wikipedia page and the original, though).
  • The exploding-heads variation was added, for which only two questions are allowed (This analysis also exists on Wikipedia and in the original). 06:00, 27 April 2009 (UTC)

First you ask A, "If I asked you if B was Random, would you say ja?" Asking the question in this way, as per the Wikipedia article, means that both True and False would answer "ja" if B is Random, or "da" if B is not Random. If he answers "ja" then either B is Random, or A is Random and happened to answer that way. Either way you can be sure that C is not Random. Similarly, if he answers "da" then you know that B is not Random. You then ask the one you know is not Random, "Is A the path to heaven?" If he answers "ja" then it is; if he answers "da" you ask the same question about path B. If he answers "ja" this time you take path B; otherwise the only choice left is C.

I agree with the first part of your explanation. However, once you figure out one god who is not Random, I think you should ask him "If I asked you whether A is the path to heaven, would you say ja?" If he says da, you ask this question again about path B. This takes care of the possibility that "ja" means no.

Major Vandalism[edit]

A good 75% or more of the edits on this page are pure vandalism from apparently the same source. They just replace text with random characters, so it's probably a bot with a dynamic IP. Does anybody know of a good way of stopping this? 03:28, 8 June 2009 (UTC)

Lock page? -- 11:23, 19 June 2009 (UTC)
The page really should be semiprotected, if possible. I have no idea why the spambot is even here; it isn't advertising anything. 18:11, 21 June 2009 (UTC)


I haven't found the winning strategy yet, but I can prove the first player has one. Assume the second player has a winning strategy; the first player can then take out the lower right box. No matter what the second player's move is, it will be like a first move (the lower right box would have been removed on his turn anyway), and now the first player can use the winning strategy and win, contradicting the assumption that the second player has one. So the winning strategy must be the first player's (there has to be a winning strategy, as there is no randomness, so perfect players will always get the same result). --AtomicSheep 19:50, 16 June 2009 (UTC)

That is a pretty straightforward strategy-stealing argument, which works for all finite grids. For a finite-by-infinity grid this no longer works, since there is no lower right square. In this case, whoever is first forced to turn it into a finite a x b box will lose, as per the argument above. But nobody will generally be FORCED to do so, so things might be more complicated. The same goes for an infinity-by-infinity grid. In the case of a 1 x a grid, the first player will always win by leaving a single box for the second. This even holds true for a 1 x infinity grid. 18:18, 21 June 2009 (UTC)
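The claim that the first player wins every finite grid can be checked by brute force. A sketch, assuming the game is Chomp-style (my reading of the thread): a move picks a remaining square and removes it together with everything below and to the right of it, and whoever takes the last, poisoned corner square loses.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def mover_wins(rows):
    # rows: non-increasing tuple of row lengths; square (0, 0) is poisoned.
    for i in range(len(rows)):
        for j in range(rows[i]):
            if i == 0 and j == 0:
                continue  # taking the poisoned square is an immediate loss
            # Truncate every row from i downward to length at most j.
            new = tuple(min(r, j) if k >= i else r for k, r in enumerate(rows))
            new = tuple(r for r in new if r > 0)
            if not mover_wins(new):
                return True   # found a move leaving the opponent a lost position
    return False  # only losing moves remain

# The first player wins every small finite grid except the trivial 1x1.
assert not mover_wins((1,))
assert all(mover_wins(tuple([n] * m))
           for n in range(1, 6) for m in range(1, 6) if (n, m) != (1, 1))
```

This only confirms existence for small boards; as the thread notes, the strategy-stealing argument proves existence for all finite boards without exhibiting the strategy.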

At infinity*infinity the first player has a winning strategy: if we mark the rows 1,2,3,4... and the columns 1,2,3,4..., the first move is (2,2). Then if the opponent makes move (a,b), you make (b,a). (a or b will be 1, but this doesn't matter.) After your move the table is symmetric, and after your second move the table is finite, so after finitely many moves you win. For a*infinity I don't have the general solution, but the answer is NO. Here is a (nontrivial) example, 2*infinity, with the same markings: a) If the first player plays (1,x), the table becomes a finite rectangular table: the 2nd player has a winning strategy. b) If the first player plays (2,x), the 2nd player will take (1,x+1), and so on: for every first-player move (a,b), the 2nd player answers (a-1,b+1) or (a+1,b-1) [only one of these will exist on the table, so that's a bijection for every field except (1,1), which is a losing place]. - protos_drone 2:30, 01 Aug 2009

For a*infinity: a=1 -> the first player has a winning strategy. a=2 -> the 2nd player has a winning strategy (as mentioned above). a>2 -> the first player has a winning strategy: he makes the field 2*infinity, after which he wins with the above-mentioned strategy.

The answer to the 2nd question: YES! Consider an infinity*infinity table where only the following places are free to choose: (1,1), (1,2), (2,1), (2,2) (a 2x2 rectangle) and the places (1,n) and (n,n). The table has only one infinitely long row/column, the first row, and every place is a losing position, because you can win the game by mapping the places (a,b) with b>1 to (a,2): you get the 2*infinity table, where the 2nd player has a winning strategy, and every move has the same result on the 2*infinity table as on the table first mentioned. (Actually you couldn't reach it from the infinity*infinity table with regular moves, but the question didn't require that it used to be one. :D)

Sorry, but I don't understand the last part. What I meant in the second question was a "midgame" field, which started as an infinitely long rectangle, where exactly one row OR column (exclusive-or) is infinitely long.
What do you mean by this: "the following places are free to choose: ... and (n,n)"? If the place (n,n) can be deleted, then it must be an infinity*infinity field.
I meant that the question didn't allow only infinitely long ROWS/COLUMNS. Indeed the table has one infinitely long row and an infinitely long diagonal, but that fits the conditions.
On the other side: if you meant an infinite*infinite table that some people have played on, and it now has only one infinite row (WLOG), then the answer is NO.
The proof is indirect: assume that for every field in that row (1,2,3,4...) the 2nd player has an answer with which he always wins. There are only finitely many fields outside that row (the 2nd player can't have "answering fields" on the row itself, because those would be winning places for me), so the 2nd player must use the same "answering field" for many fields of the infinitely long row (pigeonhole principle). Now we will find the contradiction: start from the 1st field of the row and check the opponent's "answering field"; do this until two answering fields coincide. If that occurs at fields n and k (n > k), the 1st player's winning strategy is: n, (opponent makes a decision), k, ... Now the 1st player wins, because we assumed the 2nd player has a winning answer everywhere: contradiction. - protos_drone
Wow! I wrote the puzzle, but I didn't know the answer to the second question myself. I always wondered; thank you for showing me the answer.

16 Cubes[edit]

I don't think this is possible. There are 32 possible states the cubes could be in (any of the 16 cubes could be lighter or heavier), but you can only get 27 possible answers from the weighings (each of the 3 weighings can give one of 3 results). I have seen a version of this puzzle that is identical except that it has 12 cubes; that version is doable and is indeed a very good puzzle. --Tektotherriggen
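The counting argument above, using the comment's own numbers:

```python
# 16 cubes, each possibly heavier or lighter, gives more scenarios than
# three three-outcome weighings can possibly distinguish.
states = 16 * 2      # which cube is odd, and whether it is heavy or light
outcomes = 3 ** 3    # each weighing tips left, tips right, or balances
assert states > outcomes   # 32 > 27: impossible with 16 cubes

# With 12 cubes there are only 24 states, so three weighings can suffice.
assert 12 * 2 <= outcomes
```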

But isn't the coin problem already on the page? -- 11:00, 14 June 2015 (EDT)

Four fours[edit]

# = Term (Penalty) | # = Term (Penalty) | # = Term (Penalty)
0 = 44 - 44 (1) | 34 = (4-4/4)c(4) (3) | 68 = 4+(4*4*4) (5)
1 = 44 / 44 (2) | 35 = 4!+44/4 (7) | 69 = ? (?)
2 = 4/4 + 4/4 (5) | 36 = 44 - 4 - 4 (2) | 70 = ? (?)
3 = (4+4+4)/4 (4) | 37 = ((4)c(4/4))-4 (3) | 71 = ? (?)
4 = 4+((4-4)*4) (4) | 38 = (4)c(4*4)-sqrt(4) (5) | 72 = ? (?)
5 = ((4*4)+4)/4 (5) | 39 = ? (?) | 73 = ? (?)
6 = ((4+4)/4)+4 (4) | 40 = (4)c((4-4)*4) (3) | 74 = ? (?)
7 = 44/4 - 4 (3) | 41 = ((4*4)c(4))/4 (4) | 75 = ? (?)
8 = ((4*4)-4)-4 (4) | 42 = (4)c((4+4)/4) (3) | 76 = ? (?)
9 = 4+4+(4/4) (4) | 43 = 44 - 4/4 (3) | 77 = ? (?)
10 = (44-4)/4 (3) | 44 = 44 - 4 + 4 (2) | 78 = ? (?)
11 = (4/4)c(4/4) (4) | 45 = 44 + 4/4 (3) | 79 = ? (?)
12 = (44+4)/4 (3) | 46 = (4)c(4+4-sqrt(4)) (5) | 80 = (4+4)c(4-4) (2)
13 = (44)/4+sqrt(4) (6) | 47 = ? (?) | 81 = (4+4)c(4/4) (3)
14 = (4^(4-4))c(4) (4) | 48 = (4+4+4)*4 (4) | 82 = (4+4)c(4-sqrt(4)) (5)
15 = 44/4+4 (3) | 49 = ? (?) | 83 = ? (?)
16 = 4+4+4+4 (3) | 50 = ? (?) | 84 = (4+sqrt(4)+sqrt(4))c(4) (8)
17 = (4/4)+(4*4) (5) | 51 = ? (?) | 85 = ? (?)
18 = (4/4)c(4+4) (3) | 52 = 44 + 4 + 4 (2) | 86 = ? (?)
19 = 4!-4-4/4 (8) | 53 = ? (?) | 87 = ? (?)
20 = 4*(4+(4/4)) (5) | 54 = ((4/4)+4)c(4) (3) | 88 = 44+44 (1)
21 = ((4+4)c(4))/4 (3) | 55 = ? (?) | 89 = ? (?)
22 = ((4+4)/4)c(sqrt(4)) (6) | 56 = ((4/4)c(4))*4 (4) | 90 = ? (?)
23 = (sqrt(4))c(4-4/4) (6) | 57 = ? (?) | 91 = ? (?)
24 = ((4+4)/4)c(4) (3) | 58 = ? (?) | 92 = (4!-4/4)*4 (9)
25 = (sqrt(4))c(4+4/4) (6) | 59 = ? (?) | 93 = ? (?)
26 = 4!+(4+4)/4 (8) | 60 = 44+(4*4) (3) | 94 = ? (?)
27 = 4!+4-4/4 (8) | 61 = ? (?) | 95 = (4!)*4-4/4 (9)
28 = 44-(4*4) (3) | 62 = (4+sqrt(4))c(4-sqrt(4)) (8) | 96 = (4!)*4+4-4 (8)
29 = 4!+4+4/4 (8) | 63 = ((4^4)-4)/4 (6) | 97 = (4!)*4+4/4 (9)
30 = (4+4/4)!/4 (9) | 64 = (4+4)*(4+4) (4) | 98 = ? (?)
31 = (4-(ceil(0.4)))c(4/4) (13) | 65 = ((4^4)+4)/4 (6) | 99 = ? (?)
32 = (4*4)+(4*4) (5) | 66 = (4+sqrt(4))c(4+sqrt(4)) (8) | 100 = (4!+4/4)*4 (9)
33 = ? (?) | 67 = ? (?) |

65% DONE

I am going to assume we can use composition with any number, as it was listed as a defined operation. Here are a few improvements and additions (by no means complete):

2 = 4-((4+4)/4), penalty 4
3 = (4+4+4)/4, penalty 4
4 = 4+((4-4)*4), penalty 4
5 = ((4*4)+4)/4, penalty 5
6 = ((4+4)/4)+4, penalty 4
8 = (4+4+4)-4, penalty 3
9 = 4+4+(4/4), penalty 4
10 = (44-4)/4, penalty 3
11 = (4/4)(4/4), penalty 4. This is (1 composition 1).
12 = (44+4)/4, penalty 3
14 = (4^(4-4))4, penalty 4 (1 compo 4)
15 = (44/4)+4, penalty 3
16 = 4+4+4+4, penalty 3
17 = (4/4)+(4*4), penalty 5
18 = (4/4)(4+4), penalty 3 (1 compo 8)
20 = 4*(4+(4/4)), penalty 5
21 = ((4+4)4)/4, penalty 3 (8 compo 4, divide by 4)
24 = ((4+4)/4)4, penalty 3 (2 compo 4)
28 = 44-(4*4), penalty 3
32 = (4*4)+(4*4), penalty 5
34 = (4-(4/4))4, penalty 3 (3 compo 4)
37 = (4(4/4))-4, penalty 3 (4 compo 1, subtract 4)
40 = 4((4-4)*4), penalty 3 (4 compo 0)
41 = ((4*4)4)/4, penalty 4 (16 compo 4, divide by 4)
42 = 4((4+4)/4), penalty 3 (4 compo 2)
48 = (4+4+4)*4, penalty 4
54 = ((4/4)+4)4, penalty 3 (5 compo 4)
56 = ((4/4)4)*4, penalty 4 (1 compo 4, multiply by 4)
60 = 44+(4*4), penalty 3
63 = ((4^4)-4)/4, penalty 6
64 = (4+4)*(4+4), penalty 4
65 = ((4^4)+4)/4, penalty 6
68 = 4+(4*4*4), penalty 5
80 = (4+4)(4-4), penalty 2 (8 compo 0. Can do without compo as [(4+(4*4))*4] with penalty 5 )
81 = (4+4)(4/4), penalty 3 (8 compo 1. Can do without compo as [(4-(4/4))^4] with penalty 6 )

Shove them in the official answer table if they're valid

-- TLH

Checked all TLH's, added them to the table, and filled the rest of the first column except 33, plus some others including 100 with penalty 9.

I'm not sure about 31: can the first "extra" operation do 4 -> 0.4? -- protos_drone

With a bit of help of my computer I found the following:

13 is wrong. It should be

13 = (44)/4+sqrt(4)  : 6

Further I improved or found the following (I especially like 77):

20 = (44)-(sqrt(4)c4)  : 4
22 = sqrt((4c(4+4))c4)  : 4
26 = (44/(sqrt(4)))+4  : 6
30 = ((4+4)*4)-(sqrt(4))  : 7
31 = ((4/4)c(4!))/4  : 8
33 = ((4c(sqrt(4)))+(4!))/(sqrt(4))  : 13
39 = (4c(4/4))-(sqrt(4))  : 6
47 = 4c((4+(4!))/4)  : 7
49 = (4/4)+(4!)+(4!)  : 12
50 = (4c((sqrt(4))+4))+4  : 5
51 = (((4!)-4)c4)/4  : 7
53 = (4c(4!))/(4+4)  : 7
55 = (sqrt(4)c((4!)-4))/4  : 10
57 = (((4!)c4)/4)-4  : 7
58 = (4*4)+(4c(sqrt(4)))  : 6
59 = (((4!)c4)/4)-(sqrt(4))  : 10
61 = (sqrt(4)c44)/4  : 5
62 = ((4!)c(4+4))/4  : 7
66 = (sqrt(4)c(sqrt(4)))+(44)  : 7
67 = (((4!)c4)+(4!))/4  : 11
68 = (sqrt(4)c4)+(4c4)  : 4
70 = (4c((sqrt(4))+4))+(4!)  : 9
71 = ((4+(4!))c4)/4  : 7
72 = (4c(4+4))+(4!)  : 6
74 = ((4+(4!))/4)c4  : 7
76 = (((4!)-4)*4)-4  : 8
77 = (((4!)c(4!))/(4!))-(4!)  : 19
78 = ((4+4)c(sqrt(4)))-4  : 5
84 = ((4-(sqrt(4)))*4)c4  : 6
85 = (((4!)c4)/4)+(4!)  : 11
86 = (4c(sqrt(4)))+(4c4)  : 4
90 = ((4c4)*(sqrt(4)))+(sqrt(4))  : 9
92 = ((4c4)*(sqrt(4)))+4  : 6
94 = ((sqrt(4)c4)*4)-(sqrt(4))  : 9
96 = ((4-(sqrt(4)))c4)*4  : 6
98 = ((sqrt(4)c4)*4)+(sqrt(4))  : 9
99 = (((4!)c(4!))/(4!))-(sqrt(4))  : 18
100 = ((sqrt(4)c4)*4)+4  : 6

I didn't use the advanced operations, and I limited my search at penalty 20.
Further, if you are willing to accept that 0c4 = 4 you can easily improve more on 4 and 8.
We are still missing solutions for: 69, 73, 75, 79, 83, 87, 89, 91 and 93.

If we assume that they cannot be reached using just the basic functions (which seems very hard to prove to me), we could make these numbers as follows with the help of the ceiling and floor functions:

69 = floor((sqrt(4c(4^4)))+4) : 13
73 = floor((sqrt(sqrt(4+4)))*(44)) : 15
75 = ceil((sqrt((4^4)c4))+(4!)) : 17
79 = ceil(sqrt(sqrt(((44)^4)c4))) : 15
83 = ceil((4+4)c(sqrt(4+4))) : 11
87 = ceil((4+4)c(sqrt(44))) : 10
89 = ceil((44)+(4c(sqrt(4!)))) : 14
91 = ceil(sqrt((4+4)c(4^4))) : 13
93 = floor((4c(sqrt(44)))*(sqrt(4))) : 14

With the ceiling and floor functions we could also definitely improve 49, 67, 77 and 85, but I think the problem statement does not allow this, since they can be made without them.


12 Cubes[edit]

Label the cubes from 1 to 12. This'll be a little bit long, but I don't think there exists a shorter solution, so:

1st measurement: 1,2,3,4 | 5,6,7,8

Case I: they are equal. Then the 2nd measurement will be

1,2,3 | 9,10,11. If that's also equal, only 12 can be the one with different weight, so 1 | 12 will give the answer: heavier or lighter? If 1,2,3 | 9,10,11 aren't equal, we'll know whether the different cube is lighter or heavier; then we are left with 3 cubes and the information that one is (WLOG) heavier. That's possible with 1 measurement (for example 9 | 10).

Case II: one side is heavier, WLOG 1,2,3,4 > 5,6,7,8.

Then the 2nd measurement will be 1,5,9 | 2,6,7. If there is equality, we get the answer with 3 | 4, because we have the information that 3,4 + two neutral cubes are heavier than 8 + three neutral cubes: if 3 > 4, then 3 is the different cube and it's heavier; if 3 < 4, then 4 is the different cube and it's heavier; if 3 = 4, then 8 is the different cube and it's lighter.

If 1,5,9 < 2,6,7 then we know that the different cube is 2 or 5, so with 1 measurement, 1 | 2, we get: 1 = 2 => 5 is different and lighter; 1 < 2 => 2 is different and heavier (1 > 2 is impossible).

If 1,5,9 > 2,6,7 then we know that 5 and 2 are neutral, because we swapped them and the inequality still holds. So we get that 1 + two neutral cubes are heavier than 6,7 + one neutral cube, so, as above, 6 | 7 will solve the puzzle.

All cases are done. Q.E.D. - protos_drone

  • Another solution would be to do the following measurements:

1 2 3 4 | 5 6 7 8

9 10 11 4 | 1 2 3 8

12 11 3 5 | 9 1 7 4

No need to make conditional measurements here; any valid combination of results (valid, because <<<, >>> and === are not possible) will allow you to deduce which cube weighs differently, and whether it is lighter or heavier. For example, <=> means cube number 5 is heavier than the others.



You weigh the cubes like so:

1 2 3 4 5 6 | 7 8 9 10 11 12

One side is obviously going to be heavier. Divide that side into two groups of three, and weigh them.

So you have either:

1 2 3 | 4 5 6 - OR - 7 8 9 | 10 11 12

Again, one side is heavier, and you're left with three cubes, one of which is the heavy one. Randomly pick two of these three, and measure them. If one is heavier, that is the heavy cube. If both are equally heavy, the remaining cube is the heavy one.


Sorry Tim, but that won't work. In the puzzle the cube can be heavier OR lighter. If it's lighter, everything you've done here is useless, as you would have discarded the correct cube after step 1: the heavier side would turn out to be equal once you divided it in half. Nice approach, though, if the puzzle were simpler.


A very neat solution

I like this solution: Label the twelve cubes as follows (assume that the labelling does not affect weight).

 1 LLL
 2 LLR
 3 LEE
 4 LRE
 5 RLL
 6 REL
 7 RER
 8 RRE
 9 ELE
10 ERL
11 ERR
12 EER

On the i'th weighing, place the cube on the L(eft), R(ight), or E(lsewhere), based on the letter in the i'th column. Note down the way that the scales tip on each weighing as L(eft), R(ight) and E(ven).

Notes: There are 4 L's R's and E's in every column, so on each weighing you are weighing 4 vs 4. There is no EEE, so every cube is weighed at least once. There are no "reflection pairs" in the list i.e. LRE vs RLE, so no cube X is weighed opposite another cube Y on all weighings.

Look up the way that the balance moved in the table above - if you find it, then that is your cube, and it is heavier. If you don't find it, then look up the label's "reflection" (LRE vs RLE), and that will be your cube, and it is lighter.

For example, if the scales go right, then left, then even, you'll note down RLE, which does not exist in the list. But LRE does, so it is that cube, and it is lighter.

Exercise: There are 27 ways of choosing three letters from {L,R,E}. One of these is EEE. If you remove one from each reflection pair from the remaining 26, you are left with 13 unique non-reflective labellings. So why can't we do this with 13 cubes???
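The labelling scheme above can be verified exhaustively. A sketch that simulates all 24 scenarios (which cube is odd, and whether it is heavier or lighter), using the table's labels:

```python
from itertools import product

# The 12 labels from the table above, in cube order 1..12.
labels = ["LLL", "LLR", "LEE", "LRE", "RLL", "REL", "RER",
          "RRE", "ELE", "ERL", "ERR", "EER"]

def weigh_results(odd_cube, heavier):
    # For each of the 3 weighings, record which pan goes down (or E for even).
    out = []
    for i in range(3):
        side = labels[odd_cube][i]          # where the odd cube sits this weighing
        if side == "E":
            out.append("E")                 # odd cube off the scale: balance
        elif heavier:
            out.append(side)                # its own pan goes down
        else:
            out.append({"L": "R", "R": "L"}[side])  # opposite pan goes down
    return "".join(out)

def decode(result):
    if result in labels:
        return labels.index(result), True   # found directly: that cube, heavier
    mirror = result.translate(str.maketrans("LR", "RL"))
    return labels.index(mirror), False      # found via reflection: lighter

outcomes = {}
for cube, heavier in product(range(12), [True, False]):
    result = weigh_results(cube, heavier)
    assert decode(result) == (cube, heavier)
    outcomes[result] = (cube, heavier)
assert len(outcomes) == 24  # all 24 scenarios give distinct, decodable results
```

Note that the exhaustive check also confirms the "no reflection pairs" property: a result string and its mirror never both appear as labels.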

Buried cable[edit]

I think the answer must depend on the number of wires, because one trip separates the cables into 2 sections (the bulb is ON or it isn't), so after n trips I could separate the cables into 2^n sections. If there were k (> 2^n) cables, I couldn't identify some of them, because of the pigeonhole principle (2^n holes and k pigeons).

My solution gives [log_2 n] trips, where [.] denotes the ceiling function. It goes recursively, by induction: if there are 2 cables, 1 trip is enough and sufficient. If there are 3-4 cables, 1 trip is not enough and I'll do it with 2: (assume there are 4 cables; with 3 it's easier) twist the ends of 2 wires together and attach the battery to them. Now I have 2 sections, both with 2 wires, and I can finish with 1 measurement (actually, if I had n marked sections with 2 wires each, I could finish with 1 trip). If there are 5-8 cables (WLOG 8), twist the ends of 4 and I have 2 sections with 4 wires each: I need 2 more trips. ... Assume we need n measurements for 2^n wires. If there are 2^n + 1 to 2^(n+1) wires (let it be 2^(n+1)), I twist the ends of 2^n, and by induction I need only n more measurements, so n+1 including this one. - protos_drone

Here's a hint to point you in the right direction. I'll post the full solution later if nobody gets it. Let's say the cable has 10 wires. You can do something much better than just divide them into two groups of 5 wires. You could take 2 wires and twist them together. Then you could take 3 more wires and twist them together. Then 4 wires. That would leave you with one unconnected wire, a group of 2 connected wires, a group of 3 wires, and a group of 4 wires. You could then take your battery and light bulb to the other side of the cable and determine which wire was not connected to any other wires. Which wires were in the group of 2. Which wires were in the group of 3. etc.
Ah, I thought I must attach the battery to some wires, then leave the battery and go to the other side to check everything with the lightbulb (that would be logical, I think). Now I need the definition of a trip. You used a fixed twisting and then battery-and-bulbed everything, but if the ends of the cables are in different buildings, you must make several trips to check which cable is in a 2-group or 3-group etc. (where my pigeonhole-principle proof still works). - protos_drone
If you twist all the ends on one side of the cable as I describe, then take your battery and light bulb to the other end of the cable, you can determine all the groupings without making multiple trips. Here's how you can test whether two wires are connected on the other end of the cable. Connect one wire to the positive terminal on the battery. Connect one end of the light bulb to the negative terminal on the battery. Then connect the second wire to the other end of the light bulb. If the two wires you're testing are twisted together on the other end of the cable, the circuit will be completed and the bulb will light up. By repeating this process for every combination of two wires, you can map out how all the wires on the other end of the cable are twisted together without physically walking over to the other building.
Let's say you had enough wires to make 99 groups (2 through 100) plus you left one wire unconnected. After your first trip to the other building, you would have one known wire -- the one that was not connected to any other wires. For your 2-group, connect the known wire to one wire and leave the other disconnected. For your 3-group, connect the known wire to one wire in the group and leave the others disconnected. For 4-group and higher, connect the known wire to one wire in the group, leave one wire disconnected, and twist the remaining wires together. By doing this, you will have two more known wires for every group except your 3-group. In the 3-group you will only have 1 known wire. So for this example, when you make your second trip you will have uniquely identified 198 wires. Since your largest group only contains 100 wires (98 of which are unknown), you only need 98 of your 198 known wires to identify all the remaining wires. Just connect the first known wire to the first unknown wire in each group, the second known wire to the second unknown wire in each group, etc. Using this strategy, you can always identify all the wires in no more than 3 trips (though you will need a 4th trip if you want to untwist all the wires on the other end).
Looking at the problem more generically, if you start with N groups of wires with only one left over as an unconnected wire, then after 2 trips you have uniquely identified 2*N wires and your largest group only has N-1 wires that are unidentified. However, sometimes it may not be possible to evenly divide your wires into N groups. In this case, you may need to duplicate one of your groups in order to be left with only one unconnected wire on the first trip. So in this worst-case scenario you will have identified 2*N-4 wires after your second trip, and your largest group will contain N-2 unidentified wires. Since 2*N-4 >= N-2 for any cable that can be divided into at least 3 groups (any cable with at least 8 wires -- 2,2,3 + 1 unconnected), this strategy is sure to work for any cable with at least 8 wires.
Now, for cables with a certain number of wires it is possible to use a variation of the above strategy to uniquely identify all wires in only 2 trips. A cable containing 1035 wires is one example where this is possible, but it's not the only one. See if you can figure out the strategy and what special property has to be true about the number of wires for the strategy to work.

Interestingly it can't be done at all for two wires. One is trivial and three is fine but two is not possible...

You can cheat and tie one of the wires to earth (or, a bit more in the spirit of the exercise, use a big capacitor at either end to temporarily “complete” the circuit; note that this doesn’t actually make a circuit, in the sense that it can work for cables between two separate planets; it’s still cheating, just less so ;-) )

What about this? Make one group of 1 cable, one of two cables and so on; as mentioned before, I assume a triangular number of cables. Travel to the other side and identify the groups A_i. As a graph it now looks like a triangle, each row a group: X XX XXX ... Now twist the cables in columns, travel to the other side, untie the groups, and identify the groups B_i again. The pair (A_i, B_i) is known for every cable now (set at one side, measured on the other) and identifies each cable. This way, two journeys are sufficient. -- 05:31, 17 July 2012 (EDT)
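The triangle idea above is easy to sanity-check: the row groups have the pairwise distinct sizes 1..n and the column groups the sizes n..1, so the pair of observed group sizes pins down each cable. A sketch (n = 10, i.e. 55 cables; the names are mine):

```python
n = 10  # number of rows; n*(n+1)//2 = 55 cables in the triangle

# Cable (r, c) sits in row r (twisted together before trip one) and in
# column c (twisted together before trip two).
cables = [(r, c) for r in range(1, n + 1) for c in range(1, r + 1)]

# From the far side you can only observe the size of the group each
# cable belongs to: row r has r cables, column c has n - c + 1 cables.
signature = {cable: (cable[0], n - cable[1] + 1) for cable in cables}

# Every cable gets a distinct (row-group size, column-group size) pair.
assert len(set(signature.values())) == len(cables)
```

This also fits the earlier hint that 1035 wires is special: 1035 = 45*46/2 is a triangular number.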

There is also a solution that doesn't require connecting more than 2 cables at a time:

First, suppose there is an odd number of cables between the two buildings. Pair-up all but one cable, on side A, and travel to side B. The single unconnected cable can be trivially identified (as it cannot complete the circuit with any other cable), call it cable 1. Connect cable one to an arbitrary cable, and call it cable 2. Cable 2 is paired with another cable, call that one cable 3. Continue this until all cables are labeled with a natural number, in which case we have one big line starting at 1, ending at n. Travel back to side A. The unpaired cable is cable 1. To identify a cable with even rank, temporarily complete the circuit between that cable, and cable 1 (while temporarily breaking the connection with its successor). If it is impossible to (temporarily) break that circuit by breaking a connection on side A, then the cable must be cable 2. If there is exactly one connection that can be broken to break that circuit (then that connection must be 2-3), then the cable must be cable 4. In full generality, if there are k ways to break the circuit, then the rank is 2*(k+1). The odd ones can never complete the circuit with cable 1. They can be identified as simply the successor of an even cable. E.g. cable 3 is simply the cable that cable 2 is connected to on side A.

There remains the possibility of an even number of cables (other than 2). Pair up all but two cables on side A. On side B, we can identify both unconnected cables. We call one cable 1, the other cable 0. Then we treat the problem as the odd-numbered version and ignore cable 0. Travelling back to side A, we can quickly identify cable 1, as it completes the circuit with any cable except 0 (this would fail if 1 and 0 were the only two cables). Thus, we can label the remaining cable as 0 and reuse the solution for the odd-numbered version without loss of generality.

06Aug2016 (KnifeEdge's solution) One round trip is all that is required for any N.

(I'm assuming the wires act like USB cables, the battery like a portable USB battery, and the lightbulb like a flash drive; that is, the battery connects to one wire and the lightbulb to one wire.)

First, at building 1, just plug the battery into any cable and start pairing up the rest of the wires: so if wire A is connected to the battery, twist wire B with C, D with E, etc.

Go to building 2 and start trying the lightbulb on the wires until it lights up; label this wire 1 and remove the bulb. Now tie wire 1 to any other wire and label that wire 2. Randomly check each other wire with the bulb till it lights: this is wire 3. Keep going till you are at N wires. Take the bulb and leave all the pairs tied up as you have them now.

Go back to building 1, you have right now a giant single wire snaking its way between building 1 and 2.

Disconnect the B&C pair and test both wires with the bulb; only one side will light, and this side is closer to the battery end of the snake than the free end. Keep the bulb lit on whichever wire lights it. Disconnect pair D&E and note whether the bulb goes off: if it does, D&E are earlier in the snake than B&C; if the bulb stays lit, D&E are later in the snake than B&C. Do this for all the pairs on the building 1 side and you will know precisely where in the snake B&C belongs, and can thus match it with the numbered pair at building 2. Repeat this for all the alphabet pairs until complete.

This works with even and odd numbers of wires (an odd N would just have an extra one hanging around; if you tied the odd wire into the circuit at building 2, you would find that no other wire lights the bulb, so it would be instantly recognizable) and has no limit on what size N would work.

God of Math[edit]

We start with two tetrahedra joined at one triangle. We use up 9 matches and form 7 triangles. Then we bend both peaks in the same direction into the fourth dimension. All lines still have the same length, but the endpoints are nearer than before. If we do it just right, we can form three new equilateral triangles with only one match.

Actually what you made is the 4-dimensional analogue of the tetrahedron; it's called a 4-simplex. Maybe a slightly harder question: can you prove that this is the only solution? - protos_drone
Yes I can. We can project a three-dimensional object onto a two-dimensional surface; we do this every day. We must be able to do this with a four-dimensional object, too. So if our object has n vertices, our projected image will have n vertices, too. Now we need to find out whether we can create 10 triangles with 10 lines while using each vertex at least once. The lines do not necessarily have to be the same length, but we must make sure the lines do not actually cross each other. We can't do this with three or four vertices, but there is one solution with 5 vertices. This doesn't automatically mean that it is an actual solution, but I have already shown that there is a solution with 5 vertices (the 4-simplex). When we try to use more than 5 vertices, we end up with fewer than 10 triangles: you get the highest triangles/lines ratio if you connect every vertex with every other vertex, and with 5 vertices it only just worked. This isn't a proper proof, but you can verify it if you want to be sure, because there cannot be more than 10 vertices and there are only finitely many possibilities for each number of vertices.
That's why there's no solution with more or fewer than 5 vertices, and only one with 5.

I would start with laying a Star of David. This gives you 8 triangles with just 6 matches. After that it is easy to expand this to 10 triangles with the remaining matches, for example by building triangles on two opposite sides of the star with two matches each. - K

I would say that you have only 2 triangles in the Star-of-David-like construction, because of the definition of a triangle: it's a 2-dimensional object!
Clever. I first thought that each triangle had to be the same size as every other triangle. I also assumed that the matchsticks had to meet at their ends and not overlap. I also assumed that the triangles had to be made without anything in between the sides (i.e. the Triforce would only be 4 triangles). If I hadn't reached a 4-dimensional solution, I might've recognised how much I assumed without someone else pointing out my errors. :(

I wonder how many other people made assumptions as silly as mine.

The Star of David approach works quite nicely, but I make the star with 6 matches and get 6 triangles, not 8?
Yes, it's 8 triangles in the Star of David: 6 small ones and 2 large ones.

Diabetic Dilemma[edit]

You put the bottles in the pool, regular coke sinks while diet coke floats.

Or, assuming it's Coca-Cola, look at the lids of the bottles. Regular has a red lid, Diet has a white one.

Coke will be sticky while Diet Coke won't be.


A little hint: 1, 1, 2, 720, ~6*10^23, ~6*10^198; the 12th element of the sequence has over 286 million decimal digits (just to make it easy to interpolate xD). What do you think of when you see 720? The factorial, of course!

x! = "next number"
So I basically find the next number in the following sequence: 0, 1, 2, 6.
The "next number" can be (722!)!, although it's larger than 6*10^23.
Not so bad, but how would you deal with the previous element of the sequence? Let me help you a little bit: 6 = 3! - protos_drone
Then it looks like a double-factorialed Fibonacci sequence.
(1!)!, (1!)!, (2!)!, (3!)!, (5!)! = 120!
although 120! is again more than 6*10^23...
Oh, I finally got it. It's not Fibonacci, just double-factorialed integers.
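The rule as finally identified, a(n) = (n!)!, is quick to confirm against the hint:

```python
import math

# a(n) = (n!)!  gives 1, 1, 2, 720, then (4!)! = 24! and (5!)! = 120!.
terms = [math.factorial(math.factorial(n)) for n in range(6)]

# 24! is about 6.2*10^23 and 120! about 6.7*10^198, matching the
# "~6*10^23" and "~6*10^198" values in the hint above.
```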

The second sequence is pretty easy if you say it out loud and write it down vertically:

1) 1
2) 11
3) 21
4) 1211
5) 111221

Ignoring the starting 1, we'll start at 2). What is the first number in the previous line, and how many of them are in a row? The first number is 1, and there is 1 of them, so 2) = 11. Same thing for 3): the first number is 1, and there are two of them, therefore 3) = 21.

After 3) it's the same process, but the number you're checking changes. So using the same logic, we'll get 4) = 12, but we still have to account for the next number in the sequence, i.e. the 1, and there is only one of them, so 4) = 1211. It helps if you say it out loud.

4) (referring to 3) There is one 2 and one 1. (1211)
5) (referring to 4) There is one 1, one 2, two 1's (111221)
6) (referring to 5) There are 3 1's, two 2's, and one 1 (312211)
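The read-it-out-loud rule is the classic look-and-say step, and a minimal sketch (using Python's itertools.groupby to find runs of equal digits) reproduces the lines above:

```python
from itertools import groupby

def look_and_say(term):
    # Read runs aloud: "111221" has three 1s, two 2s, one 1 -> "312211".
    return "".join(str(len(list(run))) + digit for digit, run in groupby(term))

seq = ["1"]
for _ in range(5):
    seq.append(look_and_say(seq[-1]))
# seq is now ["1", "11", "21", "1211", "111221", "312211"]
```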

Now a much harder question is: can you prove that no number in this sequence will have a digit greater than 3? (Add this to the puzzle if you want.)

Proof by contradiction: Suppose there were a term somewhere in the sequence that contains a 4, and consider the first such term. The pair in which it appears cannot be 'x4', otherwise the previous term would contain a 4, so the pair that contains a 4 must be '4x'. Then the previous term must have four of x in a row. There are two possible placements:
  • Two pairs of 'xx' (so 'xxxx'). This would mean that the term before that has (2x) of x in a row, which would be encoded as (2x)x, not xxxx, so this cannot happen.
  • Three pairs, in the form 'axxxxb'. Similarly, the 'axxx' part can not appear as (a+x) of x in a row is (a+x)x, not axxx, so this cannot happen either.
Either way, a 4 cannot be generated. Any term with a higher number will also have a previous term that contains the first placement, so also cannot be generated, and the result follows.
Interestingly, if you seed the sequence with a term containing a digit x >= 4, it seems to degenerate into '1x' and stay like that (or '1y1x' if you put y x's in a row). '22' is an invariant term.
-- TLH

I found another arbitrary solution to the second problem using the following substitution rules (in descending order of precedence) applied from left to right:

12 -> 1112
11 -> 21
2 -> 12
1 -> 11

like this:

[1] -> 11
[11] -> 21
[2][1] -> 1211
[12][11] -> 111221
[11][12][2][1] -> 2111121211

Can the substitution rules be made shorter, or can fewer rules suffice? 10:24, 24 May 2011 (EDT)
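
For what it's worth, the rules can be checked mechanically. A sketch in Python (rule table and left-to-right, highest-precedence-first scan as described above; names are mine):

```python
RULES = [("12", "1112"), ("11", "21"), ("2", "12"), ("1", "11")]

def rewrite(s):
    # Scan left to right; at each position apply the first
    # (highest-precedence) rule that matches, then continue
    # after the matched text.
    out, i = [], 0
    while i < len(s):
        for lhs, rhs in RULES:
            if s.startswith(lhs, i):
                out.append(rhs)
                i += len(lhs)
                break
    return "".join(out)

term = "1"
for _ in range(5):
    term = rewrite(term)
    print(term)          # 11, 21, 1211, 111221, 2111121211
```

Note that the fifth term, 2111121211, differs from the look-and-say value 312211, matching the "arbitrary solution" shown above.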


Incidentally, a 'domino that covers 4 places' is called a tetromino.

To start with, it is obvious that one needs even values of n, since otherwise the total number of cells would not be divisible by 4. Trivially, n=2 is an empty board and can therefore be filled (with 0 pieces).

It is certainly possible to fill all boards of size n = 2 + 4k (k = 0, 1, 2...), and here's why:

First define a couple of compound pieces to work with. One is like a Brick, with two oppositely-rotated L-pieces locked together to make a simple 2x4 rectangle. The other is like a Joist, with two oppositely-reflected L-pieces stuck together along the 3-side (making a T-piece with a fat center).

Since the board is of size 2+4k, the edge (excluding the severed corners) is a multiple of 4, so we can fill the left and right hand sides of the board with (2k) Joists, with the base of the Ts facing inward. The remaining cells can be filled by (2k^2) Bricks, in the standard brickwork pattern. Just use a greedy method to place 2x4 horizontal bricks that fill the most top-left unfilled cell. -- TLH

Not so bad; I did it with induction: the inside is good by the inductive hypothesis, and with the marked dominoes there will be two 2*4k and two 2*4m shaped tables to fill, and that's possible since a 2*4 table can be filled.

And what about the n = 4k tables? -- protos_drone

Got it... I think. No tables of size n = 4k can be filled by L-pieces, and here's why:

Colour the cells black and white in horizontal stripes. As with the normal chessboard colouring, there are still 8k^2 - 2 of each colour.

Now with this colouring, any placement of an L-piece will cover 3 of one colour and 1 of the other. Therefore, for every placed L-piece, there must be a paired piece that complements those colours so the pair covers 4 of each colour (because we need the totals of each colour to be equal; without pairs, one will inevitably be greater than the other). This implies that the quantity of each colour must be 0 (mod 4), whereas 8k^2 - 2 ≡ 2 (mod 4), so it can't be done.

Could also argue that the colouring requires an even number of L-pieces, but an n = 4k board will have 4k^2 - 1 L-pieces, an odd number. -- TLH

that's it! nice --- protos_drone
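
For what it's worth, the colour counts in the argument above are easy to verify directly. A small sketch (assuming the board is the n×n square with its four corner cells removed, coloured in horizontal stripes by row parity):

```python
def stripe_counts(n):
    # n x n board minus the four corners, coloured in horizontal stripes.
    corners = {(0, 0), (0, n - 1), (n - 1, 0), (n - 1, n - 1)}
    cells = [(r, c) for r in range(n) for c in range(n)
             if (r, c) not in corners]
    black = sum(1 for r, c in cells if r % 2 == 0)
    return black, len(cells) - black

for k in range(1, 6):
    black, white = stripe_counts(4 * k)
    assert black == white == 8 * k * k - 2
    assert black % 4 == 2   # 2 (mod 4), so the L-pieces cannot pair up evenly
```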


What do you think?

This one's a classic. Turn one switch on, keep one off, and turn the last one on for a while, then turn it off. When you go into the room, the light will either be on, warm (because it was on for a while), or cold.-- 19:33, 25 December 2009 (UTC)

Why not have 4 switches and bulbs? -- CrystyB 15:49, 21 January 2010 (UTC)
Ha, yes, it can be done with 4 switches and bulbs. Simply flip the 4th switch and wait 10,000 hours (or comfortably beyond the design lifetime of the bulb). Then do the same as above with the first three switches. The lightbulb now has four possible states: On; Off and warm; Off and cold; Off and burnt out. Voila!
There's a more elegant way to do it with 4 switches. Turn switches A and B on for a while. After waiting long enough that the bulb would have time to heat up, turn switch B off and switch C on. Then go into the other room. The bulb will then be either On/hot, Off/hot, On/cold, or Off/cold.
Combining the last two methods, wouldn't it be possible to find out the answer with 5 switches? The states would be ON/hot, Off/hot, On/cold, Off/cold and Off/burnt. Someone gave me this puzzle with only the 3 switches, but being able to expand it to 5 seems ridiculous! (I love stumping people).
I don't think the On/cold case is realistic, as incandescent bulbs heat up very quickly to 250-300 °C (450-550 °F). Think of how fast a hair dryer heats up; within seconds. So unless you happen to know the precise characteristics of the bulb, and have sophisticated equipment to measure its temperature within seconds of flipping the switch, you can't know whether it's been on for ten seconds or for an hour.
I think it's still a valid case. Have you ever screwed in a lightbulb with the switch turned on? It definitely doesn't get hot enough in the time it takes to screw it in. Basically you'd have to run to the room and quickly touch the two lights that are on. After that you can look at the rest. A hair dryer heats up much faster simply because it draws far more power than a light bulb: a light bulb filament is pretty small, but a hair dryer element is quite large.

It's interesting to note that this classic may be incomprehensible to our kids. My son has only ever known LED light bulbs, which give off so little heat that it can appear as though they give off none at all. Telling him that the answer is to feel the bulbs for residual heat would utterly perplex him.

Egg problem[edit]

First, let's think about an even easier problem; you only have 1 egg. Since if you break that egg you can't go any further, you must be very careful with it. You should drop it from the first floor, then the second floor, then the third, and so on. You only have 20 trials, so you can only tell which floor is the first to break it if that floor is one of the first 20.

Now we can return to the original problem. Suppose you drop your first egg from the n^th floor, and find that it breaks. Then you know that the least egg-breaking floor is at or below the n^th, but you don't know which floor it is. Since you only have one egg left, you need to be careful, as described above. Since you only have 19 trials left, you can only distinguish all the possibilities if n is at most 20. So it is reasonable to first of all drop an egg from the 20th floor.

If it survives, you can drop it again from the m^th floor. Considering what you could do if it broke, as in the last couple of paragraphs (and taking into account that you only have 18 trials left), m shouldn't be more than 39 = 20 + 19. Similarly, if it survives you shouldn't drop it from any higher than the 20 + 19 + 18 = 57th floor. Following this algorithm, you could deal with buildings up to a height of 20 + 19 + 18 + 17 + ... + 2 + 1 = 210 floors.

More generally, if you have 2 eggs and k trials then you can, by being careful, deal with buildings up to a height of k(k+1)/2 floors.

Now suppose that you have 3 eggs and 20 trials. If your first egg smashes on the first trial, then you only have 2 eggs and 20 trials left. So you really shouldn't make your first trial from higher than the 191st floor (with your remaining 19 trials and 2 eggs you can only deal with up to 190 floors). Your second trial shouldn't be from higher than floor 191 + 172 = 363, and so on, so that in total you can deal with at most 191 + 172 + 154 + ... + 4 + 2 + 1 = 1,350 floors. To deal with that many floors with just 2 eggs, it isn't hard to check you need 52 trials.
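
All of these numbers come from the recurrence f(eggs, trials) = f(eggs-1, trials-1) + f(eggs, trials-1) + 1: the first drop either breaks (search the floors below with one fewer egg) or survives (search the floors above), plus the floor just tested. A quick sketch to check them:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def floors(eggs, trials):
    # Maximum building height for which the critical floor
    # can always be determined.
    if eggs == 0 or trials == 0:
        return 0
    return floors(eggs - 1, trials - 1) + floors(eggs, trials - 1) + 1

print(floors(1, 20))    # 20
print(floors(2, 20))    # 210
print(floors(3, 20))    # 1350
# trials needed to cover 1350 floors with only 2 eggs:
print(next(t for t in range(1, 100) if floors(2, t) >= 1350))    # 52
```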

You'll never need more than 20 eggs (since you can break at most one egg per trial), but with 20 eggs you can test one more floor than you can with 19 - hold your 20th egg in reserve and pretend the bottom floor isn't there (so you pretend that the second floor is the first, the third is the second and so on). If you break all your first 19 eggs, then you know that the least floor from which eggs break is either the first or second, but you don't know which. You can check that by dropping your last egg from the first floor. That last egg is occasionally useful, so you should only start refusing when offered a 21st egg.

A very good solution indeed. But what I was wondering is: in the original problem, if we start from the 20th floor, as you mentioned, and the first egg breaks, then you have just one egg left with 19 tries! What should the strategy be from there? [Answer: start with your last egg on the first floor, working your way upward until it breaks.]

Yet more prisoners[edit]

"You, together with a finite number n-1 of other ideal mathematicians, have been arrested on a whim by a generic evil dictator and are about to be locked up in a prison. The prison is circular, with n identical windowless cells arranged in a ring around a central court. There are some problems with the lighting system - the light switch in each cell controls the light in the next cell clockwise around the ring. Even worse, electric power is only provided to the lights for one tenth of a second each night, just after midnight.

The warden is worried that you might use the lights to communicate (very slowly), so he will very often rearrange the prisoners, moving them about between the cells in any way he chooses and having all the cells cleaned to prevent prisoners leaving messages for one another. He might do this every day. This will all be done in such a way as to keep you all in ignorance; you will never see each other or any part of the prison except the inside of the cells. You do not even know how many other mathematicians are to be locked up with you.

The warden visits you in your cell, and explains that if you are able to communicate despite these precautions he will consider you all worthy of release. At any time, any prisoner who believes he has discovered how many prisoners there are may petition the warden for release. That prisoner will be allowed one guess at the number of prisoners; if they guess correctly, then all the prisoners will be released, but if they guess incorrectly then all the prisoners will be executed.

You have been chosen to devise a strategy by means of which you will be able to discover the number of prisoners. You may compose a single email outlining your strategy, which will be passed to all the prisoners. However, your strategy must be foolproof, as the warden (who has a deep hatred of ideal mathematicians) will also read your email.

Is there a strategy which will guarantee your release? If so, give one. If not, why not?"

Here is a (the?) solution.

“Greetings fellow prisoners,

Phase 1:

"The first night I will flip on my light switch. At 12:01am on day 2, the mathematician who sees my flash will walk over and turn on his switch. He will know for sure that there are at least 2 prisoners and must remember this. The next night the prisoner in the cell that flashes but has the light switch in their room in the off position must do the same, but this time remember that there are at least 3 prisoners. Each prisoner that has the opportunity to flip on a light switch (when their light flashes) is simply responsible for remembering that if he flipped on a light on day n there are at least n prisoners. If all the light switches are on, all the rooms will flash, but none of them will have a switch that can be flipped, so none of the prisoners will change anything.

Phase 1 will last for 10 days.

Phase 2:

"If there are 10 or fewer prisoners, all the light switches will be in the on position at this point. If there are more, then 10 light switches in consecutive rooms will be in the “on” position, but the rest will still be off. Critically, the room that I was originally in, where I flipped on the light for the first time, will not receive a flash. The mathematician in this room must flip “off” his light switch just after midnight when he realizes that he has not seen a flash. For each of the next ten days, the mathematician in the room with the light switch on but not receiving a flash will turn his light switch off. Any mathematician that sees a flash should not change his switch.

"After 10 days of phase 2 (20 days total) every prisoner will know for sure whether there are at least 10 prisoners. If their light flashes at midnight, then every light is flashing and there are 10 or fewer prisoners. On the other hand, if their light does not flash, phase 2 has turned off all the lights and there are more than 10 prisoners.

"If there are more than 10 prisoners, repeat phases 1 and 2, but go for 100 days each. Then, if necessary, 1,000, 10,000 and so on. At some point the upper bound (UB) for the number of prisoners will be established. At that point, move on to phase 3.

Phase 3:

"Every prisoner that flipped on a light during phase 1 will have some information about the minimum number of prisoners. One of the prisoners flipped the last switch; he will know “there are at least n prisoners” and his minimum will be the full number. The mathematician that flipped a switch the night before will know “there are at least (n-1) prisoners”, but neither knows whether he was the last.

"In order to determine the exact number of prisoners, we will count backward from the maximum. On day 1 of phase 3, a prisoner that knows that there are at least as many prisoners as the full upper bound (UB) will talk to the warden. On day 2, a prisoner knowing that there are at least (UB-1) prisoners will go to the warden, and so on.”

This system works regardless of how the warden moves prisoners around, as long as everyone is in their cell at midnight. I am assuming that neither the warden nor the cleaners are allowed to flip the light switches. I am also assuming that the prisoners have some way of tracking the passage of the hours and days. They need to keep an accurate count of the days, and they need to know when midnight is, so that they will know when there definitely was not a flash.

--Svenisntb (talk) 15:12, 6 November 2014 (EST)

The solution above wouldn't work as far as I can tell, since if a prisoner guesses incorrectly they are all killed. Correct me if I'm wrong, but surely, under the assumptions given above (namely that light switches are left alone, prisoners are in their cells at the right time, and prisoners can keep time), you simply need a system whereby you turn your light switch on if you receive a flash. Then after n nights the chain will have returned to the original cell, and the prisoner in the cell that receives a flash when his light switch is already in the on position knows the number of nights so far, and thus the number of prisoners. -- 09:28, 24 July 2015 (EDT)

I'm pretty sure I have a solution for this. I've written it up below. I apologize if it's not written in a very clear, mathematically precise way, and I acknowledge that I'm glossing over the details of how the mathematicians plan exactly how many nights to wait before initiating different phases of the plan, but I'm quite confident that they can plan it unambiguously (they do the upper bound check first, and after that, every operation takes a well-defined amount of time.)

There are two important attributes of the mathematicians here:

1) One of them knows they're "you", and the rest know they aren't "you". (Without this attribute, we'd have nothing to start with. All the initial information comes from the "you".)

2) They have unlimited memory, so they can store various state information.

Another important factor is that the warden doesn't freely distribute all the flashes sent on a given night. If you divide the mathematicians into 'red' mathematicians and 'blue' mathematicians, and all the red mathematicians flash one night, then the warden can't make all the flashes hit red mathematicians - the worst he can do is put them all in a row, but even then at least one blue mathematician will receive a flash.

Suppose they've set it up so that they know for sure that exactly N mathematicians are 'red'. Then they can check if there are exactly N mathematicians by the following mechanism: First, all red mathematicians flash. After that, for N nights, all mathematicians who both flashed and were flashed on the previous night flash. If there are any blue mathematicians, at least one red mathematician is 'killed' each night, so on the N+1th night, if anyone was flashed, they know that there are exactly N mathematicians, and win. Otherwise, no one was flashed, and you continue by some means.

The same mechanism works to test if all mathematicians are red, if you know a maximum for the number of red mathematicians.
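
The "at least one red is killed per night" claim holds because the set of flashing cells, shifted one step clockwise, can only stay inside itself if it is the whole ring. A sketch of the N-night test (my own code; it uses a randomly-shuffling warden for the demo, though the argument is warden-independent):

```python
import random

def red_test(n_total, n_red, rng):
    # Cells 0..n-1 form a ring; the switch in cell i lights cell (i+1) % n.
    # Mathematicians 0..n_red-1 are 'red'.  A mathematician keeps flashing
    # only if he both flashed and was flashed the previous night.
    flashing = set(range(n_red))
    for _ in range(n_red):
        cell_of = list(range(n_total))
        rng.shuffle(cell_of)                    # warden's nightly arrangement
        occupant = {c: m for m, c in enumerate(cell_of)}
        was_flashed = {occupant[(cell_of[m] + 1) % n_total] for m in flashing}
        flashing &= was_flashed
    return bool(flashing)   # survivors remain iff everyone is red

rng = random.Random(0)
assert red_test(6, 6, rng)        # all red: the test passes
assert not red_test(8, 5, rng)    # blues exist: the flashing set dies out
```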

I'm starting to get the idea that this problem must be solved by induction. I don't quite see how to do the induction though, so I'll just do a few iterations.

So, night one, you flash once. If you get flashed, you're the only prisoner, and you win. Yay!

Otherwise, night two, you flash once. Let's call you "M1" and the person who gets flashed "M2". The two of you are the new red mathematicians, and you do the same test. Either you win or you (and everyone else) now know that there are at least three mathematicians.

Next: M1 and M2 both flash. Either one or two new mathematicians are now "red"; if it's just one, either M1 or M2 now knows this. You do the test. If it comes up yes and there are three red mathematicians, the one who knows that makes you win. If it came up yes and you haven't won, then everyone knows there are exactly four red mathematicians - M1, M2, and two more who are indistinguishable. And if they're the only ones, then they know that now, so they win.

So there are now three or four red mathematicians, and one of them knows if there are three, and there are some blue mathematicians left. If the one who knows there are three had any way of communicating to all the rest, then the mathematicians would be able to pin the number down exactly.

But there IS INDEED a way for that mathematician to communicate their knowledge to all the rest! All it requires is an upper bound on the number of mathematicians. Consider this: If you had an upper bound - call it UB - you could repeat the following: The mathematician with the message flashes first, if the message is true, or doesn't flash, if it's not true. Then, for UB nights, every mathematician who was flashed on any night before that flashes. This guarantees increasing the number of flashes by one each night, so by the UBth night, either every mathematician is flashing or none is, so they all have received the one-bit message. All that remains is to determine an upper bound on the number of mathematicians.
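
A sketch of this one-bit broadcast under the same ring/shuffling assumptions (my own code): the flashing set grows by at least one member per night until it is everyone, so UB nights suffice whenever UB is at least the number of mathematicians.

```python
import random

def broadcast(n, bit, ub, rng):
    # Ring of n cells; the switch in cell i lights cell (i+1) % n.
    # Mathematician 0 flashes on the first night iff the bit is 1;
    # thereafter everyone who has ever been flashed also flashes.
    flashers = {0} if bit else set()
    for _ in range(ub):
        cell_of = list(range(n))
        rng.shuffle(cell_of)                    # warden's nightly arrangement
        occupant = {c: m for m, c in enumerate(cell_of)}
        flashed = {occupant[(cell_of[m] + 1) % n] for m in flashers}
        flashers |= flashed
    return len(flashers) == n   # True iff the bit '1' reached everyone

rng = random.Random(0)
assert broadcast(7, 1, 7, rng)        # a true bit reaches all 7 within UB = 7 nights
assert not broadcast(7, 0, 7, rng)    # a false bit: nobody ever flashes
```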

Finding an upper bound is a bit complicated:

 Set i = 1.
 Spend i nights expanding (starting with one mathematician flashing, and after that, having everyone who had ever been flashed in this process flash)
 Either everyone is now flashing, or you now have somewhere between i+1 and 2^i mathematicians flashing.
 Do the "all red test" with "having been flashed" as the "red" and 2^i as the maximum.
 If yes:
   There are no more than 2^i mathematicians, so that's an upper bound. Break.
 If no:
   Increment i and repeat.

Eventually i+1 must exceed the number of mathematicians, so you will have found an upper bound. This now means that any mathematician can communicate any number of single bits of information to the group as a whole. In the particular case, the mathematicians can now determine whether 3 or 4 of them are currently 'red'.

Ideally, I'd have something of this form:

 Given the knowledge that exactly N mathematicians are red,
 All red mathematicians flash. Anyone who is flashed this way becomes red.
 Some number L < N of originally-red mathematicians have been flashed. The new number of red mathematicians is 2N - L.
 Somehow, the L mathematicians communicate to the group how many of them there are. (This is the part I haven't explained how to do.)
 The mathematicians now know the exact number of red mathematicians. They do the red test and either they win or they don't.
 If they haven't won, you now have at least N+1 red mathematicians. Repeat until victorious.

Of course, the problem is how the L mathematicians communicate their exact number to the group, given that they may be indistinguishable. If we could assign unique numbers to every red mathematician, we could do this, because they could simply go in order. But the third and fourth members (if indeed there are four) were flashed at the same time, so they can't tell which one they are, and M1 and M2 couldn't have flashed in sequence, because then there would have been no guarantee that either of them would hit a blue mathematician. The only way to expand the group of red mathematicians is for every red mathematician to flash at once.

But there is a way. We know that we have the M1 and M2 mathematicians (exactly one each), and the M3 mathematicians (no more than 2), and the M4 mathematicians (no more than 4), and so forth. Suppose M6 is the farthest we've gotten currently, and M7 is the group that's just been added. First, we sequentially check whether M1 is an L-mathematician, then whether M2 is an L-mathematician, then whether there are any M3 L-mathematicians, and so forth. And for each group, we can divide that group into two now-differentiated subgroups: The current L-mathematicians and the current non-L-mathematicians. So, if we call groups of red mathematicians who have no distinction from each other "undifferentiated groups", the only problem is to somehow propagate around these values in a way that reduces the number of members of any undifferentiated subgroup to 1.

Now, think about it this way: Each L-mathematician is carrying a single unit of extra undistributed redness. So, at this point, we could have any set of L-mathematicians flash and "drop" a unit of redness, and everyone who gets flashed "picks up" a unit of redness - blue mathematicians become red, red mathematicians become L-mathematicians, L-mathematicians get an extra unit, etc. So we can measure the "redness" of a mathematician as an integer. And we can enumerate all the possible redness values at the same time as we enumerate the undifferentiated groups.

Another thing: We can measure whether any subgroup has any member with redness N, by having every member of that subgroup with redness N do the one-bit-communication trick. So we can break down a complete boolean distribution of whether each subgroup has any members with each possible redness value. The only problems arise if you get groups that have at least three members, and fewer different redness values than members. Let's call those groups "problematic groups". We know that each has at least two different redness values in it.

So, repeat the following as needed:

 Call the largest problematic group "P". Members of P are called P-mathematicians. Members of P who have the current maximum redness value from P are called PL-mathematicians.
 Each PL-mathematician flashes and drops a unit of redness. Everyone who is flashed picks up a unit of redness.
 We can now divide P into these categories:
   - Those who were P but not PL. By definition, there is at least one of these.
   - Those who were PL and were not flashed. We can deduce that there is at least one of these, because not every PL-mathematician could have hit another PL-mathematician.
   - Those who were PL and were flashed.
 But we can enumerate over this division, so there is now a differentiation between members of P who were not previously differentiated! So all mathematicians can now add this differentiation to their list.

Since there can only be finitely many members of undifferentiated groups at any time, the prisoners only need to do this finitely many times before there is enough differentiation to determine the exact quantity of excess redness.

Phew! That was hella complicated. But I'm pretty sure it concludes my solution.

- Eli Dupree (Elvish Pillager on the XKCD forums, Evilish_Pillager on Foonetic,

I don't believe your method of differentiating is guaranteed to work. There is no guarantee that some of those who were PL get flashed again. It is thus possible that we have divided a group of n PL-mathematicians into a group of 0 and a group of n, and we haven't made any progress. BlueSoxSWJ 00:45, 22 April 2012 (EDT)

1024 bottles of wine[edit]

The king has amassed 1024 bottles of most exquisite wine. Some major celebration is about to happen, and the king is planning to serve all of them. But just 24 hours before the event, a terrible message arrives: apparently one of those bottles contains poison, and whoever drinks poisoned wine, even a milligram, is doomed to die in the next 22-23 hours. The king's idea is to make prisoners of war drink small amounts of wine from the bottles and, after observing who dies and who lives, conclude which bottle was poisoned and discard it, offering the rest to the guests. The most obvious way of doing this is to use 1024 prisoners, each drinking from one bottle. Unfortunately, the kingdom does not possess this many, so it is your task to devise a way to find the poisoned bottle using as few prisoners as possible.

With one poisoned bottle of wine this puzzle is pretty much trivial (10 prisoners; binary code). The difficulties start when there are two poisoned bottles. There are less than 20 bits of information in two poisoned bottles out of 1024, but I don't know for the love of god how to solve this with 20 prisoners.
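
For the one-bottle case, the 10-prisoner binary code mentioned above looks like this (a sketch; names are mine): number the bottles 0-1023 and have prisoner i taste every bottle whose index has bit i set. The set of prisoners who die then spells out the poisoned bottle's index in binary.

```python
def tasters(bottle):
    # Prisoner i tastes this bottle iff bit i of its index is set.
    return {i for i in range(10) if bottle >> i & 1}

def decode(dead_prisoners):
    # The death pattern is the binary expansion of the poisoned index.
    return sum(1 << i for i in dead_prisoners)

# Every bottle produces a distinct death pattern, and each decodes correctly:
assert all(decode(tasters(b)) == b for b in range(1024))
assert len({frozenset(tasters(b)) for b in range(1024)}) == 1024
```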

I would post this to the actual page but wiki won't let me. -- 11:21, 24 October 2011 (EDT)

To find the 2 bottles you will need up to 20 tests.
Divide the bottles into 2 subsets and fill two cups, each a mix of wine from all the bottles in one set, then have a prisoner taste cup 1. If he is not poisoned, both poisoned bottles are in set 2. If he gets poisoned, then have another prisoner taste cup 2; if he dies, there is a poisoned bottle in each set; if he doesn't die, both poisoned bottles are in set 1.
Take again the sets known to have at least one poisoned bottle and keep splitting and tasting the same way until the two poisoned bottles get isolated.
Interestingly, in the one-bottle problem there is a small probability (1/1024) that no prisoner dies, but in the two-bottle one at least one prisoner will get poisoned.--Pere prlpz (talk) 12:58, 8 March 2016 (EST)

Forking Konigsberg Bridge Problem[edit]

Many are familiar with the Konigsberg bridge problem, in which the King wishes a parade to pass over all the bridges of his city exactly once, and which can be solved through relatively straightforward graph theory. An alternative is the forking Konigsberg bridge problem - the parade may fork (split into multiple parts) at any point along the parade route. If these parts are allowed to re-join arbitrarily, any graph may be spanned (correct me if I'm wrong). However, which graphs may be spanned if the different subsets of the parade may only re-join at the final node? If this can be solved for any graph, please provide a general solution; if not, a counter-example (and ideally a proof of which types of graphs may and may not be spanned by the forking parade).

This is an original problem, as far as I know to date only posted here, and any discussion much appreciated.

Robots on a number line[edit]

Two robots are parachuted onto a spot of a discrete number line that stretches infinitely in either direction. They are an unknown distance apart. Where they land, they drop their parachute. They begin executing the same set of instructions at the same time. Unfortunately, these are not very good robots, and they only understand commands in character form. There is only room for 10 instructions. Possible instructions are as follows:

   L: Move left one space
   R: Move right one space
   S: Skip the next instruction if and only if there is a parachute at my feet
   0-9: Move to this position in the instructions (If the instructions are LRS1, the 1 would move the robot back to the 'R') 

Every step takes the same amount of time to execute, including parachute skips and moving through the instructions. There is no variable storage. The robots begin executing from step 0. What set of instructions will result in the two robots ultimately finding each other on the infinite number line in every case? There are multiple possible answers.

Let's make both robots move in the same direction, slowly. Say they move one step to the right for every 3 instructions executed. Also, make a robot change state when there is a parachute at its feet, so that in its new state the robot moves in the same direction, but faster (say one step to the right for 2 instructions executed). Since both robots are moving right, the leftmost robot is guaranteed to step on rightmost robot's parachute at some point. The leftmost robot will then speed up and at some point catch the rightmost robot, solving the puzzle.

Code to do this is: RS0R3

RS0 moves the robot one step to the right for every 3 instructions executed. S0 returns execution to the start of the RS0 sequence if the robot is not currently stepping on a parachute, but if the robot is currently stepping on a parachute, execution moves to R3. R3 moves the robot one step to the right for every 2 instructions executed (3 is the position of the R in R3).

This is the shortest instruction set that solves the problem. What is the instruction set that solves the problem in the fastest time? I think the fastest instruction set is RRRRS0RRR6 (which gives 4 steps to the right for every 6 instructions executed, in the first block, and 3 steps to the right for every 4 instructions executed, in the second block), but that may not actually be the fastest.

- I don't believe RRRRS0RRR6 is a correct solution. If the robots start one space apart, the left robot will immediately step past the right robot's parachute and they will trundle away into infinity. - MG
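
Both claims are easy to check by brute force. Below is a minimal lockstep simulator (my own sketch, using the instruction semantics given above and taking "meet" to mean occupying the same cell at the same step); it confirms that RS0R3 works for every starting distance tried, while RRRRS0RRR6 indeed fails at distance 1:

```python
def simulate(program, distance, max_steps=100000):
    # Two robots run the same program in lockstep; each drops a
    # parachute on its starting cell.  Returns the step at which
    # they occupy the same cell, or None if they never meet.
    pos, pc = [0, distance], [0, 0]
    chutes = {0, distance}
    for step in range(max_steps):
        if pos[0] == pos[1]:
            return step
        for r in range(2):
            op = program[pc[r]]
            if op == 'L':
                pos[r] -= 1; pc[r] += 1
            elif op == 'R':
                pos[r] += 1; pc[r] += 1
            elif op == 'S':
                pc[r] += 2 if pos[r] in chutes else 1
            else:                      # a digit: jump to that instruction
                pc[r] = int(op)
    return None

assert all(simulate("RS0R3", d) is not None for d in range(1, 12))
assert simulate("RRRRS0RRR6", 1) is None    # trundles off to infinity
```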

Four solutions which may be considered to be quickest.[edit]

[Updated on Jan 16, 2014. I found another (fourth) solution.]

I agree with MG. Robots can skip over the parachutes. But can they skip over each other too? They could run past each other if they are moving in different directions. However, if they are moving in the same direction, then they cannot pass each other without first arriving at the same position.

I'd like to discuss the question of finding the fastest programs that do the job. It's a very nice follow-up problem!

I found four solutions to this question. In the first solution, the robots have the same program. In the second solution, the robots have different programs, both will initially look for a parachute. In the third solution, the robots have different programs, only one will initially look for a parachute, the other won't. In the fourth solution, the robots have different programs, they will either run towards each other or away from each other. In the first three solutions, the robots will always find each other, but the time will depend on the situation; with a probability of 50% they are quicker, with a probability of 50% they are slower than the previous solution. The fourth solution has the quickest solution with a probability of 50%, but fails (robots never meet) with a probability of 50%. A remedy is proposed.

Note. Robots moving in the same direction will be in the same position when they are about to "pass" each other. Only in Solution 4 does the possibility arise that the robots never actually sit in the same position at any time after executing a program command. After a short discussion, this problem is discarded for the sake of simplicity.

Solution 1.

A robot needs to find the other parachute and then accelerate. As MG pointed out, the program cannot have ...RR... in its initial part. So, I thought at first that RS0... would be the only possibility for its first part, as follows:

   RS0RRRRRR3 - initial speed R/3, after finding parachute accelerates to R.6/7.

It will first look for a parachute with speed R/3 and when it has found the parachute, it will skip to a speed of R.6/7. Since initially both robots move at the same speed in the same direction, if they start at distance D, they will still be at distance D by the time the first robot finds the other robot's parachute. This will take about 3.D steps. After that, their difference of speed equals R.(6/7 - 1/3) = R.(11/21) and it will take about D/(11/21) = D.21/11 more steps before they meet. The total is about D.(4 + 10/11) steps.

Another program, which I found during the writing of this article, performs only slightly worse, but is interesting to mention (I first made a computation error and thought it was quicker):

   RSRS0RRRR5 - initial speed R.2/5, after finding parachute accelerates to R.4/5.

Noteworthy is the fact that it moves quicker than R/3. It will skip to instruction 5 if it steps on a parachute after instruction 0 or 2. This program will find the parachute in D.5/2 = D.(2 + 1/2) steps. Then it speeds up to speed R.4/5 and its difference in speed with the other one will be R.(4/5 - 2/5) = R.2/5. It will take another D.5/2 = D.(2 + 1/2) steps to catch up with the other robot. The total number of steps equals about D.5 steps, which is only slightly worse than the previous. If more than 10 instructions were allowed, this use of the S-instruction might pay off.
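The step counts for both programs can be checked with exact arithmetic. Below is a minimal sketch (the function name and the catch-up model are mine, not part of the puzzle): the trailing robot covers distance D to the other parachute at its search speed, the gap at pickup equals the distance the lead robot has traveled in that time, and the remainder closes at the difference of the final and lead speeds.

```python
from fractions import Fraction as F

def total_steps(search_speed, lead_speed, final_speed, D=1):
    """Steps until the trailing robot catches the lead robot:
    reach the lead robot's parachute (distance D), then close the gap."""
    t1 = F(D) / search_speed           # steps to reach the parachute
    gap = lead_speed * t1              # the lead robot kept moving meanwhile
    t2 = gap / (final_speed - lead_speed)
    return t1 + t2

# RS0RRRRRR3: both robots search at R/3, finder accelerates to R.6/7
print(total_steps(F(1, 3), F(1, 3), F(6, 7)))   # 54/11, i.e. D.(4 + 10/11)
# RSRS0RRRR5: both robots search at R.2/5, finder accelerates to R.4/5
print(total_steps(F(2, 5), F(2, 5), F(4, 5)))   # 5, i.e. D.5
```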

Solution 2.

For this and the next solutions, let's write R1 for robot running program 1 and R2 for robot running program 2. We write R1<<R2 if R1 is to the left of R2.

In this solution, the idea is that both R1 and R2 move to the right with speeds s1 and s2 respectively and s1<s2. If R1<<R2, then R1 will look for a parachute and when it finds a parachute it will speed up to speed s3>s2. If R2<<R1, then R2 will also look for a parachute, and when it finds one, it will speed up to speed s4>s2>s1. The second scenario takes fewer steps than Solution 1. The proposed programs are the following:

       1: R23456S0R9 - initial speed R/7, after finding parachute accelerates to R.1/2 > R.3/7
       2: RSRSRS0RR7 - initial speed R.3/7, after finding parachute accelerates to 2/3 > 3/7 > 1/7

R1<<R2. It will take R1 about 7.D steps to find the other parachute. At that time, the distance to R2 equals 3.D and the difference in speed becomes 1/2 - 3/7 = 1/14, so it will take another 42.D steps to catch up with R2. Total number of steps equals about 49.D steps.

R2<<R1. It will take R2 about D.7/3 steps to find the other parachute. At that time, the distance to R1 equals D/3 and the difference in speed becomes 2/3 - 1/7 = 11/21, so it will take another (D/3)/(11/21) = D.7/11 steps to catch up with R1. Total number of steps equals about D.(7/3 + 7/11) = D.98/33, just under 3.D steps. Note that this scenario requires (98/33)/(54/11) = 49/81 = 60.5% of the time of the scenario in Solution 1 (or about 1.65 times quicker).

Note. A robot looking for a parachute cannot have a speed higher than 1/2, because each R or L has to be followed by S, and the last S of the search loop has to be followed by 0. Using a max of 10 instructions, a program that must also end with an acceleration loop cannot search faster than 3/7, e.g., xxxxxS0RR7 (final speed R.2/3) or xxxxxS0R97 (final speed R/3, does not seem optimal) or xxxxxxS0R8 (final speed R/2).

Note. One can opt for this strategy if it is required that the robots meet in both scenarios and it is desirable for them to meet quicker than in Solution 1 in one of the scenarios (50% of the cases).

Note. If one only accepts a failure rate < 0.01, then one can send 7 pairs of robots each trying to find each other (assuming each pair falls on a different line, non-intersecting with the lines of other pairs, and each of the two scenarios is equally likely for each pair); the failure rate for 7 pairs is (1/2)^7 = 1/128 < 0.01. In fact, the expected number of pairs which find each other quickly equals 3.5. The other pairs of robots will find each other only after a long time as shown.

Note. If only one of the robots is going to find a parachute, then why have both robots look for a parachute? This suggests the following solution, which indeed is an improvement.

Solution 3.

In this solution, the idea is that both R1 and R2 move to the right with speeds s1 and s2 respectively and s1<s2. If R1<<R2, then R1 will look for a parachute and when it finds a parachute it will speed up to speed s3>s2 and catch up with R2. If R2<<R1, then R1 will never find a parachute (it lies behind R1) and keeps its slow speed, while R2 just moves to the right with a constant speed s2>s1 and catches up with R1.

The second scenario will take less time than Solutions 1 and 2. After having done some experiments, I propose the following programs (I conjecture that with these, the second scenario takes the smallest number of steps):

   1: R23S0RRRR5 - initial speed R/5 < R.7/9, after finding parachute accelerates to R.4/5 > R.7/9
   2: RRRRRRR80  - speed is R.7/9 (does not look for parachute)

In scenario R1<<R2 it will take about 5.D steps to find the parachute. At that time, the distance between the robots is equal to the distance R2 has traveled, or about D.35/9. Their difference in speed becomes R.(4/5 - 7/9) = R.(1/45) and so it will take another D.(35/9)/(1/45) = D.175 steps for the robots to catch up. The total number of steps is about D.180 steps.

In scenario R2<<R1, the difference in speed is R.(7/9 - 1/5) = R.(26/45) and it will take about D.45/26 = D.(1 + 19/26) steps for the robots to meet. This scenario requires only (45/26)/(54/11) = 495/1404 = 35.3% of the time compared to the fastest robots with equal programs (2.8 times quicker than Solution 1 above), and (45/26)/(98/33) = 1485/2548 = 58.3% of the second scenario in Solution 2 (or about 1.7 times quicker).

(See also the notes under Solution 2.)

Note. If in 50% of the cases the robots take a long time to meet (the slower scenario of Solution 2 or Solution 3), maybe in 50% of the cases they do not have to meet at all. If one only requires that at least two robots meet, one can send more pairs and drop the requirement that each pair has to meet. If the operation is successful as soon as one pair of robots meets, the robots can use different programs which do not have to look for parachutes at all! See the next solution.

Solution 4.

The robots run in opposite directions as fast as they can. They will either meet or not. The programs are

   1: RRRRRRRRR0 - speed is R.9/10
   2: LLLLLLLLL0 - speed is L.9/10 = -R.9/10

If R1<<R2, then the difference in their speed is R.9/5 and they will meet after about D.5/9 steps. In this scenario, the robots require only (5/9)/(45/26) = 130/405 = 32.1% of the number of steps required by the quickest solution in which the robots always meet (3.11 times quicker than Solution 3), and (5/9)/(54/11) = 55/486 = 11.3% of the number of steps required by Solution 1 (or 8.84 times quicker).

Note. If D is odd, the robots may pass each other if they are required to be on the same spot after executing a program instruction; if D is even, they will meet in the middle. This detail was not mentioned in the original problem and is discarded (it has no impact on this solution). It is likely not as important as finding the parachute, since that required a special instruction. Were this a real-life situation, this would have to be addressed in the robot test: if the robots cannot recognize that they have met while moving in opposite directions, the robots and the programming language would need to be enhanced to incorporate this functionality. In the worst case it is a design flaw which will cost more time before the mission can be launched.

If R2<<R1, they will never meet. (Failure of mission. In particular, it takes more than the 180.D steps of the first scenario in Solution 3.)

Note. I did not take into account the exact number of steps each program needs before it either finds the parachute or the other robot; the counts may be off by 1 or 2 steps, but as D increases these estimates become relatively more accurate.

Conclusion. Depending on the requirements (whether the robots have identical or different programs, whether they look for a parachute, and whether they must find each other always or only sometimes), one can improve on the quickest solution with identical programs for both robots. I found four different solutions, each suggesting the next, faster one. Clever use of the S-instruction and testing of different speed combinations give rise to intricate optimized solutions.

Can the proposed solutions be improved upon or extended?

BCurfs (talk) 03:56, 15 January 2014 (EST)

BCurfs (talk) 04:22, 17 January 2014 (EST), Updated and corrected

Other follow ups[edit]

- What is the farthest point where the robots can meet? (Again, consider sub-cases using identical or different programs, the number of robots looking for a parachute, finding each other always or not.)

BCurfs (talk) 04:39, 17 January 2014 (EST)

- Taking the problem seriously that the robots may pass each other when they approach each other, can we devise programs that deal with this issue correctly? In other words, can we force them to be in the same spot after executing their last instruction? I have some ideas and will follow it up later.

BCurfs (talk) 04:45, 18 January 2014 (EST)

Prisoner's Chess coins[edit]

So I've done some thinking about this puzzle and despite my first instinct that it was impossible, I'm beginning to see a general form for it.

I make the following assumptions: 1) You must flip a coin (the question is a little ambiguous here). 2) The two prisoners can somehow determine an orientation for the board.

On the case of a 1x2 board

1 2

Prisoner 2 will look at coin 1, Prisoner 1 will leave coin 1 heads if 1 is magic and tails if 2 is magic. If coin 1 is already correct, Prisoner 1 will flip coin 2.
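Assuming, as above, that prisoner 1 must flip exactly one coin, the 1x2 strategy can be verified by brute force over all four (state, magic coin) cases. A sketch (the function names are mine; heads is encoded as True):

```python
def prisoner1_flip(state, magic):
    """Prisoner 1's forced flip: coin 1 should end up heads iff coin 1
    is magic; if it already does, waste the flip on coin 2."""
    c1, c2 = state
    if c1 == (magic == 1):
        return (c1, not c2)   # coin 1 already correct, flip coin 2
    return (not c1, c2)       # otherwise fix coin 1

def prisoner2_guess(state):
    return 1 if state[0] else 2   # coin 1 heads -> coin 1 is magic

# exhaustive check: 2 states of each coin x 2 choices of magic coin
for c1 in (True, False):
    for c2 in (True, False):
        for magic in (1, 2):
            assert prisoner2_guess(prisoner1_flip((c1, c2), magic)) == magic
```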

On the case of a 2 by 2 board

1 2
3 4

Prisoner 2 will look at coins 1&2 for same/different and at coins 1&3 for same/different. If they are the same, the magic square is in that row/column; if they are different, it is in the other row/column. As heads and tails no longer matter, the full map of states where coin 1 is heads is: starting position of coins 1-2-3 | what prisoner 1 flips if the magic coin is 1/2/3/4

1-2-3 | 1/2/3/4
h-h-h | 4/3/2/1
h-h-t | 3/4/1/2
h-t-h | 2/1/4/3
h-t-t | 1/2/3/4

Then we get to a 4 x 2 board

1 2 3 4
5 6 7 8

You can set up an algorithm for prisoner 2 such that if 1&5 are the same, the magic square is in 1,2,5,6. If 1&4 are the same the magic square is in 1,2,3,4. If 1&8 are the same the coin is in 1,5,3,7.

As heads or tails is irrelevant again, we assume coin 1 is heads when prisoner 1 walks into the room. The configurations he can find are


But from each of those he can only indicate 5 different squares (flip something else, or flip any one of the 4 coins), so to distinguish 8 coins we need 7 possible flips, or each flip must convey more than binary data. A trio of coins {1,2,3} can be in 4 states {all the same, 1 different, 2 different, 3 different}, so we could use some form of trio/pair combo to get

123(same) means col 4
123(3dif) means col 3
123(2dif) means col 2
123(1dif) means col 1

and you can always flip to indicate that but unless the column indicator is correct, you are forced on which coin to flip, so you can't indicate row.

You could potentially use evenness and oddness of some count to allow more flexibility, but I'm no closer to a solution.

My next instinct is that we need to make each coin represent a string of 3 bits.


When prisoner 2 walks in he adds up the numbers of all the coins which are tails. Say 1, 3, 4 and 6, which gives him 14 (1110); apply mod 8 and you get 110 (6), which is coin 6. All prisoner 1 has to do is flip the coin that adds or subtracts the relevant amount from the total he walks in to.

However, this doesn't quite work. I won't go through all 2048 combinations of starting position and magic square, but here are some examples:

Starting position 1-2-3-4-5-6-7-8 If magic square is 1/2/3/4/5/6/7/8 then flip

1-2-3-4-5-6-7-8 | 1/2/3/4/5/6/7/8
0-0-0-0-0-0-1-0 | 2/3/4/5/6/?/8/7 or 1 (cannot flip to make 6)

This is because I need to add 1 or 7 and the coins are flipped the wrong way. So rather than adding, which risks disturbing the other binary digits through carries, do the following.

Count how many coins whose number has a 1 in the units (binary) digit are heads: if that count is odd, the units digit of the magic square is 1; if even, it is 0. Do the same for the 2's digit and the 4's digit (a total of 0 is read as coin 8). This now works.

1-2-3-4-5-6-7-8 | 1/2/3/4/5/6/7/8
0-0-0-0-0-0-1-0 | 6/5/4/3/2/1/8/7

A quick scan of all 2048 combos shows this method works. It works because, in any position, the code of the state the prisoner walks in to, combined with the magic square's code, picks out exactly one coin to flip.
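The bit-parity bookkeeping above is the XOR trick in disguise: the parity of heads among the coins whose label has bit k set is exactly bit k of the XOR of the labels of all heads. A sketch that repeats the exhaustive scan of all 256 x 8 = 2048 cases, assuming 0-based coin labels (shifting the 1-8 numbering down by one; function names are mine):

```python
from itertools import product

def xor_of_heads(state):
    """XOR of the 0-based labels of all coins showing heads (1 = heads)."""
    acc = 0
    for label, is_heads in enumerate(state):
        if is_heads:
            acc ^= label
    return acc

def prisoner1_flip(state, magic):
    """Flip the one coin that makes the heads-XOR equal the magic label."""
    target = xor_of_heads(state) ^ magic
    return tuple(h ^ (i == target) for i, h in enumerate(state))

# exhaustive scan: 2^8 starting states x 8 possible magic coins = 2048 combos
for state in product((0, 1), repeat=8):
    for magic in range(8):
        assert xor_of_heads(prisoner1_flip(state, magic)) == magic
```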

To make it work for the 8x8 board with 64 coins, you have to go to 6 bits. There's my method for calculating the solution; thank you for coming on this marvelous journey with me.

On the 2x4 board the full range of solutions is hidden below again in the form state of coin 1-2-3-4-5-6-7-8 | coin to flip if magic coin is 1/2/3/4/5/6/7/8

Assuming you are the first prisoner, number the 64 board positions using 6 binary bits (cell n is represented by the binary expansion of n-1):

   cell 1 = 000000, cell 2 = 000001, cell 3 = 000010, ..., cell 64 = 111111

Now write down all positions where there are heads and XOR the corresponding bits. For example: let, in some random distribution, a head be at the 5th, 6th, 7th, 8th and 9th cells, and let the position to represent be the 7th cell. The XOR gives:

   000100 XOR 000101 XOR 000110 XOR 000111 XOR 001000 = 001000

Now XOR it with the position to represent (here 7, i.e. 000110):

   001000 XOR 000110 = 001110

Now convert it back to a position number and flip the coin. Here 001110 (14) is the code for cell number 15, so flip coin number 15.

Now, as the second prisoner enters, he'll decode by just XORing the labels of all the heads: here 5, 6, 7, 8, 9 (from before) and 15.

   000100 XOR 000101 XOR 000110 XOR 000111 XOR 001000 XOR 001110 = 000110

This is the code for 7, which is the pass to freedom.
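The worked example can be replayed in a few lines. This sketch follows the text's convention that cell n carries the 6-bit label n-1 (the variable and function names are mine):

```python
from functools import reduce

def xor_cells(cells):
    """XOR of the labels (cell number minus 1) of the given cells."""
    return reduce(lambda a, b: a ^ b, (c - 1 for c in cells), 0)

heads = [5, 6, 7, 8, 9]
magic_cell = 7

flip_cell = (xor_cells(heads) ^ (magic_cell - 1)) + 1
print(flip_cell)               # 15, as in the text

# the second prisoner XORs the labels of all heads, flipped coin included
decoded = xor_cells(heads + [flip_cell]) + 1
print(decoded)                 # 7, the magic cell
```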

                                                -Prateek Pandey

You may want to look at Hamming codes. Additionally, potentially [citation needed] a more user-friendly solution: count the parity of coins flipped heads (which conveniently equals that of tails) in each row. XOR rows 0 through 7 to determine which row the magic square is in. Repeat for columns.

-- 20:45, 14 June 2015 (EDT)

24 Puzzle[edit]

Using only the four basic mathematical operators + - * / and parentheses, (no exponents, roots, factorials, concatenation, trig, etc.), and using each element of { 3, 3, 8, 8 } exactly once, construct the number 24.

There are a variety of other number sets that are similarly difficult. Try to construct 24 from any of these sets (no exponents, roots, factorials, etc., just the basic 4 arithmetic operators plus parentheses):

  • { 1, 5, 5, 5 }
  • { 4, 4, 7, 7 }
  • { 2, 5, 7, 8 }
  • { 3, 6, 6, 11 }
  • { 3, 3, 5, 7 }
  • { 1, 4, 5, 6 }
  • { 2, 5, 5, 10 }
  • { 2, 3, 5, 12 }


{ 3, 3, 8, 8 }  8/(3 - (8/3))
{ 1, 5, 5, 5 }  (5 - 1/5)*5
{ 4, 4, 7, 7 }  (4 - 4/7)*7
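These answers, and the remaining sets, can be checked by brute force. A sketch (the function name is mine) using exact rational arithmetic so that intermediate values like 8/3 are not rounded away:

```python
from fractions import Fraction
from itertools import permutations

def can_make(nums, target=24):
    """True if target is reachable from nums using +, -, *, / and parentheses."""
    if len(nums) == 1:
        return nums[0] == target
    for i, j in permutations(range(len(nums)), 2):
        rest = [nums[k] for k in range(len(nums)) if k not in (i, j)]
        x, y = nums[i], nums[j]
        candidates = [x + y, x - y, x * y]
        if y != 0:
            candidates.append(x / y)
        # combine two numbers, then recurse on the smaller multiset
        if any(can_make(rest + [r], target) for r in candidates):
            return True
    return False

for s in [[3, 3, 8, 8], [1, 5, 5, 5], [4, 4, 7, 7], [2, 5, 7, 8],
          [3, 6, 6, 11], [3, 3, 5, 7], [1, 4, 5, 6], [2, 5, 5, 10],
          [2, 3, 5, 12]]:
    print(s, can_make([Fraction(n) for n in s]))
```

Combining two numbers at a time and recursing on the reduced multiset covers every possible placement of parentheses.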

It has absolutely no logic; I don't like such puzzles.

Strict judge[edit]

The strict judge (currently last on the page) is the Unexpected Hanging paradox, and the entry should be edited to note that there is no official consensus answer among the scholars who have studied it.

Four Points on a Sphere[edit]

Choosing a random point on a sphere is equivalent to first choosing a random line through the center of the sphere, then choosing one of its endpoints. So consider three random lines Ā, B̄, and C̄, and a fourth point D. Choosing 4 random points can then be done by randomly choosing one endpoint each for Ā, B̄, and C̄ (eight possible outcomes, each equally likely), as well as D.

Now consider the endpoints of the three lines: A and A', B and B', C and C'. These six endpoints partition the surface of the sphere into eight spherical triangles. D's antipode, D', will fall into exactly one of these eight spherical triangles (e.g. the one defined by A', B, C'), and the resulting tetrahedron (and none of the other 7) will contain the center of the sphere. So the probability that the randomly chosen endpoints of Ā, B̄, and C̄ form this particular spherical triangle (of the eight possibilities), and thus the chance that four randomly chosen surface points will form a tetrahedron that contains the center of the sphere, is exactly 1/8.

Another way of visualizing this: The six endpoints of the first three lines (A, A', B, B', C, C') form an octahedron that encloses the center of the sphere. Consider the "view" from Point D, looking toward the center of the sphere. Your line of sight will first intersect a "near" face of the octahedron, then the center of the sphere, then a "far" face of the octahedron. This "far" face is one of the eight triangles with vertices (A or A'), (B or B'), and (C or C'), and the tetrahedron defined by these three "far face" vertices and point D trivially contains the center of the sphere. (And none of the other 7 tetrahedrons do.)
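The 1/8 answer is easy to sanity-check numerically. This Monte Carlo sketch (all names are mine) draws uniform points on the sphere via normalized Gaussian vectors and tests whether the tetrahedron contains the center by solving, with Cramer's rule, for -D as a positive combination of A, B and C:

```python
import random

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def det3(u, v, w):
    """Scalar triple product = determinant of the matrix with rows u, v, w."""
    return dot(u, cross(v, w))

def rand_unit(rng):
    """Uniform point on the unit sphere (normalized Gaussian vector)."""
    while True:
        v = (rng.gauss(0, 1), rng.gauss(0, 1), rng.gauss(0, 1))
        n = dot(v, v) ** 0.5
        if n > 1e-12:
            return tuple(x / n for x in v)

def contains_center(a, b, c, d):
    """Center inside tetrahedron abcd iff -d = x.a + y.b + z.c with x, y, z > 0."""
    m = det3(a, b, c)
    if m == 0:                       # degenerate configuration, probability zero
        return False
    nd = tuple(-x for x in d)
    return (det3(nd, b, c) / m > 0 and
            det3(a, nd, c) / m > 0 and
            det3(a, b, nd) / m > 0)

rng = random.Random(1)
trials = 100_000
hits = sum(contains_center(*(rand_unit(rng) for _ in range(4)))
           for _ in range(trials))
print(hits / trials)                 # close to 1/8 = 0.125
```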