Newcomb’s problem is a thought experiment in which you’re presented with two boxes and given the choice of taking just the mystery box or both. One box is transparent and always contains $1,000. The second is a mystery box.

Before you make your choice, a supercomputer (or a team of psychologists, etc.) has already predicted whether you would take one box or both. If it predicted you would take both, the mystery box is empty. If it predicted you’d take just the mystery box, it contains $1,000,000. The predictor rarely makes mistakes.

This problem tends to split people roughly 50-50, with each side thinking the answer is obvious.

An argument for two-boxing is that, once the prediction has been made, your choice no longer influences the outcome. The mystery box already has whatever it has, so there’s no reason to leave the $1000 sitting there.

An argument for one-boxing is that, statistically, one-boxers tend to walk away with more money than two-boxers. It’s unlikely that the computer guessed wrong, so rather than hoping that you can be the rare case where it did, you should assume that whatever you choose is what it predicted.
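The statistical argument is easy to check with a quick Monte Carlo sketch. The 90% accuracy figure and the coin-flip model of the predictor are illustrative assumptions, not part of the original problem:

```python
import random

def play(choice, accuracy, rng):
    """One round of Newcomb's problem.
    choice: 'one' (mystery box only) or 'both'.
    The predictor guesses the choice correctly with probability `accuracy`."""
    correct = rng.random() < accuracy
    predicted = choice if correct else ('both' if choice == 'one' else 'one')
    mystery = 1_000_000 if predicted == 'one' else 0
    return mystery if choice == 'one' else 1_000 + mystery

rng = random.Random(42)
trials = 100_000
one_avg = sum(play('one', 0.9, rng) for _ in range(trials)) / trials
both_avg = sum(play('both', 0.9, rng) for _ in range(trials)) / trials
print(f"one-boxers average ${one_avg:,.0f}, two-boxers ${both_avg:,.0f}")
```

With a 90%-accurate predictor, one-boxers average about $900,000 per round versus about $101,000 for two-boxers; the gap only closes as accuracy falls toward 50%.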

  • starman2112@sh.itjust.works · 4 days ago

    Their machine should be able to predict how dumb and irrational I am. Even if there’s technically no downside to taking both boxes, I’m only taking the mystery box.

    If I end up taking both boxes, the machine may or may not have predicted that. But if I end up taking only the mystery box, I doubt the machine would have predicted that I’d take both. I’m walking away with that cool mil.

  • Azzu@lemmy.dbzer0.com · 5 days ago

    The answer depends entirely on what “rarely makes mistakes” means.

    If the prediction is correct more than 50.05% of the time, I take only the mystery box. At, say, 50.06% accuracy: expected value = 0.5006 × 1,000,000 = 500,600.

    If it’s correct less than 50.05% of the time, I take both. At 50.04% accuracy: expected value = 1000 + (1 − 0.5004) × 1,000,000 = 500,600.

    Since a predictor that “rarely makes mistakes” should be right far more often than 50.05% of the time, I would definitely take the mystery box.
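    The 50.05% figure is the break-even accuracy: setting p × 1,000,000 equal to 1000 + (1 − p) × 1,000,000 gives p = 1,001,000 / 2,000,000 = 0.5005. A small sketch to check it:

```python
def ev_one_box(p, big=1_000_000):
    # Predictor correct (prob p) -> it foresaw one-boxing -> box is full.
    return p * big

def ev_two_box(p, small=1_000, big=1_000_000):
    # Predictor correct (prob p) -> it foresaw two-boxing -> box is empty.
    return small + (1 - p) * big

break_even = 1_001_000 / 2_000_000  # p = 0.5005
print(ev_one_box(break_even), ev_two_box(break_even))  # equal at break-even
```

Above that accuracy, one-boxing has the higher expected value; below it, two-boxing does.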

  • Arrkk@lemmy.world · 5 days ago

    An angle I don’t see people taking is to reframe the problem with amounts that are easier to grasp. There’s a thousand times more money in the mystery box, so scale everything down:

    The open box has 1 cent in it, and the mystery box might have $10. What do you do?

    Y’all are telling me you’d rather take a penny plus a tiny chance at $10, rather than take $10 with a tiny chance of getting nothing?
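    The scaled-down version has the same structure. A sketch of the expected values, with a 99% accuracy figure as an illustrative stand-in for “tiny chance” of error:

```python
def ev(take_both, p, small=0.01, big=10.0):
    # p = predictor accuracy; the box is filled iff one-boxing was predicted.
    return (small + (1 - p) * big) if take_both else p * big

p = 0.99  # assumed: a "tiny chance" of a wrong prediction
print(ev(False, p))  # one-box EV, about $9.90
print(ev(True, p))   # two-box EV, about $0.11
```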

  • FackCurs@lemmy.world · 5 days ago

    It means that the people running the experiment have $1,001,000 to give away, for free.

    What if I rob them first?

    What if I convince them to unionize and they redistribute all the money fairly among the workers and force management to not conduct shitty social experiments on people?

  • chicken@lemmy.dbzer0.com · 5 days ago

    A rule of thumb I think is good for most sorts of investment: what choice can you feel good about having made, whether or not it works out? I can handle not getting $1k, but I would feel like a real chump missing out on an easy $1M without giving my best effort.

    If I pick just the mystery box and win, that win feels deserved. If I pick just the mystery box and walk away with nothing, at least I don’t have to live with the shame of being a two-boxer, which is worth more than $1k to me. If I pick both boxes, I most likely get a little money and a lifetime of bitter regret, or, in the less likely case, get $1.001 million along with a sense of having barely avoided disaster without really “deserving” it.

    Choosing only the mystery box is the clear choice because it’s the choice I am more able to handle having made, on an emotional level.

  • Sickos [they/them, it/its]@hexbear.net · 5 days ago

    I am, admittedly, confused by the premise (it seems too hypothetical to warrant reasoning about), but I am interested in how there is ever a possible downside to taking both.

    It’s $1000 (transparent box only), $x (mystery box only), or $1000 + $x (two boxen). $1000 + $x > $1000, because the hungry alligator eats the bigger number.

    • Ryanmiller70@lemmy.zip · 5 days ago

      That’s what has me confused. I thought I was misreading something because I couldn’t see a downside to taking both boxes. If the mystery box is empty, you still have the $1000, and if it’s not, you get even more money.

  • fizzle@quokk.au · 5 days ago

    It obviously depends on the computer’s mysterious ability to predict what I’m going to do.

    “once the prediction has been made, your choice no longer influences the outcome.”

    This statement doesn’t make sense. The computer would predict that you would think that.

  • davel@lemmy.ml · 6 days ago

    Mmmm, this sounds like an idealist hypothetical problem that in reality can’t exist, so to engage with it is to engage with nonsense.

    The predictor rarely makes mistakes because… just because. It’s axiomatic. The predictor runs on the magic of unsupported assertion.

    • OBJECTION!@lemmy.ml (OP) · 5 days ago

      Some version of it could exist. Not with the big numbers and not with the high degree of certainty in the problem, but you could have, say, somebody who’s on average 70% accurate at reading people and the boxes are $1 and $10.
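      Running the numbers on that toy version: at 70% accuracy, one-boxing expects 0.7 × $10 = $7.00 while two-boxing expects $1 + 0.3 × $10 = $4.00, so even a mediocre people-reader tips the scales. Solving 10p = 1 + (1 − p) × 10 shows the break-even accuracy here is only 55%. A sketch:

```python
accuracy = 0.70  # the hypothetical people-reader from the comment above
one_box = accuracy * 10            # EV of taking the mystery box only
two_box = 1 + (1 - accuracy) * 10  # EV of taking both boxes
break_even = 11 / 20               # 10p = 1 + (1 - p) * 10  ->  p = 0.55
print(one_box, two_box, break_even)
```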

      It is somewhat idealist in that it’s a contrived scenario, but it’s really just idle curiosity on my part. Maybe it could reflect something about people’s thought processes, or maybe it’s just people interpreting the question differently.

      • davel@lemmy.ml · 5 days ago

        Even if it were to exist in the short run, it wouldn’t be stable. The predictor must be predicting somehow, which eventually could be at least partially sussed out, and future decisions would change as a result. Unless the predictor runs on literal magic, it would eventually no longer fit its own definition.

        • Arrkk@lemmy.world · 5 days ago

          You can flip the problem around and have it be mathematically the same. The predictor has some knowable accuracy; you could run the experiment many times to measure it. Now replace the predictor with an oracle that is guaranteed 100% correct, and manually impose error by doing the opposite of its prediction with some probability. This is fully indistinguishable from the original predictor.

          Next, instead of having the predictor make a prediction, let the player choose a box first, then decide what to put in the mystery box afterwards, with some probability of being “wrong” (leaving the money out for the one-boxer, or putting it in for the two-boxer). This is identical to having an oracle: we know exactly which boxes will be taken, but there is some error in the system.

          Now we ask: should you take one box or two? There’s no more “fooling” the predictor; it just depends on the probability. Do the EV calculation and you find that whenever the accuracy exceeds 50.05% (i.e., the error rate is below 49.95%), you should take only the mystery box.
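          The flipped experiment is easy to simulate; a sketch, with a 30% error rate chosen purely for illustration:

```python
import random

def flipped_round(take_both, error, rng):
    """Choose first; an always-correct oracle sees the choice, then its
    verdict is inverted with probability `error` before the box is filled."""
    fill_box = not take_both          # oracle's true verdict
    if rng.random() < error:
        fill_box = not fill_box       # manually imposed error
    mystery = 1_000_000 if fill_box else 0
    return (1_000 + mystery) if take_both else mystery

rng = random.Random(0)
n = 100_000
one = sum(flipped_round(False, 0.3, rng) for _ in range(n)) / n
both = sum(flipped_round(True, 0.3, rng) for _ in range(n)) / n
print(one, both)  # one-boxing wins for any error rate below 49.95%
```

With a 30% error rate, one-boxing averages about $700,000 per round against about $301,000 for two-boxing, matching the expected-value calculation above.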