32 Comments
Daniel Greco:

You might like Lara Buchak's book, Risk and Rationality. The orthodox view is that once you've moved from $ to utils, you've accounted for everything there is to care about vis a vis risk. Basically, if you think bet 1 is better than bet 2 because it's less risky, even though they have the same EU, then that just means you haven't correctly accounted for the utilities.

But that's at least not obvious, and Buchak defends caring about risk over and above the difference it makes to EU--given two gambles with equal EU, her risk averse agents will still prefer the gamble whose possible utility payoffs exhibit less variance. Basically, while I do think you're differing from standard decision theory, you've got some good company:

https://philpapers.org/rec/BUCRAR-5

Dylan:

Very interesting, thank you! I checked out the Buchak paper and indeed I very much agree with her argument -- and it’s obviously leagues more robust than what I’ve been trying to say here. Really appreciate you stopping by to leave such polite and helpful comments.

Onid:

“The orthodox view is that once you've moved from $ to utils, you've accounted for everything there is to care about vis a vis risk.”

Do you think you could explain this a bit? I’m having trouble seeing how this could possibly be true but I might be missing something.

As far as I’m aware, the only difference between dollars and utils is that the utility of dollars doesn’t increase linearly. But it’s still monotonically non-decreasing, so it seems hard for me to imagine that the dynamics would change in any meaningful way. And if you look at Dylan’s example, he used utils - why would risk suddenly stop mattering in that context?

Daniel Greco:

As I'd put it, in the orthodox picture--what you get in econ textbooks, probably not what either Bentham or Dylan have in mind--utility, unlike dollars, is an abstract theoretical construct defined in terms of people's choices/preferences. If an agent is such that they're unwilling to accept arbitrarily small probabilities of getting to heaven over certainly getting some other thing they value (cake, let's say), then it just follows that heaven doesn't have infinite utility for that agent. If your choices/preferences obey certain constraints--stuff like transitivity, but a bit more is required--then they can be represented by a utility function such that you're maximizing expected utility relative to that function. There are a bunch of "representation theorems" which show this. In fact, typical representation theorems include axioms (in particular, continuity) that basically rule out infinite utilities by fiat, though my sense is that it's possible to relax them in a way that allows agents to assign some outcomes infinite utility.
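
(For reference, here is the standard textbook statement of the continuity axiom mentioned above, plus a sketch of why assigning an outcome infinite utility clashes with it; this is generic vNM material, not anything specific to this thread.)

```latex
% Continuity: for any lotteries A > B > C (in preference), some mixture of the
% extremes is exactly as good as the middle option:
\[
  A \succ B \succ C \;\Rightarrow\; \exists\, p \in (0,1) \text{ such that } pA + (1-p)C \sim B .
\]
% If u(A) were infinite (with u(B), u(C) finite), then for every p > 0 the mixture
% would satisfy
\[
  p\,u(A) + (1-p)\,u(C) = \infty > u(B),
\]
% so no mixing probability p could yield indifference, and continuity would fail.
```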

Coming back to risk, the idea is that an agent's attitudes towards risk are reflected in her utility function. If the agent prefers a sure $100 to a 50/50 shot at $1,000,000, it follows that the utility of $1,000,000 can't be more than twice the utility of $100. But that kind of move is all we need to capture attitudes towards risk. (Buchak disagrees, focusing on stuff like the Allais paradox and ambiguity aversion.) So the orthodox view, I think, is unfriendly to Bentham-style arguments in a very different way. On the orthodox view, you can't really stipulate facts about utility and then make arguments about rational choice, because on the orthodox view, facts about utility are downstream of facts about choice. Heaven's having infinite utility would *follow* from it being rational to accept arbitrarily small probabilities of getting there over high probabilities of finite goods. But the latter would be the more basic fact.
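
(Spelling that step out, with the usual normalization that the gamble's other outcome, $0, gets utility zero -- an assumption implicit in the claim above:)

```latex
\[
  u(\$100) \;\ge\; \tfrac{1}{2}\,u(\$1{,}000{,}000) + \tfrac{1}{2}\,u(\$0),
  \qquad u(\$0) = 0
  \;\;\Longrightarrow\;\;
  u(\$1{,}000{,}000) \;\le\; 2\,u(\$100).
\]
```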

Onid:

Thanks for the detailed reply. I had thought you might be referring to something like expected utility theory, but it isn’t an area I know too much about. For whatever reason, I had thought that models including risk-aversion stood in contrast to things like von Neumann-Morgenstern theory, but after reading your comment and looking into it a little I’ve learned that, actually, a version of the concept is built into their notion of utility.

Very interesting, and I’m excited to read more.

Turtle out of shell:

This was hugely illuminating. Thank you for the elaboration. So, if I am getting it right in my layperson understanding, it means that nobody can tell another person what they should "rationally" do based on expected utility, because they do not determine that person's preferences. Is that a close enough understanding of the moral of the story?

Daniel Greco:

I think that's a fair lesson to draw, though maybe it brings out stuff people often don't like about orthodox decision theory. It's very hard to get any prescriptive lessons out of it! You can say stuff like this:

"If you prefer A to B, B to C, and C to A, then your preferences can't be represented as EU maximization. Isn't that weird?"

And maybe upon hearing this, the agent will think: "oh yeah, I thought I was valuing A, B, and C in a consistent way that can be represented by the theory, but now I see I wasn't. Maybe I should change my mind."

Or similarly: "you know if you aren't willing to accept this 50/50 shot at $1,000,000 rather than taking a sure $100, you're implicitly treating $1,000,000 as less than twice as good as $100. Is that really what you want to do? Does that capture your considered judgment about their comparative value to you?"

And it's easy to imagine somebody thinking: "you know that does sound silly once you put it that way."

On the orthodox view, I think a fair analogy is

Decision theory : preferences :: logic : belief

Logic doesn't tell you what to believe, but it does tell you when combinations of belief are irrational. E.g., don't go in for all three of P, if P then Q, and not-Q. But logic won't tell you which of the three to give up!

Similarly, decision theory doesn't tell you what to prefer, but it does tell you when combinations of preferences are irrational.

Turtle out of shell:

Makes sense, but then again there are so many things that the preference depends on. If a person is completely broke, my maybe-irrational opinion is that choosing the certain $100 is pragmatically very understandable, and even preferable to a 50/50 shot at starving vs. becoming a millionaire.

Nemo:

Succinct and to the point. The shape of the distribution does matter for decision making.

If BB and the rest* were right, we could stop every stats class after the integral definition of expected value; we don’t do that because there are in fact a lot more complexities in the field.

*yes, this is an oversimplification of BB’s position on EV Fanaticism; brevity is the soul of wit.

Both Sides Brigade:

This is a great paper that takes a similar approach, if you haven't read it: https://philarchive.org/rec/HONFAK

Dylan:

I’ll check it out -- thanks!

Kyle Star:

If it’s money the first option is obviously superior because of diminishing marginal value, but given that we’re dealing directly with utils here I think those options are the same.

I’m a little confused where your argument is here -- you say that “there’s no reason the mean should be more valuable than the mode,” yet you think EV, the mean, matters in every case except when you only get one shot at some unlikely thing? Where’s the line? Is a 2/3 chance of 10 utils better than a 1/3 chance of 21 utils? Why did you pick the line that you did? Sorry, I just don’t really see your reasoning behind being risk averse; I just see you saying that you are.
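
(For reference, the arithmetic behind that example, assuming the other outcome in each gamble is 0 utils: the riskier gamble actually has the slightly higher expected value.)

```latex
\[
  \mathbb{E}_1 = \tfrac{2}{3}\times 10 \approx 6.7 \text{ utils},
  \qquad
  \mathbb{E}_2 = \tfrac{1}{3}\times 21 = 7 \text{ utils}.
\]
```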

Onid:

I think the biggest point being missed here is the skeptics aren’t saying EV doesn’t matter, they’re saying that it isn’t the only relevant fact. And how you trade it off with other relevant facts is a complicated question - you have to choose what you’re going to optimize for, and you can’t optimize for everything. You will simply have to decide what risk-reward trade-off is acceptable for you, and live with the consequences.

The situation Dylan presented is meant to illustrate this. They have the same EV, but the second situation has considerably more risk. Or, to put a finer point on it: if you only played once, in the first scenario you would expect to be util positive. In the second one, you would expect not to be.

The source of confusion I think is that there does exist a situation where EV is truly the only relevant fact: in the case of infinite (and/or arbitrarily many) trials. In that case, these two situations really are identical. But before that happens, the distributions will be different, and you will have to make trade offs. And if you don’t understand these trade offs, you will make really bad decisions.

In particular, if the number of trials is so low that there’s almost no possibility of overcoming the risk, then you’re probably not calibrating your trade-off well.
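
A quick way to see the one-shot vs. many-trials point is to simulate it. The gambles below use made-up payoffs (not the numbers from Dylan's post), and all the names are just for illustration: both gambles have an expected value of +5 utils per play, but A pays off almost every time while B almost never does, and the gap only washes out as the number of plays grows.

```python
import random

def play(gamble):
    """Draw one payoff from a gamble given as a list of (probability, payoff) pairs."""
    r, cum = random.random(), 0.0
    for p, payoff in gamble:
        cum += p
        if r < cum:
            return payoff
    return gamble[-1][1]  # guard against floating-point rounding

# Hypothetical gambles (made-up numbers), both with expected value +5 utils per play:
A = [(0.90, 6), (0.10, -4)]    # usually a small win
B = [(0.01, 500), (0.99, 0)]   # almost always nothing, rarely a jackpot

def frac_positive(gamble, n, runs=2_000):
    """Fraction of simulated n-play runs whose total payoff is positive."""
    return sum(sum(play(gamble) for _ in range(n)) > 0 for _ in range(runs)) / runs

random.seed(0)
for n in (1, 10, 1_000):
    print(f"n={n:>5} plays: P(total > 0)  A={frac_positive(A, n):.2f}  B={frac_positive(B, n):.2f}")
```

With one play, A leaves you util-positive about 90% of the time and B about 1% of the time; only with many plays do the two look alike.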

Joe James:

This is a great summary!

Kyle Star:

This is true, but while the arguments for using EV are intuitive and well-trodden, I think the arguments for being risk-averse would need a defense.

Personally, I think people are far too risk-averse when it comes to their money, career, and life decisions, so I find being risk-averse even when there are massive gains in the unknown to be a human trait. I think humans are very bad at optimizing and prioritizing in general (the amount of time they spend with family, prioritizing the things they care about), and I believe optimizing and prioritizing are useful and good. You say it’s not the only relevant fact, but I think failing to understand which facts are relevant is the issue here.

I think this essay is a good descriptor of what he feels; I just don’t see any part that’s meant to convince anyone, or maybe I’m still missing it -- I don’t deny that being risk averse is something some humans prefer, and I don’t deny that humans consider more than just EV when making decisions.

It would be interesting to have a discussion about what relevant facts can trump expected value and their implications, because I think this argument is too vibes based right now.

Onid:

I agree -- people tend to be too risk averse.

But I think you’re confusing the “ought” with the “is” here. How risk averse you should be is a subjective question, but the fact that there is risk is a well-defined mathematical truth. EV fanaticism asks us to view risk as irrelevant, essentially saying our consideration of risk in that subjective equation should be zero.

And what I’m saying is I don’t know the right trade off, but I’m fairly confident it isn’t to literally ignore half the equation.
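
(One standard way to write the trade-off, from mean-variance analysis: a gamble X gets scored roughly as below, with λ ≥ 0 measuring risk aversion. EV fanaticism is the special case λ = 0, i.e. the variance term -- the "half of the equation" above -- is dropped entirely.)

```latex
\[
  U(X) \;=\; \mathbb{E}[X] \;-\; \lambda\,\mathrm{Var}(X), \qquad \lambda \ge 0 .
\]
```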

Kyle Star:

This is probably true; they’re definitely not literally identical scenarios. There are crucial differences. I just worry that we’re saying “EV is good, except there are other relevant facts” and then using the fact that there are other considerations to sneak in “risk-aversion,” which I don’t really understand why we care about beyond humans having random preferences that aren’t rational, gifted by evolution.

Basically, I want to see a better argument of why risk aversion is better than using EV, in a way that doesn’t just point and say “this guy’s risk probably isn’t going to pay off.”

Onid:

I think there are two points here.

1. Risk aversion in this case isn't a binary, it's a magnitude. You can be more or less risk averse. EV fanaticism only works if your risk aversion is literally 0 in all circumstances - even the tiniest non-zero amount will cause most of its more extreme conclusions to fail entirely.

2. Risk aversion (or rather, mean-variance analysis) is, admittedly, an axiom, so there's a limit to how much you can justify it, just as EV maximization is also an axiom. At some point you just have to say "this makes sense as an assumption." That said, the core assumption is typically motivated by the idea that given two investments with equal expected value, you should take the one with less risk.

In this case, you admitted that both options seem the same to you. So why wouldn't you prefer the one that has a high chance of making you money immediately over the one which almost certainly won't?

Nemo:

I’ll second Onid here: there is a tradeoff, and throwing away information from your distribution of outcomes (as EV maxing does) comes at a cost.

The fact that humans are often bad at estimating risk doesn’t mean it’s not important. I would even hazard that the human tendency towards risk aversion is in some ways an evolved response to it being incredibly important: “should I eat this new plant?” was at one point of grave importance. Sometimes you get a delicious new salad ingredient; sometimes you ate a Destroying Angel and now you’re dead.

I’ve mentioned it in a couple of threads, but I’ll mention it here too: the theory of coherent risk measures evolved in finance precisely because risk does matter in the real world, and there is a wealth of theory on how best to handle it.
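
(For anyone curious, the four axioms that make a risk measure ρ "coherent" in that literature, stated informally for a position X; this is standard material, not anything specific to this thread.)

```latex
\[
\begin{aligned}
  &\text{Monotonicity:}           && X \le Y \;\Rightarrow\; \rho(X) \ge \rho(Y) \\
  &\text{Translation invariance:} && \rho(X + c) = \rho(X) - c \quad (c \in \mathbb{R}) \\
  &\text{Positive homogeneity:}   && \rho(\alpha X) = \alpha\,\rho(X) \quad (\alpha \ge 0) \\
  &\text{Subadditivity:}          && \rho(X + Y) \le \rho(X) + \rho(Y)
\end{aligned}
\]
```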

However: there’s no escaping the axiological component. Your values will ultimately determine what you think is important, what information is relevant, what levels you feel comfortable setting.

Even EV maxing is not a neutral approach: it is very specifically an approach that favors high rewards over avoiding large penalties. This reflects a particular kind of risk tolerance that is no more inherently rational/moral/systematic than any other.

Alex:

I think there are a lot of arguments for being risk-averse. Would you play martingale in a casino? If you had infinite money, it's a sure profit!
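
To put rough numbers on that, here's a quick sketch with made-up parameters (an American-roulette even-money bet at 18/38 and a $1,000 bankroll; the function names are just for illustration): almost every session ends with a tiny profit, but the rare wipeouts drag the average below zero.

```python
import random

def martingale_session(bankroll=1_000, base_bet=1, p_win=18/38, max_rounds=1_000):
    """Double the bet after every loss; stop on the first win, on ruin, or at max_rounds.

    Returns the session's net profit (positive means we walked away ahead)."""
    money, bet = bankroll, base_bet
    for _ in range(max_rounds):
        if bet > money:          # can't cover the next doubled bet: the losing streak broke us
            break
        if random.random() < p_win:
            money += bet         # one win recovers all prior losses plus base_bet
            return money - bankroll
        money -= bet
        bet *= 2
    return money - bankroll

random.seed(0)
results = [martingale_session() for _ in range(100_000)]
print(f"sessions ending in profit: {sum(r > 0 for r in results) / len(results):.3%}")
print(f"average net result per session: {sum(results) / len(results):+.3f}")
```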

Keshav:

If you had infinite money, there's no reason to martingale because you have the same amount of money at the end

Alex Strasser:

This would be more worrisome in a world where I thought the most probable religion required me to amputate a limb, but not in the real world.

Presumably, the commitment level that makes sense correlates with credence level. Trying to pray a few minutes a day is very low cost. By the time you get to 1%, going to church (corresponding to ~1% of your week) makes sense, etc.

Roman's Attic:

A few questions, to help me understand your view:

When do you think it is worth it to buy insurance?

What type of payment would you need to be given to be willing to play Russian Roulette?

How do you go about answering those questions?

Onid:

In any situation, there are two factors to consider, risk and reward (or, if you prefer, variance and mean). Risk disappears in the infinite case, but not the finite. So when things aren’t infinite, you ignore risk at your own peril.

The answer to all these sorts of questions then is simply “do it when your desire for the reward outweighs the risk.” If you are risk averse, then you will get insurance sooner than if you are not.

As for Russian Roulette, what reward would get you to play Russian Roulette? Personally I’m not sure any positive reward could, though I would certainly play it if the life of my family were at stake.

Bentham's Bulldog:

Surely you should take infinitely big risks *more* seriously than finitely big risks rather than the other way around!

Onid:

Sorry, I ought to have been a little clearer.

What I was referring to here was Modern Portfolio Theory, or Mean-Variance Analysis. "Risk" in this case is a technical term, with very specific mathematical meaning.

Technically, both the risk and return on your version of Pascal's wager would be undefined, but if we do some definitional Kung Fu to ignore that, then you wind up with infinite EV paired against infinite risk. **This makes the trade-off undefined.**

The only way to make the trade-off not undefined is to set risk aversion to zero, which means that if your model is consistently applied, then in no circumstances are you willing to consider any risk, ever.
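
(In the mean-variance notation from earlier, with λ as the risk-aversion weight, the point is just this:)

```latex
\[
  U(X) = \mathbb{E}[X] - \lambda\,\mathrm{Var}(X) = \infty - \lambda\cdot\infty
  \quad\text{(undefined for } \lambda > 0\text{; equals } \infty \text{ only if } \lambda = 0\text{).}
\]
```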

By the way, I think the first lesson here should be that math generally breaks down when things get infinite, and we should be suspicious of ever making a decision based on an equation with a literal infinity.

Bentham's Bulldog:

But that's very implausible! If you'd make a sacrifice for a low chance of a finite but very large outcome, then that outcome being infinite should only make it better!

Onid:

In case that last comment was too much like Eulering: I’m saying that in this case risk is infinite and return is infinite. Trying to trade one of these numbers against the other is impossible. The reasoning simply breaks down and there’s nothing the theory can say about it.

Bentham's Bulldog:

I don't know what you mean by "risk is infinite." But I think you can do tradeoffs with infinite risks using hyperreals, which is needed to accommodate the fact that a .9 chance of infinitely good outcomes is better than a .0000000001% chance.

Onid:

Your point about hyperreals is well taken - I should have thought of that.

It seems to me though that there are basically two different topics under discussion here.

One is EV fanaticism. As far as I can tell, EV fanaticism only makes sense as a rational theory if you declare your risk aversion to be precisely zero. But risk aversion is something you assign subjectively, and declaring it to be zero does not change the mathematical fact of its existence. Trying to reason as such in real life would be “rational” in the technical sense that it would be self-consistent, but it would almost certainly cause you to lose all your money. SBF famously believed in maximizing EV above all else, and he lost more money than probably any other human in history.

The other issue is Pascal’s wager, which I haven’t put much thought into and probably shouldn’t be commenting on. Your point about hyperreals is enough to convince me that the discussion around Pascal’s wager isn’t necessarily related to EV fanaticism, though - it seems plausible that risk averse models could still take the wager. If there is an issue with the wager (which, full disclosure, I have a strong prior to believe) then I’m starting to think it has little to do with risk-aversion or EV fanaticism.

Nicholas Halden:

I think the more persuasive forms of Pascal’s wager argue for a higher probability of Christianity (usually based on appeal to authority but whatever) and a lower investment (being a Christian doesn’t actually ruin your life).

Dylan:

I agree they typically argue with those more palatable conditions, but I don’t see why following that argument would lead to any other answer, no matter how low a probability you assign or how onerous the requirements are?
