
Heavy-Tailed Distributions and Optimization

One of the major differences between real life and stories is the distribution of impact. Perhaps the clearest example is Tony Stark in Iron Man 2 claiming he's privatized world peace. The claim is essentially correct, and it's a product of the fact that his personal military power is, in some meaningful sense, greater than that of everyone else combined.

This brings to mind power-law distributions with tail exponent alpha between 1 and 2: fictional worlds often distribute influence such that the single most influential person (good or bad) accounts for some notable fraction of the total, even as the scope of focus expands to the entire population.
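
To make that concrete, here's a minimal numpy sketch: draw a large sample from a Pareto with alpha just above 1 and from an ordinary bell curve, and check what share of the total the single largest draw holds. The specific parameters (alpha = 1.1, an IQ-like bell curve) are just illustrative choices.

```python
# Sketch: share of the total held by the single largest draw,
# heavy-tailed Pareto vs. a thin-tailed bell curve.
import numpy as np

rng = np.random.default_rng(0)

def max_share(draws):
    """Fraction of the whole pie contributed by the single biggest draw."""
    return draws.max() / draws.sum()

for n in (1_000, 100_000, 10_000_000):
    heavy = rng.pareto(1.1, size=n) + 1.0           # Pareto, tail exponent alpha = 1.1
    thin = np.abs(rng.normal(100.0, 15.0, size=n))  # bell curve, IQ-like scale
    print(f"n={n:>10,}  Pareto top share: {max_share(heavy):6.1%}  "
          f"bell-curve top share: {max_share(thin):.5%}")
```

The heavy-tailed top draw keeps a noticeable slice of the whole even at ten million draws (and the exact figure swings wildly from run to run, which is itself part of the point), while the bell curve's biggest draw shrinks toward a rounding error. That's the Tony Stark shape.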

I've written before that this indicates a staggering lack of coordination and cooperation among nominal teammates, but I think we might be able to infer more than that. I think we can infer a generic lack of optimization.

Here's the pitch: if you're bottlenecked against a constraint, outcomes clump up, with a sharper drop-off near the best you can reach while handling that bottleneck. Every process under optimization eventually hits some factor that limits it at the margin. With efficient players, you should see them making tradeoffs so that the marginal return along all possible approaches is equal (otherwise the most recent marginal contribution was sub-optimal).

Unless power lets you accumulate more power (and again, this is a superhero setting where even the people who make use of lots of money don't earn the money from superhero'ing), you'll hit a limit somewhere. If it's money, sure, but money has a thinner-tailed distribution than a power law. Time? That seems likely, and time is distributed almost evenly, since everyone gets the same hours in a day (although survivorship bias kicks in for, e.g., cultivation settings full of hyper-violent immortals). Intelligence? Now you're in bell-curve territory, with very thin tails: the smartest person is only barely smarter than the second smartest, so it wouldn't produce a power-law distribution. None of these would (excluding, again, some dynamics you see in cultivation stories).
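
The "just barely smarter" point can be made the same way: compare how far ahead the #1 draw is of the #2 draw under a bell curve versus under a Pareto. Again, a rough sketch with arbitrary illustrative parameters.

```python
# Sketch: how far ahead #1 is of #2 under thin vs. heavy tails.
import numpy as np

rng = np.random.default_rng(0)
n, trials = 100_000, 20

def top_two_ratio(draws):
    """Ratio of the largest draw to the second largest."""
    second, first = np.sort(draws)[-2:]
    return first / second

bell = [top_two_ratio(rng.normal(100.0, 15.0, n)) for _ in range(trials)]
pareto = [top_two_ratio(rng.pareto(1.5, n) + 1.0) for _ in range(trials)]

print(f"bell curve: #1 is typically {np.median(bell):.3f}x #2")
print(f"Pareto:     #1 is typically {np.median(pareto):.2f}x #2")
```

The bell-curve ratio sits a hair above 1 no matter how many people you sample; the Pareto ratio is typically well clear of 1 and occasionally enormous. If intelligence, money, or time were the binding constraint, influence in a superhero world should look like the first line, not the second.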

Since the analysis lines up with the emotional reaction ("why not give your trusted friends both the mech suit and the super soldier serum to try and help them live?"), I think the conclusion is obvious: this is an optimization-free zone.

What might cause this? One idea that occurred to me is a dynamic where people are observed in a simulation or some similar proxy, and extreme filtering is done on some other trait, like Goodness, which embeds an aversion to power-hungry optimization so thoroughly that it overcomes the straightforward appeal of optimizing when, say, half the universe is at stake. The Most Good in the simulation are given access to strange mystical powers in real life, and, because that of course sets them on another path, it sometimes ends up producing supervillains anyhow; yet somehow the world is never actually conquered.

I will leave the religious metaphors to the reader.