Among my more mundane memories, I have this unusually vivid recollection from sixth grade. I was practicing my splits on the floor in front of the television while my mom was cooking in the kitchen. I didn’t bother changing the channel, so I ended up watching a CNN documentary about this serial killer who butchered his wife. That day, I promised myself I’d be vigilant about not marrying someone who might one day murder me. A very sensible objective.
As I grew older, I’d occasionally revisit this thought among the random flashes of memory I get when I let my mind wander, or when I discover yet another sensationalized serial killer show, although it was never something that demanded my attention when I met a potential spousal candidate.
The one exception was this guy I’d met on Hinge, whom I saw a few times. Whenever I spent more time with him than a short coffee date, I’d mysteriously – but reliably – shiver. It had never happened during my previous 23 years of life. I thought it might’ve been due to the winter cold, but it also happened inside a cozy restaurant, a museum, and the subway, and it never happened on days I didn’t see him. I was so fascinated by this inexplicable, visceral, and physical response that I asked to go on more dates with him to try to understand the phenomenon, despite knowing it wasn’t going to lead anywhere. He certainly had his quirks, but I couldn’t logically comprehend the primal fear I felt. In the end, he decided I was too “overwhelming” in the way I thought and talked[1], so I never saw him again. Unsurprisingly, the shivering never happened again.
The guy, to my knowledge, had quite an unGoogleable name, although I suppose it’s a rather relieving sign that neither he nor his name twins have committed a newsworthy crime. Still, I am thankful that I never saw his abode and avoided a potential Tracy Edwards moment in Dahmer. But despite my genuine fear, the rational part of me scoffs at the thought that I would think he’s dangerous purely based on a few offbeat behaviors[2] and some involuntary physical reactions of mine. The rate of serial killers in the United States is ridiculously low; what would be the chance that I ended up dating one?
But on the other hand, what’s the probability that I would know someone who’d hire a gunman to kill his wife[3] for money? It’s the type of crazy they’d screenwrite in a crime drama film like House of Gucci.
For simplicity[4], let’s name them John and Jane. To be fair, I’ve only interacted with John a handful of times, but Jane had once been my dentist, went to my mom’s church[5], and was a member of my parents’ close-knit friend group I nicknamed “my parents’ cult”[6]. I’m abusing the word “cult”, but not by much, which speaks to the eccentric intimacy between our families. Jane was a student of the cult’s weekly group ballet class[7], and she and John would regularly go on weekend trips with my parents. The members of the cult also supportively, albeit perhaps reluctantly, attended the monthly music recitals hosted[8] by John to showcase his developing operatic voice, in what I figured was his version of the classic theater-kid angst.
It made a lot of sense when I learned that John insisted on singing an aria at Jane’s funeral. There’s something eerie about someone who enjoys opera, let alone someone who wants to practice opera. Eagerly angling to perform at your late partner’s funeral has the same Halloween-level spookiness as eating chocolate bars at a funeral, Hereditary style.
But seriously, I would stop short of categorizing these behaviors as anything beyond personal oddities. The judgmental side of me always thought that John had distasteful theatrical and opportunistic tendencies, but murder for hire? That seems excessively aggressive. In a parallel world where John was my close friend’s long-term boyfriend, even if I had influence over the situation, were more familiar with John, and had discovered his monetary motivations prior to the murder, I doubt I could have concluded anything meaningful from that information. I want to think that I’d have predicted differently and done something to mitigate the risk of, well, death, but that sounds bizarre in the face of the absurdly insignificant probability that my prediction would have been true. Because it *is* ridiculous[9]. But should it be?
Common wisdom states that humans are notoriously bad at comprehending large numbers. This has important implications for understanding things like policies and government decisions. How bad, really, are a million American COVID-19 deaths? What does the $1.4 trillion US government deficit in 2022 mean? Are the $801 billion US military spending and defense budget from 2021 sensible? (That’s less than four percent of the GDP. Is that reasonable?) How do these numbers compare to the number of stars in the Milky Way?
A natural extension of the field of numerical cognition is probabilistic cognition. I made the term up just now. It’s difficult to understand abstractly large numbers, but what about reasoning with small probabilities?
Probabilities are inherently tied to risk and decision-making. A mathematical approach involves modeling the future by splitting it into potential scenarios of varying desirability, each weighted by the plausibility of that scenario actually happening. Probabilistic thinking would give you an advantage at poker in Vegas. But as a more personal approach, what is the best way to interpret these percentages? What do you do when things are okay 99% of the time, but catastrophically bad in the other 1%? Because that’s approximately your lifetime odds of dying in a motor vehicle crash. I reckon most people generally ignore this risk, among others.
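To make the “scenarios weighted by plausibility” framing concrete, here’s a minimal sketch in Python. Every number in it is made up for illustration; nothing comes from real crash statistics.

```python
# A toy expected-value calculation: each scenario is a (probability, utility) pair,
# and the "value" of a choice is the plausibility-weighted sum of its outcomes.
# All numbers below are invented placeholders.

def expected_value(scenarios):
    """scenarios: iterable of (probability, utility) pairs; probabilities should sum to 1."""
    return sum(p * u for p, u in scenarios)

# Things are fine 99% of the time (+1), catastrophic 1% of the time (-1000).
risky_choice = [(0.99, 1.0), (0.01, -1000.0)]
do_nothing = [(1.0, 0.0)]

print(expected_value(risky_choice))  # -9.01: the rare catastrophe dominates the average
print(expected_value(do_nothing))    # 0.0
```

The toy point: once the bad outcome is bad enough, the 1% tail can dominate the average even though it almost never happens.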
Common wisdom once again states that humans simply struggle with probabilities. Probably. How can we accurately ponder the probability of death by exsanguination after tumbling on an ice skating rink and getting sliced in the carotid artery by the skate blades of someone who couldn’t stop in time, of death after being struck in the balls by a tennis ball, or of the dreaded Death-By-Coconut™️? How do we evaluate the risk of side effects from the COVID vaccine compared to the risk of contracting COVID? What about that one-percent probability of complications from a routine surgery versus the risk of ignoring the problem? I mean, I am occasionally paranoid about getting hit by cicada carcasses falling from trees after mating season 🥴 At some point, you mix in the undesirable cognitive biases and mechanisms that often come with the prior and posterior likelihoods of Bayesian thinking, or perhaps you’re a frequentist deviant. As a toy example, I find it unintuitive that 1 out of 5, which *feels* quite common, is the same as 20%, which *feels* fairly uncommon. Maybe I’m broken, but in another example, it’s standard to dismiss the possibility of a bad thing happening to oneself as a part of some sort of optimism bias that causes people to underestimate its probability. Here is a long list of cognitive biases, many of which I believe[10] at least partially stem from an inability to reason with probability.
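Since that paragraph leans on priors and posteriors, here’s a toy Bayes-rule sketch, again in Python, with numbers I invented purely for illustration: a tiny base rate swamps even a fairly reliable gut signal.

```python
# Toy Bayes update: P(dangerous | shivers).
# Every number here is a made-up placeholder, not a real statistic.

base_rate = 1e-6     # prior: fraction of dates who are actually dangerous
sensitivity = 0.9    # P(shivers | dangerous)
false_alarm = 0.05   # P(shivers | not dangerous)

posterior = (sensitivity * base_rate) / (
    sensitivity * base_rate + false_alarm * (1 - base_rate)
)
print(posterior)  # ~1.8e-5: still minuscule, because the base rate dominates
```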
When it comes to small probabilities, I’ve found that personal motives play an outsized role in shaping an individual’s understanding of the number. I might be hallucinating patterns, but you can at least entertain my theory. The significance of negligible probabilities to individuals is dictated by what they want, as well as what they want to want, although I get that the second-order analysis is an inherently annoying concept. A cleaner way to express my theory is: the perception of probabilities is colored by individual motivations.
To elaborate, let’s take the most salient example of probability application in my current life, which is determining the futility of my dating app startup idea, Eleven. Like any delusional entrepreneur, I am convinced that Eleven has 🦄 potential. Otherwise, working on the idea would lead to (not “catastrophic” failure but) at least an unfortunate waste of time and resources. After all, dating apps are a dime a dozen, with a slim chance of success. Let’s say that 99% of all dating apps fail by some metric. Then, anyone who bets against Eleven will be right 99% of the time. I’m sure my friends carry good intentions, but I also don’t think they’re immune to the vanity of owning that easy “I told you so” moment.[11]
But let’s focus on that 1%. It’s apparently natural to neglect the small chance of success, and there’s even a highly overrated[12] book about black swans and the widespread phenomenon of underestimating the unexpected. I doubt that’s the whole picture. What is the difference between the probability that you will die in a car accident tomorrow (that you neglect), the probability that AI kills us all (that you might neglect), and the probability that your company will succeed (that you don’t neglect)? Without engaging in the futile exercise of Spot The Differences, I expect the answer to include perceived control over the situation, additional “insider” information about the topic, and desire for a particular outcome – forms of personal influence. It’s obvious that a person’s perception warps these negligible probabilities, and how much these values matter varies with context, like ε in floating-point operations or words buried in the middle of a long paragraph in a blog such as this one. And intuitively, perception is affected by a person’s internal state, and personal motivations are deeply embedded within that internal state. A tangential line of reasoning is that emotions have been shown to be critical in decision-making, and given the role of probabilities in the decision-making process, I would call these slim probabilities “emotional epsilons” that people subconsciously manipulate based on how they feel.
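As a small aside on the floating-point analogy: whether a tiny value registers depends entirely on what it sits next to. A quick illustration, assuming ordinary 64-bit Python floats, with arbitrary values:

```python
eps = 1e-12                    # an arbitrarily tiny value
print(1.0 + eps == 1.0)        # False: at this scale, the epsilon still registers
print(1.0e6 + eps == 1.0e6)    # True: next to a larger number, the same epsilon vanishes
```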
This sounds slightly sociopathic, but an underrated ability is accurately predicting people’s actions and motivations. When reflecting on John, people from the cult had several major classes of reactions. There were many who proclaimed they “knew he was a <scumbag[13]> all along!!”, some who felt vindicated after pinpointing his fake displays of sorrow, which suggested an unpromising career as a moirologist, and a few optimists who were legitimately shocked. As far as I could tell, some were guided by the desire to be right, to be kind, to be fair, to be uncontroversial, and so on. As a secondhand observer, I found myself voicing to my parents both suspicions and doubts that John had been involved. While I found something unsettling about the pixelated street recordings of the shooting and did not agree with the initial hasty conclusions that it was a racial hate crime, I wanted to avoid being the type of person who jumps to conclusions or accuses someone of murder without compelling evidence.
On a less serious note about motivations more broadly, I don’t like it when my friends label my romantic type as six-foot-four dudes who do mathematics and typically speak French, like tennis, and play piano. That’s so specific. I don’t want to be the type of person who cares about relatively superficial traits, such as height or having a certain hobby. I’d like to think I’m not someone who “has a type”. There’s some empirical truth behind the statement, but it’s within my emotional epsilon, ¿okay?
Here are some more small-sample observations on the topic of dating. A handful of my friends like to act innocent about their mating choices. Oh, I'm dating another white guy? “Oops!” They eventually confessed to me that they want halfie[14] children. A certain crowd of Hong Kong girls living abroad will date mediocre white guys for the sake of having the status of dating a 白人[15]. My friend only fancies girls on the taller side, and I think this is because he is somewhat unhappy about his height, suspects he has short genes, and wants taller children (he can confirm after reading this 😛). Another friend is a romantic who started seeing this girl and tends to settle on the most flattering explanations for her behavior – for example, that they’re so similar because she says “yup” the way he does[16]. So on, and so forth.
To me, a fun group study on decision-making in the face of small probabilities is the Effective Altruism (EA) movement. EA is a network of people who call for using evidence and reason to do the most good. Honestly, online discourse scares me[17], and I don’t want to fall into the infinite recursion of defending my views, but here’s my hand-wavy take. The “evidence and reason” is scientific and mathematical, which lends itself to a utilitarian-ish “numbers” view. Certain factions of EA are motivated by longtermism, which prioritizes the long-term survival of our species and seeks to mitigate the risks from catastrophic events, including potential[18] black swans such as nuclear war or AI existential (x-) risk. One observation is that those who work on, say, AI safety usually assign a much higher probability of x-risk due to AI than the rest of the population. I’m unclear about the causal chain, since people who work on AI safety likely entered the field because they initially believed it was an issue, but I wouldn’t be surprised if that probability was artificially inflated post hoc to rationalize their work. To be fair, people can be wrong; at one point, most of the population believed that the Earth was flat. My impression is that many EAs genuinely care about these issues, although I have heard comments that some people join the movement to feel like a good person, and I raise my own eyebrows when a few of my friends mention that the community is composed of smart people (and I know that they crave to be, to feel like, and to be seen as a smart person).
For me, I’ve been best able to predict someone’s behavior in the face of small probabilities by considering the second-order motives of their emotional epsilon, the types of things they don’t want to want but do want. It’s a simple concept, yet people don’t consider it explicitly enough in the context of reasoning with small probabilities. It’s natural to hide unsavory objectives – and even lie to oneself – to maintain moral or societal propriety. However, as I like to say, “The truth is the truth[19], regardless of whether it’s been said”.
In a moment of confession, a friend told me something quite pitiful, saying, “I just want a girl who can love me”.
“Nonsense. Of course someone will love you.”
Then he told me his unexpectedly wise insight. “Most people have their own insecurities that make them unable to love you in the way you need.”
John was unable to love his partner due to his financial insecurities. His tale ended with his suicide in jail. I wonder, what did he gain? Was it worth it?
I imagine what must have been some flimsy declarations of love over the past ten or so years, and I revisit my doubts about the existence of love. The unconditional, romcom-worthy type of love.
My close friends would be shocked to learn that I was a romantic in my earlier years. I believed in those sweeping feelings of ~love~ cited by Nicholas Sparks. Those thoughts gradually lost their spark to the point where I started defining the concept as simply “the act of caring wholeheartedly for another being more than they care about themselves”. Maybe I’ve been single for four years too long, but I can no longer fathom a realistic version of love. I mean, could someone care for me more than they care about themselves? It follows the same line of reasoning that concludes that humans are motivated by selfishness, that every action is ultimately taken to benefit oneself. Is it true what the pessimists say, that you are all you have in the world? And does it even matter?
Love has a habit of teasing me with hallucinations of unbounded future possibilities, but it eventually eludes me. Sorry, I’m being dramatic. At least I don’t have to worry about having a partner who is driven by insatiable greed to end me.
Stay safe, and avoid assassination, my friends.
[1] For the curious: he wrote that “the way you think and talk feels jumpy and random and figuring out how to respond takes a lot of energy, especially when summed over many interactions”, noting that “‘overwhelming’ is a word that keeps coming to mind when I try to think of how it makes me feel”. You, as a consumer of my thought process, can let me know if you agree. Actually, given my heavy usage of footnotes, maybe the answer should be a resounding “yes”.
[2] He was very particular about avoiding blue light at night, so his phone screen was tinted red, he almost always wore orange glasses at night, and apparently his room only had red lighting. He had many other particularities, but those weren’t so unique to him. In aggregate, though, it felt off. Two years after I met him, I got a notification from Hinge that he’d been banned from the platform due to a safety issue, but I’m in the dark about what actually happened.
[3] Technically, they were in a decade-long partnership and never got married, so “long-term girlfriend” is a more accurate term. Maybe that means something? It’s funny how even in death, we’re discussing the semantics of the relationship – the age-old question, “What are we?”. Kidding. It was complicated, evidently.
[4] Tempted to write WLOG there haha
[5] Yes, I went to Sunday school growing up. My mom is a member of a Chinese Protestant church, if that’s useful to know.
[6] That’s a story for another day.
[7] They hired a former Chinese ballet film star as their teacher and make these elaborate slideshow videos of their dance routine. I find it endearing.
[8] Unimportant detail: paid for by Jane.
[9] Speaking in terms of unconditional probabilities.
[10] As far as I know, this belief is a completely unfounded hunch.
[11] I once heard a quote from Mike Crittenden: “Pessimists sound smart. Optimists make money.”
[12] In my opinion, but as always, I welcome disagreement.
[13] Fill in your favorite 骂人话 [mà rén huà; curse word].
[14] Mixed race
[15] bái rén; white person
[16] I matchmade them, so I do think they are similar, but his evidence is a stretch even for me.
[17] It’s so public and permanent. I expect myself and my views to change in the future. I’d hate to be canceled for certain thoughts I have today and certain arguments I make today because I might want to amend those later.
[18] Estimated probabilities vary. Don’t come at me lol. This value isn’t that important to my point.