Important fallacies, paradoxes, and biases

Correlation is not causation

"Correlation is not causation" is a principle that reminds us that just because two things are related or occur together, it doesn't mean that one causes the other. Here's a simple explanation with an example:

Imagine you notice that whenever the number of ice cream sales increases, the number of drowning incidents at the beach also increases. It might be tempting to think that buying more ice cream somehow leads to more drownings, or vice versa. However, the correlation between ice cream sales and drownings doesn't mean that buying ice cream causes drownings or that drownings cause people to buy more ice cream.

In reality, both the increase in ice cream sales and the rise in drowning incidents may be driven by a third factor: warm weather. During hot summer days, people are more likely to buy ice cream to cool off, and they're also more likely to visit the beach, increasing the chances of drowning incidents. So, the correlation between ice cream sales and drownings reflects a shared cause (a confounding variable) rather than a direct cause-and-effect relationship.

This example illustrates the importance of being cautious when interpreting correlations and highlights the need to consider other factors before concluding causation.
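
To make this concrete, here is a minimal sketch in Python (with entirely made-up numbers, not real data) in which temperature acts as the hidden confounder: it drives both ice cream sales and drownings, and the two series end up strongly correlated even though neither causes the other.

```python
# Minimal sketch: a confounder (temperature) creates a correlation between
# two variables that have no causal link. All numbers are invented.
import numpy as np

rng = np.random.default_rng(0)

# Simulate 365 days of temperatures (degrees Celsius).
temperature = rng.normal(loc=20, scale=8, size=365)

# Both quantities depend on temperature, not on each other.
# (Drownings are treated as a continuous value purely for illustration.)
ice_cream_sales = 50 + 10 * temperature + rng.normal(0, 20, size=365)
drownings = 0.5 + 0.1 * temperature + rng.normal(0, 0.5, size=365)

# The two series are strongly correlated even though neither causes the other.
corr = np.corrcoef(ice_cream_sales, drownings)[0, 1]
print(f"Correlation between ice cream sales and drownings: {corr:.2f}")

# "Controlling for" the confounder: remove the linear temperature effect from
# each series and correlate the residuals. The apparent relationship vanishes.
fit_sales = np.poly1d(np.polyfit(temperature, ice_cream_sales, 1))
fit_drown = np.poly1d(np.polyfit(temperature, drownings, 1))
resid_corr = np.corrcoef(ice_cream_sales - fit_sales(temperature),
                         drownings - fit_drown(temperature))[0, 1]
print(f"Correlation after controlling for temperature: {resid_corr:.2f}")
```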

Dunning–Kruger effect

The Dunning–Kruger effect is a cognitive bias in which people with limited competence in a particular domain overestimate their abilities. Some researchers also include the opposite effect for high performers: their tendency to underestimate their skills. In popular culture, the Dunning–Kruger effect is often misunderstood as a claim about general overconfidence of people with low intelligence instead of specific overconfidence of people unskilled at a particular task.

Wikipedia: Dunning–Kruger effect

P-Hacking

P-hacking refers to the questionable practice of manipulating or "tweaking" various aspects of statistical analyses in research studies to achieve a statistically significant result, typically represented by a p-value less than a chosen significance level (often 0.05). The p-value is a measure that helps researchers determine if their results are statistically significant or if they might have occurred by chance.

Researchers may engage in p-hacking with the intention of finding patterns or results that seem significant, even if the underlying data do not genuinely support such conclusions. P-hacking can involve various tactics, such as:

  • Selective reporting: Choosing to report only certain aspects of the data that support the desired conclusion while omitting others.

  • Data dredging or fishing: Analyzing data in multiple ways until a statistically significant result is found, without pre-specifying the analysis plan.

  • Stopping data collection when significance is reached: Repeatedly analyzing the data as it accumulates and halting collection as soon as a significant result appears, rather than fixing the sample size in advance (sometimes called optional stopping).

  • Exclusion of data points: Removing outliers or specific data points that would undermine the significance of the findings.

P-hacking is problematic because it can lead to false-positive results, making it seem like there is a significant effect when, in reality, there isn't. It undermines the reliability and validity of scientific research by introducing bias and inflating the likelihood of making Type I errors (incorrectly concluding that an effect exists). To address this issue, researchers are encouraged to follow pre-registered study protocols, clearly state their analysis plans before seeing the data, and report all analyses and results transparently.
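
As a rough illustration (a minimal sketch on simulated data, not drawn from any real study), the snippet below shows the data-dredging tactic: when twenty unrelated outcomes are tested on pure noise, the chance of at least one spuriously "significant" result is far higher than the nominal 5%.

```python
# Minimal sketch of data dredging: test many outcomes on pure noise and see
# how often "significant" results appear by chance. Numbers are arbitrary.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

n_subjects = 30
n_outcomes = 20   # e.g. 20 different outcomes measured in the same study
alpha = 0.05

# Two groups drawn from the *same* distribution: there is no real effect.
group_a = rng.normal(size=(n_outcomes, n_subjects))
group_b = rng.normal(size=(n_outcomes, n_subjects))

p_values = [stats.ttest_ind(a, b).pvalue for a, b in zip(group_a, group_b)]
false_positives = sum(p < alpha for p in p_values)
print(f"{false_positives} of {n_outcomes} comparisons are 'significant' by chance")

# With 20 independent tests at alpha = 0.05, the probability of at least one
# false positive is 1 - 0.95**20, roughly 64%, even though no effect exists.
print(f"P(at least one false positive): {1 - (1 - alpha) ** n_outcomes:.2f}")
```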

Wikipedia: Data dredging

Motte-and-bailey fallacy

The motte-and-bailey fallacy is a way of making an argument that involves switching between two positions: a "Motte" position and a "Bailey" position.

The "Motte" is a defensible but not very controversial position, and the "Bailey" is a more controversial and less defensible position. When someone is challenged on the controversial "Bailey" position, they retreat to the more defensible "Motte" position. Essentially, it's like having a strong fortress ("Motte") and a less defensible but more desirable piece of land ("Bailey"). When under attack, you retreat to the fortress, then return to the desirable land when the threat has passed.

In simpler terms, it's a way of arguing where someone defends an easy-to-defend position but really wants to promote a more controversial idea. If their controversial idea is challenged, they fall back on the safer, more defensible position.

For example, imagine someone arguing that "all desserts are healthy because they contain fruits." If questioned about the healthiness of a specific sugary cake, they might retreat to saying, "Well, I meant fruit salads are healthy." In this case, "fruit salads are healthy" is the defensible "Motte," and the initial claim that "all desserts are healthy" is the controversial "Bailey."

Wikipedia: Motte-and-bailey fallacy

Burden of Proof

The burden of proof is a concept in logic and argumentation that addresses the responsibility one has to provide evidence or support for a claim they make during a debate or discussion. It establishes which side in an argument is required to present evidence to justify or validate their assertions.

In simple terms, the burden of proof asks: Who needs to provide evidence to support their claim?

There are generally two positions in an argument:

  • The Affirmative (or Positive) Claim: This is the side making an assertion, proposing something new, or making a claim about reality.

  • The Negative (or Skeptical) Position: This is the side that questions or rejects the affirmative claim.

The burden of proof usually rests on the party making the affirmative claim. The basic idea is that if you're asserting something new or making a claim, the responsibility is on you to provide evidence to support that claim. The other side is not required to prove the claim false; they can simply point to a lack of evidence or logical flaws in the argument.

For example:

  • Affirmative claim: "This herbal tea can cure insomnia."

    Burden of proof: The person making this claim is responsible for providing evidence, such as scientific studies or testimonials, to support the idea that the herbal tea is an effective cure for insomnia.

  • Negative position: "I don't believe that herbal tea can cure insomnia."

    Burden of proof: The person in the negative position doesn't necessarily have to prove that herbal tea cannot cure insomnia; instead, they can highlight the lack of evidence or flaws in the argument presented by the affirmative side.

In legal contexts, the burden of proof may shift based on the nature of the claim or the type of case. However, in everyday discussions and debates, the general principle is that the person making a claim is expected to provide evidence to support it.

The GI Joe Fallacy: knowing is not half the battle

The phrase "Knowing is half the battle" is a slogan associated with the animated television series "G.I. Joe: A Real American Hero," which aired in the 1980s. The idea behind the slogan is that being informed or aware of a situation is a significant step toward addressing or solving it. However, the concept of the "G.I. Joe Fallacy" suggests that simply knowing about a problem or having information is not enough to effectively address or resolve it.

In other words, while knowledge is valuable, taking action is the other crucial part of the equation. The fallacy highlights the misconception that awareness alone is sufficient for overcoming challenges. It emphasizes the importance of not stopping at acquiring information but also actively applying that knowledge to achieve a desired outcome.

For example, if someone is aware of the negative health effects of a sedentary lifestyle, simply knowing about it is not enough. The person needs to take action by incorporating regular physical activity into their routine to experience the benefits.

In summary, the "G.I. Joe Fallacy" is a reminder that awareness or knowledge is only a part of the solution, and meaningful action is necessary to address and overcome challenges effectively.

Tradeoff and Opportunity Cost

The concepts of tradeoff and opportunity cost are fundamental in economics and can be observed in various aspects of our daily lives.

Tradeoff:

Tradeoffs are omnipresent because resources are limited. In the context of building codes, entrepreneurs face a tradeoff between ensuring consumer safety and the cost of implementing safety measures. For instance, installing sprinklers and safety glass may enhance safety but also increase the overall cost of products. This tradeoff means that entrepreneurs must decide how much safety they can afford without significantly raising prices or hindering their ability to open new businesses.

Example: A coffee shop's decision to install sprinklers is a tradeoff. While sprinklers enhance safety during a fire, they incur additional costs, so the tradeoff is between making the building safer and potentially increasing the price of coffee. The same logic extends to the decision to open a second location, where a more stringent building code increases both the direct monetary costs and the opportunity costs of time and effort. This demonstrates how tradeoffs are inherent in various aspects of life, from business to education.

Opportunity Cost:

Opportunity cost is the value of the next best alternative that is given up when a choice is made. In the scenario of building improvements, the opportunity cost is the best alternative use of the resources spent on safety measures. For instance, the money used for sprinklers and safety glass could have been invested in research and development, marketing, or other aspects of the business. The opportunity cost highlights the potential benefits that are forgone when a particular choice is made.

Understanding these concepts is crucial because they apply not only to monetary decisions but also to choices involving time and resources. Even non-monetary decisions, such as spending time with loved ones, involve opportunity costs that are not easily measured in dollars and cents. Economics provides a framework for comprehending these tradeoffs and opportunity costs, allowing individuals and businesses to make informed decisions in a world where resources are limited.

Example: Jenny's decision to go on a third date with Adam has associated costs like the Uber ride and the chai tea latte. However, the opportunity cost, in this case, is the value of the next best alternative, such as having breakfast with her sister. It emphasizes that decisions involve tradeoffs, and considering alternatives is crucial to avoid wasting scarce resources like time and energy.
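
As a tiny illustration (the options and values below are entirely made up), the following sketch encodes the definition: the opportunity cost of a choice is the value of the single next best alternative that is given up, not the sum of every alternative forgone.

```python
# Minimal sketch: opportunity cost as the value of the next best alternative.
# The options and their subjective values are hypothetical.
options = {
    "third date with Adam": 80,
    "breakfast with her sister": 60,
    "sleeping in": 30,
}

chosen = max(options, key=options.get)

# The opportunity cost is the value of the best *forgone* option only.
opportunity_cost = max(value for name, value in options.items() if name != chosen)

print(f"Chosen: {chosen}")
print(f"Opportunity cost: {opportunity_cost} (value of the next best alternative)")
```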

Marginal Thinking and the Sunk Cost Fallacy

Marginal Thinking:

  • Definition: Marginal thinking involves evaluating the additional or incremental benefit and cost of a specific action when making decisions.
  • Explanation: When faced with a decision, instead of considering the total or overall cost and benefit, marginal thinking focuses on the extra or additional impact of the next unit of the activity. If the marginal benefit outweighs the marginal cost, it is considered a rational decision. This approach allows individuals to optimize their choices by assessing the impact of small changes on the overall outcome.
  • Example: When deciding how long to watch TV, marginal thinking involves considering whether the additional enjoyment gained from watching one more episode exceeds the cost of the time spent. Similarly, a clothing shop using marginal thinking in pricing would assess whether lowering the price slightly would attract more customers without significantly reducing overall profit.

Sunk Cost Fallacy:

  • Definition: The sunk cost fallacy is a cognitive bias in which individuals let resources (time, money, effort) that have already been spent and cannot be recovered influence their future choices.
  • Explanation: Instead of focusing on the marginal benefit and cost of the next action, people fall into the sunk cost fallacy by dwelling on what has already been invested. This fallacy can lead to irrational decision-making, as past investments should not impact the assessment of future opportunities. The key idea is to recognize that sunk costs are irreversible and should not be a primary factor in determining the value of future actions.
  • Example: If someone has paid for a ticket to a movie they find boring, applying marginal thinking would involve evaluating whether staying for the rest of the film provides more enjoyment than leaving. The sunk cost fallacy occurs if the decision is influenced by the fact that money has already been spent on the ticket, despite the lack of enjoyment.

In summary, while marginal thinking involves comparing the additional benefits and costs of the next action to optimize decisions, the sunk cost fallacy warns against letting past investments unduly impact future choices. It's about focusing on what can be changed in the future rather than dwelling on irreversible past expenditures.
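
To make the contrast concrete, here is a minimal sketch of the boring-movie example (the numbers and function names are hypothetical): the rational rule compares only the marginal benefit and marginal cost of staying, while the fallacious rule wrongly lets the already-spent ticket price tip the decision.

```python
# Minimal sketch: marginal thinking vs. the sunk cost fallacy.
# All values are made up; "utility units" are purely illustrative.

def stay_marginal(marginal_benefit, marginal_cost):
    """Rational rule: stay only if the extra enjoyment of watching more
    exceeds the value of spending that time elsewhere. Sunk costs ignored."""
    return marginal_benefit > marginal_cost

def stay_sunk_cost(marginal_benefit, marginal_cost, ticket_price):
    """Fallacious rule: the unrecoverable ticket price is (wrongly) counted
    as a reason to keep watching."""
    return marginal_benefit + ticket_price > marginal_cost

enjoyment_of_staying = 2     # the movie is boring
value_of_time_elsewhere = 5  # what the next hour is worth doing something else
ticket_price = 12            # already paid and unrecoverable

print("Marginal thinking says stay:",
      stay_marginal(enjoyment_of_staying, value_of_time_elsewhere))
print("Sunk cost reasoning says stay:",
      stay_sunk_cost(enjoyment_of_staying, value_of_time_elsewhere, ticket_price))
```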

Nudge Theory

Nudge theory involves subtly influencing individuals' behavior without imposing mandates. In education, the expert or teacher acts as a "nudger," guiding students to recognize the relevance and benefits of a subject. By highlighting the practical applications and fostering curiosity, they encourage a positive attitude toward learning. This approach aims to gently steer individuals toward making informed choices and embracing the intrinsic value of education.

In 2008, Richard Thaler and Cass Sunstein's book Nudge: Improving Decisions About Health, Wealth, and Happiness brought nudge theory to prominence. The authors refer to influencing behavior without coercion as libertarian paternalism and to the influencers as choice architects.

Thaler and Sunstein defined their concept as the following:

A nudge, as we will use the term, is any aspect of the choice architecture that alters people's behavior in a predictable way without forbidding any options or significantly changing their economic incentives. To count as a mere nudge, the intervention must be easy and cheap to avoid. Nudges are not mandates. Putting fruit at eye level counts as a nudge. Banning junk food does not.

Nudge theory offers a cost-effective approach to influencing behavior because it focuses on small, strategic interventions that are easy and cheap to implement, non-intrusive, and scalable. By encouraging intrinsic motivation and voluntary participation, nudge interventions can lead to long-term behavioral change without the need for significant financial investments or disruptive changes to existing systems.

Non-intrusive refers to actions or interventions that do not interfere with or disrupt the normal functioning, routine, or autonomy of an individual or system.