Independent Events

Mathematical Statistics

Samir Orujov, PhD

ADA University, School of Business

Information Communication Technologies Agency, Statistics Unit

2025-10-05

Overview

Today’s Journey

  • 📐 Independent Events Definition
  • 🔗 Properties of Independence
  • 🎲 Multiple Event Independence
  • 🌍 Real-world Applications

Learning Objectives

  • ✅ Understand independence concept
  • 🧮 Apply independence formulas
  • ⚖️ Distinguish independence from mutual exclusivity
  • 🎯 Solve complex probability problems

Think-Pair-Share: Card Intuition

🃏 Scenario: Drawing a card from a standard deck…

Think (30 seconds): Does knowing a card is a spade affect the probability it’s an ace?

👥 Pair (1 minute): Discuss with your neighbor

🗣️ Share: Let’s hear your reasoning!

Definition of Independent Events

Definition #1

Two events \(E\) and \(F\) are said to be independent if:

\[P(EF) = P(E)P(F)\]

holds.

Two events \(E\) and \(F\) that are not independent are said to be dependent.

Intuitive Understanding

Key Insight:

\(E\) is independent of \(F\) if knowledge that \(F\) has occurred does not change the probability that \(E\) occurs.

Symmetry Property:

Whenever \(E\) is independent of \(F\), then \(F\) is also independent of \(E\).

🤔 Discussion Moment: Why does this symmetry make intuitive sense?

Example 1: Playing Cards

Setup: A card is selected at random from an ordinary deck of 52 playing cards.

Events:

  • \(E\): The selected card is an ace
  • \(F\): The selected card is a spade

Question: Are \(E\) and \(F\) independent?

🎯 Quick Poll: What do you think?

    1. Yes, independent
    2. No, dependent
    3. Not enough information

Example 1: Solution

Calculate each probability:

  • \(P(EF) = P(\text{ace of spades}) = \frac{1}{52}\)
  • \(P(E) = P(\text{ace}) = \frac{4}{52}\)
  • \(P(F) = P(\text{spade}) = \frac{13}{52}\)

Check independence:

\[P(E)P(F) = \frac{4}{52} \times \frac{13}{52} = \frac{52}{52^2} = \frac{1}{52} = P(EF)\]

Conclusion

\(E\) and \(F\) are independent! ✓
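This check can be reproduced by brute-force enumeration of the deck. A minimal Python sketch (the helper `prob` and the event functions are our own naming):

```python
from fractions import Fraction
from itertools import product

# Build a 52-card deck as (rank, suit) pairs.
ranks = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]
suits = ["spades", "hearts", "diamonds", "clubs"]
deck = list(product(ranks, suits))

def prob(event):
    """Probability of an event under the uniform distribution on the deck."""
    return Fraction(sum(1 for card in deck if event(card)), len(deck))

E = lambda card: card[0] == "A"        # the card is an ace
F = lambda card: card[1] == "spades"   # the card is a spade
EF = lambda card: E(card) and F(card)  # ace of spades

# Independence: P(EF) = P(E)P(F) = 1/52.
assert prob(EF) == prob(E) * prob(F) == Fraction(1, 52)
```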

Example 2: Two Coins

Experiment: Two coins are flipped, and all 4 outcomes are assumed equally likely.

Sample Space: \(S = \{HH, HT, TH, TT\}\)

Events:

  • \(E\): First coin lands on heads
  • \(F\): Second coin lands on tails

🧠 Think-Write-Pair: Take 1 minute to determine if \(E\) and \(F\) are independent.

Example 2: Solution

Identify outcomes:

  • \(E = \{HH, HT\}\), so \(P(E) = \frac{2}{4} = \frac{1}{2}\)
  • \(F = \{HT, TT\}\), so \(P(F) = \frac{2}{4} = \frac{1}{2}\)
  • \(EF = \{HT\}\), so \(P(EF) = \frac{1}{4}\)

Verify independence:

\[P(E)P(F) = \frac{1}{2} \times \frac{1}{2} = \frac{1}{4} = P(EF)\]

Result

\(E\) and \(F\) are independent! ✓

Example 3: Two Dice - Part 1

Experiment: Toss 2 fair dice.

Events:

  • \(E_1\): Sum of the dice equals 6
  • \(F\): First die equals 4

Question: Are \(E_1\) and \(F\) independent?

⏱️ Group Activity: Work in pairs for 2 minutes

Example 3: Solution - Part 1

Calculate probabilities:

  • \(P(F) = \frac{6}{36} = \frac{1}{6}\) (first die = 4)
  • \(P(E_1) = \frac{5}{36}\) (sums to 6: (1,5), (2,4), (3,3), (4,2), (5,1))
  • \(P(E_1F) = \frac{1}{36}\) (only (4,2))

Check independence: \[P(E_1)P(F) = \frac{5}{36} \times \frac{1}{6} = \frac{5}{216} \neq \frac{1}{36} = P(E_1F)\]

Conclusion

\(E_1\) and \(F\) are not independent! ✗

Question #1: Two Dice - Part 2

New Event:

  • \(E_2\): Sum of the dice equals 7

Question: Is \(E_2\) independent of \(F\)?

🎯 Your Challenge: Verify independence

⏱️ Time: 2 minutes individually

Question #1: Solution

Calculate probabilities:

  • \(P(E_2) = \frac{6}{36} = \frac{1}{6}\) (sums to 7: (1,6), (2,5), (3,4), (4,3), (5,2), (6,1))
  • \(P(F) = \frac{1}{6}\) (first die = 4)
  • \(P(E_2F) = \frac{1}{36}\) (only (4,3))

Check independence:

\[P(E_2)P(F) = \frac{1}{6} \times \frac{1}{6} = \frac{1}{36} = P(E_2F)\]

Conclusion

\(E_2\) and \(F\) are independent! ✓
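Both dice results can be confirmed in a single enumeration over the 36 equally likely outcomes. A small sketch (the `prob` helper and event names are ours):

```python
from fractions import Fraction
from itertools import product

dice = list(product(range(1, 7), repeat=2))  # 36 equally likely outcomes

def prob(event):
    return Fraction(sum(1 for o in dice if event(o)), len(dice))

E1 = lambda o: o[0] + o[1] == 6   # sum of the dice is 6
E2 = lambda o: o[0] + o[1] == 7   # sum of the dice is 7
F  = lambda o: o[0] == 4          # first die shows 4

# Sum = 6 depends on the first die; sum = 7 does not.
assert prob(lambda o: E1(o) and F(o)) != prob(E1) * prob(F)
assert prob(lambda o: E2(o) and F(o)) == prob(E2) * prob(F)
```

Intuitively: whatever the first die shows, exactly one value of the second die makes the sum 7, but a sum of 6 is impossible when the first die shows 6.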

Proposition #1: Complement Property

Proposition #1

If \(E\) and \(F\) are independent, then so are \(E\) and \(F^c\).

🤔 Discussion: Why is this property useful in practice?

Hint: Often easier to compute probability of complement!
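A short proof from Definition #1, using the fact that \(EF\) and \(EF^c\) partition \(E\):

```latex
\begin{align*}
P(EF^{c}) &= P(E) - P(EF)
  && \text{(since } EF \text{ and } EF^{c} \text{ partition } E\text{)} \\
&= P(E) - P(E)P(F)
  && \text{(independence of } E \text{ and } F\text{)} \\
&= P(E)\bigl(1 - P(F)\bigr) = P(E)\,P(F^{c}).
\end{align*}
```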

Think-Pair-Share: Three Events

🧩 Scenario:

Suppose \(E\) is independent of \(F\) and \(E\) is also independent of \(G\).

Question #2: Is \(E\) necessarily independent of \(FG\)?

🤔 Think (1 minute): Form a hypothesis

👥 Pair (2 minutes): Test your hypothesis with examples

Question #2: Counterexample

Example 4:

Two fair dice are thrown.

Events:

  • \(E\): Sum of the dice is 7
  • \(F\): First die equals 4
  • \(G\): Second die equals 3

🎯 Group Challenge:

  1. Verify \(E\) is independent of \(F\)
  2. Verify \(E\) is independent of \(G\)
  3. Check if \(E\) is independent of \(FG\)

Example 4: Solution

Verifying independence:

  • \(P(E) = \frac{1}{6}\), \(P(F) = \frac{1}{6}\), \(P(G) = \frac{1}{6}\)
  • \(P(EF) = \frac{1}{36} = P(E)P(F)\) ✓ Independent
  • \(P(EG) = \frac{1}{36} = P(E)P(G)\) ✓ Independent

But what about \(FG\)?

  • \(FG = \{(4,3)\}\), so \(P(FG) = \frac{1}{36}\)
  • \(EFG = \{(4,3)\}\), so \(P(EFG) = \frac{1}{36}\)
  • \(P(E)P(FG) = \frac{1}{6} \times \frac{1}{36} = \frac{1}{216} \neq \frac{1}{36} = P(EFG)\)

Answer

NO! Pairwise independence ≠ independence of intersection
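The counterexample can be checked mechanically over the 36 outcomes. A brute-force sketch (the `prob` helper is our own):

```python
from fractions import Fraction
from itertools import product

dice = list(product(range(1, 7), repeat=2))

def prob(event):
    return Fraction(sum(1 for o in dice if event(o)), len(dice))

E = lambda o: o[0] + o[1] == 7   # sum is 7
F = lambda o: o[0] == 4          # first die is 4
G = lambda o: o[1] == 3          # second die is 3

# Pairwise independence holds...
assert prob(lambda o: E(o) and F(o)) == prob(E) * prob(F)
assert prob(lambda o: E(o) and G(o)) == prob(E) * prob(G)

# ...but E is NOT independent of FG: knowing F and G pins the sum down.
FG = lambda o: F(o) and G(o)
assert prob(lambda o: E(o) and FG(o)) != prob(E) * prob(FG)
```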

Definition #2: Three Independent Events

Definition #2

The events \(E\), \(F\), and \(G\) are said to be independent if:

\[P(EFG) = P(E)P(F)P(G)\] \[P(EF) = P(E)P(F)\] \[P(EG) = P(E)P(G)\] \[P(FG) = P(F)P(G)\]

💡 Key Point: ALL four conditions must hold!

Remark #1: Event Combinations

Important Property:

If \(E\), \(F\), and \(G\) are independent, then \(E\) will be independent of any event formed from \(F\) and \(G\).

Examples of events formed from \(F\) and \(G\):

  • \(F \cup G\)
  • \(F^c \cap G\)
  • \(F \cup G^c\)
  • etc.

Definition #3: n Independent Events

Definition #3

The events \(E_1, E_2, \ldots, E_n\) are said to be independent if for every subset \(E_1', E_2', \ldots, E_r'\), \(r \leq n\), of these events:

\[P(E_1'E_2' \cdots E_r') = P(E_1')P(E_2') \cdots P(E_r')\]

Infinite sets: We define an infinite set of events to be independent if every finite subset is independent.

Example 5: Independent Trials

Setup: An infinite sequence of independent trials is performed. Each trial results in:

  • Success with probability \(p\)
  • Failure with probability \(1-p\)

What is the probability that:

  1. At least 1 success occurs in the first \(n\) trials?

🎯 Student Activity: Work this out before we reveal the answer!

⏱️ Time: 2 minutes

Example 5: Part (a) Solution

Question: P(at least 1 success in first \(n\) trials)?

Strategy: Use complement!

Solution:

Let \(E_i\) = success on trial \(i\)

\[P(\text{at least 1 success}) = 1 - P(\text{all failures})\] \[= 1 - P(E_1^c E_2^c \cdots E_n^c)\] \[= 1 - P(E_1^c)P(E_2^c) \cdots P(E_n^c)\] \[= 1 - (1-p)^n\]
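The formula can be sanity-checked against direct enumeration of all \(2^n\) outcome sequences. A small sketch (the function name `p_at_least_one` is our own):

```python
from fractions import Fraction
from itertools import product

def p_at_least_one(p, n):
    """Brute-force P(at least 1 success in n trials): sum over all 2^n sequences."""
    total = Fraction(0)
    for outcome in product([0, 1], repeat=n):  # 1 = success, 0 = failure
        if any(outcome):
            weight = Fraction(1)
            for x in outcome:
                weight *= p if x else (1 - p)
            total += weight
    return total

p, n = Fraction(1, 3), 5
# Enumeration agrees exactly with the complement formula 1 - (1-p)^n.
assert p_at_least_one(p, n) == 1 - (1 - p) ** n
```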

Example 5: More Questions

What is the probability that:

  2. Exactly \(k\) successes occur in the first \(n\) trials?
  3. All trials result in successes?

👥 Pair Work: Take 3 minutes to solve parts (b) and (c) with a partner

Example 5: Part (b) Solution

Question: P(exactly \(k\) successes in first \(n\) trials)?

Key Insight: This is a binomial distribution!

Solution:

  • Choose which \(k\) trials are successes: \(\binom{n}{k}\) ways
  • Each specific sequence has probability \(p^k(1-p)^{n-k}\)

\[P(\text{exactly } k \text{ successes}) = \binom{n}{k}p^k(1-p)^{n-k}\]
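A quick numerical sanity check: the binomial probabilities over \(k = 0, \ldots, n\) must sum to 1. A sketch in exact arithmetic (the function name is ours):

```python
from fractions import Fraction
from math import comb

def p_exactly_k(p, n, k):
    """Binomial probability of exactly k successes in n independent trials."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

p, n = Fraction(2, 5), 6
# The pmf must sum to 1 over k = 0, ..., n (binomial theorem).
assert sum(p_exactly_k(p, n, k) for k in range(n + 1)) == 1
```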

Example 5: Part (c) Solution

Question: P(all trials result in successes)?

This is asking: P(infinite sequence of successes)

Solution:

For all \(n\) trials to be successes: \[P(\text{first } n \text{ all successes}) = p^n\]

For infinite trials: \[P(\text{all infinite trials successes}) = \lim_{n \to \infty} p^n = 0\]

(assuming \(0 < p < 1\))

Example 6: Dice Competition

Setup: Independent trials of rolling a pair of fair dice are performed.

Question: What is the probability that an outcome of 5 appears before an outcome of 7?

(The outcome of a roll is the sum of the dice)

🧠 Group Discussion: What strategy should we use?

⏱️ Time: 2 minutes to brainstorm

Example 6: Solution Approach

First, find basic probabilities:

  • \(P(\text{sum} = 5) = \frac{4}{36} = \frac{1}{9}\)
    • Outcomes: (1,4), (2,3), (3,2), (4,1)
  • \(P(\text{sum} = 7) = \frac{6}{36} = \frac{1}{6}\)
  • \(P(\text{neither 5 nor 7}) = 1 - \frac{1}{9} - \frac{1}{6} = \frac{13}{18}\)

Strategy: Use a general formula…

Formula #1: First Occurrence

Formula #1

If \(E\) and \(F\) are mutually exclusive events of an experiment, then when independent trials of the experiment are performed, the event \(E\) will occur before \(F\) with probability:

\[P(E \text{ before } F) = \frac{P(E)}{P(E) + P(F)}\]

Example 6: Final Solution

Apply Formula #1:

\[P(5 \text{ before } 7) = \frac{P(\text{sum} = 5)}{P(\text{sum} = 5) + P(\text{sum} = 7)}\] \[= \frac{\frac{1}{9}}{\frac{1}{9} + \frac{1}{6}} = \frac{\frac{1}{9}}{\frac{5}{18}} = \frac{1}{9} \times \frac{18}{5} = \frac{2}{5}\]

Answer

The probability is \(\frac{2}{5}\) or 0.4
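Formula #1 and the final arithmetic can be double-checked with exact rational arithmetic. A tiny sketch (the `before` helper is our own):

```python
from fractions import Fraction

def before(p_E, p_F):
    """P(E occurs before F) in repeated independent trials, E and F mutually exclusive."""
    return p_E / (p_E + p_F)

p5 = Fraction(4, 36)   # sum of two dice equals 5
p7 = Fraction(6, 36)   # sum of two dice equals 7
assert before(p5, p7) == Fraction(2, 5)
```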

Think-Pair-Share: Coupon Problem

Example 7 (HW):

Suppose there are \(n\) types of coupons. Each new coupon collected is, independent of previous selections, a type \(i\) coupon with probability \(p_i\), where \(\sum_{i=1}^{n} p_i = 1\).

Suppose \(k\) coupons are collected. Let \(A_i\) = event that there is at least one type \(i\) coupon among those collected.

For \(i \neq j\), find:

  1. \(P(A_i)\)
  2. \(P(A_i \cup A_j)\)
  3. \(P(A_i | A_j)\)

Example 7: Hints for Solutions

Part (a) - Hint

Use complement! What’s the probability of NOT getting any type \(i\) coupon?

Answer: \(P(A_i) = 1 - (1-p_i)^k\)

Part (b) - Hint

Use inclusion-exclusion principle: \(P(A \cup B) = P(A) + P(B) - P(AB)\)

Answer: \(P(A_i \cup A_j) = 1 - (1-p_i)^k - (1-p_j)^k + (1-p_i-p_j)^k\)

Part (c) - Hint

Use conditional probability formula and think about what changes given \(A_j\).

Example 8: The Problem of the Points

Classic Problem

Independent trials resulting in a success with probability \(p\) and a failure with probability \(1-p\) are performed.

Question: What is the probability that \(n\) successes occur before \(m\) failures?

🏆 Challenge Problem: Work on this in study groups!

Example 8: Solution Strategy

Key Insight: The game ends after at most \(n + m - 1\) trials.

The trick:

Imagine that all \(n+m-1\) trials are performed, regardless of when the game is decided. Then \(n\) successes occur before \(m\) failures if and only if at least \(n\) of these trials are successes.

\[P = \sum_{k=n}^{n+m-1} \binom{n+m-1}{k} p^k (1-p)^{n+m-1-k}\]

Equivalently: fewer than \(n\) successes among the \(n+m-1\) trials means at least \(m\) failures, i.e., \(m\) failures occur before \(n\) successes.
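The sum can be coded directly; two sanity checks follow from symmetry (the function name is ours):

```python
from fractions import Fraction
from math import comb

def n_successes_before_m_failures(p, n, m):
    """P(n successes occur before m failures): at least n successes in n+m-1 trials."""
    N = n + m - 1
    return sum(comb(N, k) * p**k * (1 - p)**(N - k) for k in range(n, N + 1))

# Symmetric case: fair trials, equal targets, must give 1/2.
assert n_successes_before_m_failures(Fraction(1, 2), 3, 3) == Fraction(1, 2)

# Complement: either n successes come first or m failures do.
p, n, m = Fraction(2, 3), 4, 2
assert (n_successes_before_m_failures(p, n, m)
        + n_successes_before_m_failures(1 - p, m, n)) == 1
```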

Example 9: Service Protocol (HW)

Tennis/Volleyball Serving Rules:

  • Rally with A serving: A wins with probability \(p_A\), B wins with \(q_A = 1-p_A\)
  • Rally with B serving: A wins with probability \(p_B\), B wins with \(q_B = 1-p_B\)
  • Player A is initial server

Two protocols under consideration:

  1. “Winner serves” - winner of rally serves next
  2. “Alternating serve” - players alternate serving

Question: If you were player A, which protocol would you prefer?

Example 9: Analysis Hints

Winner Serves Protocol

Let \(P_A\) = probability that A eventually wins the game when A is serving, and \(P_B\) = the same probability when B is serving

Think about what happens after first rally…

Set up: \(P_A = p_A P_A + q_A P_B\) where \(P_B\) satisfies \(P_B = p_B P_A + q_B P_B\)

Alternating Serves Protocol

More complex! Need to track:

  • Who is serving
  • Current score
  • Pattern of serves

Example 10: The Gambler’s Ruin Problem

Classic Problem

Two gamblers, A and B, bet on outcomes of successive coin flips.

  • Heads: A collects 1 unit from B
  • Tails: A pays 1 unit to B
  • Continue until someone runs out of money

Question: If A starts with \(i\) units and B starts with \(N-i\) units, what is the probability that A ends up with all the money?

(Each flip results in heads with probability \(p\))

Example 10: Solution Setup

Let \(P_i\) = probability A wins starting with \(i\) units

Boundary conditions:

  • \(P_0 = 0\) (A already lost)
  • \(P_N = 1\) (A already won)

Recursive relation:

After first flip, A has either \(i+1\) or \(i-1\) units:

\[P_i = p \cdot P_{i+1} + (1-p) \cdot P_{i-1}\]

Example 10: General Solution

Case 1: Fair coin (\(p = \frac{1}{2}\))

\[P_i = \frac{i}{N}\]

Case 2: Biased coin (\(p \neq \frac{1}{2}\))

\[P_i = \frac{1 - \left(\frac{1-p}{p}\right)^i}{1 - \left(\frac{1-p}{p}\right)^N}\]

🎲 Insight: If the game is unfair (\(p \neq \frac{1}{2}\)), the winning probability depends strongly on the ratio \(\frac{1-p}{p}\)!
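Both closed forms can be verified against the recursion \(P_i = p\,P_{i+1} + (1-p)\,P_{i-1}\) and the boundary conditions, using exact arithmetic. A sketch (the `ruin_prob` helper is our own):

```python
from fractions import Fraction

def ruin_prob(p, i, N):
    """P(A reaches N units before 0, starting from i), coin with P(heads) = p."""
    if p == Fraction(1, 2):
        return Fraction(i, N)
    r = (1 - p) / p              # the key ratio (1-p)/p
    return (1 - r**i) / (1 - r**N)

for p in (Fraction(1, 2), Fraction(3, 5)):
    N = 10
    P = [ruin_prob(p, i, N) for i in range(N + 1)]
    # Boundary conditions: A has already lost at 0, already won at N.
    assert P[0] == 0 and P[N] == 1
    # Interior points satisfy the first-step recursion exactly.
    assert all(P[i] == p * P[i + 1] + (1 - p) * P[i - 1] for i in range(1, N))
```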

Group Activity: Drug Test Analysis

Example 11 (HW):

Two new drugs for treating a disease have cure rates \(P_1\) and \(P_2\) (unknown).

Testing procedure:

  • Treat pairs of patients sequentially (one gets drug 1, other gets drug 2)
  • Stop when cumulative cures from one drug exceeds the other by a fixed amount \(d\)

Question: For given \(P_1\) and \(P_2\) where \(P_1 > P_2\), what is the probability the test incorrectly asserts \(P_2 > P_1\)?

Example 11: Problem Structure

This is related to:

  • Random walks
  • Sequential analysis
  • Gambler’s ruin problem!

Key differences:

  • Four possible outcomes per pair: (cure, cure), (cure, no cure), (no cure, cure), (no cure, no cure)
  • Only the difference in cure counts matters

🔬 Research Project: Develop the full solution as homework!

Interactive Review: Quick Quiz

Q1: If \(E\) and \(F\) are independent and \(P(E) = 0.3\), \(P(F) = 0.5\), what is \(P(EF)\)?

Interactive Review: Quiz (cont.)

Q2: If events are mutually exclusive, can they be independent?

Q3: How many conditions must be checked to verify three events are independent?

Key Concepts Summary

🎯 Main Takeaways:

  • Independence: \(P(EF) = P(E)P(F)\)
  • Independence ≠ Mutual Exclusivity
  • Pairwise independence ≠ Full independence
  • Complement property is powerful

🔗 Connections:

  • Conditional probability: \(P(E|F) = P(E)\) when independent
  • Multiplication rule simplifies for independent events
  • Bernoulli trials are independent

Real-World Applications

Where independence matters:

  • 🏭 Quality control: defects in manufacturing
  • 📈 Finance: returns on different stocks (often NOT independent!)
  • 💊 Medicine: treatment outcomes for different patients
  • ⚽ Sports: game outcomes (if truly independent)
  • 🔧 Reliability engineering: component failures

Warning

⚠️ Warning: Always verify independence assumptions!

Common Pitfalls

❌ Mistake #1

Assuming independence without verification

Example: Card draws without replacement are NOT independent

❌ Mistake #2

Confusing independence with mutual exclusivity

Key: Mutually exclusive events (with positive probability) cannot be independent!

❌ Mistake #3

Assuming pairwise independence implies full independence

Remember: Need to check ALL subset products!

Problem-Solving Strategies

Strategy #1: “At least one”

Use complement: \(P(\text{at least one}) = 1 - P(\text{none})\)

Strategy #2: Independent events

Probabilities multiply: \[P(E_1 E_2 \cdots E_n) = P(E_1)P(E_2) \cdots P(E_n)\]

Strategy #3: “Before” problems

Use Formula #1 or set up recursive equations

Homework Problems

📚 Required:

  1. Example 7 (Coupon Problem) - all parts
  2. Example 9 (Service Protocol) - full analysis
  3. Example 11 (Drug Test) - derive probability formula

🏆 Optional Challenges:

  1. Prove Formula #1 (first occurrence)
  2. Derive full solution to Example 10 (Gambler’s Ruin)
  3. Create your own independence problem

Next Class Preview

🔜 Coming Up:

  • Bayes’ Theorem
  • Total Probability Law
  • Advanced conditional probability
  • Applications to real-world inference problems

📚 Preparation:

  • Review conditional probability
  • Think about how independence relates to conditioning
  • Read textbook sections on Bayes’ Theorem

Final Reflection

🤔 Think About:

  1. How does independence simplify probability calculations?
  2. When is it reasonable to assume independence in real life?
  3. What’s the relationship between independence and information?

💬 Discussion Question:

“Independence means events don’t influence each other” - Is this always true? Can you think of situations where this intuition might mislead you?

✨ Remember: Independence is one of the most powerful concepts in probability theory!

Questions?

Thank you!

Office Hours: Get appointment by email

Contact: sorujov@ada.edu.az