Mathematical Statistics: Independent Events
ADA University, School of Business
Information Communication Technologies Agency, Statistics Unit
2025-10-05
🃏 Scenario: Drawing a card from a standard deck…
Think (30 seconds): Does knowing a card is a spade affect the probability it’s an ace?
👥 Pair (1 minute): Discuss with your neighbor
🗣️ Share: Let’s hear your reasoning!
Definition #1
Two events \(E\) and \(F\) are said to be independent if:
\[P(EF) = P(E)P(F)\]
holds.
Two events \(E\) and \(F\) that are not independent are said to be dependent.
Key Insight:
\(E\) is independent of \(F\) if knowledge that \(F\) has occurred does not change the probability that \(E\) occurs.
Symmetry Property:
Whenever \(E\) is independent of \(F\), then \(F\) is also independent of \(E\).
🤔 Discussion Moment: Why does this symmetry make intuitive sense?
Setup: A card is selected at random from an ordinary deck of 52 playing cards.
Events: \(E\) = the selected card is an ace; \(F\) = the selected card is a spade
Question: Are \(E\) and \(F\) independent?
🎯 Quick Poll: What do you think?
Calculate each probability:
Check independence:
\[P(E)P(F) = \frac{4}{52} \times \frac{13}{52} = \frac{52}{52^2} = \frac{1}{52} = P(EF)\]
Conclusion
\(E\) and \(F\) are independent! ✓
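The check can be reproduced by brute-force enumeration of the deck. A minimal sketch (reading off \(E\) = "the card is an ace" and \(F\) = "the card is a spade" from the probabilities above), using exact rational arithmetic so no rounding can hide a failure:

```python
from fractions import Fraction
from itertools import product

# Build the 52-card deck as (rank, suit) pairs.
ranks = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]
suits = ["spades", "hearts", "diamonds", "clubs"]
deck = list(product(ranks, suits))

n = len(deck)  # 52
E = [c for c in deck if c[0] == "A"]        # card is an ace
F = [c for c in deck if c[1] == "spades"]   # card is a spade
EF = [c for c in E if c in F]               # ace of spades only

pE, pF, pEF = (Fraction(len(x), n) for x in (E, F, EF))
print(pE * pF == pEF)  # True: 4/52 * 13/52 == 1/52
```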
Experiment: Two coins are flipped, and all 4 outcomes are assumed equally likely.
Sample Space: \(S = \{HH, HT, TH, TT\}\)
Events:
🧠 Think-Write-Pair: Take 1 minute to determine if \(E\) and \(F\) are independent.
Identify outcomes:
Verify independence:
\[P(E)P(F) = \frac{1}{2} \times \frac{1}{2} = \frac{1}{4} = P(EF)\]
Result
\(E\) and \(F\) are independent! ✓
Experiment: Toss 2 fair dice.
Events: \(E_1\) = the sum of the dice equals 6; \(F\) = the first die lands on 4
Question: Are \(E_1\) and \(F\) independent?
⏱️ Group Activity: Work in pairs for 2 minutes
Calculate probabilities:
Check independence: \[P(E_1)P(F) = \frac{5}{36} \times \frac{1}{6} = \frac{5}{216} \neq \frac{1}{36} = P(E_1F)\]
Conclusion
\(E_1\) and \(F\) are not independent! ✗
New Event: \(E_2\) = the sum of the dice equals 7
Question: Is \(E_2\) independent of \(F\)?
🎯 Your Challenge: Verify independence
⏱️ Time: 2 minutes individually
Calculate probabilities:
Check independence:
\[P(E_2)P(F) = \frac{1}{6} \times \frac{1}{6} = \frac{1}{36} = P(E_2F)\]
Conclusion
\(E_2\) and \(F\) are independent! ✓
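Both dice checks can be verified by enumerating the 36 outcomes. This sketch assumes the events consistent with the probabilities above (\(E_1\) = "sum is 6", \(E_2\) = "sum is 7", \(F\) = "first die shows 4"):

```python
from fractions import Fraction
from itertools import product

# All 36 equally likely outcomes of rolling two fair dice.
outcomes = list(product(range(1, 7), repeat=2))

def P(event):
    """Exact probability of an event (a set of outcomes)."""
    return Fraction(len(event), len(outcomes))

# Assumed events, chosen to match the stated probabilities:
E1 = {o for o in outcomes if sum(o) == 6}  # P = 5/36
E2 = {o for o in outcomes if sum(o) == 7}  # P = 6/36 = 1/6
F = {o for o in outcomes if o[0] == 4}     # P = 6/36 = 1/6

print(P(E1) * P(F) == P(E1 & F))  # False: 5/216 != 1/36 (dependent)
print(P(E2) * P(F) == P(E2 & F))  # True:  1/36 == 1/36 (independent)
```

Intuitively, knowing the first die is 4 changes the chance the sum is 6 (it becomes 1/6, up from 5/36), but not the chance the sum is 7, since every value of the first die leaves exactly one completion to 7.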
Proposition #1
If \(E\) and \(F\) are independent, then so are \(E\) and \(F^c\).
🤔 Discussion: Why is this property useful in practice?
Hint: Often easier to compute probability of complement!
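Why the proposition holds: since \(E = EF \cup EF^c\) with \(EF\) and \(EF^c\) mutually exclusive, the proof takes two lines:

```latex
\begin{aligned}
P(EF^c) &= P(E) - P(EF) \\
        &= P(E) - P(E)P(F) && \text{(independence of } E \text{ and } F\text{)} \\
        &= P(E)\,[1 - P(F)] = P(E)\,P(F^c).
\end{aligned}
```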
🧩 Scenario:
Suppose \(E\) is independent of \(F\) and \(E\) is also independent of \(G\).
Question #2: Is \(E\) necessarily independent of \(FG\)?
🤔 Think (1 minute): Form a hypothesis
👥 Pair (2 minutes): Test your hypothesis with examples
Example 4:
Two fair dice are thrown.
Events:
🎯 Group Challenge:
Verifying independence:
But what about \(FG\)?
Answer
NO! \(E\) can be independent of \(F\) and of \(G\) separately, yet dependent on \(FG\).
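A concrete counterexample can be checked by enumeration. The sketch below assumes the classic choice of events for two fair dice (\(E\) = "sum is 7", \(F\) = "first die shows 4", \(G\) = "second die shows 3"):

```python
from fractions import Fraction
from itertools import product

outcomes = list(product(range(1, 7), repeat=2))

E = {o for o in outcomes if sum(o) == 7}  # sum equals 7
F = {o for o in outcomes if o[0] == 4}    # first die shows 4
G = {o for o in outcomes if o[1] == 3}    # second die shows 3

def P(event):
    return Fraction(len(event), len(outcomes))

print(P(E & F) == P(E) * P(F))            # True:  E independent of F
print(P(E & G) == P(E) * P(G))            # True:  E independent of G
print(P(E & F & G) == P(E) * P(F & G))    # False: E NOT independent of FG
```

Here \(FG = \{(4,3)\}\), and \((4,3)\) happens to sum to 7, so \(P(E \mid FG) = 1 \neq P(E) = 1/6\).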
Definition #2
The events \(E\), \(F\), and \(G\) are said to be independent if:
\[P(EFG) = P(E)P(F)P(G)\] \[P(EF) = P(E)P(F)\] \[P(EG) = P(E)P(G)\] \[P(FG) = P(F)P(G)\]
💡 Key Point: ALL four conditions must hold!
Important Property:
If \(E\), \(F\), and \(G\) are independent, then \(E\) will be independent of any event formed from \(F\) and \(G\).
Examples of events formed from \(F\) and \(G\):
Definition #3
The events \(E_1, E_2, \ldots, E_n\) are said to be independent if for every subset \(E_1', E_2', \ldots, E_r'\), \(r \leq n\), of these events:
\[P(E_1'E_2' \cdots E_r') = P(E_1')P(E_2') \cdots P(E_r')\]
Infinite sets: We define an infinite set of events to be independent if every finite subset is independent.
Setup: An infinite sequence of independent trials is performed. Each trial results in a success with probability \(p\) or a failure with probability \(1-p\).
What is the probability that:
🎯 Student Activity: Work this out before we reveal the answer!
⏱️ Time: 2 minutes
Question: P(at least 1 success in first \(n\) trials)?
Strategy: Use complement!
Solution:
Let \(E_i\) = success on trial \(i\)
\[P(\text{at least 1 success}) = 1 - P(\text{all failures})\] \[= 1 - P(E_1^c E_2^c \cdots E_n^c)\] \[= 1 - P(E_1^c)P(E_2^c) \cdots P(E_n^c)\] \[= 1 - (1-p)^n\]
What is the probability that:
👥 Pair Work: Take 3 minutes to solve parts (b) and (c) with a partner
Question: P(exactly \(k\) successes in first \(n\) trials)?
Key Insight: This is a binomial distribution!
Solution:
\[P(\text{exactly } k \text{ successes}) = \binom{n}{k}p^k(1-p)^{n-k}\]
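The same enumeration idea verifies the binomial formula: each sequence with exactly \(k\) successes has probability \(p^k(1-p)^{n-k}\), and there are \(\binom{n}{k}\) such sequences. A sketch with illustrative values:

```python
from math import comb
from itertools import product

# Check P(exactly k successes in n trials) = C(n,k) p^k (1-p)^(n-k)
# by enumerating all success/failure sequences for a small n.
p, n, k = 0.3, 6, 2

total = 0.0
for trials in product([True, False], repeat=n):
    if sum(trials) == k:            # exactly k successes
        prob = 1.0
        for s in trials:
            prob *= p if s else (1 - p)
        total += prob

formula = comb(n, k) * p**k * (1 - p) ** (n - k)
print(abs(total - formula) < 1e-12)  # True
```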
Question: P(all trials result in successes)?
This is asking: P(infinite sequence of successes)
Solution:
For all \(n\) trials to be successes: \[P(\text{first } n \text{ all successes}) = p^n\]
For infinite trials: \[P(\text{all infinite trials successes}) = \lim_{n \to \infty} p^n = 0\]
(assuming \(0 < p < 1\))
Setup: Independent trials of rolling a pair of fair dice are performed.
Question: What is the probability that an outcome of 5 appears before an outcome of 7?
(The outcome of a roll is the sum of the dice)
🧠 Group Discussion: What strategy should we use?
⏱️ Time: 2 minutes to brainstorm
First, find basic probabilities: \(P(\text{sum} = 5) = \frac{4}{36} = \frac{1}{9}\) and \(P(\text{sum} = 7) = \frac{6}{36} = \frac{1}{6}\).
Strategy: Use a general formula…
Formula #1
If \(E\) and \(F\) are mutually exclusive events of an experiment, then when independent trials of the experiment are performed, the event \(E\) will occur before \(F\) with probability:
\[P(E \text{ before } F) = \frac{P(E)}{P(E) + P(F)}\]
Apply Formula #1:
\[P(5 \text{ before } 7) = \frac{P(\text{sum} = 5)}{P(\text{sum} = 5) + P(\text{sum} = 7)}\] \[= \frac{\frac{1}{9}}{\frac{1}{9} + \frac{1}{6}} = \frac{\frac{1}{9}}{\frac{5}{18}} = \frac{1}{9} \times \frac{18}{5} = \frac{2}{5}\]
Answer
The probability is \(\frac{2}{5}\) or 0.4
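Formula #1 comes from conditioning on the first decisive roll: each roll gives a 5, a 7, or neither, so summing over "\(k\) irrelevant rolls, then a 5" yields a geometric series. The sketch below sums that series numerically and confirms it matches \(P(E)/(P(E)+P(F))\):

```python
from fractions import Fraction

# "5 before 7": each roll is sum=5 (4/36), sum=7 (6/36), or neither.
p5 = Fraction(4, 36)
p7 = Fraction(6, 36)
neither = 1 - p5 - p7

# Partial sum of sum_{k>=0} neither^k * p5, which converges to p5/(p5+p7).
partial = sum(neither**k * p5 for k in range(100))
exact = p5 / (p5 + p7)

print(exact)  # 2/5
print(abs(float(partial) - float(exact)) < 1e-12)  # True
```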
Example 7 (HW):
Suppose there are \(n\) types of coupons. Each new coupon collected is, independent of previous selections, a type \(i\) coupon with probability \(p_i\), where \(\sum_{i=1}^{n} p_i = 1\).
Suppose \(k\) coupons are collected. Let \(A_i\) = event that there is at least one type \(i\) coupon among those collected.
For \(i \neq j\), find:
Part (a) - Hint
Use complement! What’s the probability of NOT getting any type \(i\) coupon?
Answer: \(P(A_i) = 1 - (1-p_i)^k\)
Part (b) - Hint
Use inclusion-exclusion: rearranging \(P(A_i \cup A_j) = P(A_i) + P(A_j) - P(A_iA_j)\) gives \(P(A_iA_j)\), and \(P(A_i \cup A_j) = 1 - (1-p_i-p_j)^k\), since the complement is collecting no coupon of type \(i\) or type \(j\).
Answer: \(P(A_iA_j) = 1 - (1-p_i)^k - (1-p_j)^k + (1-p_i-p_j)^k\)
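The answers to (a) and (b) can be sanity-checked by brute force for a tiny case. This sketch assumes illustrative values (\(n=3\) coupon types, \(k=4\) coupons) and enumerates every weighted type-sequence exactly:

```python
from fractions import Fraction
from itertools import product

# Tiny instance: 3 coupon types with these probabilities, k = 4 coupons.
p = [Fraction(1, 2), Fraction(1, 3), Fraction(1, 6)]
k = 4
i, j = 0, 1  # check events A_i and A_j for types 0 and 1

pAi = pAiAj = Fraction(0)
for seq in product(range(3), repeat=k):  # every length-k type sequence
    prob = Fraction(1)
    for t in seq:
        prob *= p[t]
    if i in seq:
        pAi += prob                      # at least one type-i coupon
    if i in seq and j in seq:
        pAiAj += prob                    # at least one of each type

print(pAi == 1 - (1 - p[i]) ** k)                                  # True
print(pAiAj == 1 - (1-p[i])**k - (1-p[j])**k + (1-p[i]-p[j])**k)   # True
```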
Part (c) - Hint
Use conditional probability formula and think about what changes given \(A_j\).
Classic Problem
Independent trials resulting in a success with probability \(p\) and a failure with probability \(1-p\) are performed.
Question: What is the probability that \(n\) successes occur before \(m\) failures?
🏆 Challenge Problem: Work on this in study groups!
Key Insight: The game ends after at most \(n + m - 1\) trials.
Better approach:
Consider all \(n+m-1\) trials. We win if we get at least \(n\) successes.
\[P = \sum_{k=n}^{n+m-1} \binom{n+m-1}{k} p^k (1-p)^{n+m-1-k}\]
Alternative: Think of it as needing \(n\) successes in at most \(n+m-1\) trials.
This is equivalent to NOT getting \(m\) failures first!
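The closed form can be cross-checked against the natural recursion \(P(a,b) = p\,P(a-1,b) + (1-p)\,P(a,b-1)\), where \(P(a,b)\) is the probability of reaching \(a\) more successes before \(b\) more failures, with \(P(0,b)=1\) and \(P(a,0)=0\). A sketch with illustrative values:

```python
from math import comb
from functools import lru_cache

p, n, m = 0.6, 3, 4

# Closed form: at least n successes among the first n+m-1 trials.
closed = sum(
    comb(n + m - 1, k) * p**k * (1 - p) ** (n + m - 1 - k)
    for k in range(n, n + m)
)

@lru_cache(None)
def P(a, b):
    """Probability of a more successes before b more failures."""
    if a == 0:
        return 1.0  # needed successes already achieved
    if b == 0:
        return 0.0  # failures reached m first
    return p * P(a - 1, b) + (1 - p) * P(a, b - 1)

print(abs(closed - P(n, m)) < 1e-12)  # True
```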
Tennis/Volleyball Serving Rules:
Two protocols under consideration:
Question: If you were player A, which protocol would you prefer?
Winner Serves Protocol
Let \(P_A\) = probability A eventually wins game starting as server
Think about what happens after first rally…
Set up: \(P_A = p_A P_A + q_A P_B\) where \(P_B\) satisfies \(P_B = p_B P_A + q_B P_B\)
Alternating Serves Protocol
More complex! Need to track:
Classic Problem
Two gamblers, A and B, bet on outcomes of successive coin flips.
Question: If A starts with \(i\) units and B starts with \(N-i\) units, what is the probability that A ends up with all the money?
(Each flip results in heads with probability \(p\))
Let \(P_i\) = probability A wins starting with \(i\) units
Boundary conditions: \(P_0 = 0\) (A is ruined) and \(P_N = 1\) (A has all the money).
Recursive relation:
After first flip, A has either \(i+1\) or \(i-1\) units:
\[P_i = p \cdot P_{i+1} + (1-p) \cdot P_{i-1}\]
Case 1: Fair coin (\(p = \frac{1}{2}\))
\[P_i = \frac{i}{N}\]
Case 2: Biased coin (\(p \neq \frac{1}{2}\))
\[P_i = \frac{1 - \left(\frac{1-p}{p}\right)^i}{1 - \left(\frac{1-p}{p}\right)^N}\]
🎲 Insight: If the game is unfair (\(p \neq \frac{1}{2}\)), the probability depends strongly on the ratio of odds!
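Both closed forms can be verified directly: plug them into the recursion \(P_i = p\,P_{i+1} + (1-p)\,P_{i-1}\) and check the boundary conditions. A sketch with illustrative values of \(N\) and \(p\):

```python
# Check that the gambler's-ruin formula satisfies the recursion
# P_i = p*P_{i+1} + (1-p)*P_{i-1}, with P_0 = 0 and P_N = 1.
def ruin(i, N, p):
    """Probability A wins all N units starting from i units."""
    if p == 0.5:
        return i / N
    r = (1 - p) / p  # ratio of odds against A per flip
    return (1 - r**i) / (1 - r**N)

N, p = 10, 0.55
P = [ruin(i, N, p) for i in range(N + 1)]

ok = all(
    abs(P[i] - (p * P[i + 1] + (1 - p) * P[i - 1])) < 1e-12
    for i in range(1, N)
)
print(P[0], P[N], ok)  # 0.0 1.0 True
```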
Example 11 (HW):
Two new drugs for treating a disease have cure rates \(P_1\) and \(P_2\) (unknown).
Testing procedure:
Question: For given \(P_1\) and \(P_2\) where \(P_1 > P_2\), what is the probability the test incorrectly asserts \(P_2 > P_1\)?
This is related to:
Key differences:
🔬 Research Project: Develop the full solution as homework!
Q1: If \(E\) and \(F\) are independent and \(P(E) = 0.3\), \(P(F) = 0.5\), what is \(P(EF)\)?
Q2: If events are mutually exclusive, can they be independent?
Q3: How many conditions must be checked to verify three events are independent?
🎯 Main Takeaways:
🔗 Connections:
Where independence matters:
Warning
⚠️ Warning: Always verify independence assumptions!
❌ Mistake #1
Assuming independence without verification
Example: Card draws without replacement are NOT independent
❌ Mistake #2
Confusing independence with mutual exclusivity
Key: Mutually exclusive events (with positive probability) cannot be independent!
❌ Mistake #3
Assuming pairwise independence implies full independence
Remember: Need to check ALL subset products!
Strategy #1: “At least one”
Use complement: \(P(\text{at least one}) = 1 - P(\text{none})\)
Strategy #2: Independent events
Probabilities multiply: \[P(E_1 E_2 \cdots E_n) = P(E_1)P(E_2) \cdots P(E_n)\]
Strategy #3: “Before” problems
Use Formula #1 or set up recursive equations
📚 Required:
🏆 Optional Challenges:
🔜 Coming Up:
📚 Preparation:
🤔 Think About:
💬 Discussion Question:
“Independence means events don’t influence each other” - Is this always true? Can you think of situations where this intuition might mislead you?
✨ Remember: Independence is one of the most powerful concepts in probability theory!
Thank you!
Office Hours: Get appointment by email
Contact: sorujov@ada.edu.az
