When is finite additivity not enough for probabilists? [closed]
When do probabilists actually need countable additivity? For most practical applications, I'm not entirely sure why you'd need to take the union or intersection of infinitely many sets. If finite additivity sufficed, paradoxes like Banach–Tarski would go away.

probability-theory measure-theory

asked Mar 12 at 21:54 by Yatharth Agarwal

closed as unclear what you're asking by Saad, Alex Provost, Cesareo, José Carlos Santos, Song, Mar 13 at 12:09
Please clarify your specific problem or add additional details to highlight exactly what you need. As currently written, it is hard to tell exactly what you are asking. See the How to Ask page for help clarifying this question. If this question can be reworded to fit the rules in the help center, please edit the question.
Countable additivity ensures continuity of the measure: $A_n \uparrow A \Rightarrow \mu(A_n) \uparrow \mu(A)$, which lies at the core of the limit theorems of probability theory (the law of large numbers, the central limit theorem, Kolmogorov's 0–1 law, etc.).
– Augusto S., Mar 12 at 22:41
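(To make the continuity claim concrete, here is a one-line sketch, added for illustration and not part of the comment: if $\mu$ is countably additive and $A_1 \subseteq A_2 \subseteq \dots$ with $A = \bigcup_n A_n$, write $A = A_1 \cup \bigcup_{n \ge 2} (A_n \setminus A_{n-1})$ as a disjoint union, so that
$\mu(A) = \mu(A_1) + \sum_{n \ge 2} \mu(A_n \setminus A_{n-1}) = \lim_{N \to \infty} \mu(A_N),$
where the last equality uses only finite additivity and telescoping. A merely finitely additive $\mu$ justifies every step except the middle one.)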
This article seems relevant: Finite Additivity versus Countable Additivity by Nick Bingham (linked from his webpage).
– twnly, Mar 12 at 22:55
Here is an example where a merely finitely additive probability measure leads to unintuitive consequences.
– Mike Earnest, Mar 12 at 23:03
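(For a concrete instance of the sort of unintuitive behaviour alluded to, added here for illustration and not necessarily the linked example: let $P$ be a finitely additive "uniform" probability on $\mathbb{N}$, one that extends natural density $P(A) = \lim_n |A \cap \{1, \dots, n\}| / n$ to all subsets; such a $P$ exists by a Banach-limit/Hahn–Banach argument. Then $P(\{k\}) = 0$ for every $k$ while $P(\mathbb{N}) = 1$, so
$1 = P\big(\bigcup_k \{k\}\big) \neq \sum_k P(\{k\}) = 0$:
under this "lottery on $\mathbb{N}$", whichever ticket is drawn had probability zero of being drawn.)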
It seems that, in general, many processes, events, and models in probability involve some sort of countability. Finite additivity is much too weak a condition; there are quite a few finitely additive measures that are otherwise very pathological.
– rubikscube09, Mar 13 at 0:35
Could the close-voter explain?
– Yatharth Agarwal, Mar 13 at 2:34
1 Answer
Cons of finite additivity
Finite additivity cannot handle the following scenarios (obviously not an exhaustive list):
Brownian motion, stochastic differential equations, some mathematical finance tools.
More damningly, even seemingly elementary questions like “What’s the probability that a Poisson random variable is odd?” become out of reach, since that event is a countably infinite disjoint union of simple events (see the sketch below).
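As an illustration (added here; the numbers and the helper function are for this sketch only), assigning the event a probability by summation already uses countable additivity: $P(X \text{ odd}) = \sum_{k \text{ odd}} e^{-\lambda} \lambda^k / k!$, which equals $(1 - e^{-2\lambda})/2$. A short Python check that the truncated countable sum matches the closed form:

import math

def poisson_odd_probability(lam, terms=200):
    # Truncated countable sum of P(X = k) over odd k, for X ~ Poisson(lam).
    return sum(math.exp(-lam) * lam**k / math.factorial(k)
               for k in range(1, terms, 2))

lam = 3.0
closed_form = (1 - math.exp(-2 * lam)) / 2   # equals e^{-lam} * sinh(lam)
print(poisson_odd_probability(lam), closed_form)   # both approximately 0.4988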
Pros of finite additivity
That’s not to say there aren’t any reasons to use finite additivity. Even though it’s fallen out of favor, some respectable probability theorists historically advocated for it. The advantages include:
In two dimensions, you can avoid non-measurable sets by sticking with finite additivity (a pretty underwhelming advantage).
In many simple, day-to-day contexts, you might get away with finite additivity. To quote a commenter, “Why construct infinitely complicated sets?”
You don’t need countable additivity for the weak law of large numbers, central limit theorems, or many if not all of the empirically relevant theorems; a sketch of the Chebyshev argument for the weak law appears at the end of this answer.
To quote the latter half of this stats.SE answer:
Restrict ourselves now to discrete probability, let's say, for convenience, coin tossing. Throwing the coin a finite number of times, all events we can describe using the coin can be expressed via events of the type "head on throw $i$", "tails on throw $i$", and a finite number of "and"s or "or"s. So, in this situation, we do not need $\sigma$-algebras; algebras of sets are enough. So, is there any situation, in this context, where $\sigma$-algebras arise? In practice, even if we can only throw the coin a finite number of times, we develop approximations to probabilities via limit theorems as $n$, the number of throws, grows without bound. So have a look at the proof of the central limit theorem for this case, the Laplace–de Moivre theorem. We can prove it via approximations using only algebras; no $\sigma$-algebra should be needed. The weak law of large numbers can be proved via Chebyshev's inequality, and for that we need only compute the variance for finite $n$. But the event that the strong law of large numbers asserts has probability one can only be expressed via a countably infinite number of "and"s and "or"s, so for the strong law of large numbers we need $\sigma$-algebras.
The strong law is not directly empirically meaningful, since it is about actual convergence, which can never be empirically verified. The weak law, on the other hand, is about the quality of approximation increasing with $n$, with numerical bounds for finite $n$, so it is more empirically meaningful.
For more on why the strong version is empirically irrelevant, see this stats.SE question.
You can avoid improper priors and the marginalization paradox.
Source: This awesome history of countable vs finite additivity, linked to me by one of the commenters on this question.
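(To flesh out the weak-law point above, here is a minimal sketch, added here and not part of the quoted answer: for i.i.d. $X_1, \dots, X_n$ with mean $\mu$ and variance $\sigma^2$, linearity of expectation and finite additivity give $\operatorname{Var}(\bar{X}_n) = \sigma^2 / n$, so Chebyshev's inequality yields
$P\big(|\bar{X}_n - \mu| \ge \varepsilon\big) \le \frac{\sigma^2}{n \varepsilon^2}.$
The event on the left involves only the first $n$ coordinates, hence lies in the algebra generated by finitely many throws; no limiting event, and so no $\sigma$-algebra, enters the bound.)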
answered Mar 13 at 11:01 by Yatharth Agarwal (edited Mar 13 at 11:14)