
Proof of Caratheodory's Theorem (for Convex Sets) using Radon's Lemma




I am self-studying some discrete geometry / convex analysis. Many descriptions of Caratheodory's Theorem for convex sets mention that Radon's Lemma can be used to simplify the proof, but I haven't seen it done. For reference, here is Radon's Lemma:




Lemma (Radon). Let $A \subset \mathbb{R}^d$ contain $d+2$ points. Then there exist two disjoint subsets $A_1, A_2 \subset A$ whose convex hulls have nonempty intersection.
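(A small sanity check, not part of the original lemma statement: in $d=1$, take the $d+2=3$ points $A=\{0,1,2\}$; the disjoint subsets $A_1=\{0,2\}$ and $A_2=\{1\}$ satisfy $\mathrm{conv}(A_1)=[0,2]\supseteq\{1\}=\mathrm{conv}(A_2)$, so their convex hulls intersect.)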




I will attempt to prove:




Theorem (Caratheodory). Let $X \subset \mathbb{R}^d$. Then each point of $\mathrm{conv}(X)$ can be written as a convex combination of at most $d+1$ points in $X$.




Proof Attempt. Each $y \in \mathrm{conv}(X)$ is a convex combination $y = \sum_{k=1}^m \alpha_k x_k$ of finitely many points $x_1, \dots, x_m \in X$, where $\alpha_k > 0$ and $\sum_{k=1}^m \alpha_k = 1$. Assume $m \geq d+2$, otherwise we are done. Further assume towards contradiction that $m$ is minimal, that is, $y$ cannot be written as a convex combination of fewer than $m$ points from $X$.



Then, the points $x_1, \dots, x_m$ are affinely dependent, being $m \geq d+2$ points in $\mathbb{R}^d$; hence one point, say $x_m$, is an affine combination of the rest. Apply Radon's Lemma to the set $A = \{ y, x_1, \dots, x_{m-1} \}$, giving two sets $A_1, A_2 \subset A$ whose convex hulls have nonempty intersection...?




Is this the right idea? How might I continue?











      geometry discrete-mathematics convex-analysis discrete-geometry convex-hulls






asked Jun 21 '17 at 3:57 by Ben Bray · edited Jun 21 '17 at 5:58
          2 Answers
You started off well, especially by assuming $m$ is minimal. This is not a simple proof, and figuring it all out on your own can be very challenging. Instead of trying to use the affine dependence of the points directly, as done in the proof of Radon's theorem, you can instead apply the theorem itself to the set $\{x_1,\dots,x_m\}$ (since, as you mentioned, $m \geq d+2$).
This gives us disjoint $I, J \subseteq [m]$ and equal convex combinations:
$$\sum_{h\in I}\mu_h x_h=\sum_{h\in J}\mu_h x_h,\qquad \sum_{h\in I}\mu_h=\sum_{h\in J}\mu_h=1.$$
We can rename the points and assume WLOG that $I=\{1,\dots,k\}$, $J=\{k+1,\dots,l\}$, and in addition
$$\frac{\alpha_1}{\mu_1}=\min_{i\in I}\frac{\alpha_i}{\mu_i}=:t$$
(here the coefficients $\alpha_i$ are the same coefficients you defined for the given $y$ in your question). This gives us:
$$y=\sum_{i=1}^m\alpha_i x_i=\sum_{i=1}^m\alpha_i x_i-t\sum_{i\in I}\mu_i x_i+t\sum_{j\in J}\mu_j x_j=\sum_{i=1}^k(\alpha_i-t\mu_i)x_i+\sum_{j=k+1}^l(\alpha_j+t\mu_j)x_j+\sum_{h=l+1}^m\alpha_h x_h.$$
Since $t\mu_i=\frac{\alpha_1\mu_i}{\mu_1}\leq\frac{\alpha_i\mu_i}{\mu_i}=\alpha_i$ for every $i\in I$, all of the coefficients in this sum are non-negative. You can also easily verify that the first coefficient is $0$ and that the coefficients sum to $1$, so we have written $y$ as a convex combination of at most $m-1$ points of $X$, in contradiction to the minimality of $m$.
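For completeness, here is the verification left implicit above (a routine check using only the definitions in this answer, added for the reader): the first coefficient is
$$\alpha_1-t\mu_1=\alpha_1-\frac{\alpha_1}{\mu_1}\,\mu_1=0,$$
and, since $\sum_{i=1}^m\alpha_i=1$ and $\sum_{i\in I}\mu_i=\sum_{j\in J}\mu_j=1$, the coefficients sum to
$$\sum_{i=1}^m\alpha_i-t\sum_{i\in I}\mu_i+t\sum_{j\in J}\mu_j=1-t+t=1.$$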






answered yesterday by Dean Gurvitz










You're in the right direction. By induction it's enough to prove that we can reduce a convex combination of $n+2$ points to a convex combination of $n+1$ points. Suppose we have a point $v\in\mathbb{R}^n$ and that it is a convex combination of $n+2$ points. This means that
$$v=\sum_{i=1}^{n+2}\alpha_i x_i,$$
where $x_i\in X$, $\sum_i\alpha_i=1$ and $\alpha_i\geq 0$. Since any $n+2$ points in $\mathbb{R}^n$ are affinely dependent, we have
$$\sum_{i=1}^{n+2}\beta_i x_i=0,$$ where $\sum_i\beta_i=1$.

Now let $\gamma=\min\{\frac{\alpha_i}{\beta_i}:\beta_i>0\}>0$; without loss of generality, assume that $\gamma=\frac{\alpha_{n+2}}{\beta_{n+2}}$ and observe that
$$
v=v-\gamma\cdot 0=\sum_{i=1}^{n+2}(\alpha_i-\gamma\beta_i)x_i=\sum_{i=1}^{n+1}(\alpha_i-\gamma\beta_i)x_i.
$$

Every coefficient is non-negative and they sum to $1$, which completes the proof.
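As a side note (an addition for context, not part of the original answer; it concerns the point raised in the comments below): the standard affine-dependence relation for $n+2$ points has coefficients summing to $0$ rather than $1$,
$$\sum_{i=1}^{n+2}\beta_i x_i=0,\qquad \sum_{i=1}^{n+2}\beta_i=0,\qquad \text{not all }\beta_i=0.$$
Splitting the indices into $I=\{i:\beta_i>0\}$ and $J=\{j:\beta_j<0\}$ and dividing through by $S=\sum_{i\in I}\beta_i>0$ turns this relation into two equal convex combinations
$$\sum_{i\in I}\tfrac{\beta_i}{S}\,x_i=\sum_{j\in J}\tfrac{-\beta_j}{S}\,x_j,$$
which is exactly a Radon partition of $\{x_1,\dots,x_{n+2}\}$.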






answered Oct 20 '17 at 18:39 by Mickel · edited Mar 6 at 20:09 by Dean Gurvitz











• This is a good explanation of the usual proof, but where is Radon's lemma used?
  – Ben Bray, Oct 20 '17 at 19:23






• That is the straightforward proof; it doesn't really answer the question.
  – narek Bojikian, Feb 7 at 14:54












• I think the property that $\sum_i \beta_i=1$ is not a direct consequence of the affine dependence, and practically uses Radon's theorem. I wrote this part in more detail in my answer.
  – Dean Gurvitz, yesterday












