



Multivariate Quadratic Regression


I would like to perform a polynomial regression, but for multivariate input data. In the univariate case, one can rewrite polynomial regression as a multiple linear regression problem and derive the closed-form ordinary least squares solution



$$
\begin{pmatrix}a\\b\\c\end{pmatrix} = (\mathbf X^T \mathbf X)^{-1} \mathbf X^T \mathbf Y
$$



(see e.g. Equations For Quadratic Regression or https://en.wikipedia.org/wiki/Polynomial_regression).
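For reference, here is a minimal numpy sketch of that univariate closed form; the data values are made up purely for illustration.

    import numpy as np

    # Univariate quadratic fit y ≈ a + b*x + c*x^2 by ordinary least squares.
    x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    y = np.array([1.1, 2.9, 7.2, 13.1, 21.0])
    X = np.column_stack([np.ones_like(x), x, x**2])   # design matrix with columns 1, x, x^2
    a, b, c = np.linalg.solve(X.T @ X, X.T @ y)       # (X^T X)^{-1} X^T y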



However, in my case, the quadratic regression is multivariate, so



$$
\min_{a,b,C} \sum_{i=1}^N \bigl(y_i - (a + b^T\cdot x_i + x_i^T\cdot C\cdot x_i)\bigr)^2
$$



where $C$ is a symmetric matrix, $b$ and $x_i$ are vectors, $y_i$ and $a$ are scalars and $N$ is the number of samples (we can assume we have enough samples to have an overdetermined system).



Does a closed form exist here as well, and if so, what does it look like?



If not, how do I do the regression? Obviously I could use general-purpose optimization methods, like BFGS, with the constraint that $C$ is symmetric, but that is not as efficient as I would hope for.










Tags: regression, multivariate-polynomial

asked Mar 20 at 18:45 by Make42, edited Mar 20 at 19:51
          2 Answers
          Answer by Mark Fischler (answered Mar 20 at 19:14, edited Mar 21 at 2:25):
          You can do multivariate quadratic regression in the usual way. Let's label the row (and column) indices of the design matrix $A$, and the row index of the value vector $b$, by an index $s(\{p_1, p_2, p_3, \cdots\})$ which pertains to the coefficient of $x_1^{p_1}x_2^{p_2}\cdots$. For example, the row labeled $s(\{1, 0, 2\})$ will be the row pertaining to the coefficient of $x_1 x_3^2$.



          Then the elements of $A$ are calculated as
          $$
          A_{s(\{p_1, p_2, p_3, \cdots\}),\, s(\{q_1, q_2, q_3, \cdots\})} = \sum x_1^{p_1+q_1} x_2^{p_2+q_2} x_3^{p_3+q_3} \cdots
          $$

          and the elements of $b$ are
          $$
          b_{s(\{p_1, p_2, p_3, \cdots\})} = \sum y\, x_1^{p_1} x_2^{p_2} x_3^{p_3} \cdots
          $$

          where of course all the sums are taken over the set of data points.



          For example, for a 2-variable quadratic fit $y = a + bu + cv + du^2 + euv + fv^2$ you need to solve
          $$
          \begin{pmatrix}N &\sum u_i &\sum v_i & \sum u_i^2 & \sum u_iv_i & \sum v_i^2 \\
          \sum u_i & \sum u_i^2 & \sum u_i v_i & \sum u_i^3 & \sum u_i^2v_i & \sum u_i v_i^2 \\
          \sum v_i & \sum u_iv_i & \sum v_i^2 & \sum u_i^2v_i & \sum u_iv_i^2 & \sum v_i^3 \\
          \sum u_i^2 & \sum u_i^3 & \sum u_i^2 v_i & \sum u_i^4 & \sum u_i^3v_i & \sum u_i^2 v_i^2 \\
          \sum u_iv_i & \sum u_i^2v_i & \sum u_i v_i^2 & \sum u_i^3v_i & \sum u_i^2v_i^2 & \sum u_i v_i^3 \\
          \sum v_i^2 & \sum u_iv_i^2 & \sum v_i^3 & \sum u_i^2v_i^2 & \sum u_iv_i^3 & \sum v_i^4 \end{pmatrix}
          \begin{pmatrix}a\\b\\c\\d\\e\\f\end{pmatrix}
          =\begin{pmatrix}\sum y_i \\ \sum y_i u_i \\ \sum y_iv_i \\ \sum y_iu_i^2 \\ \sum y_iu_iv_i \\ \sum y_iv_i^2\end{pmatrix}
          $$
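          Below is a minimal numpy sketch of this scheme for the 2-variable case (the function and variable names are mine, not part of the answer). Each data point contributes the outer product of its monomial vector $(1, u_i, v_i, u_i^2, u_iv_i, v_i^2)$ to $A$ and $y_i$ times that vector to $b$, which reproduces exactly the sums in the matrix above.

              import numpy as np

              def fit_quadratic_2var(u, v, y):
                  """Accumulate and solve the normal equations A c = rhs for
                  y ~ a + b*u + c*v + d*u^2 + e*u*v + f*v^2 (hypothetical helper)."""
                  A = np.zeros((6, 6))
                  rhs = np.zeros(6)
                  for ui, vi, yi in zip(u, v, y):
                      phi = np.array([1.0, ui, vi, ui * ui, ui * vi, vi * vi])  # monomials at this point
                      A += np.outer(phi, phi)   # adds all products of two monomials to the sums
                      rhs += yi * phi           # adds y_i times each monomial to the sums
                  return np.linalg.solve(A, rhs)  # coefficients (a, b, c, d, e, f)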






          • Is there a way to write the design matrix $A$ as the result of matrix multiplications, instead of calculating each element independently? Then the calculation of the design matrix would be much more efficient in the respective computer systems like Matlab or numpy. – Make42, Mar 20 at 19:47










          • So $C = \begin{pmatrix}d&e\\e&f\end{pmatrix}$, $b^T = (b,c)$ (first my vector, then your scalars) and my $a$ equals your $a$? – Make42, Mar 20 at 19:55












          • Could it be that you missed two $\sum$-symbols in your design matrix? – Make42, Mar 20 at 23:00










          • Could be... corrected now. – Mark Fischler, Mar 21 at 2:25










          • The calculation of the matrix is incredibly efficient in any sort of procedural language, where you simply accumulate, as each point is processed, the various components. I've seen Matlab code that tried to use the powerful matrix syntax by artificially creating a column matrix out of each point; it is both ugly and very inefficient. – Mark Fischler, Mar 21 at 2:28



















          Answer by Make42 (answered Mar 21 at 19:35):
          Disclaimer: Approach 1 is from Mark Fischler, but I want to reference the approach in my second approach and I need the labels under the matrices for referencing, so I restate the approach. Apparently, adding the second approach to Mark's answer is not wanted by the moderators.





          Approach 1



          You can do multivariate quadratic regression in the usual way. Let's label the row (and column) indices of the design matrix $A$, and the row index of the value vector $b$, by an index $s(\{p_1, p_2, p_3, \cdots\})$ which pertains to the coefficient of $x_1^{p_1}x_2^{p_2}\cdots$. For example, the row labeled $s(\{1, 0, 2\})$ will be the row pertaining to the coefficient of $x_1 x_3^2$.



          Then the elements of $A$ are calculated as
          $$
          A_{s(\{p_1, p_2, p_3, \cdots\}),\, s(\{q_1, q_2, q_3, \cdots\})} = \sum x_1^{p_1+q_1} x_2^{p_2+q_2} x_3^{p_3+q_3} \cdots
          $$

          and the elements of $b$ are
          $$
          b_{s(\{p_1, p_2, p_3, \cdots\})} = \sum y\, x_1^{p_1} x_2^{p_2} x_3^{p_3} \cdots
          $$

          where of course all the sums are taken over the set of data points.



          For example, for a 2-variable quadratic fit $y = a + bu + cv + du^2 + euv + fv^2$ you need to solve
          $$
          \underbrace{\begin{pmatrix}N &\sum u_i &\sum v_i & \sum u_i^2 & \sum u_iv_i & \sum v_i^2 \\
          \sum u_i & \sum u_i^2 & \sum u_i v_i & \sum u_i^3 & \sum u_i^2v_i & \sum u_i v_i^2 \\
          \sum v_i & \sum u_iv_i & \sum v_i^2 & \sum u_i^2v_i & \sum u_iv_i^2 & \sum v_i^3 \\
          \sum u_i^2 & \sum u_i^3 & \sum u_i^2 v_i & \sum u_i^4 & \sum u_i^3v_i & \sum u_i^2 v_i^2 \\
          \sum u_iv_i & \sum u_i^2v_i & \sum u_i v_i^2 & \sum u_i^3v_i & \sum u_i^2v_i^2 & \sum u_i v_i^3 \\
          \sum v_i^2 & \sum u_iv_i^2 & \sum v_i^3 & \sum u_i^2v_i^2 & \sum u_iv_i^3 & \sum v_i^4 \end{pmatrix}}_{\mathbf A}
          \begin{pmatrix}a^*\\b^*\\c^*\\d^*\\e^*\\f^*\end{pmatrix}
          =
          \underbrace{\begin{pmatrix}\sum y_i \\ \sum y_i u_i \\ \sum y_iv_i \\ \sum y_iu_i^2 \\ \sum y_iu_iv_i \\ \sum y_iv_i^2\end{pmatrix}}_{\mathbf b}
          $$



          where $a^*, b^*, c^*, d^*, e^*, f^*$ are the optimal values of $a, b, c, d, e, f$ after the quadratic fit.



          Approach 2



          Alternatively we can consider



          \begin{align}
          \mathbf Y &= \mathbf X\cdot\begin{pmatrix}a\\\vdots\\f\end{pmatrix}
          \\
          \underbrace{\begin{pmatrix}y_{1}\\y_{2}\\y_{3}\\\vdots \\y_{n}\end{pmatrix}}_{\mathbf Y}
          &=
          \underbrace{\begin{pmatrix}
          1&u_1&v_1&u_1^2 & u_1v_1 & v_1^2\\
          1&u_2&v_2&u_2^2 & u_2v_2 & v_2^2\\
          1&u_3&v_3&u_3^2 & u_3v_3 & v_3^2\\
          \vdots &\vdots &\vdots &\vdots &\vdots&\vdots \\
          1&u_n&v_n&u_n^2 & u_nv_n & v_n^2
          \end{pmatrix}}_{\mathbf X}
          \cdot
          \begin{pmatrix}a\\b\\c\\d\\e\\f\end{pmatrix}
          \end{align}



          We can plug this into the regular ordinary least squares formula from https://en.wikipedia.org/wiki/Polynomial_regression and get



          \begin{align}
          \begin{pmatrix}a^*\\b^*\\c^*\\d^*\\e^*\\f^*\end{pmatrix} =
          {(\underbrace{\mathbf X^{\mathsf T}\cdot\mathbf X}_{\mathbf A} )}^{-1}
          \cdot
          \underbrace{\mathbf{X}^{\mathsf T}\cdot \vec {y}}_{\mathbf b}
          \end{align}
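          Here is a minimal numpy sketch of this design-matrix formulation (the function name is mine; np.linalg.lstsq is used instead of forming the inverse explicitly, which is numerically safer but yields the same least-squares solution):

              import numpy as np

              def fit_quadratic_design_matrix(u, v, y):
                  """Fit y ~ a + b*u + c*v + d*u^2 + e*u*v + f*v^2 via the design matrix X (hypothetical helper)."""
                  u, v, y = map(np.asarray, (u, v, y))
                  X = np.column_stack([np.ones_like(u), u, v, u**2, u * v, v**2])  # rows (1, u_i, v_i, u_i^2, u_i*v_i, v_i^2)
                  coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)  # minimizes ||X c - y||^2
                  return coeffs  # (a*, b*, c*, d*, e*, f*)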



          Calculate your original quadratic function



          You can simply set



          \begin{align}
          \alpha^* &= a^* \\
          \boldsymbol\beta^* &= \begin{pmatrix}b^*\\c^*\end{pmatrix} \\
          \boldsymbol\Gamma^* &= \begin{pmatrix}d^*&e^*\\e^*&f^*\end{pmatrix}
          \end{align}



          for your original problem



          $$
          \min_{\alpha,\boldsymbol\beta,\boldsymbol\Gamma} \sum_{i=1}^N \bigl(y_i - (\alpha + \boldsymbol\beta^T\cdot x_i + x_i^T\cdot \boldsymbol\Gamma\cdot x_i)\bigr)^2
          $$



          where $\alpha$ is a scalar, $\boldsymbol\beta$ is a vector and $\boldsymbol\Gamma$ is a matrix.
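          As a short usage sketch (the data here are synthetic and purely illustrative, and fit_quadratic_design_matrix is the hypothetical helper sketched above), the fitted coefficients can be reassembled into $\alpha^*$, $\boldsymbol\beta^*$, $\boldsymbol\Gamma^*$ like this:

              import numpy as np

              rng = np.random.default_rng(0)
              u = rng.normal(size=200)
              v = rng.normal(size=200)
              y = 1.0 + 2.0 * u - 0.5 * v + 0.3 * u**2 + 0.8 * u * v - 1.2 * v**2 + 0.01 * rng.normal(size=200)

              a, b, c, d, e, f = fit_quadratic_design_matrix(u, v, y)
              alpha = a                           # scalar intercept
              beta = np.array([b, c])             # linear coefficient vector
              Gamma = np.array([[d, e],           # symmetric quadratic coefficient matrix
                                [e, f]])

              x_new = np.array([0.5, -1.0])
              y_hat = alpha + beta @ x_new + x_new @ Gamma @ x_new  # prediction at a new point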






          • The second approach is fine mathematically, but is $6$ to $10$ times more calculation (less efficient) than the first. – Mark Fischler, Mar 23 at 7:35










          • @MarkFischler: Btw: I would be happy if you would take this answer and edit it into your own (basically you could copy-paste). I will accept your post as the answer: I would like to have a complete picture in the accepted answer (the mods did not let me edit your post). – Make42, Mar 25 at 11:12










          • @MarkFischler: If you explain why the second approach is so much slower, I would appreciate that too: the sums need to be done either way, so I am surprised. The only way I see an advantage of approach 1 is if I were able to calculate only the upper triangular matrix and then copy it down to the lower triangle. That would be ~2 times faster. – Make42, Mar 25 at 11:14











