How to perform Gaussian elimination to invert a matrix if the matrix contains zeros on the diagonal?


I'm writing code that inverts a matrix by Gaussian elimination. I've coded everything under the assumption that there are no zeros on the diagonal.



What should I do when a diagonal element is zero? Some matrices with zeros on the diagonal are certainly still invertible, so how can I invert them by Gaussian elimination?
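To illustrate the failure mode, here is a minimal hypothetical sketch (not the asker's actual code): elimination without row exchanges divides by the diagonal entry, so a zero there breaks the computation even for an invertible matrix.

```python
# Hypothetical minimal sketch (not the asker's code): in elimination
# without row exchanges, normalizing a row divides by the diagonal entry,
# so a zero there raises ZeroDivisionError even for an invertible matrix.
def normalize_row(M, col):
    pivot = M[col][col]
    M[col] = [x / pivot for x in M[col]]
    return M

M = [[0.0, 1.0],
     [1.0, 0.0]]          # invertible, but M[0][0] == 0
try:
    normalize_row(M, 0)
except ZeroDivisionError:
    print("zero pivot: a row exchange is needed")
```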










  • Chris Leary (Mar 12 at 4:01): Try interchanging rows.

  • Kid_Learning_C (Mar 12 at 4:05): @ChrisLeary If I interchange rows of a matrix and then invert it, will the result equal the inverse of the original matrix?

  • hardmath (Mar 12 at 15:50): It is a mistake to focus only on the effect of exchanging rows of $A$. Please review my answer, and note that the elementary row operations of "Gaussian elimination with partial pivoting" are applied to the augmented matrix $[A|I]$. Any sequence of elementary row operations that transforms $A$ into the identity $I$ will simultaneously transform the identity matrix $I$ into the inverse $A^{-1}$ of $A$.
















linear-algebra matrices gaussian-elimination






asked Mar 12 at 3:59 by Kid_Learning_C
edited Mar 13 at 18:28 by Rodrigo de Azevedo

1 Answer
Create an "augmented matrix" of the form $[A|I]$ and use elementary row operations to put this into reduced row echelon form. The result will be $[I|A^{-1}]$ if $A$ is an invertible matrix.





Let's do an example, mainly for the benefit of future readers. The OP mentions a "method [to] invert a matrix through Gaussian elimination ... assuming that there are no zeros on the diagonal." It follows that the code already implements two of the three elementary row operations: multiplying a row by a nonzero scalar, and adding (equivalently, subtracting) a multiple of one row to (from) another row. What else is needed, when we confront a zero "on the diagonal" (i.e. in a matrix position where we want a leading one), is the third operation: exchanging two rows.



Using this operation does not "break" the matrix any more than do the other two kinds of elementary row operations. Yes, it changes the matrix, but the key here is that we want to change a matrix into its reduced row echelon form.



In particular the goal of inverting matrix $A$ will be achieved by finding the reduced row echelon form of the augmented matrix $[A|I]$. The required form can be found by an algorithmically determined sequence of elementary row operations, but the sequence is not unique. When no row exchanges are used, the method is called "Gaussian elimination without pivoting". This is possible precisely when the leading principal minors are nonzero, as this is another way of saying that zeros will not appear "on the diagonal," where we want the reduced row echelon form to have leading ones.
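This criterion can be checked numerically. Here is a sketch (my own illustration, assuming NumPy; not part of the original answer) applied to the $3\times 3$ matrix used in the example below:

```python
import numpy as np

# Check the leading principal minors of the example matrix; elimination
# without pivoting succeeds exactly when all of them are nonzero.
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)
minors = [np.linalg.det(A[:k, :k]) for k in range(1, A.shape[0] + 1)]
print(minors)  # the 1x1 minor is 0, so step 1 already needs a row exchange
```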



Elimination algorithms that use row exchanges commonly call those steps "pivoting". How hard one works to choose the "pivot" row involves a tradeoff in numerical stability; in what follows we present the case of partial pivoting. This amounts to taking the columns in whatever order they are presented (full pivoting means a strategy that may introduce leading ones by considering all columns not yet reduced).



For our example we take:



$$ A = \begin{bmatrix} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{bmatrix} $$



Thus the augmented matrix we will reduce is $[A|I]$:



$$ \begin{bmatrix}
\mathbf{0} & 1 & 1 & 1 & 0 & 0 \\
1 & 0 & 1 & 0 & 1 & 0 \\
1 & 1 & 0 & 0 & 0 & 1
\end{bmatrix} $$



A reasonable rule, when a zero appears where a leading one is wanted, is to swap the row containing the zero with a row below it whose entry in the same column has the largest absolute value. Here we immediately start with a zero in the $(1,1)$ position, and both entries below it in the first column have the largest absolute value, $1$. So let's exchange the first and second rows:
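That swap rule fits in a few lines. A hypothetical helper (my sketch, not code from this thread): among the current and lower rows, pick the one whose entry in the current column has the largest absolute value.

```python
def pivot_row(M, col):
    # Partial pivoting rule: among rows col..n-1, return the index of the
    # row with the largest |entry| in column `col` (ties: first row wins).
    return max(range(col, len(M)), key=lambda r: abs(M[r][col]))

M = [[0.0, 1.0, 1.0],
     [1.0, 0.0, 1.0],
     [1.0, 1.0, 0.0]]
print(pivot_row(M, 0))  # row 0 has a zero pivot; rows 1 and 2 tie, row 1 wins
```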



$$ \begin{bmatrix}
\mathbf{1} & 0 & 1 & 0 & 1 & 0 \\
0 & 1 & 1 & 1 & 0 & 0 \\
1 & 1 & 0 & 0 & 0 & 1
\end{bmatrix} $$



It is unnecessary to "scale" the leading entry in the current row (we already have a leading one). We proceed to clear the entries below it by subtracting suitable multiples of the first row from the rows beneath it:



$$ \begin{bmatrix}
\mathbf{1} & 0 & 1 & 0 & 1 & 0 \\
0 & 1 & 1 & 1 & 0 & 0 \\
0 & 1 & -1 & 0 & -1 & 1
\end{bmatrix} $$



Having reduced the first column, we turn our attention to the second column. Fortuitously the $(2,2)$ entry is already a leading one in that row, and we proceed to add multiples of that second row to clear out the other entries in the second column:



$$ \begin{bmatrix}
1 & 0 & 1 & 0 & 1 & 0 \\
0 & \mathbf{1} & 1 & 1 & 0 & 0 \\
0 & 0 & -2 & -1 & -1 & 1
\end{bmatrix} $$



Finally we scale the leading entry in the third row, so that the $(3,3)$ value becomes a leading one:



$$ \begin{bmatrix}
1 & 0 & 1 & 0 & 1 & 0 \\
0 & 1 & 1 & 1 & 0 & 0 \\
0 & 0 & \mathbf{1} & 1/2 & 1/2 & -1/2
\end{bmatrix} $$



Adding multiples of this new third row allows us to zero-out the other entries in the third column:



$$ \begin{bmatrix}
1 & 0 & 0 & -1/2 & 1/2 & 1/2 \\
0 & 1 & 0 & 1/2 & -1/2 & 1/2 \\
0 & 0 & \mathbf{1} & 1/2 & 1/2 & -1/2
\end{bmatrix} $$



Now we have the reduced row echelon form, $[I|A^{-1}]$. To verify we can multiply:



$$ \begin{bmatrix} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{bmatrix} \;
\begin{bmatrix} -1/2 & 1/2 & 1/2 \\ 1/2 & -1/2 & 1/2 \\ 1/2 & 1/2 & -1/2 \end{bmatrix} = I $$
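The whole procedure translates directly into code. Here is a sketch of Gauss-Jordan inversion of $[A|I]$ with partial pivoting (my own illustration, assuming NumPy; not code supplied in this thread):

```python
import numpy as np

def invert_gauss_jordan(A, tol=1e-12):
    """Invert A by row-reducing the augmented matrix [A|I] to [I|A^{-1}]."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    M = np.hstack([A, np.eye(n)])       # augmented matrix [A|I]
    for col in range(n):
        # Partial pivoting: among rows col..n-1, pick the row with the
        # largest absolute entry in this column, then swap it into place.
        pivot = col + np.argmax(np.abs(M[col:, col]))
        if abs(M[pivot, col]) < tol:
            raise np.linalg.LinAlgError("matrix is singular")
        M[[col, pivot]] = M[[pivot, col]]
        M[col] /= M[col, col]           # scale to get a leading one
        for row in range(n):            # clear the rest of the column
            if row != col:
                M[row] -= M[row, col] * M[col]
    return M[:, n:]                     # right half is now A^{-1}

A = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
print(invert_gauss_jordan(A))           # matches the inverse computed above
```

Note that the row exchanges need no bookkeeping here: they are applied to the whole augmented matrix, so the right half still ends up as the inverse of the original $A$.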






  • gt6989b (Mar 12 at 4:11): OP's question is what to do when $0$ is encountered on the main diagonal and pivoting on it is not possible.

  • hardmath (Mar 12 at 4:14): @gt6989b: Pivoting is precisely what one needs to do to put a nonzero entry into the current column and row. If the entries at that position and below it in the current column are all zero, THEN one cannot produce a leading one there and the matrix is not invertible. However, an exchange of the current row with one below it is an elementary row operation and should succeed in producing a nonzero entry provided the matrix is invertible (up to floating-point precision). All this is standard in an undergraduate discrete math or linear algebra class.

  • Kid_Learning_C (Mar 12 at 15:31): @hardmath If I interchange rows of a matrix and then invert it, will the result equal the inverse of the original matrix?

  • Kid_Learning_C (Mar 12 at 15:41): @hardmath We need the inverse of the original matrix. The inverse of the permuted matrix is not what we need. Is there a relationship between the two inverses, then?

  • hardmath (Mar 12 at 15:47): That's what I showed you above. When you obtain the reduced row echelon form of the matrix $[A|I]$, using any sequence of elementary row operations, including row exchanges as necessary, the result will be $[I|A^{-1}]$ whenever $A$ is an invertible matrix.











Your Answer





StackExchange.ifUsing("editor", function () {
return StackExchange.using("mathjaxEditing", function () {
StackExchange.MarkdownEditor.creationCallbacks.add(function (editor, postfix) {
StackExchange.mathjaxEditing.prepareWmdForMathJax(editor, postfix, [["$", "$"], ["\\(","\\)"]]);
});
});
}, "mathjax-editing");

StackExchange.ready(function() {
var channelOptions = {
tags: "".split(" "),
id: "69"
};
initTagRenderer("".split(" "), "".split(" "), channelOptions);

StackExchange.using("externalEditor", function() {
// Have to fire editor after snippets, if snippets enabled
if (StackExchange.settings.snippets.snippetsEnabled) {
StackExchange.using("snippets", function() {
createEditor();
});
}
else {
createEditor();
}
});

function createEditor() {
StackExchange.prepareEditor({
heartbeatType: 'answer',
autoActivateHeartbeat: false,
convertImagesToLinks: true,
noModals: true,
showLowRepImageUploadWarning: true,
reputationToPostImages: 10,
bindNavPrevention: true,
postfix: "",
imageUploader: {
brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
allowUrls: true
},
noCode: true, onDemand: true,
discardSelector: ".discard-answer"
,immediatelyShowMarkdownHelp:true
});


}
});






Kid_Learning_C is a new contributor. Be nice, and check out our Code of Conduct.










draft saved

draft discarded


















StackExchange.ready(
function () {
StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fmath.stackexchange.com%2fquestions%2f3144629%2fhow-to-perform-gaussian-elimination-to-invert-a-matrix-if-the-matrix-contains-ze%23new-answer', 'question_page');
}
);

Post as a guest















Required, but never shown

























1 Answer
1






active

oldest

votes








1 Answer
1






active

oldest

votes









active

oldest

votes






active

oldest

votes









1












$begingroup$

Create an "augmented matrix" of the form $[A|I]$ and use elementary row operations to put this into reduced row echelon form. The result will be $[I|A^{-1}]$ if $A$ is an invertible matrix.





Let's do an example, mainly for the benefit of future Readers. The OP mentions a "method [to] invert a matrix through Gaussian elimination ... assuming that there are no zero on the diagonal." It follows that the code already implements two of the three elementary row operations, multiplying a row by a nonzero scalar and adding (equiv. subtracting) a multiple of one row to (from) another row. What else is needed when we confront a zero "on the diagonal" (i.e. in a matrix position where we want a leading one) is the operation of exchanging two rows.



Using this operation does not "break" the matrix any more than do the other two kinds of elementary row operations. Yes, it changes the matrix, but the key here is that we want to change a matrix into its reduced row echelon form.



In particular the goal of inverting matrix $A$ will be achieved by finding the reduced row echelon form of the augmented matrix $[A|I]$. The required form can be found by an algorithmically determined sequence of elementary row operations, but the sequence is not unique. When no row exchanges are used, the method is called "Gaussian elimination without pivoting". This is possible precisely when the leading principal minors are nonzero, as this is another way of saying that zeros will not appear "on the diagonal," where we want the reduced row echelon form to have leading ones.



That is, elimination algorithms which use row exchanges commonly call those steps "pivoting". How hard one works to choose the "pivot" row is subject to a tradeoff in numeric stability, but in what follows we will present the case of partial pivoting (PDF). This amounts to taking the columns in whatever order they are presented (full pivoting meaning a strategy that might introduce leading ones in a wider consideration of all columns not yet reduced).



For our example we take:



$$ A = begin{bmatrix} 0 & 1 & 1 \ 1 & 0 & 1 \ 1 & 1 & 0 end{bmatrix} $$



Thus the augmented matrix we will reduce is $[A|I]$:



$$ begin{bmatrix}
textbf 0 & 1 & 1 & 1 & 0 & 0 \
1 & 0 & 1 & 0 & 1 & 0 \
1 & 1 & 0 & 0 & 0 & 1
end{bmatrix} $$



A reasonable rule when a zero appears where a leading one is wanted is to swap the row with the zero and one below it with an entry of largest maximum absolute value in the same column. Here we immediately start with a zero in the $(1,1)$ position, and below that both entries in that first column have the maximum absolute value $1$. So let's exchange the first and the second rows:



$$ begin{bmatrix}
textbf 1 & 0 & 1 & 0 & 1 & 0 \
0 & 1 & 1 & 1 & 0 & 0 \
1 & 1 & 0 & 0 & 0 & 1
end{bmatrix} $$



It is unnecessary to "scale" the leading entry in the current row (we already have a leading one). We proceed to clear the entries below it using addition/subtraction of a multiple of the first row to each other rows:



$$ begin{bmatrix}
textbf 1 & 0 & 1 & 0 & 1 & 0 \
0 & 1 & 1 & 1 & 0 & 0 \
0 & 1 & -1 & 0 & -1 & 1
end{bmatrix} $$



Having reduced the first column, we turn our attention to the second column. Fortuitously the $(2,2)$ entry is already a leading one in that row, and we proceed to add multiples of that second row to clear out the other entries in the second column:



$$ begin{bmatrix}
1 & 0 & 1 & 0 & 1 & 0 \
0 & textbf 1 & 1 & 1 & 0 & 0 \
0 & 0 & -2 & -1 & -1 & 1
end{bmatrix} $$



Finally we scale the leading entry in the third row, so that the $(3,3)$ value becomes a leading one:



$$ begin{bmatrix}
1 & 0 & 1 & 0 & 1 & 0 \
0 & 1 & 1 & 1 & 0 & 0 \
0 & 0 & textbf 1 & 1/2 & 1/2 & -1/2
end{bmatrix} $$



Adding multiples of this new third row allows us to zero-out the other entries in the third column:



$$ begin{bmatrix}
1 & 0 & 0 & -1/2 & 1/2 & 1/2 \
0 & 1 & 0 & 1/2 & -1/2 & 1/2 \
0 & 0 & textbf 1 & 1/2 & 1/2 & -1/2
end{bmatrix} $$



Now we have the reduced row echelon form, $[I|A^{-1}]$. To verify we can multiply:



$$ begin{bmatrix} 0 & 1 & 1 \ 1 & 0 & 1 \ 1 & 1 & 0 end{bmatrix} ;
begin{bmatrix} -1/2 & 1/2 & 1/2 \ 1/2 & -1/2 & 1/2 \ 1/2 & 1/2 & -1/2 end{bmatrix} = I $$






share|cite|improve this answer











$endgroup$













  • $begingroup$
    OP's question is what to do when $0$ is encountered on the main diagonal and pivoting on it is not possible
    $endgroup$
    – gt6989b
    Mar 12 at 4:11






  • 1




    $begingroup$
    @gt6989b: Pivoting is precisely what one needs to do to put a nonzero entry into the current column and row. If the entries at that position and below it in the current column are all zero, THEN one cannot produce a leading one there and the matrix is not invertible. However an exchange of the current row with one below it is an elementary row operation and should succeed in producing a nonzero entry provided the matrix is invertible (up to floating point precision). All this is standard in an undergraduate discrete math or linear algebra class.
    $endgroup$
    – hardmath
    Mar 12 at 4:14












  • $begingroup$
    @hardmath If I interchange rows of a matrix and then invert it, would the result equal to the invert of the original matrix??
    $endgroup$
    – Kid_Learning_C
    Mar 12 at 15:31










  • $begingroup$
    @hardmath We need the invert of the original matrix. The invert of the permuted matrix is not what we need. Is there a relationship between the 2 inverts then?
    $endgroup$
    – Kid_Learning_C
    Mar 12 at 15:41










  • $begingroup$
    That's what I showed you above. When you obtain the reduced row echelon form of the matrix $[A|I]$, using any sequence of elementary row operations including as necessary row exchanges, the result will be $[I|A^{-1}]$ whenever $A$ is an invertible matrix.
    $endgroup$
    – hardmath
    Mar 12 at 15:47
















1












$begingroup$

Create an "augmented matrix" of the form $[A|I]$ and use elementary row operations to put this into reduced row echelon form. The result will be $[I|A^{-1}]$ if $A$ is an invertible matrix.





Let's do an example, mainly for the benefit of future Readers. The OP mentions a "method [to] invert a matrix through Gaussian elimination ... assuming that there are no zero on the diagonal." It follows that the code already implements two of the three elementary row operations, multiplying a row by a nonzero scalar and adding (equiv. subtracting) a multiple of one row to (from) another row. What else is needed when we confront a zero "on the diagonal" (i.e. in a matrix position where we want a leading one) is the operation of exchanging two rows.



Using this operation does not "break" the matrix any more than do the other two kinds of elementary row operations. Yes, it changes the matrix, but the key here is that we want to change a matrix into its reduced row echelon form.



In particular the goal of inverting matrix $A$ will be achieved by finding the reduced row echelon form of the augmented matrix $[A|I]$. The required form can be found by an algorithmically determined sequence of elementary row operations, but the sequence is not unique. When no row exchanges are used, the method is called "Gaussian elimination without pivoting". This is possible precisely when the leading principal minors are nonzero, as this is another way of saying that zeros will not appear "on the diagonal," where we want the reduced row echelon form to have leading ones.



That is, elimination algorithms which use row exchanges commonly call those steps "pivoting". How hard one works to choose the "pivot" row is subject to a tradeoff in numeric stability, but in what follows we will present the case of partial pivoting (PDF). This amounts to taking the columns in whatever order they are presented (full pivoting meaning a strategy that might introduce leading ones in a wider consideration of all columns not yet reduced).



For our example we take:



$$ A = begin{bmatrix} 0 & 1 & 1 \ 1 & 0 & 1 \ 1 & 1 & 0 end{bmatrix} $$



Thus the augmented matrix we will reduce is $[A|I]$:



$$ begin{bmatrix}
textbf 0 & 1 & 1 & 1 & 0 & 0 \
1 & 0 & 1 & 0 & 1 & 0 \
1 & 1 & 0 & 0 & 0 & 1
end{bmatrix} $$



A reasonable rule when a zero appears where a leading one is wanted is to swap the row with the zero and one below it with an entry of largest maximum absolute value in the same column. Here we immediately start with a zero in the $(1,1)$ position, and below that both entries in that first column have the maximum absolute value $1$. So let's exchange the first and the second rows:



$$ begin{bmatrix}
textbf 1 & 0 & 1 & 0 & 1 & 0 \
0 & 1 & 1 & 1 & 0 & 0 \
1 & 1 & 0 & 0 & 0 & 1
end{bmatrix} $$



It is unnecessary to "scale" the leading entry in the current row (we already have a leading one). We proceed to clear the entries below it using addition/subtraction of a multiple of the first row to each other rows:



$$ begin{bmatrix}
textbf 1 & 0 & 1 & 0 & 1 & 0 \
0 & 1 & 1 & 1 & 0 & 0 \
0 & 1 & -1 & 0 & -1 & 1
end{bmatrix} $$



Having reduced the first column, we turn our attention to the second column. Fortuitously the $(2,2)$ entry is already a leading one in that row, and we proceed to add multiples of that second row to clear out the other entries in the second column:



$$ begin{bmatrix}
1 & 0 & 1 & 0 & 1 & 0 \
0 & textbf 1 & 1 & 1 & 0 & 0 \
0 & 0 & -2 & -1 & -1 & 1
end{bmatrix} $$



Finally we scale the leading entry in the third row, so that the $(3,3)$ value becomes a leading one:



$$ begin{bmatrix}
1 & 0 & 1 & 0 & 1 & 0 \
0 & 1 & 1 & 1 & 0 & 0 \
0 & 0 & textbf 1 & 1/2 & 1/2 & -1/2
end{bmatrix} $$



Adding multiples of this new third row allows us to zero-out the other entries in the third column:



$$ begin{bmatrix}
1 & 0 & 0 & -1/2 & 1/2 & 1/2 \
0 & 1 & 0 & 1/2 & -1/2 & 1/2 \
0 & 0 & textbf 1 & 1/2 & 1/2 & -1/2
end{bmatrix} $$



Now we have the reduced row echelon form, $[I|A^{-1}]$. To verify we can multiply:



$$ begin{bmatrix} 0 & 1 & 1 \ 1 & 0 & 1 \ 1 & 1 & 0 end{bmatrix} ;
begin{bmatrix} -1/2 & 1/2 & 1/2 \ 1/2 & -1/2 & 1/2 \ 1/2 & 1/2 & -1/2 end{bmatrix} = I $$






share|cite|improve this answer











$endgroup$













  • $begingroup$
    OP's question is what to do when $0$ is encountered on the main diagonal and pivoting on it is not possible
    $endgroup$
    – gt6989b
    Mar 12 at 4:11






  • 1




    $begingroup$
    @gt6989b: Pivoting is precisely what one needs to do to put a nonzero entry into the current column and row. If the entries at that position and below it in the current column are all zero, THEN one cannot produce a leading one there and the matrix is not invertible. However an exchange of the current row with one below it is an elementary row operation and should succeed in producing a nonzero entry provided the matrix is invertible (up to floating point precision). All this is standard in an undergraduate discrete math or linear algebra class.
    $endgroup$
    – hardmath
    Mar 12 at 4:14












  • $begingroup$
    @hardmath If I interchange rows of a matrix and then invert it, would the result equal to the invert of the original matrix??
    $endgroup$
    – Kid_Learning_C
    Mar 12 at 15:31










  • $begingroup$
    @hardmath We need the invert of the original matrix. The invert of the permuted matrix is not what we need. Is there a relationship between the 2 inverts then?
    $endgroup$
    – Kid_Learning_C
    Mar 12 at 15:41










  • $begingroup$
    That's what I showed you above. When you obtain the reduced row echelon form of the matrix $[A|I]$, using any sequence of elementary row operations including as necessary row exchanges, the result will be $[I|A^{-1}]$ whenever $A$ is an invertible matrix.
    $endgroup$
    – hardmath
    Mar 12 at 15:47














1












1








1





$begingroup$

Create an "augmented matrix" of the form $[A|I]$ and use elementary row operations to put this into reduced row echelon form. The result will be $[I|A^{-1}]$ if $A$ is an invertible matrix.





Let's do an example, mainly for the benefit of future Readers. The OP mentions a "method [to] invert a matrix through Gaussian elimination ... assuming that there are no zero on the diagonal." It follows that the code already implements two of the three elementary row operations, multiplying a row by a nonzero scalar and adding (equiv. subtracting) a multiple of one row to (from) another row. What else is needed when we confront a zero "on the diagonal" (i.e. in a matrix position where we want a leading one) is the operation of exchanging two rows.



Using this operation does not "break" the matrix any more than do the other two kinds of elementary row operations. Yes, it changes the matrix, but the key here is that we want to change a matrix into its reduced row echelon form.



In particular the goal of inverting matrix $A$ will be achieved by finding the reduced row echelon form of the augmented matrix $[A|I]$. The required form can be found by an algorithmically determined sequence of elementary row operations, but the sequence is not unique. When no row exchanges are used, the method is called "Gaussian elimination without pivoting". This is possible precisely when the leading principal minors are nonzero, as this is another way of saying that zeros will not appear "on the diagonal," where we want the reduced row echelon form to have leading ones.



Elimination algorithms which use row exchanges commonly call those steps "pivoting". How hard one works to choose the "pivot" row involves a tradeoff between numerical stability and cost; in what follows we present the case of partial pivoting. This amounts to taking the columns in whatever order they are presented (full pivoting means a strategy that may introduce leading ones by considering all columns not yet reduced).
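The partial-pivoting choice can be sketched in a few lines. This is an illustrative helper (the name `choose_pivot` and the list-of-lists representation are assumptions, not from any particular library): among the current row and the rows below it, pick the one whose entry in the current column is largest in absolute value.

```python
# Partial pivoting: among the current row and the rows below it, choose
# the one whose entry in the current column has the largest absolute value.
# `rows` is a hypothetical list-of-lists matrix; `col` is the column index.
def choose_pivot(rows, col):
    return max(range(col, len(rows)), key=lambda r: abs(rows[r][col]))

# For the example matrix used below, column 0 starts with a zero, so the
# pivot row is row 1 (the first row with |entry| = 1 in that column).
rows = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
print(choose_pivot(rows, 0))  # → 1
```

Note that Python's `max` breaks ties by taking the first maximal candidate, which matches the "swap with the nearest qualifying row" convention used in the example.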



For our example we take:



$$ A = \begin{bmatrix} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{bmatrix} $$



Thus the augmented matrix we will reduce is $[A|I]$:



$$ \begin{bmatrix}
\textbf{0} & 1 & 1 & 1 & 0 & 0 \\
1 & 0 & 1 & 0 & 1 & 0 \\
1 & 1 & 0 & 0 & 0 & 1
\end{bmatrix} $$



A reasonable rule, when a zero appears where a leading one is wanted, is to swap that row with a row below it whose entry in the same column has the largest absolute value. Here we immediately start with a zero in the $(1,1)$ position, and below it both entries in the first column have the same absolute value $1$. So let's exchange the first and second rows:



$$ \begin{bmatrix}
\textbf{1} & 0 & 1 & 0 & 1 & 0 \\
0 & 1 & 1 & 1 & 0 & 0 \\
1 & 1 & 0 & 0 & 0 & 1
\end{bmatrix} $$



It is unnecessary to "scale" the leading entry in the current row (we already have a leading one). We proceed to clear the entries below it by subtracting a multiple of the first row from each of the other rows:



$$ \begin{bmatrix}
\textbf{1} & 0 & 1 & 0 & 1 & 0 \\
0 & 1 & 1 & 1 & 0 & 0 \\
0 & 1 & -1 & 0 & -1 & 1
\end{bmatrix} $$



Having reduced the first column, we turn our attention to the second column. Fortuitously the $(2,2)$ entry is already a leading one in that row, and we proceed to add multiples of that second row to clear out the other entries in the second column:



$$ \begin{bmatrix}
1 & 0 & 1 & 0 & 1 & 0 \\
0 & \textbf{1} & 1 & 1 & 0 & 0 \\
0 & 0 & -2 & -1 & -1 & 1
\end{bmatrix} $$



Finally we scale the leading entry in the third row, so that the $(3,3)$ value becomes a leading one:



$$ \begin{bmatrix}
1 & 0 & 1 & 0 & 1 & 0 \\
0 & 1 & 1 & 1 & 0 & 0 \\
0 & 0 & \textbf{1} & 1/2 & 1/2 & -1/2
\end{bmatrix} $$



Adding multiples of this new third row allows us to zero-out the other entries in the third column:



$$ \begin{bmatrix}
1 & 0 & 0 & -1/2 & 1/2 & 1/2 \\
0 & 1 & 0 & 1/2 & -1/2 & 1/2 \\
0 & 0 & \textbf{1} & 1/2 & 1/2 & -1/2
\end{bmatrix} $$



Now we have the reduced row echelon form, $[I|A^{-1}]$. To verify we can multiply:



$$ \begin{bmatrix} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{bmatrix} \;
\begin{bmatrix} -1/2 & 1/2 & 1/2 \\ 1/2 & -1/2 & 1/2 \\ 1/2 & 1/2 & -1/2 \end{bmatrix} = I $$
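The whole procedure above can be collected into a short routine. This is a minimal sketch, not production code: plain Python lists of floats, a hypothetical `invert` name, and a crude singularity test via a tolerance `eps` are all assumptions of the sketch.

```python
# Gauss-Jordan elimination with partial pivoting, applied to the
# augmented matrix [A | I]; the right half of the result is A^{-1}.
def invert(A, eps=1e-12):
    n = len(A)
    # Build the augmented matrix [A | I].
    M = [list(map(float, row)) + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        # Partial pivoting: among row `col` and the rows below it, pick
        # the row with the largest |entry| in this column, and swap it up.
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        if abs(M[pivot][col]) < eps:
            raise ValueError("matrix is singular (no usable pivot)")
        M[col], M[pivot] = M[pivot], M[col]
        # Scale the pivot row so the leading entry becomes 1.
        p = M[col][col]
        M[col] = [x / p for x in M[col]]
        # Clear every other entry in this column (above and below).
        for r in range(n):
            if r != col:
                factor = M[r][col]
                M[r] = [x - factor * y for x, y in zip(M[r], M[col])]
    # M is now [I | A^{-1}]; return the right half.
    return [row[n:] for row in M]

A = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
print(invert(A))  # each entry is ±1/2, matching the worked example
```

Running this on the example matrix reproduces the inverse found above, and the `eps` check is where a genuinely non-invertible matrix is detected: if no row at or below the current one supplies a nonzero pivot, no sequence of row exchanges can rescue the elimination.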






Create an "augmented matrix" of the form $[A|I]$ and use elementary row operations to put this into reduced row echelon form. The result will be $[I|A^{-1}]$ if $A$ is an invertible matrix.





Let's do an example, mainly for the benefit of future Readers. The OP mentions a "method [to] invert a matrix through Gaussian elimination ... assuming that there are no zero on the diagonal." It follows that the code already implements two of the three elementary row operations, multiplying a row by a nonzero scalar and adding (equiv. subtracting) a multiple of one row to (from) another row. What else is needed when we confront a zero "on the diagonal" (i.e. in a matrix position where we want a leading one) is the operation of exchanging two rows.



Using this operation does not "break" the matrix any more than do the other two kinds of elementary row operations. Yes, it changes the matrix, but the key here is that we want to change a matrix into its reduced row echelon form.



In particular the goal of inverting matrix $A$ will be achieved by finding the reduced row echelon form of the augmented matrix $[A|I]$. The required form can be found by an algorithmically determined sequence of elementary row operations, but the sequence is not unique. When no row exchanges are used, the method is called "Gaussian elimination without pivoting". This is possible precisely when the leading principal minors are nonzero, as this is another way of saying that zeros will not appear "on the diagonal," where we want the reduced row echelon form to have leading ones.



That is, elimination algorithms which use row exchanges commonly call those steps "pivoting". How hard one works to choose the "pivot" row is subject to a tradeoff in numeric stability, but in what follows we will present the case of partial pivoting (PDF). This amounts to taking the columns in whatever order they are presented (full pivoting meaning a strategy that might introduce leading ones in a wider consideration of all columns not yet reduced).



For our example we take:



$$ A = begin{bmatrix} 0 & 1 & 1 \ 1 & 0 & 1 \ 1 & 1 & 0 end{bmatrix} $$



Thus the augmented matrix we will reduce is $[A|I]$:



$$ begin{bmatrix}
textbf 0 & 1 & 1 & 1 & 0 & 0 \
1 & 0 & 1 & 0 & 1 & 0 \
1 & 1 & 0 & 0 & 0 & 1
end{bmatrix} $$



A reasonable rule when a zero appears where a leading one is wanted is to swap the row with the zero and one below it with an entry of largest maximum absolute value in the same column. Here we immediately start with a zero in the $(1,1)$ position, and below that both entries in that first column have the maximum absolute value $1$. So let's exchange the first and the second rows:



$$ begin{bmatrix}
textbf 1 & 0 & 1 & 0 & 1 & 0 \
0 & 1 & 1 & 1 & 0 & 0 \
1 & 1 & 0 & 0 & 0 & 1
end{bmatrix} $$



It is unnecessary to "scale" the leading entry in the current row (we already have a leading one). We proceed to clear the entries below it using addition/subtraction of a multiple of the first row to each other rows:



$$ begin{bmatrix}
textbf 1 & 0 & 1 & 0 & 1 & 0 \
0 & 1 & 1 & 1 & 0 & 0 \
0 & 1 & -1 & 0 & -1 & 1
end{bmatrix} $$



Having reduced the first column, we turn our attention to the second column. Fortuitously the $(2,2)$ entry is already a leading one in that row, and we proceed to add multiples of that second row to clear out the other entries in the second column:



$$ begin{bmatrix}
1 & 0 & 1 & 0 & 1 & 0 \
0 & textbf 1 & 1 & 1 & 0 & 0 \
0 & 0 & -2 & -1 & -1 & 1
end{bmatrix} $$



Finally we scale the leading entry in the third row, so that the $(3,3)$ value becomes a leading one:



$$ begin{bmatrix}
1 & 0 & 1 & 0 & 1 & 0 \
0 & 1 & 1 & 1 & 0 & 0 \
0 & 0 & textbf 1 & 1/2 & 1/2 & -1/2
end{bmatrix} $$



Adding multiples of this new third row allows us to zero-out the other entries in the third column:



$$ begin{bmatrix}
1 & 0 & 0 & -1/2 & 1/2 & 1/2 \
0 & 1 & 0 & 1/2 & -1/2 & 1/2 \
0 & 0 & textbf 1 & 1/2 & 1/2 & -1/2
end{bmatrix} $$



Now we have the reduced row echelon form, $[I|A^{-1}]$. To verify we can multiply:



$$ begin{bmatrix} 0 & 1 & 1 \ 1 & 0 & 1 \ 1 & 1 & 0 end{bmatrix} ;
begin{bmatrix} -1/2 & 1/2 & 1/2 \ 1/2 & -1/2 & 1/2 \ 1/2 & 1/2 & -1/2 end{bmatrix} = I $$







share|cite|improve this answer














share|cite|improve this answer



share|cite|improve this answer








edited Mar 13 at 18:13

























answered Mar 12 at 4:10









hardmath












  • OP's question is what to do when $0$ is encountered on the main diagonal and pivoting on it is not possible.
    – gt6989b
    Mar 12 at 4:11






  • @gt6989b: Pivoting is precisely what one needs to do to put a nonzero entry into the current column and row. If the entries at that position and below it in the current column are all zero, THEN one cannot produce a leading one there and the matrix is not invertible. However an exchange of the current row with one below it is an elementary row operation and should succeed in producing a nonzero entry provided the matrix is invertible (up to floating point precision). All this is standard in an undergraduate discrete math or linear algebra class.
    – hardmath
    Mar 12 at 4:14












  • @hardmath If I interchange rows of a matrix and then invert it, would the result be equal to the inverse of the original matrix?
    – Kid_Learning_C
    Mar 12 at 15:31










  • @hardmath We need the inverse of the original matrix. The inverse of the permuted matrix is not what we need. Is there a relationship between the two inverses, then?
    – Kid_Learning_C
    Mar 12 at 15:41










  • That's what I showed you above. When you obtain the reduced row echelon form of the matrix $[A|I]$, using any sequence of elementary row operations including as necessary row exchanges, the result will be $[I|A^{-1}]$ whenever $A$ is an invertible matrix.
    – hardmath
    Mar 12 at 15:47


















Kid_Learning_C is a new contributor. Be nice, and check out our Code of Conduct.









