
Determinant of a block lower triangular matrix


I'm trying to prove the following: Let $A$ be a $k\times k$ matrix, let $D$ have size $n\times n$, and $C$ have size $n\times k$. Then,

$$\det\left(\begin{array}{cc}
A&0\\
C&D
\end{array}\right) = \det(A)\det(D).$$

Can I just say that $AD - 0\cdot C = AD$, and I'm done?
  • 14 – Arturo Magidin, Oct 24 '11 at 4:30:
    No, you cannot just say that; it doesn't even make sense, because $AD$ cannot be computed: $A$ has size $k\times k$, and $D$ has size $n\times n$. Unless $n=k$, $AD$ doesn't make sense.

  • – Buddy Holly, Oct 24 '11 at 4:32:
    Then, what can I do?

  • 4 – Arturo Magidin, Oct 24 '11 at 4:34:
    Depends on your definition of the determinant. If it is a sum over all permutations of (in this case) $n+k$, then you should figure out which terms you know for sure are equal to $0$; the formula will drop out of that if you are careful enough. If your definition of determinant is via expansion by minors, then I suggest expanding along the first row and using induction on $k$.

  • – Buddy Holly, Oct 24 '11 at 4:44:
    Can you show how to do it both ways?

  • 3 – Arturo Magidin, Oct 24 '11 at 13:41:
    @BuddyHolly: I said that it depends on what your definition of determinant is, and sketched two possibilities. Rather than even tell us what your definition of determinant is, you replied by asking me to do two proofs for you; why should I make a double effort when you seem unwilling to make even the small effort required to tell us what your definition is?

linear-algebra matrices determinant block-matrices

edited 2 days ago by Rodrigo de Azevedo · asked Oct 24 '11 at 4:22 by Buddy Holly

6 Answers

27












If $A$ is singular, its rows are linearly dependent, hence the rows of the entire matrix are linearly dependent, hence both sides of the equation vanish.

If $A$ is not singular, we have

$$\pmatrix{I&0\\-CA^{-1}&I}\pmatrix{A&0\\C&D}=\pmatrix{A&0\\0&D}\;.$$

The determinants of the two new matrices are perhaps easier to derive from the Laplace expansion than that of the entire matrix. They are $1$ and $\det A \det D$, respectively, and the result follows.

Another way to express this is that if $A$ is not singular you can get rid of $C$ by Gaussian elimination.
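[Editor's note: not part of the original answer.] Both the elimination identity and the resulting determinant formula can be sanity-checked numerically with NumPy on random blocks; the block sizes `k, n` below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
k, n = 3, 4
A = rng.standard_normal((k, k))
C = rng.standard_normal((n, k))
D = rng.standard_normal((n, n))

# M = [[A, 0], [C, D]]
M = np.block([[A, np.zeros((k, n))], [C, D]])

# Left-multiplying by [[I, 0], [-C A^{-1}, I]] zeroes out the C block.
E = np.block([[np.eye(k), np.zeros((k, n))],
              [-C @ np.linalg.inv(A), np.eye(n)]])
blkdiag = E @ M  # equals [[A, 0], [0, D]] up to rounding

ok_elim = np.allclose(blkdiag, np.block([[A, np.zeros((k, n))],
                                         [np.zeros((n, k)), D]]))
ok_det = np.isclose(np.linalg.det(M), np.linalg.det(A) * np.linalg.det(D))
```

A random $A$ is invertible with probability one, which is why the sketch skips the singular case handled separately in the answer.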



















  • – larry, Apr 1 '14 at 3:51:
    How do you know that the first matrix has a determinant of 1?

  • 2 – Emily, Apr 23 '14 at 20:42:
    @larry It is triangular. Determinants of triangular matrices are the products of their diagonals. The diagonals of the first matrix are all 1.

  • – codeomnitrix, Aug 15 '14 at 9:57:
    How is the second determinant $\det(A)\det(D)$?

  • 2 – John Hughes, Oct 12 '14 at 20:06:
    It's the THIRD matrix whose determinant is $\det(A)\det(D)$.

  • 10 – Marc van Leeuwen, Apr 12 '15 at 6:11:
    This answer falls into the trap of proving an elementary result (one of the first one should know about determinants, immediately falling out of the Leibniz formula) by using statements that are special cases or consequences of the result to be proved. The result about triangular matrices that @Arkamis refers to can be obtained by iterating a decomposition into block-triangular matrices until hitting $1\times1$ blocks. But more fundamentally, the RHS matrix is just a special case of a block triangular matrix, and proving its determinant is $\det A\det D$ is not really any easier than the OP.





















17





+100







As @user153012 is asking for a proof in full detail, here is a brute-force approach using an explicit expression of a determinant of an $n$ by $n$ matrix, say $A = (a[i,j])$, $$\det A = \sum_{\sigma\in S_n}\operatorname{sgn}\sigma \prod_i a[i,\sigma(i)],$$ where $S_n$ is the symmetric group on $[n] = \{1,\dots, n\}$ and $\operatorname{sgn}\sigma$ denotes the signature of $\sigma$.

In matrix $$B = \left(\begin{array}{cc}
A&0\\
C&D
\end{array}\right),$$ we have $$b[i,j] = \begin{cases}a[i,j] & \text{if }i \le k, j \le k;\\ d[i-k, j-k] & \text{if }i > k, j > k; \\ 0 & \text{if }i \le k, j > k; \\ c[i-k,j] & \text{otherwise}.\end{cases}$$ Observe in $$\det B = \sum_{\sigma\in S_{n+k}}\operatorname{sgn}\sigma\prod_i b[i, \sigma(i)],$$ if $\sigma(i) = j$ such that $i\le k$ and $j > k$, then the corresponding summand $\prod_i b[i,\sigma(i)]$ is $0$. Any permutation $\sigma\in S_{n+k}$ for which no such $i$ and $j$ exist can be uniquely "decomposed" into two permutations, $\pi$ and $\tau$, where $\pi\in S_k$ and $\tau\in S_n$ such that $\sigma(i) = \pi(i)$ for $i \le k$ and $\sigma(k+i) = k+\tau(i)$ for $i \le n$. Moreover, we have $\operatorname{sgn}\sigma = \operatorname{sgn}\pi\operatorname{sgn}\tau$. Denote the collection of such permutations by $S_n'$. Therefore, we can write $$\begin{eqnarray}\det B &=& \sum_{\sigma\in S_n'}\operatorname{sgn}\sigma\prod_{i=1}^k b[i,\sigma(i)]\prod_{i=k+1}^{k+n} b[i,\sigma(i)] \\ &=& \sum_{\sigma\in S_n'}\operatorname{sgn}\sigma\prod_{i=1}^k a[i,\sigma(i)]\prod_{i=1}^n d[i,\sigma(i+k)-k] \\ &=& \sum_{\pi\in S_k,\tau\in S_n}\operatorname{sgn}\pi\operatorname{sgn}\tau\prod_{i=1}^k a[i,\pi(i)]\prod_{i=1}^n d[i,\tau(i)] \\ &=& \sum_{\pi\in S_k}\operatorname{sgn}\pi\prod_{i=1}^k a[i,\pi(i)]\sum_{\tau\in S_{n}}\operatorname{sgn}\tau\prod_{i=1}^n d[i,\tau(i)] \\ &=& \det A\det D.\end{eqnarray}$$ QED.

Update As @Marc van Leeuwen mentioned in the comment, a similar formula holds for permanents. The proof is basically the same as the proof for determinants, except one has to drop off all those signatures of permutations.
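[Editor's note: not part of the original answer.] The Leibniz sum used above is small enough to evaluate directly for tiny blocks. This sketch implements the permutation sum verbatim (signature via inversion counting) and confirms $\det B = \det A\,\det D$ exactly on random integer blocks:

```python
from itertools import permutations
from math import prod
import random

def sign(p):
    # Signature of a permutation tuple via inversion count (O(m^2), fine for tiny m).
    return (-1) ** sum(p[i] > p[j]
                       for i in range(len(p)) for j in range(i + 1, len(p)))

def det_leibniz(m):
    # Leibniz formula: sum over all permutations of sgn(p) * prod_i m[i][p[i]].
    size = len(m)
    return sum(sign(p) * prod(m[i][p[i]] for i in range(size))
               for p in permutations(range(size)))

random.seed(1)
k, n = 2, 3
A = [[random.randint(-3, 3) for _ in range(k)] for _ in range(k)]
C = [[random.randint(-3, 3) for _ in range(k)] for _ in range(n)]
D = [[random.randint(-3, 3) for _ in range(n)] for _ in range(n)]

# B = [[A, 0], [C, D]], assembled entrywise exactly as in the case analysis above.
B = [[A[i][j] if i < k and j < k
      else 0 if i < k
      else C[i - k][j] if j < k
      else D[i - k][j - k]
      for j in range(k + n)] for i in range(k + n)]

check = det_leibniz(B) == det_leibniz(A) * det_leibniz(D)
```

Integer entries make the comparison exact, with no floating-point tolerance needed.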



























    12












This is a fundamental result about determinants, and like most of such results it holds for matrices with entries in any commutative (unitary) ring. It is therefore good to have a proof that does not rely on the coefficients being in a field; I will use the Leibniz formula as definition of the determinant rather than a characterisation as alternating $n$-linear form. (And my apologies to those who find that distasteful; for some purposes using the definition is really best.)

Writing for a permutation $\sigma$ and a matrix $M$ the abbreviation $\def\sg{\operatorname{sg}}M[\sigma]=\sg(\sigma)\prod_iM_{i,\sigma(i)}$, the Leibniz formula says, for any $m\times m$ matrix $M$,
$$
\det(M)=\sum_{\sigma\in S_m}M[\sigma].
$$
The result is based on the following simple fact about symmetric groups:

The subgroup of $S_{k+n}$ of permutations permuting the first $k$ elements among each other is canonically isomorphic to $S_k\times S_n$, and if $\sigma\in S_{k+n}$ corresponds to $(\pi,\rho)\in S_k\times S_n$ then $\sg(\sigma)=\sg(\pi)\sg(\rho)$.

In fact this is basically just saying that if the first $k$ elements are permuted among each other, then so are the remaining $n$ elements, and the sign of the whole permutation is the product of the signs of its restrictions to those two subsets.

Now if $M=\bigl(\begin{smallmatrix}A&0\\C&D\end{smallmatrix}\bigr)$, note that $M[\sigma]=0$ unless $\sigma$ permutes the first $k$ indices among each other. From this it follows that
$$
\det(M)=\sum_{\sigma\in S_{k+n}}M[\sigma]
=\sum_{(\pi,\rho)\in S_k\times S_n}A[\pi]D[\rho]
=\left(\sum_{\pi\in S_k}A[\pi]\right)\left(\sum_{\rho\in S_n}D[\rho]\right)
=\det A\det D.
$$

Alternatively, if one is willing to assume the property $\det(MN)=\det(M)\det(N)$ (not really easier than the one to be proved here, but maybe better known), and if one considers the special cases where either $A=I$ or $D=I$ to be clear (because one can obtain the result in these cases by repeated Laplace expansion by rows respectively by columns), then one can employ any decomposition of the form
$$
\pmatrix{A&0\\C&D} = \pmatrix{A&0\\C_0&I} \pmatrix{I&0\\C_1&D}
\qquad\text{with $C=C_0+C_1$}
$$
(for instance with one of $C_0,C_1$ equal to zero) to obtain the desired result.
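[Editor's note: not part of the original answer.] The two-factor decomposition at the end can be checked numerically; the sketch below takes the simplest choice $C_0=C$, $C_1=0$, and verifies both the factorisation and that each factor's determinant matches the corresponding diagonal block:

```python
import numpy as np

rng = np.random.default_rng(2)
k, n = 3, 2
A = rng.standard_normal((k, k))
C = rng.standard_normal((n, k))
D = rng.standard_normal((n, n))

M = np.block([[A, np.zeros((k, n))], [C, D]])

# Decomposition with C0 = C and C1 = 0:
# [[A, 0], [C, D]] = [[A, 0], [C, I]] @ [[I, 0], [0, D]]
F1 = np.block([[A, np.zeros((k, n))], [C, np.eye(n)]])
F2 = np.block([[np.eye(k), np.zeros((k, n))], [np.zeros((n, k)), D]])

ok_factor = np.allclose(M, F1 @ F2)
ok_dets = (np.isclose(np.linalg.det(F1), np.linalg.det(A)) and
           np.isclose(np.linalg.det(F2), np.linalg.det(D)))
```

With multiplicativity of the determinant, these two facts together give $\det M = \det A\,\det D$.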



























      4












Yet another proof, in the case of fields, can be obtained if you are willing to enlarge your field to an algebraically closed one. Then any matrix can be put in (lower) triangular form, with the eigenvalues on the diagonal. In particular, there are invertible matrices $S, T$ of appropriate sizes such that $S^{-1} A S$ and $T^{-1} D T$ are in lower triangular form. Then
$$
\begin{bmatrix}
S& 0\\
0 & T\\
\end{bmatrix}^{-1}
\begin{bmatrix}
A&0\\
C&D
\end{bmatrix}
\begin{bmatrix}
S& 0\\
0 & T\\
\end{bmatrix}
=
\begin{bmatrix}
S^{-1} A S&0\\
T^{-1} C S&T^{-1} D T
\end{bmatrix}
$$
is in lower triangular form, and the rest is more or less straightforward. Clearly I rely on the multiplicativity of determinants here, or on the fact that the determinant is invariant under conjugation.
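[Editor's note: not part of the original answer.] A quick numerical version of this conjugation, assuming the random blocks are diagonalisable (generically true): eigendecomposition stands in for triangularisation, since a diagonal block is a special case of a lower triangular one, and the product of the diagonal entries of the conjugated matrix should equal $\det A\,\det D$:

```python
import numpy as np

rng = np.random.default_rng(3)
k, n = 2, 3
A = rng.standard_normal((k, k))
C = rng.standard_normal((n, k))
D = rng.standard_normal((n, n))

# Eigenvector matrices play the role of S and T (complex in general).
_, S = np.linalg.eig(A)
_, T = np.linalg.eig(D)

M = np.block([[A, np.zeros((k, n))], [C, D]]).astype(complex)
P = np.block([[S, np.zeros((k, n))], [np.zeros((n, k)), T]])
conj = np.linalg.inv(P) @ M @ P  # [[S^-1 A S, 0], [T^-1 C S, T^-1 D T]]

# The upper-right block stays zero, and the diagonal now carries the
# eigenvalues of A followed by those of D.
ok_zero = np.allclose(conj[:k, k:], 0)
ok_det = np.isclose(np.prod(np.diag(conj)),
                    np.linalg.det(A) * np.linalg.det(D))
```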















  • 1 – Marc van Leeuwen, Apr 13 '15 at 8:22:
    Wow! Using algebraically closed fields to prove an elementary property of determinants. Something similar to my comment on the answer by joriki would apply here, although I admit that in principle one could argue that the determinant of a triangular matrix is the product of its diagonal entries without either using the result to be proved here, or an argument (Leibniz formula) that would just as easily prove it directly. Notably, repeated Laplace expansion does the trick. But then wouldn't you agree a two-factor decomposition as at the end of my answer does the job more economically?

  • 2 – Andreas Caranti, Apr 13 '15 at 8:57:
    @MarcvanLeeuwen, I had upvoted your answer, which is definitely the best of the lot. Still, I find that putting matrices in triangular form (admittedly over an a.c. field) is a generally useful technique (for instance in the proof of Cayley-Hamilton as given by, if I remember correctly, Lang in one of his books) and I couldn't resist recording this version (no doubt courting downvotes).

  • – User001, Oct 2 '15 at 0:31:
    Such a cool proof... thanks so much, @AndreasCaranti :-)



















      2












Here is a sketch of what I consider a simpler direct proof (assuming complete eigenspaces; it can be adapted for degenerate eigenspaces). Let's assume there is no zero eigenvalue; otherwise it's easy to find the eigenvector that the full matrix sends to zero, and so the determinant must be zero.

We're going to prove that the eigenvalues are the same. Then the fact that the determinant is the product of the eigenvalues finishes the proof.

Consider the eigenvectors of $D$. Now (pre)pad those with zeros (that is, create a longer vector with zeros in the first few entries). These are eigenvectors of the full matrix.

Now consider eigenvectors of $A$. We're going to postpad with something; we have to figure out what it is. Let $\lambda$ be an eigenvalue of $A$ and $v_1$ its eigenvector. We'll pad it with $v_2$, so $v$ is made up of the vectors $v_1$ and $v_2$. The full matrix times $v$ has its first set of entries equal to $\lambda v_1$. Its second set of entries is $C v_1 + D v_2$. Set this equal to $\lambda v_2$ and solve for $v_2$: as long as $\lambda$ is not also an eigenvalue of $D$, the system $(\lambda I - D)v_2 = C v_1$ is solvable. Now we have an eigenvector of the full matrix that corresponds to $\lambda$.
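[Editor's note: not part of the original answer.] Both padding constructions can be verified numerically; with generic random blocks, the eigenvalues of $A$ and $D$ are distinct, so the solve step below succeeds:

```python
import numpy as np

rng = np.random.default_rng(4)
k, n = 2, 3
A = rng.standard_normal((k, k))
C = rng.standard_normal((n, k))
D = rng.standard_normal((n, n))
M = np.block([[A, np.zeros((k, n))], [C, D]]).astype(complex)

# An eigenvector of D, prepended with zeros, is an eigenvector of M.
mu, W = np.linalg.eig(D)
w = np.concatenate([np.zeros(k), W[:, 0]])
ok_pad_D = np.allclose(M @ w, mu[0] * w)

# For an eigenpair (lam, v1) of A, postpad with v2 solving
# (lam*I - D) v2 = C v1 (solvable when lam is not an eigenvalue of D).
lam, V = np.linalg.eig(A)
v1 = V[:, 0]
v2 = np.linalg.solve(lam[0] * np.eye(n) - D, C @ v1)
v = np.concatenate([v1, v2])
ok_pad_A = np.allclose(M @ v, lam[0] * v)
```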



























        1












Here is an approach that does not rely on any explicit definition of the determinant, nor any concept of the inverse. Instead, we can start with the three basic properties that the determinant function should satisfy. These three properties are:

(1) $\mathrm{Det}(I) = 1$

(2) The $\mathrm{Det}()$ function is multilinear in each of the rows (columns) individually, assuming all other rows (columns) are held constant

(3) If the matrix $M$ is not full rank, $\mathrm{Det}(M)=0$

Artin showed that these three properties alone uniquely determine the form of the determinant function (I don't prove this here). Property (3) as I use it here is slightly more general than that used by Artin, but it is equally intuitive and allows me to skip a step. First, you can show that

$$
\mathrm{Det}
\begin{pmatrix}
A & 0 \\
C & D
\end{pmatrix}
=
\mathrm{Det}
\begin{pmatrix}
A & 0 \\
0 & D
\end{pmatrix}
$$

To show this, we can expand the determinant of the original matrix $M$ as
$$
(4)\quad \mathrm{Det}(M)= \mathrm{Det}(M_1)+\mathrm{Det}(M_2)
$$
where $M_1$ is the same as the original matrix but with the first $k$ entries of the $(k+1)$-th row set to $0$, and $M_2$ is the same as the original matrix but with the last $n-k$ elements of the $(k+1)$-th row set to $0$. Note that the $(k+1)$-th rows of $M_1$ and $M_2$ sum to the $(k+1)$-th row of $M$, and therefore (4) holds according to property (2). Note that $\mathrm{Det}(M_1)=0$, since the resulting matrix is clearly not full rank (there are now $k+1$ rows that have nonzero entries in only $k$ columns). Therefore we have $\mathrm{Det}(M) = \mathrm{Det}(M_2)$. We can then repeat this process for each row to show that the above claim is true.

Now, let us denote
$$
M^*=\begin{pmatrix}
A & 0 \\
0 & D
\end{pmatrix}
,\quad
A^*=
\begin{pmatrix}
A & 0 \\
0 & I
\end{pmatrix}
,\quad
D^*=
\begin{pmatrix}
I & 0 \\
0 & D
\end{pmatrix}
$$
Note that $M^*=A^*D^*$. I claim that $\mathrm{Det}(M^*) = \mathrm{Det}(A^*)\,\mathrm{Det}(D^*)$. To show this we can show that the function
$$
(5)\quad F(D^*)=F(d^*_1,\dots,d^*_n)=\frac{\mathrm{Det}(A^*D^*)}{\mathrm{Det}(A^*)}
$$
also satisfies properties (1)-(3). Clearly if $D=I$, then the RHS of (5) reduces to $1$. To show (2), we can write the $j$-th column of $M^*=A^*D^*$ as $A^*d^*_j$. Since we already know the determinant is multilinear in each column, and $A^*d^*_j$ is a linear function of $d^*_j$, it follows that $F(d^*_1,\dots,d^*_n)$ is also multilinear in each of the columns $\{d^*_1,\dots,d^*_n\}$. Finally, to show (3), we can note that if $d^*_i=d^*_j$ then $A^*d^*_i=A^*d^*_j$, so the numerator on the RHS would be $0$. Therefore $F(d^*_1,\dots,d^*_n)=\mathrm{Det}(d^*_1,\dots,d^*_n)$, so $\mathrm{Det}(A^*D^*)=\mathrm{Det}(D^*)\,\mathrm{Det}(A^*)$, as desired.

In order to finish the proof, one final step is needed. We need to show that $\mathrm{Det}(A^*)=\mathrm{Det}(A)$ and $\mathrm{Det}(D^*)=\mathrm{Det}(D)$. We use the same approach as above, by defining a target function which we show to be equal to the determinant. We do this by showing that the function $C(A)=\mathrm{Det}(A^*)$ also satisfies properties (1)-(3) as a function of the columns of $A$, and therefore must be equal to its determinant. The basic ideas are that if $A$ is the identity then so is $A^*$, so property (1) follows; any linear operation on a row (column) of $A$ is also a linear operation on a row (column) of $A^*$, so (2) holds; and if $A$ is not full rank, then neither is $A^*$, so (3) holds. Therefore, the function $C(A)$, which extends $A$ to $A^*$ by adding $n-k$ standard basis vectors and then takes the determinant of the expanded matrix, is in fact equal to $\mathrm{Det}(A)$.

Given all of this, we immediately get the result, since
$$\mathrm{Det}(M)=\mathrm{Det}(M^*)=\mathrm{Det}(A^*D^*)=\mathrm{Det}(A^*)\,\mathrm{Det}(D^*)=\mathrm{Det}(A)\,\mathrm{Det}(D).
$$
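[Editor's note: not part of the original answer.] The chain of identities at the heart of this argument — the factorisation $M^* = A^* D^*$ and the extension properties $\mathrm{Det}(A^*)=\mathrm{Det}(A)$, $\mathrm{Det}(D^*)=\mathrm{Det}(D)$ — is easy to confirm numerically:

```python
import numpy as np

rng = np.random.default_rng(5)
k, n = 3, 2
A = rng.standard_normal((k, k))
D = rng.standard_normal((n, n))

# A* and D* extend A and D by identity blocks; M* is their block-diagonal sum.
A_star = np.block([[A, np.zeros((k, n))], [np.zeros((n, k)), np.eye(n)]])
D_star = np.block([[np.eye(k), np.zeros((k, n))], [np.zeros((n, k)), D]])
M_star = np.block([[A, np.zeros((k, n))], [np.zeros((n, k)), D]])

ok_factor = np.allclose(M_star, A_star @ D_star)
ok_ext = (np.isclose(np.linalg.det(A_star), np.linalg.det(A)) and
          np.isclose(np.linalg.det(D_star), np.linalg.det(D)))
ok_det = np.isclose(np.linalg.det(M_star),
                    np.linalg.det(A) * np.linalg.det(D))
```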





















          27












          $begingroup$

          If $A$ is singular, its rows are linearly dependent, hence the rows of the entire matrix are linearly dependent, hence boths sides of the equation vanish.



          27












          If $A$ is singular, its rows are linearly dependent, hence the rows of the entire matrix are linearly dependent, hence both sides of the equation vanish.



          If $A$ is not singular, we have



          $$\begin{pmatrix}I&0\\-CA^{-1}&I\end{pmatrix}\begin{pmatrix}A&0\\C&D\end{pmatrix}=\begin{pmatrix}A&0\\0&D\end{pmatrix}\;.$$



          The determinants of the two new matrices are perhaps easier to derive from the Laplace expansion than that of the entire matrix. They are $1$ and $\det A\det D$, respectively, and the result follows.



          Another way to express this is that if $A$ is not singular you can get rid of $C$ by Gaussian elimination.
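As a quick numerical sanity check (not a proof), the elimination identity and the resulting determinant formula can be verified with NumPy on random blocks; the variable names below are illustrative, not from the answer:

```python
import numpy as np

# Sanity check of the elimination identity and the determinant formula.
rng = np.random.default_rng(0)
k, n = 3, 4
A = rng.standard_normal((k, k))   # generically nonsingular
C = rng.standard_normal((n, k))
D = rng.standard_normal((n, n))

Z = np.zeros((k, n))
M = np.block([[A, Z], [C, D]])

# E = [[I, 0], [-C A^{-1}, I]] clears the C block: E @ M = [[A, 0], [0, D]].
E = np.block([[np.eye(k), Z], [-C @ np.linalg.inv(A), np.eye(n)]])
assert np.allclose(E @ M, np.block([[A, Z], [np.zeros((n, k)), D]]))

# det E = 1 (unit lower triangular), so det M = det A * det D.
assert np.isclose(np.linalg.det(M), np.linalg.det(A) * np.linalg.det(D))
```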



















          • How do you know that the first matrix has a determinant of 1?
            – larry
            Apr 1 '14 at 3:51

          • @larry It is triangular. Determinants of triangular matrices are the products of their diagonal entries, and the diagonal entries of the first matrix are all 1.
            – Emily
            Apr 23 '14 at 20:42

          • How is the second determinant $\det(A)\det(D)$?
            – codeomnitrix
            Aug 15 '14 at 9:57

          • It's the THIRD matrix whose determinant is $\det(A)\det(D)$.
            – John Hughes
            Oct 12 '14 at 20:06

          • This answer falls into the trap of proving an elementary result (one of the first things one should know about determinants, immediately falling out of the Leibniz formula) by using statements that are special cases or consequences of the result to be proved. The result about triangular matrices that @Arkamis refers to can be obtained by iterating a decomposition into block-triangular matrices until hitting $1\times1$ blocks. But more fundamentally, the RHS matrix is just a special case of a block triangular matrix, and proving that its determinant is $\det A\det D$ is not really any easier than the OP.
            – Marc van Leeuwen
            Apr 12 '15 at 6:11
















          27












          27








          27





          $begingroup$

          If $A$ is singular, its rows are linearly dependent, hence the rows of the entire matrix are linearly dependent, hence boths sides of the equation vanish.



          If $A$ is not singular, we have



          $$pmatrix{I&0\-CA^{-1}&I}pmatrix{A&0\C&D}=pmatrix{A&0\0&D};.$$



          The determinants of the two new matrices are perhaps easier to derive from the Laplace expansion than that of the entire matrix. They are $1$ and $det A det D$, respectively, and the result follows.



          Another way to express this is that if $A$ is not singular you can get rid of $C$ by Gaussian elimination.






          share|cite|improve this answer









          $endgroup$



          If $A$ is singular, its rows are linearly dependent, hence the rows of the entire matrix are linearly dependent, hence boths sides of the equation vanish.



          If $A$ is not singular, we have



          $$pmatrix{I&0\-CA^{-1}&I}pmatrix{A&0\C&D}=pmatrix{A&0\0&D};.$$



          The determinants of the two new matrices are perhaps easier to derive from the Laplace expansion than that of the entire matrix. They are $1$ and $det A det D$, respectively, and the result follows.



          Another way to express this is that if $A$ is not singular you can get rid of $C$ by Gaussian elimination.







          share|cite|improve this answer












          share|cite|improve this answer



          share|cite|improve this answer










          answered Oct 24 '11 at 11:20









          joriki












          17





          +100







          As @user153012 is asking for a proof in full detail, here is a brute-force approach using an explicit expression for the determinant of an $n$ by $n$ matrix, say $A = (a[i,j])$, $$\det A = \sum_{\sigma\in S_n}\operatorname{sgn}\sigma \prod_i a[i,\sigma(i)],$$ where $S_n$ is the symmetric group on $[n] = \{1,\dots, n\}$ and $\operatorname{sgn}\sigma$ denotes the signature of $\sigma$.



          In the matrix $$B = \left(\begin{array}{cc}
          A&0\\
          C&D
          \end{array}\right),$$ we have $$b[i,j] = \begin{cases}a[i,j] & \text{if }i \le k, j \le k;\\ d[i-k, j-k] & \text{if }i > k, j > k; \\ 0 & \text{if }i \le k, j > k; \\ c[i-k,j] & \text{otherwise}.\end{cases}$$ Observe that in $$\det B = \sum_{\sigma\in S_{n+k}}\operatorname{sgn}\sigma\prod_i b[i, \sigma(i)],$$ if $\sigma(i) = j$ with $i\le k$ and $j > k$, then the corresponding summand $\prod_i b[i,\sigma(i)]$ is $0$. Any permutation $\sigma\in S_{n+k}$ for which no such $i$ and $j$ exist can be uniquely "decomposed" into two permutations, $\pi$ and $\tau$, where $\pi\in S_k$ and $\tau\in S_n$, such that $\sigma(i) = \pi(i)$ for $i \le k$ and $\sigma(k+i) = k+\tau(i)$ for $i \le n$. Moreover, we have $\operatorname{sgn}\sigma = \operatorname{sgn}\pi\operatorname{sgn}\tau$. Denote the collection of such permutations by $S_n'$. Therefore, we can write $$\begin{eqnarray}\det B &=& \sum_{\sigma\in S_n'}\operatorname{sgn}\sigma\prod_{i=1}^k b[i,\sigma(i)]\prod_{i=k+1}^{k+n} b[i,\sigma(i)] \\ &=& \sum_{\sigma\in S_n'}\operatorname{sgn}\sigma\prod_{i=1}^k a[i,\sigma(i)]\prod_{i=1}^n d[i,\sigma(i+k)-k] \\ &=& \sum_{\pi\in S_k,\tau\in S_n}\operatorname{sgn}\pi\operatorname{sgn}\tau\prod_{i=1}^k a[i,\pi(i)]\prod_{i=1}^n d[i,\tau(i)] \\ &=& \sum_{\pi\in S_k}\operatorname{sgn}\pi\prod_{i=1}^k a[i,\pi(i)]\sum_{\tau\in S_{n}}\operatorname{sgn}\tau\prod_{i=1}^n d[i,\tau(i)] \\ &=& \det A\det D.\end{eqnarray}$$ QED.



          Update: As @Marc van Leeuwen mentioned in the comment, a similar formula holds for permanents. The proof is basically the same as the one for determinants, except that one drops all the signatures of permutations.
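The combinatorial argument above can be checked on a small example by implementing the Leibniz formula by brute force (a sketch; the helper `det_leibniz` is hypothetical and exponential-time, so only suitable for tiny matrices):

```python
from itertools import permutations
from math import prod

def det_leibniz(M):
    """Determinant via the Leibniz formula: sum of sgn(sigma) * prod_i M[i][sigma(i)]."""
    m = len(M)
    total = 0
    for p in permutations(range(m)):
        # sgn(p) = (-1)^(number of inversions)
        inv = sum(p[i] > p[j] for i in range(m) for j in range(i + 1, m))
        total += (-1) ** inv * prod(M[i][p[i]] for i in range(m))
    return total

# Block lower triangular B = [[A, 0], [C, D]] with k = n = 2.
A = [[1, 2], [3, 4]]      # det A = -2
D = [[0, 1], [5, 2]]      # det D = -5
B = [[1, 2, 0, 0],
     [3, 4, 0, 0],
     [7, 8, 0, 1],
     [9, 10, 5, 2]]       # C = [[7, 8], [9, 10]]

assert det_leibniz(B) == det_leibniz(A) * det_leibniz(D) == 10
```

Every permutation that sends some index $i \le k$ to some $j > k$ contributes a factor from the zero block, exactly as in the proof.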






              edited Aug 4 '15 at 20:39









              darij grinberg











              answered Apr 5 '15 at 14:08









              Zilin J.























                  12












                  This is a fundamental result about determinants, and like most such results it holds for matrices with entries in any commutative (unitary) ring. It is therefore good to have a proof that does not rely on the coefficients being in a field; I will use the Leibniz formula as the definition of the determinant rather than a characterisation as alternating $n$-linear form. (And my apologies to those who find that distasteful; for some purposes using the definition is really best.)

                  Writing for a permutation $\sigma$ and a matrix $M$ the abbreviation $\def\sg{\operatorname{sg}}M[\sigma]=\sg(\sigma)\prod_iM_{i,\sigma(i)}$, the Leibniz formula says, for any $m\times m$ matrix $M$,
                  $$
                  \det(M)=\sum_{\sigma\in S_m}M[\sigma].
                  $$
                  The result is based on the following simple fact about symmetric groups:

                  The subgroup of $S_{k+n}$ of permutations permuting the first $k$ elements among each other is canonically isomorphic to $S_k\times S_n$, and if $\sigma\in S_{k+n}$ corresponds to $(\pi,\rho)\in S_k\times S_n$ then $\sg(\sigma)=\sg(\pi)\sg(\rho)$.

                  In fact this is basically just saying that if the first $k$ elements are permuted among each other, then so are the remaining $n$ elements, and the sign of the whole permutation is the product of the signs of its restrictions to those two subsets.

                  Now if $M=\bigl(\begin{smallmatrix}A&0\\C&D\end{smallmatrix}\bigr)$, note that $M[\sigma]=0$ unless $\sigma$ permutes the first $k$ indices among each other. From this it follows that
                  $$
                  \det(M)=\sum_{\sigma\in S_{k+n}}M[\sigma]
                  =\sum_{(\pi,\rho)\in S_k\times S_n}A[\pi]D[\rho]
                  =\left(\sum_{\pi\in S_k}A[\pi]\right)\left(\sum_{\rho\in S_n}D[\rho]\right)
                  =\det A\det D.
                  $$

                  Alternatively, if one is willing to assume the property $\det(MN)=\det(M)\det(N)$ (not really easier than the one to be proved here, but maybe better known), and if one considers the special cases where either $A=I$ or $D=I$ to be clear (because one can obtain the result in these cases by repeated Laplace expansion by rows respectively by columns), then one can employ any decomposition of the form
                  $$
                  \begin{pmatrix}A&0\\C&D\end{pmatrix} = \begin{pmatrix}A&0\\C_0&I\end{pmatrix} \begin{pmatrix}I&0\\C_1&D\end{pmatrix}
                  \qquad\text{with } C=C_0+C_1
                  $$
                  (for instance with one of $C_0,C_1$ equal to zero) to obtain the desired result.
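The two-factor decomposition at the end can likewise be checked numerically; any split $C = C_0 + C_1$ works (a sketch with NumPy, illustrative variable names):

```python
import numpy as np

rng = np.random.default_rng(1)
k, n = 2, 3
A = rng.standard_normal((k, k))
D = rng.standard_normal((n, n))
C = rng.standard_normal((n, k))
C0 = rng.standard_normal((n, k))
C1 = C - C0   # any split with C = C0 + C1 works

Ik, In = np.eye(k), np.eye(n)
Z = np.zeros((k, n))
M  = np.block([[A,  Z], [C,  D]])
F1 = np.block([[A,  Z], [C0, In]])   # det F1 = det A (case D = I)
F2 = np.block([[Ik, Z], [C1, D]])    # det F2 = det D (case A = I)

assert np.allclose(F1 @ F2, M)
assert np.isclose(np.linalg.det(F1) * np.linalg.det(F2), np.linalg.det(M))
```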






                      edited Aug 4 '15 at 20:46









                      darij grinberg











                      answered Apr 12 '15 at 6:50









                      Marc van Leeuwen























                          4












                          $begingroup$

Yet another proof, in the case of fields, can be obtained if you are willing to enlarge your field to an algebraically closed one. Then any matrix can be put in (lower) triangular form, with the eigenvalues on the diagonal. In particular, there are invertible matrices $S, T$ of appropriate sizes such that $S^{-1} A S$ and $T^{-1} D T$ are in lower triangular form. Then
$$
\begin{bmatrix}
S & 0\\
0 & T
\end{bmatrix}^{-1}
\begin{bmatrix}
A & 0\\
C & D
\end{bmatrix}
\begin{bmatrix}
S & 0\\
0 & T
\end{bmatrix}
=
\begin{bmatrix}
S^{-1} A S & 0\\
T^{-1} C S & T^{-1} D T
\end{bmatrix}
$$
is in lower triangular form, and the rest is more or less straightforward. Clearly I rely here on the multiplicativity of determinants, or on the fact that the determinant is invariant under conjugation.
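A quick numerical check of the block-conjugation identity above; this sketch uses random (hence generically invertible) $S$ and $T$ rather than triangularizing ones, and is not part of the original argument:

```python
import numpy as np

rng = np.random.default_rng(1)
k, m = 3, 4
A = rng.standard_normal((k, k))
C = rng.standard_normal((m, k))
D = rng.standard_normal((m, m))
S = rng.standard_normal((k, k))   # generically invertible
T = rng.standard_normal((m, m))   # generically invertible

M = np.block([[A, np.zeros((k, m))], [C, D]])
P = np.block([[S, np.zeros((k, m))], [np.zeros((m, k)), T]])

# Conjugating the block matrix by diag(S, T) conjugates the diagonal blocks.
conj = np.linalg.inv(P) @ M @ P
expected = np.block([[np.linalg.inv(S) @ A @ S, np.zeros((k, m))],
                     [np.linalg.inv(T) @ C @ S, np.linalg.inv(T) @ D @ T]])

assert np.allclose(conj, expected)
assert np.isclose(np.linalg.det(conj), np.linalg.det(M))  # det is conjugation-invariant
```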






Wow! Using algebraically closed fields to prove an elementary property of determinants. Something similar to my comment on the answer by joriki would apply here, although I admit that in principle one could argue that the determinant of a triangular matrix is the product of its diagonal entries without either using the result to be proved here or an argument (the Leibniz formula) that would just as easily prove it directly; notably, repeated Laplace expansion does the trick. But then wouldn't you agree that a two-factor decomposition, as at the end of my answer, does the job more economically?
– Marc van Leeuwen, Apr 13 '15 at 8:22

@MarcvanLeeuwen, I had upvoted your answer, which is definitely the best of the lot. Still, I find that putting matrices in triangular form (admittedly over an a.c. field) is a generally useful technique (for instance in the proof of Cayley-Hamilton as given by - if I remember correctly - Lang in one of his books) and I couldn't resist recording this version (no doubt courting downvotes).
– Andreas Caranti, Apr 13 '15 at 8:57

Such a cool proof...thanks so much, @AndreasCaranti :-)
– User001, Oct 2 '15 at 0:31
















edited Apr 12 '15 at 21:55
answered Apr 12 '15 at 11:06 by Andreas Caranti

Here is a sketch of what I consider a simpler direct proof (assuming complete eigenspaces --- it can be adapted for degenerate eigenspaces). Let's assume there is no zero eigenvalue; otherwise it is easy to find an eigenvector that the full matrix sends to zero, so the determinant must be zero.

We're going to prove that the eigenvalues of the full matrix are exactly those of $A$ and $D$ taken together. Then the fact that the determinant is the product of the eigenvalues finishes the proof.

Consider the eigenvectors of $D$. Prepad each of them with zeros (that is, create a longer vector whose first few entries are zero). These are eigenvectors of the full matrix.

Now consider the eigenvectors of $A$. We're going to postpad them with something, and we have to figure out what. Let $\lambda$ be an eigenvalue of $A$ and $v_1$ a corresponding eigenvector. We'll pad it with $v_2$, so $v$ is made up of the vectors $v_1$ and $v_2$. The full matrix times $v$ has $\lambda v_1$ as its first set of entries; its second set of entries is $C v_1 + D v_2$. Set this equal to $\lambda v_2$ and solve for $v_2$: the system $(D - \lambda I) v_2 = -C v_1$ is solvable provided $\lambda$ is not also an eigenvalue of $D$. Now we have an eigenvector of the full matrix corresponding to $\lambda$.
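The padding construction can be sketched numerically: take one eigenpair of $A$, solve $(D - \lambda I)v_2 = -C v_1$ for the tail, and check that the padded vector is an eigenvector of the full matrix. The sizes and random seed are illustrative, and the solve assumes $\lambda$ is not an eigenvalue of $D$ (generically true for random matrices):

```python
import numpy as np

rng = np.random.default_rng(2)
k, m = 3, 4
A = rng.standard_normal((k, k))
C = rng.standard_normal((m, k))
D = rng.standard_normal((m, m))
M = np.block([[A, np.zeros((k, m))], [C, D]])

# Take one eigenpair of A (possibly complex for a real matrix).
eigvals, eigvecs = np.linalg.eig(A)
lam0, v1 = eigvals[0], eigvecs[:, 0]

# Solve (D - lam0*I) v2 = -C v1 for the tail; solvable whenever
# lam0 is not also an eigenvalue of D.
v2 = np.linalg.solve(D - lam0 * np.eye(m), -C @ v1)
v = np.concatenate([v1, v2])

assert np.allclose(M @ v, lam0 * v)  # v is an eigenvector of the full matrix
```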






answered May 25 '16 at 20:00 by Joel

Here is an approach that does not rely on any explicit formula for the determinant, nor on any notion of the inverse. Instead, we start from the three basic properties that the determinant function should satisfy:

(1) Det(I) = 1

(2) The Det() function is multilinear in each of the rows (columns) individually, all other rows (columns) being held constant

(3) If the matrix M is not full rank, then Det(M) = 0

Artin showed that these three properties alone uniquely determine the determinant function (I don't prove this here). Property (3) as used here is slightly more general than Artin's, but it is equally intuitive and lets me skip a step. First, one shows that

$$
Det
\begin{pmatrix}
A & 0 \\
C & D
\end{pmatrix}
=
Det
\begin{pmatrix}
A & 0 \\
0 & D
\end{pmatrix}
$$

To see this, expand the determinant of the original matrix $M$ as
$$
(4)\quad Det(M) = Det(M_1) + Det(M_2)
$$
where $M_1$ is the same as the original matrix but with the last $n-k$ entries of the $(k+1)$-th row set to 0, and $M_2$ is the same as the original matrix but with the first $k$ entries of the $(k+1)$-th row set to 0. The $(k+1)$-th rows of $M_1$ and $M_2$ sum to the $(k+1)$-th row of $M$, so (4) holds by property (2). Moreover $Det(M_1) = 0$, since $M_1$ is clearly not full rank: it now has $k+1$ rows with nonzero entries in only $k$ columns. Therefore $Det(M) = Det(M_2)$. Repeating this process for each of the last $n-k$ rows zeroes out the block $C$ and shows that the above claim is true.
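The row-splitting step (4) can be checked numerically: split row $k+1$ into its $C$-part and its $D$-part, and verify that the determinant splits accordingly, with the $C$-part piece vanishing. Sizes and seed below are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(3)
k, m = 2, 3
A = rng.standard_normal((k, k))
C = rng.standard_normal((m, k))
D = rng.standard_normal((m, m))
M = np.block([[A, np.zeros((k, m))], [C, D]])

# Split row k (0-indexed), i.e. the (k+1)-th row, into two pieces.
M1, M2 = M.copy(), M.copy()
M1[k, k:] = 0.0  # keep only the C-part of the row -> k+1 rows in k columns
M2[k, :k] = 0.0  # keep only the D-part of the row

# Multilinearity in that row: Det(M) = Det(M1) + Det(M2), and Det(M1) = 0.
assert np.isclose(np.linalg.det(M), np.linalg.det(M1) + np.linalg.det(M2))
assert np.isclose(np.linalg.det(M1), 0.0)
```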



Now, let us denote
$$
M^* =
\begin{pmatrix}
A & 0 \\
0 & D
\end{pmatrix}
,\quad
A^* =
\begin{pmatrix}
A & 0 \\
0 & I
\end{pmatrix}
,\quad
D^* =
\begin{pmatrix}
I & 0 \\
0 & D
\end{pmatrix}
$$
and note that $M^* = A^* D^*$. I claim that $Det(M^*) = Det(A^*) \cdot Det(D^*)$. To show this, we check that the function
$$
(5)\quad F(D^*) = F(d^*_1, \dots, d^*_n) = \frac{Det(A^* D^*)}{Det(A^*)}
$$
also satisfies properties (1)-(3). Clearly if $D = I$, the right-hand side of (5) reduces to 1. For (2), write the $j$-th column of $M^* = A^* D^*$ as $A^* d^*_j$. Since we already know the determinant is multilinear in each column, and $A^* d^*_j$ is a linear function of $d^*_j$, it follows that $F(d^*_1, \dots, d^*_n)$ is also multilinear in each of the columns $d^*_1, \dots, d^*_n$. Finally, for (3), if $D^*$ is not full rank, then neither is $A^* D^*$, so the numerator on the right-hand side of (5) is 0. Therefore $F(d^*_1, \dots, d^*_n) = Det(d^*_1, \dots, d^*_n)$, and so $Det(A^* D^*) = Det(A^*) \cdot Det(D^*)$, as desired.

In order to finish the proof, one final step is needed: we must show that $Det(A^*) = Det(A)$ and $Det(D^*) = Det(D)$. We use the same approach as above, defining a target function which we show to be equal to the determinant. The function $G(A) = Det(A^*)$ also satisfies properties (1)-(3) as a function of the columns of $A$, and therefore must equal $Det(A)$. The basic ideas are: if $A$ is the identity then so is $A^*$, so property (1) follows; any linear operation on a row (column) of $A$ is also a linear operation on a row (column) of $A^*$, so (2) holds; and if $A$ is not full rank, then neither is $A^*$, so (3) holds. Therefore the function $G$, which extends $A$ to $A^*$ by adding $n-k$ standard basis vectors and then takes the determinant of the expanded matrix, is in fact equal to $Det(A)$. The same argument applies to $D^*$.

Given all of this, we immediately get the result, since
$$
Det(M) = Det(M^*) = Det(A^* D^*) = Det(A^*) \cdot Det(D^*) = Det(A) \cdot Det(D)
$$
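The padded-identity embedding in this last step can also be checked numerically; a sketch with illustrative sizes, confirming $M^* = A^* D^*$, $Det(A^*) = Det(A)$, $Det(D^*) = Det(D)$, and the final product formula:

```python
import numpy as np

rng = np.random.default_rng(4)
k, m = 3, 4
A = rng.standard_normal((k, k))
D = rng.standard_normal((m, m))

# A* and D* embed A and D into the larger space, padded with identity blocks.
A_star = np.block([[A, np.zeros((k, m))], [np.zeros((m, k)), np.eye(m)]])
D_star = np.block([[np.eye(k), np.zeros((k, m))], [np.zeros((m, k)), D]])
M_star = A_star @ D_star

assert np.allclose(M_star,
                   np.block([[A, np.zeros((k, m))], [np.zeros((m, k)), D]]))
assert np.isclose(np.linalg.det(A_star), np.linalg.det(A))
assert np.isclose(np.linalg.det(D_star), np.linalg.det(D))
assert np.isclose(np.linalg.det(M_star),
                  np.linalg.det(A) * np.linalg.det(D))
```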

















                                    $endgroup$
















                                      1












                                      1








                                      1





                                      $begingroup$

Here is an approach that relies on no explicit formula for the determinant and no notion of the inverse. Instead, we start from three basic properties that the determinant function must satisfy:



(1) $Det(I) = 1$



(2) $Det$ is multilinear in each row (column) individually, with all other rows (columns) held fixed



(3) If the matrix $M$ is not full rank, then $Det(M) = 0$



Artin showed that these three properties alone uniquely determine the determinant function (I don't prove this here). Property (3) as I use it here is slightly more general than the one Artin used, but it is equally intuitive and lets me skip a step. First, one can show that



$$
Det\begin{pmatrix}
A & 0 \\
C & D
\end{pmatrix}
=
Det\begin{pmatrix}
A & 0 \\
0 & D
\end{pmatrix}
$$
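This identity is easy to sanity-check numerically. The sketch below (my own illustration, not part of the proof, using NumPy with arbitrarily chosen random blocks) confirms that zeroing out the lower-left block leaves the determinant unchanged:

```python
import numpy as np

rng = np.random.default_rng(0)
k, m = 3, 2  # block sizes (arbitrary illustrative choice)

A = rng.standard_normal((k, k))
C = rng.standard_normal((m, k))
D = rng.standard_normal((m, m))

# M carries the lower-left block C; M0 replaces it with zeros.
M = np.block([[A, np.zeros((k, m))], [C, D]])
M0 = np.block([[A, np.zeros((k, m))], [np.zeros((m, k)), D]])

assert np.isclose(np.linalg.det(M), np.linalg.det(M0))
```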



To show this, write $M$ for the matrix on the left and expand its determinant as
$$
(4)\qquad Det(M) = Det(M_1) + Det(M_2)
$$
where $M_1$ is the same as $M$ but with the first $k$ entries of the $(k+1)^{th}$ row set to $0$, and $M_2$ is the same as $M$ but with the last $n-k$ entries of the $(k+1)^{th}$ row set to $0$. The $(k+1)^{th}$ rows of $M_1$ and $M_2$ sum to the $(k+1)^{th}$ row of $M$, so (4) holds by property (2). Moreover,
$
Det(M_2) = 0,
$
since $M_2$ is clearly not full rank: it has $k+1$ rows (the first $k$ rows together with row $k+1$) whose nonzero entries lie in only $k$ columns. Therefore $Det(M) = Det(M_1)$. Repeating this argument for rows $k+2$ through $n$ zeroes out the block $C$ entirely and proves the claim.
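The row-splitting step can also be checked numerically. In the sketch below (my own illustration with random blocks, using 0-based indexing), $M_1$ zeroes the first $k$ entries of row $k+1$ and $M_2$ zeroes the last $n-k$; note it is $M_2$, the copy that retains the $C$-part of the row, whose determinant vanishes:

```python
import numpy as np

rng = np.random.default_rng(1)
k, m = 2, 2
A = rng.standard_normal((k, k))
C = rng.standard_normal((m, k))
D = rng.standard_normal((m, m))
M = np.block([[A, np.zeros((k, m))], [C, D]])

# Row index k is the (k+1)-th row in 0-based indexing.
M1 = M.copy(); M1[k, :k] = 0   # first k entries of the row zeroed
M2 = M.copy(); M2[k, k:] = 0   # last n-k entries of the row zeroed

det = np.linalg.det
assert np.isclose(det(M), det(M1) + det(M2))  # property (2): additivity in a row
assert np.isclose(det(M2), 0.0)               # k+1 rows supported on only k columns
assert np.isclose(det(M), det(M1))
```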



Now, let us denote
$$
M^*=\begin{pmatrix}
A & 0 \\
0 & D
\end{pmatrix}
,\qquad
A^*=\begin{pmatrix}
A & 0 \\
0 & I
\end{pmatrix}
,\qquad
D^*=\begin{pmatrix}
I & 0 \\
0 & D
\end{pmatrix}
$$
Note that
$
M^* = A^* D^*.
$
I claim that
$
Det(M^*) = Det(A^*)\,Det(D^*).
$
(If $Det(A^*) = 0$, then $A^*$, and hence $M^* = A^*D^*$, is not full rank, so both sides are $0$ and the claim holds trivially; so assume $Det(A^*) \neq 0$.) To prove the claim, we show that the function
$$
(5)\qquad F(D^*) = F(d^*_1,\ldots,d^*_n) = \frac{Det(A^* D^*)}{Det(A^*)}
$$
also satisfies properties (1)-(3) as a function of the columns $d^*_1,\ldots,d^*_n$ of $D^*$. For (1): if $D^* = I$, the right-hand side of (5) reduces to $1$. For (2): the $j$-th column of $M^* = A^* D^*$ is $A^* d^*_j$; since the determinant is multilinear in each column and $A^* d^*_j$ is a linear function of $d^*_j$, the function $F(d^*_1,\ldots,d^*_n)$ is multilinear in each of the columns $d^*_1,\ldots,d^*_n$. For (3): if $D^*$ is not full rank, then neither is $A^* D^*$ (in particular, if $d^*_i = d^*_j$ then $A^* d^*_i = A^* d^*_j$), so the numerator in (5) is $0$. By the uniqueness of the determinant, $F(d^*_1,\ldots,d^*_n) = Det(D^*)$, so $Det(A^* D^*) = Det(A^*)\,Det(D^*)$, as desired.
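Both the factorization $M^* = A^*D^*$ and the resulting product rule admit a quick NumPy check (an illustration of mine, not part of the argument):

```python
import numpy as np

rng = np.random.default_rng(2)
k, m = 3, 3
A = rng.standard_normal((k, k))
D = rng.standard_normal((m, m))
Z = np.zeros((k, m))

# Block-diagonal extensions of A and D, and their block-diagonal product.
A_star = np.block([[A, Z], [Z.T, np.eye(m)]])
D_star = np.block([[np.eye(k), Z], [Z.T, D]])
M_star = np.block([[A, Z], [Z.T, D]])

det = np.linalg.det
assert np.allclose(A_star @ D_star, M_star)                # M* = A* D*
assert np.isclose(det(M_star), det(A_star) * det(D_star))  # product rule
```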



One final step finishes the proof: we need to show that $Det(A^*) = Det(A)$ and $Det(D^*) = Det(D)$. We use the same approach as above, defining a target function and showing it equals the determinant. Specifically, the function $C(A) = Det(A^*)$ also satisfies properties (1)-(3) as a function of the columns of $A$, and therefore must equal $Det(A)$. The basic ideas are: if $A$ is the identity then so is $A^*$, giving (1); any linear operation on a row (column) of $A$ is also a linear operation on a row (column) of $A^*$, giving (2); and if $A$ is not full rank, then neither is $A^*$, giving (3). Hence the function $C(A)$, which extends $A$ to $A^*$ by appending $n-k$ standard basis vectors and takes the determinant of the enlarged matrix, equals $Det(A)$. The argument for $Det(D^*) = Det(D)$ is identical.
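The embedding step can likewise be verified numerically; the sketch below (my own, with a random $A$) checks that padding with standard basis vectors preserves the determinant:

```python
import numpy as np

rng = np.random.default_rng(3)
k, m = 4, 2
A = rng.standard_normal((k, k))
Z = np.zeros((k, m))

# A* extends A by m = n-k standard basis vectors (an identity block).
A_star = np.block([[A, Z], [Z.T, np.eye(m)]])

assert np.isclose(np.linalg.det(A_star), np.linalg.det(A))
```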



Given all of this, the result follows immediately:
$$
Det(M) = Det(M^*) = Det(A^* D^*) = Det(A^*)\,Det(D^*) = Det(A)\,Det(D)
$$
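As a final sanity check (again illustrative only, not part of the proof), the full identity holds for a random block lower triangular matrix:

```python
import numpy as np

rng = np.random.default_rng(4)
k, m = 3, 2
A = rng.standard_normal((k, k))
C = rng.standard_normal((m, k))
D = rng.standard_normal((m, m))

# Block lower triangular matrix M = [[A, 0], [C, D]].
M = np.block([[A, np.zeros((k, m))], [C, D]])

assert np.isclose(np.linalg.det(M), np.linalg.det(A) * np.linalg.det(D))
```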






answered Feb 9 '17 at 19:39 by Paul (edited Feb 10 '17 at 0:05)





























