Question
Question: Let \(a,b,c\) be such that \(b\left( a+c \right)\ne 0\). If
\[\left| \begin{matrix} a & a+1 & a-1 \\ -b & b+1 & b-1 \\ c & c-1 & c+1 \\ \end{matrix} \right|+\left| \begin{matrix} a+1 & b+1 & c-1 \\ a-1 & b-1 & c+1 \\ {{\left( -1 \right)}^{n+2}}a & {{\left( -1 \right)}^{n+1}}b & {{\left( -1 \right)}^{n}}c \\ \end{matrix} \right|=0,\]
then the value of n is
A. Any integer
B. Zero
C. Any even integer
D. Any odd integer
Solution
We first take \({{\left( -1 \right)}^{n}}\) common from the third row of the second determinant and take it outside the determinant as a multiplier. We then take the transpose of the second determinant and interchange its columns twice so that it can be expressed in terms of the first determinant. Finally, we take the first determinant common and solve the resulting equation for n.
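Before working through the algebra, the claim can be checked numerically. The sketch below (an illustration with sample values, not part of the original solution; the names `det3`, `first`, and `second` are our own) evaluates both determinants for a, b, c with \(b\left( a+c \right)\ne 0\) and several values of n; the sum should vanish exactly when n is odd.

```python
# Numeric sanity check of the given identity (an illustrative sketch;
# det3, first, second are hypothetical names, not from the solution).
def det3(m):
    # Cofactor expansion of a 3x3 determinant along the first row.
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

a, b, c = 2, 3, 5  # sample values with b*(a + c) = 21 != 0

first = [[a, a + 1, a - 1],
         [-b, b + 1, b - 1],
         [c, c - 1, c + 1]]

for n in range(6):
    second = [[a + 1, b + 1, c - 1],
              [a - 1, b - 1, c + 1],
              [(-1) ** (n + 2) * a, (-1) ** (n + 1) * b, (-1) ** n * c]]
    total = det3(first) + det3(second)
    print(n, total)  # total is 0 only for odd n
```

For these sample values the sum alternates between zero (odd n) and a nonzero value (even n), which is consistent with option D.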
Complete step-by-step solution:
We know that the transpose of a matrix \(A\) with m rows and n columns is denoted \({{A}^{T}}\) and can be obtained by
1. Reflecting \(A\) over its main diagonal (which runs from top-left to bottom-right)
2. Writing the rows of \(A\) as the columns of \({{A}^{T}}\)
3. Writing the columns of \(A\) as the rows of \({{A}^{T}}\)
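The steps above can be sketched in a few lines of code (an illustration only; the function name `transpose` and the list-of-rows representation are our own assumptions, not from the solution):

```python
# Minimal sketch of a matrix transpose, representing a matrix
# as a list of row lists (a hypothetical helper, not from the solution).
def transpose(matrix):
    # zip(*matrix) groups the i-th entries of every row together,
    # so the rows of A become the columns of A^T.
    return [list(col) for col in zip(*matrix)]

A = [[1, 2, 3],
     [4, 5, 6]]
print(transpose(A))  # [[1, 4], [2, 5], [3, 6]]
```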
We can find the determinant of a matrix only if the matrix is a square matrix. The determinant of a matrix \(A\) and the determinant of its transpose are equal, which means
\[\det \left( A \right)=\det \left( {{A}^{T}} \right)\]
We know that interchanging two rows (or columns) of a determinant n times multiplies its value by \({{\left( -1 \right)}^{n}}\). We also know that multiplying any single row or column by a scalar k changes the value of the determinant by a factor of k, which means that if k is a common factor of every element in a row or column, we can take k outside the determinant as a factor.
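These three determinant properties can be verified on a concrete 3×3 example. The sketch below (our own illustration with an arbitrary sample matrix; `det3` is a hypothetical helper, not from the solution) checks each one directly:

```python
# Checking the determinant properties used in the solution
# on a sample 3x3 matrix (an illustrative sketch only).
def det3(m):
    # Cofactor expansion along the first row.
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def transpose(m):
    return [list(col) for col in zip(*m)]

A = [[2, 0, 1],
     [3, 5, -1],
     [1, 4, 2]]

# Property 1: det(A) = det(A^T)
assert det3(A) == det3(transpose(A))

# Property 2: one row swap flips the sign of the determinant.
swapped = [A[1], A[0], A[2]]
assert det3(swapped) == -det3(A)

# Property 3: a common factor k in one row comes out as a factor k.
k = 3
scaled = [[k * x for x in A[0]], A[1], A[2]]
assert det3(scaled) == k * det3(A)
```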
We are given the following equation of determinants in the question: