Question

What are the properties of matrices?

Solution

In mathematics, a matrix (plural: matrices) is a rectangular array or table of numbers, symbols, or expressions arranged in rows and columns. An $m \times n$ matrix $A$ is a rectangular array of elements $a_{ij}$ consisting of $m$ rows and $n$ columns:
A = \left[ a_{ij} \right] = \begin{pmatrix} a_{11} & \ldots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{m1} & \cdots & a_{mn} \end{pmatrix}
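For example, the illustrative matrix
\begin{pmatrix} 1 & 5 & 0 \\ 2 & -3 & 4 \end{pmatrix}
is a $2 \times 3$ matrix: it has $2$ rows and $3$ columns, and its entry $a_{12} = 5$ sits in the first row and second column.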

Complete step-by-step solution:
A square matrix of order $n$ has $n$ rows and $n$ columns.
A square matrix $\left[ a_{ij} \right]$ is called a symmetric matrix if $a_{ij} = a_{ji}$, i.e. the elements of the matrix are symmetric with respect to the main diagonal.
A square matrix $\left[ a_{ij} \right]$ is called skew-symmetric if $a_{ij} = -a_{ji}$.
A square matrix is called diagonal if all its elements outside the main diagonal are equal to zero.
A diagonal matrix is called the identity matrix if the elements on its main diagonal are all equal to $1$ (all other elements are zero).
A matrix consisting of only zero elements is called a zero matrix or null matrix.
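As an illustration of these definitions,
\begin{pmatrix} 1 & 2 & 3 \\ 2 & 5 & -1 \\ 3 & -1 & 0 \end{pmatrix}
is a symmetric matrix ($a_{ij} = a_{ji}$), while
\begin{pmatrix} 0 & 2 & -3 \\ -2 & 0 & 1 \\ 3 & -1 & 0 \end{pmatrix}
is skew-symmetric ($a_{ij} = -a_{ji}$, which forces every element on the main diagonal to be $0$).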
Two matrices $A$ and $B$ are equal if and only if they have the same size $m \times n$ and their corresponding elements are equal.
Two matrices $A$ and $B$ can be added (or subtracted) if and only if they have the same size $m \times n$. If
A = \left[ a_{ij} \right] = \begin{pmatrix} a_{11} & \ldots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{m1} & \cdots & a_{mn} \end{pmatrix}
and
B = \left[ b_{ij} \right] = \begin{pmatrix} b_{11} & \ldots & b_{1n} \\ \vdots & \ddots & \vdots \\ b_{m1} & \cdots & b_{mn} \end{pmatrix}
Then the sum of these matrices is defined as
A + B = \left[ a_{ij} \right] + \left[ b_{ij} \right] = \begin{pmatrix} a_{11} + b_{11} & \ldots & a_{1n} + b_{1n} \\ \vdots & \ddots & \vdots \\ a_{m1} + b_{m1} & \cdots & a_{mn} + b_{mn} \end{pmatrix}
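For example, for two $2 \times 2$ matrices,
\begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} + \begin{pmatrix} 5 & 0 \\ -1 & 2 \end{pmatrix} = \begin{pmatrix} 6 & 2 \\ 2 & 6 \end{pmatrix}
since corresponding elements are added: $1 + 5 = 6$, $2 + 0 = 2$, and so on.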
Given a constant number $k$ and a matrix $A = \left[ a_{ij} \right]$, the scalar multiple $kA$ is defined as
kA = \left[ k a_{ij} \right] = \begin{pmatrix} k a_{11} & \ldots & k a_{1n} \\ \vdots & \ddots & \vdots \\ k a_{m1} & \cdots & k a_{mn} \end{pmatrix}
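For example, with $k = 3$,
3\begin{pmatrix} 1 & -2 \\ 0 & 4 \end{pmatrix} = \begin{pmatrix} 3 & -6 \\ 0 & 12 \end{pmatrix}
i.e. every element of the matrix is multiplied by the constant.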
Let $A$ and $B$ be two matrices. The product $AB$ exists if and only if the number of columns of the first matrix is equal to the number of rows of the second matrix. If $A$ is an $m \times n$ matrix,
A = \left[ a_{ij} \right] = \begin{pmatrix} a_{11} & \ldots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{m1} & \cdots & a_{mn} \end{pmatrix}
and $B$ is an $n \times p$ matrix,
B = \left[ b_{ij} \right] = \begin{pmatrix} b_{11} & \ldots & b_{1p} \\ \vdots & \ddots & \vdots \\ b_{n1} & \cdots & b_{np} \end{pmatrix}
then the product $AB$ is the $m \times p$ matrix
AB = C = \begin{pmatrix} c_{11} & \ldots & c_{1p} \\ \vdots & \ddots & \vdots \\ c_{m1} & \cdots & c_{mp} \end{pmatrix}
where each element is obtained by multiplying the rows of $A$ with the columns of $B$: $c_{ij} = \sum\limits_{k = 1}^{n} a_{ik} b_{kj}$.
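For example, a $2 \times 3$ matrix can be multiplied by a $3 \times 2$ matrix, and the product is a $2 \times 2$ matrix:
\begin{pmatrix} 1 & 2 & 0 \\ 3 & -1 & 4 \end{pmatrix} \begin{pmatrix} 2 & 1 \\ 0 & 3 \\ 1 & -2 \end{pmatrix} = \begin{pmatrix} 2 & 7 \\ 10 & -8 \end{pmatrix}
Here, for instance, $c_{21} = 3 \cdot 2 + (-1) \cdot 0 + 4 \cdot 1 = 10$.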
If the rows and columns of a matrix $A$ are interchanged, the new matrix is called the transpose of the original matrix $A$. The transposed matrix is denoted by $A^T$.
A square matrix $A$ is called orthogonal if $AA^T = I$, where $I$ is the identity matrix.
If the matrix product $AB$ is defined, then ${\left( AB \right)}^T = B^T A^T$.
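For example, the transpose of a $2 \times 3$ matrix is a $3 \times 2$ matrix:
{\begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{pmatrix}}^T = \begin{pmatrix} 1 & 4 \\ 2 & 5 \\ 3 & 6 \end{pmatrix}
and the rotation matrix
R = \begin{pmatrix} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \end{pmatrix}
is orthogonal, since $RR^T = I$ (each row has length $1$ and the rows are mutually perpendicular).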
If $A$ is a square matrix of order $n$, then the corresponding adjoint matrix, denoted by $C^*$, is the matrix formed by the cofactors $A_{ij}$ of the elements of the transposed matrix $A^T$.
If $A$ is a square matrix of order $n$, then its trace, denoted by $\operatorname{tr} A$, is the sum of the elements on the main diagonal.
If $A$ and $B$ are invertible square matrices of the same order, then ${\left( AB \right)}^{-1} = B^{-1} A^{-1}$.
If $A$ is a square matrix, its eigenvectors $X$ satisfy the matrix equation $AX = \lambda X$, and the eigenvalues $\lambda$ are determined by the characteristic equation $\left| A - \lambda I \right| = 0$.
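For example, for
A = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}
the characteristic equation $\left| A - \lambda I \right| = (2 - \lambda)^2 - 1 = 0$ gives the eigenvalues $\lambda = 1$ and $\lambda = 3$, with corresponding eigenvectors $\begin{pmatrix} 1 \\ -1 \end{pmatrix}$ and $\begin{pmatrix} 1 \\ 1 \end{pmatrix}$.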

Note: The inverse of a matrix $A$ is defined as a matrix $A^{-1}$ such that the product of the original matrix $A$ with $A^{-1}$ is the identity matrix $I$:
AA^{-1} = I
An inverse matrix exists only for square nonsingular matrices (those whose determinant is not zero). If $A$ is a square nonsingular matrix of order $n$, the inverse matrix $A^{-1}$ is given by
A^{-1} = \frac{C^*}{\det A}
where $C^*$ is the adjoint of the matrix $A$ and $\det A$ is its determinant.
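As a worked example, take
A = \begin{pmatrix} 2 & 1 \\ 5 & 3 \end{pmatrix}
Its determinant is $\det A = 2 \cdot 3 - 1 \cdot 5 = 1$, and its adjoint (the matrix of cofactors of $A^T$) is
C^* = \begin{pmatrix} 3 & -1 \\ -5 & 2 \end{pmatrix}
so
A^{-1} = \frac{C^*}{\det A} = \begin{pmatrix} 3 & -1 \\ -5 & 2 \end{pmatrix}
One can check that $AA^{-1} = I$.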