

Inverse Matrix Properties

  • $\det A^{-1} = \frac{1}{\det A}$, where $\det$ denotes the determinant.
  • $(AB)^{-1} = B^{-1}A^{-1}$ for two square invertible matrices $A$ and $B$.
  • $(A^T)^{-1} = (A^{-1})^T$, where $(\dots)^T$ denotes the transposed matrix.
  • $(kA)^{-1} = k^{-1}A^{-1}$ for any coefficient $k \neq 0$.
  • $E^{-1} = E$.
  • If it is necessary to solve a system of linear equations $Ax = b$ (where $b$ is a non-zero vector and $x$ is the desired vector) and $A^{-1}$ exists, then $x = A^{-1}b$. Otherwise, either the dimension of the solution space is greater than zero, or there are no solutions at all.
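These properties are easy to check numerically. Below is a minimal sketch using NumPy; the matrices A and B and the vector b are arbitrary invertible examples chosen only for illustration:

```python
import numpy as np

# Two arbitrary invertible 2x2 matrices, chosen only for illustration.
A = np.array([[2.0, 1.0], [5.0, 3.0]])
B = np.array([[1.0, 4.0], [2.0, 9.0]])
b = np.array([1.0, 2.0])

A_inv = np.linalg.inv(A)

# det(A^-1) = 1 / det(A)
assert np.isclose(np.linalg.det(A_inv), 1.0 / np.linalg.det(A))

# (AB)^-1 = B^-1 A^-1
assert np.allclose(np.linalg.inv(A @ B), np.linalg.inv(B) @ np.linalg.inv(A))

# (A^T)^-1 = (A^-1)^T
assert np.allclose(np.linalg.inv(A.T), A_inv.T)

# (kA)^-1 = k^-1 A^-1 for k != 0
k = 3.0
assert np.allclose(np.linalg.inv(k * A), A_inv / k)

# If A^-1 exists, the system Ax = b has the unique solution x = A^-1 b.
x = A_inv @ b
assert np.allclose(A @ x, b)
```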

Ways to find the inverse matrix

If a matrix is invertible, its inverse can be found by one of the following methods:

Exact (direct) methods

Gauss-Jordan method

Take two matrices: the matrix A itself and the identity matrix E. Reduce the matrix A to the identity matrix by the Gauss-Jordan method, applying transformations by rows (transformations by columns may also be used, but not mixed with row transformations). After applying each operation to the first matrix, apply the same operation to the second. When the reduction of the first matrix to the identity form is complete, the second matrix equals A⁻¹.

When the Gauss method is used, the first matrix is multiplied on the left by one of the elementary matrices $\Lambda_i$ (a transvection or a diagonal matrix with ones on the main diagonal except in one position):

$\Lambda_1 \cdot \dots \cdot \Lambda_n \cdot A = \Lambda A = E \Rightarrow \Lambda = A^{-1},$

$\Lambda_m = \begin{bmatrix} 1 & \dots & 0 & -a_{1m}/a_{mm} & 0 & \dots & 0 \\ & & & \dots & & & \\ 0 & \dots & 1 & -a_{m-1,m}/a_{mm} & 0 & \dots & 0 \\ 0 & \dots & 0 & 1/a_{mm} & 0 & \dots & 0 \\ 0 & \dots & 0 & -a_{m+1,m}/a_{mm} & 1 & \dots & 0 \\ & & & \dots & & & \\ 0 & \dots & 0 & -a_{nm}/a_{mm} & 0 & \dots & 1 \end{bmatrix}.$

After all the operations have been applied, the second matrix equals $\Lambda$, that is, it is the desired inverse. The complexity of the algorithm is $O(n^3)$.
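A minimal pure-Python sketch of this scheme: the same row operations that reduce A to E are applied to a copy of E, which then becomes A⁻¹. Partial pivoting is added for numerical stability, although the description above does not require it.

```python
def gauss_jordan_inverse(a):
    """Invert a square matrix by Gauss-Jordan elimination on [A | E]."""
    n = len(a)
    # Work on copies: A and the identity matrix E side by side.
    a = [row[:] for row in a]
    e = [[float(i == j) for j in range(n)] for i in range(n)]

    for col in range(n):
        # Partial pivoting: pick the row with the largest entry in this column.
        pivot_row = max(range(col, n), key=lambda r: abs(a[r][col]))
        if abs(a[pivot_row][col]) < 1e-12:
            raise ValueError("matrix is singular (or nearly singular)")
        a[col], a[pivot_row] = a[pivot_row], a[col]
        e[col], e[pivot_row] = e[pivot_row], e[col]

        # Scale the pivot row so the pivot becomes 1.
        pivot = a[col][col]
        a[col] = [x / pivot for x in a[col]]
        e[col] = [x / pivot for x in e[col]]

        # Eliminate the column from every other row.
        for r in range(n):
            if r != col:
                factor = a[r][col]
                a[r] = [x - factor * y for x, y in zip(a[r], a[col])]
                e[r] = [x - factor * y for x, y in zip(e[r], e[col])]

    return e  # A has been reduced to E, so e now holds A^-1


# Example: invert a 2x2 matrix.
print(gauss_jordan_inverse([[2.0, 1.0], [5.0, 3.0]]))  # [[3, -1], [-5, 2]]
```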

Using the matrix of cofactors (adjugate matrix)

The inverse of the matrix $A$ can be represented in the form

$A^{-1} = \frac{\operatorname{adj}(A)}{\det(A)},$

where $\operatorname{adj}(A)$ is the adjugate matrix (the transposed matrix of cofactors).

The complexity of the algorithm depends on the complexity $O_{\det}$ of computing the determinant and equals $O(n^2) \cdot O_{\det}$.
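A small NumPy sketch of this formula: each cofactor is computed as a signed determinant of a minor, the cofactor matrix is transposed to obtain adj(A), and the result is divided by det(A). This is only practical for small matrices; the function name is ours.

```python
import numpy as np

def inverse_via_adjugate(a):
    """Invert a matrix as adj(A) / det(A) using cofactors of minors."""
    a = np.asarray(a, dtype=float)
    n = a.shape[0]
    det = np.linalg.det(a)
    if np.isclose(det, 0.0):
        raise ValueError("matrix is singular, no inverse exists")

    cofactors = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            # Minor: delete row i and column j, then apply the sign (-1)^(i+j).
            minor = np.delete(np.delete(a, i, axis=0), j, axis=1)
            cofactors[i, j] = (-1) ** (i + j) * np.linalg.det(minor)

    adjugate = cofactors.T          # adj(A) is the transposed cofactor matrix
    return adjugate / det


A = np.array([[2.0, -4.0, 3.0], [1.0, -2.0, 4.0], [3.0, -1.0, 5.0]])
print(np.allclose(inverse_via_adjugate(A) @ A, np.eye(3)))  # True
```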

Using LU/LUP decomposition

The matrix equation $AX = I_n$ for the inverse matrix $X$ can be viewed as a collection of $n$ systems of the form $Ax = b$. Denote the $i$-th column of the matrix $X$ by $X_i$; then $AX_i = e_i$, $i = 1, \ldots, n$, because the $i$-th column of the matrix $I_n$ is the unit vector $e_i$. In other words, finding the inverse matrix reduces to solving $n$ systems of equations with the same matrix and different right-hand sides. After the LUP decomposition has been computed (in time $O(n^3)$), each of the $n$ systems takes $O(n^2)$ time to solve, so this part of the work also takes $O(n^3)$ time.

If the matrix $A$ is nonsingular, we can compute its LUP decomposition $PA = LU$. Let $PA = B$ and $B^{-1} = D$. Then, from the properties of the inverse matrix, $D = U^{-1}L^{-1}$. Multiplying this equality by $U$ and by $L$, we obtain two equalities of the form $UD = L^{-1}$ and $DL = U^{-1}$. The first of these is a system of $n^2$ linear equations, for $\frac{n(n+1)}{2}$ of which the right-hand sides are known (from the properties of triangular matrices). The second is also a system of $n^2$ linear equations, for $\frac{n(n-1)}{2}$ of which the right-hand sides are known (also from the properties of triangular matrices). Together they form a system of $n^2$ equalities, from which all $n^2$ elements of the matrix $D$ can be determined recursively. Then from the equality $(PA)^{-1} = A^{-1}P^{-1} = B^{-1} = D$ we obtain $A^{-1} = DP$.

In the case of using the LU decomposition, no permutation of the columns of the matrix D is required, but the solution may diverge even if the matrix A is nonsingular.

The complexity of the algorithm is O(n³).
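A sketch of the column-by-column approach using SciPy's LUP factorization: the factorization is computed once and then reused to solve $AX_i = e_i$ for each column of the inverse.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def inverse_via_lup(a):
    """Invert A by one LUP factorization plus n triangular solves."""
    a = np.asarray(a, dtype=float)
    n = a.shape[0]
    lu, piv = lu_factor(a)              # PA = LU, O(n^3)
    inv = np.empty((n, n))
    identity = np.eye(n)
    for i in range(n):
        # Solve A x = e_i; the solution is the i-th column of A^-1 (O(n^2) each).
        inv[:, i] = lu_solve((lu, piv), identity[:, i])
    return inv


A = np.array([[2.0, -4.0, 3.0], [1.0, -2.0, 4.0], [3.0, -1.0, 5.0]])
print(np.allclose(inverse_via_lup(A) @ A, np.eye(3)))  # True
```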

Iterative Methods

Schulz method

$\begin{cases} \Psi_k = E - AU_k, \\ U_{k+1} = U_k \sum_{i=0}^{n} \Psi_k^i \end{cases}$

Error estimate

Choice of Initial Approximation

The problem of choosing the initial approximation in the iterative matrix-inversion processes considered here does not allow treating them as independent universal methods competing with direct inversion methods based, for example, on the LU decomposition of matrices. There are some recommendations for choosing $U_0$ that ensure the condition $\rho(\Psi_0) < 1$ (the spectral radius of the matrix is less than one), which is necessary and sufficient for the convergence of the process. However, in this case, first, one is required to know an upper bound for the spectrum of the matrix $A$ being inverted or of the matrix $AA^T$ (namely, if $A$ is a symmetric positive definite matrix and $\rho(A) \leq \beta$, then one can take $U_0 = \alpha E$, where $\alpha \in \left(0, \frac{2}{\beta}\right)$; if $A$ is an arbitrary nonsingular matrix and $\rho(AA^T) \leq \beta$, then one takes $U_0 = \alpha A^T$, where also $\alpha \in \left(0, \frac{2}{\beta}\right)$; the situation can, of course, be simplified: using the fact that $\rho(AA^T) \leq \|AA^T\|$, one may put $U_0 = \frac{A^T}{\|AA^T\|}$). Second, with such a choice of the initial matrix there is no guarantee that $\|\Psi_0\|$ will be small (it may even happen that $\|\Psi_0\| > 1$), and a high order of convergence will not show up immediately.
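A minimal sketch of the Schulz iteration with the initial approximation $U_0 = A^T/\|AA^T\|$ mentioned above. The inner sum over powers of $\Psi_k$ is truncated at a small fixed order (order 2 gives the classical Newton-Schulz step $U_{k+1} = U_k(2E - AU_k)$); the stopping tolerance and iteration limit are our own choices.

```python
import numpy as np

def schulz_inverse(a, order=2, tol=1e-12, max_iter=100):
    """Approximate A^-1 with the Schulz iteration
    U_{k+1} = U_k * (E + Psi_k + ... + Psi_k^(order-1)),  Psi_k = E - A U_k."""
    a = np.asarray(a, dtype=float)
    n = a.shape[0]
    identity = np.eye(n)

    # Initial approximation U_0 = A^T / ||A A^T||, as recommended above.
    u = a.T / np.linalg.norm(a @ a.T)

    for _ in range(max_iter):
        psi = identity - a @ u
        if np.linalg.norm(psi) < tol:
            break
        # Truncated sum E + Psi + Psi^2 + ...
        series = identity.copy()
        power = identity.copy()
        for _ in range(1, order):
            power = power @ psi
            series += power
        u = u @ series
    return u


A = np.array([[2.0, -4.0, 3.0], [1.0, -2.0, 4.0], [3.0, -1.0, 5.0]])
print(np.allclose(schulz_inverse(A) @ A, np.eye(3)))  # True
```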

Examples

2×2 matrix

$\mathbf{A}^{-1} = \begin{bmatrix} a & b \\ c & d \end{bmatrix}^{-1} = \frac{1}{\det(\mathbf{A})} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix} = \frac{1}{ad - bc} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix}.$

The inversion of a 2×2 matrix is possible only under the condition that $ad - bc = \det A \neq 0$.
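The closed-form 2×2 formula translated directly into code (a minimal sketch; the helper name is ours):

```python
def inverse_2x2(a, b, c, d):
    """Inverse of [[a, b], [c, d]] via the closed-form 2x2 formula."""
    det = a * d - b * c
    if det == 0:
        raise ValueError("ad - bc = 0: the matrix has no inverse")
    return [[d / det, -b / det],
            [-c / det, a / det]]


print(inverse_2x2(2, 1, 5, 3))  # [[3.0, -1.0], [-5.0, 2.0]]
```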

Let there be a square matrix of the nth order

The matrix $A^{-1}$ is called the inverse of the matrix A if $A \cdot A^{-1} = E$, where E is the identity matrix of the nth order.

The identity matrix is a square matrix in which all elements on the main diagonal (running from the upper left corner to the lower right corner) are ones and all other elements are zeros, for example:

$E = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$

An inverse matrix may exist only for square matrices, i.e. for those matrices that have the same number of rows and columns.

Inverse Matrix Existence Condition Theorem

For a matrix to have an inverse, it is necessary and sufficient that it be nonsingular (nondegenerate).

The matrix $A = (A_1, A_2, \ldots, A_n)$ is called nondegenerate if its column vectors are linearly independent. The number of linearly independent column vectors of a matrix is called the rank of the matrix. Therefore, we can say that for an inverse matrix to exist it is necessary and sufficient that the rank of the matrix equal its order, i.e. $r = n$.

Algorithm for finding the inverse matrix

  1. Write the matrix A into the table used for solving systems of equations by the Gauss method, and append the identity matrix E to it on the right (in place of the right-hand sides of the equations).
  2. Using Jordan transformations, reduce the matrix A to a matrix consisting of unit columns; the matrix E must be transformed simultaneously.
  3. If necessary, rearrange the rows (equations) of the last table so that the identity matrix E appears under the matrix A of the original table.
  4. Write down the inverse matrix $A^{-1}$, which stands in the last table under the matrix E of the original table.
Example 1

For the matrix A, find the inverse matrix $A^{-1}$.

Solution: we write down the matrix A and append the identity matrix E to it on the right. Using Jordan transformations, we reduce the matrix A to the identity matrix E. The calculations are shown in Table 31.1.

Let us check the correctness of the calculations by multiplying the original matrix A by the inverse matrix $A^{-1}$.

As a result of matrix multiplication, the identity matrix is ​​obtained. Therefore, the calculations are correct.

Answer:

Solution of matrix equations

Matrix equations can look like:

AX = B, XA = B, AXB = C,

where A, B, C are given matrices, X is the desired matrix.

Matrix equations are solved by multiplying the equation by inverse matrices.

For example, to find the matrix X from the equation AX = B, you need to multiply this equation by $A^{-1}$ on the left.

Therefore, to find the solution of the equation, you need to find the inverse matrix $A^{-1}$ and multiply it by the matrix B on the right-hand side of the equation.

Other equations are solved similarly.
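A short NumPy sketch of all three cases. The example matrices are arbitrary invertible ones chosen for illustration; in practice one would usually call a solver rather than form the inverse explicitly, but the inverse-matrix form is shown to match the text.

```python
import numpy as np

A = np.array([[2.0, 1.0], [5.0, 3.0]])
B = np.array([[1.0, 2.0], [3.0, 4.0]])
C = np.array([[5.0, 6.0], [7.0, 8.0]])

A_inv = np.linalg.inv(A)
B_inv = np.linalg.inv(B)

# AX = B   ->  multiply on the left by A^-1:   X = A^-1 B
X1 = A_inv @ B
# XA = B   ->  multiply on the right by A^-1:  X = B A^-1
X2 = B @ A_inv
# AXB = C  ->  X = A^-1 C B^-1
X3 = A_inv @ C @ B_inv

print(np.allclose(A @ X1, B), np.allclose(X2 @ A, B), np.allclose(A @ X3 @ B, C))
```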

Example 2

Solve the equation AX = B if

Solution: Since the inverse of the matrix equals (see example 1)

Matrix method in economic analysis

Along with other methods, matrix methods also find application. These methods are based on linear and vector-matrix algebra and are used for analyzing complex and multidimensional economic phenomena. Most often they are applied when it is necessary to compare the performance of organizations and their structural divisions.

In the process of applying matrix methods of analysis, several stages can be distinguished.

At the first stage a system of economic indicators is formed and on its basis a matrix of initial data is compiled: a table whose individual rows correspond to the numbers of the systems (i = 1, 2, ..., n) and whose columns correspond to the numbers of the indicators (j = 1, 2, ..., m).

At the second stage, for each column the largest of the available indicator values is identified and taken as one.

After that, all the values in this column are divided by the largest value, and a matrix of standardized coefficients is formed.

At the third stage, all components of the matrix are squared. If the indicators have different significance, each indicator is assigned a weighting coefficient k, whose value is determined by an expert.

At the last, fourth stage, the obtained rating values $R_j$ are arranged in increasing or decreasing order.

The above matrix methods should be used, for example, in a comparative analysis of various investment projects, as well as in assessing other economic performance indicators of organizations.
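A hedged sketch of the procedure described above, under the assumption that the rating of each organization is the (optionally weighted) sum of its squared standardized coefficients; the function name, the sample data and the aggregation rule are illustrative, not taken from the text.

```python
import numpy as np

def rating_matrix_method(data, weights=None):
    """Rating of each organization from an (organizations x indicators) matrix.

    Stages: standardize each column by its maximum, square the components,
    apply optional indicator weights, and sum across indicators per row.
    """
    data = np.asarray(data, dtype=float)
    standardized = data / data.max(axis=0)      # stage 2: column maximum -> 1
    squared = standardized ** 2                 # stage 3: square the components
    if weights is not None:
        squared = squared * np.asarray(weights, dtype=float)
    ratings = squared.sum(axis=1)               # stage 4: one rating per organization
    return np.argsort(-ratings), ratings        # ranking (best first) and raw ratings


# Illustrative data: 3 organizations, 4 indicators (values are made up).
data = [[120.0, 0.8, 15.0, 3.2],
        [150.0, 0.6, 12.0, 4.1],
        [100.0, 0.9, 18.0, 2.7]]
order, ratings = rating_matrix_method(data)
print(order, ratings)
```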

In this article we will discuss the matrix method for solving a system of linear algebraic equations, give its definition, and provide examples of its solution.

Definition 1

The inverse matrix method is the method used to solve a SLAE (system of linear algebraic equations) when the number of unknowns equals the number of equations.

Example 1

Find a solution to a system of n linear equations with n unknowns:

$\begin{cases} a_{11}x_1 + a_{12}x_2 + \ldots + a_{1n}x_n = b_1 \\ \dots \\ a_{n1}x_1 + a_{n2}x_2 + \ldots + a_{nn}x_n = b_n \end{cases}$

In matrix form: $A \times X = B$,

where $A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \cdots & \cdots & \cdots & \cdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix}$ is the matrix of the system,

$X = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}$ is the column of unknowns,

$B = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{bmatrix}$ is the column of free terms.

From the resulting equation we need to express X. To do this, we multiply both sides of the matrix equation on the left by $A^{-1}$:

$A^{-1} \times A \times X = A^{-1} \times B.$

Since $A^{-1} \times A = E$, we have $E \times X = A^{-1} \times B$, or $X = A^{-1} \times B$.

Comment

The inverse of the matrix A exists only if $\det A$ is not equal to zero. Therefore, when solving a SLAE by the inverse matrix method, $\det A$ is found first of all.

If $\det A$ is not equal to zero, the system has a unique solution, which can be found by the inverse matrix method. If $\det A = 0$, the system cannot be solved by this method.
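A minimal sketch of this decision rule: compute det A first, and only if it is nonzero apply $X = A^{-1} \times B$ (the function name is ours):

```python
import numpy as np

def solve_by_inverse_matrix(A, B):
    """Solve A X = B by the inverse matrix method, after checking det A."""
    A = np.asarray(A, dtype=float)
    B = np.asarray(B, dtype=float)
    det = np.linalg.det(A)
    if np.isclose(det, 0.0):
        raise ValueError("det A = 0: the system cannot be solved by this method")
    return np.linalg.inv(A) @ B


x = solve_by_inverse_matrix([[2.0, 1.0], [5.0, 3.0]], [1.0, 2.0])
print(x)  # [ 1. -1.]
```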

An example of solving a system of linear equations using the inverse matrix method

Example 2

We solve SLAE by the inverse matrix method:

$\begin{cases} 2x_1 - 4x_2 + 3x_3 = 1 \\ x_1 - 2x_2 + 4x_3 = 3 \\ 3x_1 - x_2 + 5x_3 = 2 \end{cases}$

How to solve it?

  • We write the system in the form of the matrix equation $AX = B$, where

$A = \begin{bmatrix} 2 & -4 & 3 \\ 1 & -2 & 4 \\ 3 & -1 & 5 \end{bmatrix}, \quad X = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}, \quad B = \begin{bmatrix} 1 \\ 3 \\ 2 \end{bmatrix}.$

  • We express X from this equation: $X = A^{-1} \times B$.
  • We find the determinant of the matrix A:

$\det A = \begin{vmatrix} 2 & -4 & 3 \\ 1 & -2 & 4 \\ 3 & -1 & 5 \end{vmatrix} = 2 \times (-2) \times 5 + 3 \times (-4) \times 4 + 3 \times (-1) \times 1 - 3 \times (-2) \times 3 - 1 \times (-4) \times 5 - 2 \times 4 \times (-1) = -20 - 48 - 3 + 18 + 20 + 8 = -25$

$\det A$ is not equal to 0; therefore, the inverse matrix method is suitable for this system.

  • We find the inverse matrix $A^{-1}$ using the adjugate matrix. We compute the cofactors $A_{ij}$ of the corresponding elements of the matrix A:

$A_{11} = (-1)^{1+1} \begin{vmatrix} -2 & 4 \\ -1 & 5 \end{vmatrix} = -10 + 4 = -6,$

$A_{12} = (-1)^{1+2} \begin{vmatrix} 1 & 4 \\ 3 & 5 \end{vmatrix} = -(5 - 12) = 7,$

$A_{13} = (-1)^{1+3} \begin{vmatrix} 1 & -2 \\ 3 & -1 \end{vmatrix} = -1 + 6 = 5,$

$A_{21} = (-1)^{2+1} \begin{vmatrix} -4 & 3 \\ -1 & 5 \end{vmatrix} = -(-20 + 3) = 17,$

$A_{22} = (-1)^{2+2} \begin{vmatrix} 2 & 3 \\ 3 & 5 \end{vmatrix} = 10 - 9 = 1,$

$A_{23} = (-1)^{2+3} \begin{vmatrix} 2 & -4 \\ 3 & -1 \end{vmatrix} = -(-2 + 12) = -10,$

$A_{31} = (-1)^{3+1} \begin{vmatrix} -4 & 3 \\ -2 & 4 \end{vmatrix} = -16 + 6 = -10,$

$A_{32} = (-1)^{3+2} \begin{vmatrix} 2 & 3 \\ 1 & 4 \end{vmatrix} = -(8 - 3) = -5,$

$A_{33} = (-1)^{3+3} \begin{vmatrix} 2 & -4 \\ 1 & -2 \end{vmatrix} = -4 + 4 = 0.$

  • We write down the matrix of cofactors $A^*$, composed of the cofactors of the matrix A:

$A^* = \begin{bmatrix} -6 & 7 & 5 \\ 17 & 1 & -10 \\ -10 & -5 & 0 \end{bmatrix}$

  • We write the inverse matrix according to the formula:

$A^{-1} = \frac{1}{\det A}(A^*)^T: \quad A^{-1} = -\frac{1}{25}\begin{bmatrix} -6 & 17 & -10 \\ 7 & 1 & -5 \\ 5 & -10 & 0 \end{bmatrix},$

  • We multiply the inverse matrix $A^{-1}$ by the column of free terms B and obtain the solution of the system:

$X = A^{-1} \times B = -\frac{1}{25}\begin{bmatrix} -6 & 17 & -10 \\ 7 & 1 & -5 \\ 5 & -10 & 0 \end{bmatrix}\begin{bmatrix} 1 \\ 3 \\ 2 \end{bmatrix} = -\frac{1}{25}\begin{bmatrix} -6 + 51 - 20 \\ 7 + 3 - 10 \\ 5 - 30 + 0 \end{bmatrix} = \begin{bmatrix} -1 \\ 0 \\ 1 \end{bmatrix}$

Answer: $x_1 = -1$; $x_2 = 0$; $x_3 = 1$.
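The worked example can be checked numerically; the matrix and the right-hand side below are taken from the statement of Example 2:

```python
import numpy as np

A = np.array([[2.0, -4.0, 3.0],
              [1.0, -2.0, 4.0],
              [3.0, -1.0, 5.0]])
B = np.array([1.0, 3.0, 2.0])

print(np.linalg.det(A))       # -25.0 (up to rounding), matching det A above
print(np.linalg.inv(A) @ B)   # [-1.  0.  1.], i.e. x1 = -1, x2 = 0, x3 = 1
```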


Start from the formula: $A^{-1} = A^*/\det A$, where $A^*$ is the adjugate matrix and $\det A$ is the determinant of the original matrix. The adjugate matrix is the transposed matrix of cofactors of the elements of the original matrix.

First of all, find the determinant of the matrix: it must be different from zero, since it will then be used as a divisor. Suppose, for example, that a matrix of the third order (consisting of three rows and three columns) is given. If its determinant is not equal to zero, then the inverse matrix exists.

Find the cofactor of each element of the matrix A. The cofactor of the element $a_{ij}$ is the determinant of the submatrix obtained from the original matrix by deleting the i-th row and the j-th column, and this determinant is taken with a sign. The sign is determined by multiplying the determinant by $(-1)^{i+j}$; for example, for an element in the second row and first column the sign is $(-1)^{2+1} = -1$.

As a result you obtain the matrix of cofactors; now transpose it. Transposition is the operation of reflecting a matrix about its main diagonal: columns and rows are swapped. Thus you have found the adjugate matrix $A^*$.

The inverse of a given matrix is a matrix whose product with the original matrix gives the identity matrix. A necessary and sufficient condition for the existence of an inverse matrix is that the determinant of the original matrix be nonzero (which in turn implies that the matrix must be square). If the determinant of a matrix is equal to zero, the matrix is called degenerate (singular) and has no inverse. Inverse matrices are important in higher mathematics and are used to solve a number of problems; for example, the matrix method for solving systems of equations is built on finding the inverse matrix. The inverse can be computed in two ways: by the Gauss-Jordan method or by using the matrix of cofactors. The first requires a large number of elementary transformations within the matrix, the second the calculation of the determinant and of the cofactors of all elements.


