
Eigenvalues and eigenvectors. Examples of solutions. Eigenvectors and eigenvalues of a linear operator

Diagonal matrices have the simplest structure. This raises the question of whether a basis can be found in which the matrix of a linear operator takes diagonal form. Under suitable conditions such a basis exists.
Let a linear space R n be given together with a linear operator A acting in it; the operator A then maps R n into itself, that is, A: R n → R n.

Definition. A non-zero vector x is called an eigenvector of the operator A if A takes it into a collinear vector, that is, Ax = λx. The number λ is called the eigenvalue of the operator A corresponding to the eigenvector x.
Let us note some properties of eigenvalues ​​and eigenvectors.
1. Any non-zero linear combination of eigenvectors of the operator A corresponding to the same eigenvalue λ is again an eigenvector with that eigenvalue.
2. Eigenvectors of the operator A with pairwise different eigenvalues λ 1, λ 2, …, λ m are linearly independent.
3. If the eigenvalues coincide, λ 1 = λ 2 = … = λ m = λ, then the eigenvalue λ corresponds to no more than m linearly independent eigenvectors.

So, if there are n eigenvectors corresponding to n different eigenvalues λ 1, λ 2, …, λ n, then they are linearly independent and can therefore be taken as a basis of the space R n. Let us find the form of the matrix of the linear operator A in the basis of its eigenvectors, acting with the operator A on the basis vectors: A x i = λ i x i.
Thus, the matrix of the linear operator A in the basis of its eigenvectors has a diagonal form, and the eigenvalues ​​of the operator A are along the diagonal.
Is there another basis in which the matrix has a diagonal form? The answer to this question is given by the following theorem.

Theorem. The matrix of a linear operator A in the basis (i = 1..n) has a diagonal form if and only if all the vectors of the basis are eigenvectors of the operator A.
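The theorem can be checked numerically. The sketch below uses a small hypothetical matrix (not one from the text) and NumPy's `eig`; the change of basis P⁻¹AP should come out diagonal with the eigenvalues on the diagonal.

```python
import numpy as np

# Hypothetical diagonalizable matrix (not from the text), used only to
# illustrate the theorem numerically.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eigvals, P = np.linalg.eig(A)        # columns of P are eigenvectors

# Matrix of the operator in the eigenvector basis: P^{-1} A P
D = np.linalg.inv(P) @ A @ P

# Off-diagonal entries vanish; the diagonal holds the eigenvalues.
assert np.allclose(D, np.diag(eigvals))
print(np.round(np.diag(D), 6))
```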

Rule for finding eigenvalues ​​and eigenvectors

Let a vector x be given, where x 1, x 2, …, x n are its coordinates relative to the basis, and let x be an eigenvector of the linear operator A corresponding to the eigenvalue λ, that is, Ax = λx. This relationship can be written in matrix form

(A - λE)X = 0. (*)


Equation (*) can be considered as an equation for finding the vector X, and we are interested in its non-trivial solutions, since an eigenvector cannot be the zero vector. It is known that non-trivial solutions of a homogeneous system of linear equations exist if and only if det(A - λE) = 0. Thus, for λ to be an eigenvalue of the operator A it is necessary and sufficient that det(A - λE) = 0.
If equation (*) is written in detail in coordinate form, we obtain a system of linear homogeneous equations:

(a 11 - λ)x 1 + a 12 x 2 + … + a 1n x n = 0
a 21 x 1 + (a 22 - λ)x 2 + … + a 2n x n = 0
…
a n1 x 1 + a n2 x 2 + … + (a nn - λ)x n = 0    (1)

where A = (a ij) is the matrix of the linear operator.

System (1) has a non-zero solution if and only if its determinant D is equal to zero:

det(A - λE) = 0.

This gives an equation for finding the eigenvalues.
This equation is called the characteristic equation, and its left side is called the characteristic polynomial of the matrix (operator) A. If the characteristic polynomial has no real roots, then the matrix A has no eigenvectors and cannot be reduced to diagonal form.
Let λ 1, λ 2, …, λ n be the real roots of the characteristic equation, and among them there may be multiples. Substituting these values ​​in turn into system (1), we find the eigenvectors.
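As a sketch of this rule, the snippet below applies it to a hypothetical 2 × 2 matrix (not one from the examples): the eigenvalues are the roots of the characteristic polynomial λ² − tr(A)·λ + det(A), and for each root a non-trivial solution of (A − λE)x = 0 is read off the null space.

```python
import numpy as np

# Hypothetical 2x2 matrix used to illustrate the rule.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Characteristic polynomial det(A - lambda*E) = lambda^2 - tr(A)*lambda + det(A)
coeffs = [1.0, -np.trace(A), np.linalg.det(A)]
lambdas = np.real(np.roots(coeffs))   # roots are real for this matrix

# For each root, a non-trivial solution of (A - lambda*E)x = 0 is an
# eigenvector; here it is read off the null space via the SVD.
for lam in lambdas:
    x = np.linalg.svd(A - lam * np.eye(2))[2][-1]
    assert np.allclose(A @ x, lam * x)
print(sorted(lambdas.tolist()))
```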

Example 12. The linear operator A acts in R 3 according to the law given below, where x 1, x 2, x 3 are the coordinates of the vector in the basis , , . Find the eigenvalues and eigenvectors of this operator.
Solution. We build the matrix of this operator:
.
We create a system for determining the coordinates of eigenvectors:

We compose a characteristic equation and solve it:

.
λ 1,2 = -1, λ 3 = 3.
Substituting λ = -1 into the system, we have:
Since the rank of the matrix of this system is 2 while the number of unknowns is 3, there are two dependent unknowns and one free unknown.
Let x 1 be the free unknown. Solving the system in any way, we find its general solution; the fundamental system of solutions consists of one solution, since n - r = 3 - 2 = 1.
The set of eigenvectors corresponding to the eigenvalue λ = -1 is a one-parameter family, where x 1 is any number other than zero. Let us choose one vector from this set, for example by putting x 1 = 1.
Reasoning similarly, we find the eigenvector corresponding to the eigenvalue λ = 3: .
In the space R 3, the basis consists of three linearly independent vectors, but we received only two linearly independent eigenvectors, from which the basis in R 3 cannot be composed. Consequently, we cannot reduce the matrix A of a linear operator to diagonal form.
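Since the matrix of Example 12 is not reproduced here, the following sketch illustrates the same outcome with a hypothetical matrix: a double root λ = −1 whose eigenspace is only one-dimensional, so a basis of eigenvectors cannot be assembled.

```python
import numpy as np

# Hypothetical matrix (Example 12's matrix is not reproduced in the
# text): lambda = -1 is a double root of the characteristic equation,
# yet rank(A + E) = 2, so n - r = 1 and only one independent
# eigenvector corresponds to it.
A = np.array([[-1.0, 1.0, 0.0],
              [ 0.0, -1.0, 0.0],
              [ 0.0, 0.0, 3.0]])

rank = np.linalg.matrix_rank(A - (-1.0) * np.eye(3))
n_independent = 3 - rank      # dimension of the eigenspace of lambda = -1

assert n_independent == 1     # fewer than the multiplicity 2
print(n_independent)
```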

Example 13. Given a matrix .
1. Prove that the vector is an eigenvector of matrix A. Find the eigenvalue corresponding to this eigenvector.
2. Find a basis in which matrix A has a diagonal form.
Solution.
1. If Ax = λx for some number λ, then x is an eigenvector of A. Multiply the matrix A by the given vector:

.
The vector (1, 8, -1) is an eigenvector, with eigenvalue λ = -1.
The matrix has a diagonal form in a basis consisting of eigenvectors. One of them is already known. Let us find the rest.
We look for eigenvectors from the system:

Characteristic equation: ;
(3 + λ)[(2 - λ)(-2 - λ) + 3] = 0; (3 + λ)(λ 2 - 1) = 0
λ 1 = -3, λ 2 = 1, λ 3 = -1.
Let's find the eigenvector corresponding to the eigenvalue λ = -3:

The rank of the matrix of this system is two, and the system forces x 1 = x 3 = 0, while x 2 remains free and can be anything other than zero, for example x 2 = 1. Thus, the vector (0, 1, 0) is an eigenvector corresponding to λ = -3. Let us check:
.
If λ = 1, then we obtain the system
The rank of the matrix is ​​two. We cross out the last equation.
Let x 3 be the free unknown. Then x 1 = -3x 3, 4x 2 = 10x 1 - 6x 3 = -30x 3 - 6x 3 = -36x 3, so x 2 = -9x 3.
Assuming x 3 = 1, we have (-3,-9,1) - an eigenvector corresponding to the eigenvalue λ = 1. Check:

.
Since the eigenvalues are real and distinct, the corresponding vectors are linearly independent, so they can be taken as a basis in R 3. Thus, in the basis (0, 1, 0), (-3, -9, 1), (1, 8, -1), the matrix A has diagonal form with -3, 1, -1 on the diagonal.
Not every matrix of a linear operator A: R n → R n can be reduced to diagonal form, since some linear operators have fewer than n linearly independent eigenvectors. However, if the matrix is symmetric, then a root of the characteristic equation of multiplicity m corresponds to exactly m linearly independent eigenvectors.

Definition. A symmetric matrix is a square matrix in which the elements symmetric about the main diagonal are equal, that is, in which a ij = a ji.
Notes. 1. All eigenvalues ​​of a symmetric matrix are real.
2. The eigenvectors of a symmetric matrix corresponding to pairwise different eigenvalues ​​are orthogonal.
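Both notes are easy to confirm numerically on a hypothetical symmetric matrix:

```python
import numpy as np

# Hypothetical symmetric matrix for checking the two notes.
S = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
assert np.allclose(S, S.T)

eigvals, V = np.linalg.eigh(S)       # eigh is specialized to symmetric input

# Note 1: all eigenvalues are real (eigh returns real values directly).
assert np.all(np.isreal(eigvals))

# Note 2: eigenvectors of distinct eigenvalues are orthogonal, so the
# matrix of eigenvectors is orthogonal.
assert np.allclose(V.T @ V, np.eye(3))
print(np.round(eigvals, 6))
```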
As one of the many applications of the studied apparatus, we consider the problem of determining the type of a second-order curve.

An eigenvector of a square matrix is ​​one that, when multiplied by a given matrix, results in a collinear vector. In simple words, when a matrix is ​​multiplied by an eigenvector, the latter remains the same, but multiplied by a certain number.

Definition

An eigenvector is a non-zero vector V which, when multiplied by a square matrix M, is carried into a multiple of itself by some number λ. In algebraic notation:

M × V = λ × V,

where λ is the eigenvalue of the matrix M.

Let's look at a numerical example. For ease of recording, numbers in the matrix will be separated by a semicolon. Let us have a matrix:

  • M = 0; 4;
  • 6; 10.

Let's multiply it by a column vector:

  • V = -2;
  • 1.

When we multiply a matrix by a column vector, we also get a column vector. In strict mathematical language, the formula for multiplying a 2 × 2 matrix by a column vector will look like this:

  • M × V = M11 × V11 + M12 × V21;
  • M21 × V11 + M22 × V21.

M11 means the element of matrix M located in the first row and first column, and M22 means the element in the second row and second column. For our matrix, these elements are M11 = 0, M12 = 4, M21 = 6, M22 = 10. For the column vector, the values are V11 = -2, V21 = 1. According to this formula, the product of the square matrix and the vector is:

  • M × V = 0 × (-2) + (4) × (1) = 4;
  • 6 × (-2) + 10 × (1) = -2.

For convenience, let's write the column vector into a row. So, we multiplied the square matrix by the vector (-2; 1), resulting in the vector (4; -2). Obviously, this is the same vector multiplied by λ = -2. Lambda in this case denotes the eigenvalue of the matrix.
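This walkthrough can be replayed directly in code; the multiplication below reproduces M × V = (4; −2) = −2 × V.

```python
# Replaying the worked example: M * (-2, 1) must equal -2 * (-2, 1).
M = [[0, 4],
     [6, 10]]
V = [-2, 1]

MV = [M[0][0] * V[0] + M[0][1] * V[1],   # first row of M times V
      M[1][0] * V[0] + M[1][1] * V[1]]   # second row of M times V

print(MV)                                # [4, -2]
assert MV == [-2 * v for v in V]         # i.e. lambda = -2
```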

An eigenvector of a matrix maps to a collinear vector, that is, an object that does not change its direction in space when multiplied by the matrix. The concept of collinearity in vector algebra is similar to parallelism in geometry. In geometric terms, collinear vectors are parallel directed segments of possibly different lengths. Since the time of Euclid we know that one line has an infinite number of lines parallel to it, so it is logical to assume that each matrix has an infinite number of eigenvectors.

From the previous example it is clear that (-8; 4), (16; -8), and (32; -16) are also eigenvectors. These are all collinear vectors corresponding to the eigenvalue λ = -2: multiplying the original matrix by any of them still yields a vector that is -2 times the original. That is why, when solving problems of finding eigenvectors, one looks only for linearly independent vector objects. Most often, an n × n matrix has n linearly independent eigenvectors. Our calculator is designed for the analysis of second-order square matrices, so the result will almost always be two eigenvectors, except in the cases when they coincide.

In the example above, we knew the eigenvector of the original matrix in advance and clearly determined the lambda number. However, in practice, everything happens the other way around: the eigenvalues ​​are found first and only then the eigenvectors.

Solution algorithm

Let's look at the original matrix M again and try to find both of its eigenvectors. So the matrix looks like:

  • M = 0; 4;
  • 6; 10.

First we need to determine the eigenvalue λ, which requires calculating the determinant of the following matrix:

  • (0 − λ); 4;
  • 6; (10 − λ).

This matrix is ​​obtained by subtracting the unknown λ from the elements on the main diagonal. The determinant is determined using the standard formula:

  • detA = M11 × M22 − M12 × M21
  • detA = (0 − λ) × (10 − λ) − 4 × 6 = (0 − λ) × (10 − λ) − 24

Since our vector must be non-zero, the resulting homogeneous system must have non-trivial solutions, which requires the determinant detA to equal zero.

(0 − λ) × (10 − λ) − 24 = 0

Let's open the brackets and get the characteristic equation of the matrix:

λ 2 − 10λ − 24 = 0

This is a standard quadratic equation, solved via the discriminant.

D = b 2 − 4ac = (−10) 2 − 4 × 1 × (−24) = 100 + 96 = 196

The root of the discriminant is sqrt(D) = 14, therefore λ1 = -2, λ2 = 12. Now for each lambda value we need to find the eigenvector. Let us express the system coefficients for λ = -2.

  • M − λ × E = 2; 4;
  • 6; 12.

In this formula, E is the identity matrix. Based on the resulting matrix, we create a system of linear equations:

2x + 4y = 0,
6x + 12y = 0,

where x and y are the coordinates of the eigenvector.

Both equations reduce to the same condition: 2x = -4y, i.e. x = -2y. Now we can determine the first eigenvector of the matrix, taking any value of the free unknown (remember the infinite set of collinear eigenvectors). Let y = 1; then x = -2. Therefore, the first eigenvector is V1 = (-2; 1). Return to the beginning of the article: it was this vector that we multiplied the matrix by to demonstrate the concept of an eigenvector.

Now let's find the eigenvector for λ = 12.

  • M - λ × E = -12; 4
  • 6; -2.

Let's create the analogous system of linear equations:

  • -12x + 4y = 0;
  • 6x − 2y = 0;
  • both reduce to y = 3x.

Now we take x = 1, therefore y = 3. Thus, the second eigenvector looks like V2 = (1; 3). When multiplying the original matrix by a given vector, the result will always be the same vector multiplied by 12. This is where the solution algorithm ends. Now you know how to manually determine the eigenvector of a matrix.
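The whole hand algorithm fits in a few lines. The helper below is a sketch for 2 × 2 matrices with real, distinct eigenvalues (it assumes the top-right entry b ≠ 0 so an eigenvector can be read off the first row of M − λE):

```python
import math

# Sketch of the hand algorithm for a 2x2 matrix [[a, b], [c, d]] with
# real, distinct eigenvalues; assumes b != 0 so an eigenvector can be
# read off the first row of (M - lambda*E)x = 0.
def eigen_2x2(a, b, c, d):
    tr, det = a + d, a * d - b * c
    disc = tr * tr - 4 * det           # discriminant of l^2 - tr*l + det
    lam1 = (tr - math.sqrt(disc)) / 2
    lam2 = (tr + math.sqrt(disc)) / 2
    # (a - lam)*x + b*y = 0 is satisfied by (x, y) = (b, lam - a)
    return [(lam, (b, lam - a)) for lam in (lam1, lam2)]

# The matrix from the walkthrough: M = [[0, 4], [6, 10]].
result = eigen_2x2(0, 4, 6, 10)
print(result)   # lambda = -2 with (4, -2) ~ (-2, 1); lambda = 12 with (4, 12) ~ (1, 3)

for lam, (x, y) in result:
    # verify M*v = lambda*v componentwise
    assert abs(0 * x + 4 * y - lam * x) < 1e-9
    assert abs(6 * x + 10 * y - lam * y) < 1e-9
```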

Besides the eigenvectors, the calculator also reports the matrix's:

  • determinant;
  • trace, that is, the sum of the elements on the main diagonal;
  • rank, that is, the maximum number of linearly independent rows/columns.

The program operates according to the above algorithm, shortening the solution process as much as possible. It is important to point out that in the program lambda is designated by the letter “c”. Let's look at a numerical example.

Example of how the program works

Let's try to determine the eigenvectors for the following matrix:

  • M = 5; 13;
  • 4; 14.

Let's enter these values ​​into the cells of the calculator and get the answer in the following form:

  • Matrix rank: 2;
  • Matrix determinant: 18;
  • Matrix trace: 19;
  • Calculation of the eigenvalues: c 2 − 19.00c + 18.00 (characteristic equation);
  • Eigenvalue calculation: 18 (first lambda value);
  • Eigenvalue calculation: 1 (second lambda value);
  • System of equations for vector 1: -13x1 + 13y1 = 0; 4x1 − 4y1 = 0;
  • System of equations for vector 2: 4x1 + 13y1 = 0; 4x1 + 13y1 = 0;
  • Eigenvector 1: (1; 1);
  • Eigenvector 2: (-3.25; 1).

Thus, we obtained two linearly independent eigenvectors.
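The calculator's output for this matrix can be cross-checked with NumPy:

```python
import numpy as np

# Cross-checking the calculator's answer for M = [[5, 13], [4, 14]].
M = np.array([[5.0, 13.0],
              [4.0, 14.0]])

assert np.linalg.matrix_rank(M) == 2            # rank: 2
assert round(float(np.linalg.det(M))) == 18     # determinant: 18
assert np.trace(M) == 19                        # trace: 19

# Roots of the characteristic equation c^2 - 19c + 18 = 0.
assert np.allclose(sorted(np.linalg.eigvals(M)), [1.0, 18.0])

# Eigenvector directions reported by the calculator.
assert np.allclose(M @ np.array([1.0, 1.0]), 18.0 * np.array([1.0, 1.0]))
assert np.allclose(M @ np.array([-3.25, 1.0]), 1.0 * np.array([-3.25, 1.0]))
print("calculator output confirmed")
```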

Conclusion

Linear algebra and analytical geometry are standard subjects for any freshman engineering major. The large number of vectors and matrices is terrifying, and it is easy to make mistakes in such cumbersome calculations. Our program will allow students to check their calculations or automatically solve the problem of finding an eigenvector. There are other linear algebra calculators in our catalog; use them in your studies or work.


SYSTEM OF HOMOGENEOUS LINEAR EQUATIONS

A system of homogeneous linear equations is a system of the form

It is clear that in this case Δx = Δy = Δz = 0, because all the elements of one of the columns in these determinants are equal to zero.

Since the unknowns are found by Cramer's formulas, in the case Δ ≠ 0 the system has the unique zero solution x = y = z = 0. However, in many problems the interesting question is whether a homogeneous system has solutions other than zero.

Theorem. For a system of linear homogeneous equations to have a non-zero solution, it is necessary and sufficient that Δ = 0.

So, if the determinant Δ ≠ 0, then the system has a unique (zero) solution. If Δ = 0, then the system of linear homogeneous equations has an infinite number of solutions.
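The alternative between the two cases is easy to illustrate numerically on two hypothetical 3 × 3 systems, one with Δ ≠ 0 and one with Δ = 0:

```python
import numpy as np

# Two hypothetical homogeneous systems A x = 0 illustrating the theorem.
A_regular = np.array([[1.0, 0.0, 0.0],
                      [0.0, 2.0, 0.0],
                      [0.0, 0.0, 3.0]])    # determinant 6 != 0
A_singular = np.array([[1.0, 2.0, 3.0],
                       [2.0, 4.0, 6.0],    # second row = 2 * first row
                       [1.0, 1.0, 1.0]])   # determinant 0

# det != 0: only the zero solution.
assert abs(np.linalg.det(A_regular)) > 1e-12
assert np.linalg.matrix_rank(A_regular) == 3

# det = 0: the rank drops, so non-zero solutions exist; one of them is
# the right singular vector for the zero singular value.
assert abs(np.linalg.det(A_singular)) < 1e-12
x = np.linalg.svd(A_singular)[2][-1]
assert np.linalg.norm(x) > 0 and np.allclose(A_singular @ x, 0)
print(np.round(x, 6))
```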


Eigenvectors and eigenvalues ​​of a matrix

Let a square matrix A be given, and let X be a column matrix whose height coincides with the order of the matrix A.

In many problems we have to consider the equation for X

AX = λX,

where λ is a certain number. It is clear that for any λ this equation has the zero solution.

The number λ for which this equation has non-zero solutions is called an eigenvalue of the matrix A, and X for such λ is called an eigenvector of the matrix A.

Let us find the eigenvectors of the matrix A. Since EX = X, the matrix equation can be rewritten as AX − λEX = 0, or (A − λE)X = 0. In expanded form, this matrix equation can be rewritten as a system of linear equations. Indeed:

And therefore

So, we have obtained a system of homogeneous linear equations for determining the coordinates x 1, x 2, x 3 of the vector X. For the system to have non-zero solutions it is necessary and sufficient that the determinant of the system equal zero, i.e.

This is a 3rd-degree equation in λ. It is called the characteristic equation of the matrix A and serves to determine the eigenvalues λ.

Each eigenvalue λ corresponds to an eigenvector X, whose coordinates are determined from the system at the corresponding value of λ.


VECTOR ALGEBRA. THE CONCEPT OF VECTOR

When studying various branches of physics, there are quantities that are completely determined by specifying their numerical values, for example, length, area, mass, temperature, etc. Such quantities are called scalar. However, in addition to them, there are also quantities, to determine which, in addition to the numerical value, it is also necessary to know their direction in space, for example, the force acting on the body, the speed and acceleration of the body when it moves in space, the magnetic field strength at a given point in space and etc. Such quantities are called vector quantities.

Let us introduce a strict definition.

A directed segment is a segment whose endpoints are ordered: it is known which of them is the first and which is the second.

A vector is a directed segment of a certain length, i.e. a segment in which one of the endpoints is taken as the beginning and the other as the end. If A is the beginning of the vector and B is its end, then the vector is denoted by the symbol AB; a vector is also often denoted by a single letter. In a figure, a vector is shown as a segment, with its direction indicated by an arrow.

The modulus, or length, of a vector is the length of the directed segment that defines it. It is denoted by |AB| or |a|.

We also count among vectors the so-called zero vector, whose beginning and end coincide. The zero vector has no definite direction, and its modulus is zero: |0| = 0.

Vectors are called collinear if they lie on the same line or on parallel lines. If, moreover, two collinear vectors point in the same direction, they are called codirectional; otherwise, oppositely directed.

Vectors located on straight lines parallel to the same plane are called coplanar.

Two vectors are called equal if they are collinear, codirectional, and equal in length.

From the definition of equality of vectors it follows that a vector can be transported parallel to itself, placing its origin at any point in space.


LINEAR OPERATIONS ON VECTORS

  1. Multiplying a vector by a number.

    The product of a vector and the number λ is a new vector such that:

    The product of a vector and a number λ is denoted by .

    For example, there is a vector directed in the same direction as the vector and having a length half that of the vector.

    The introduced operation has the following properties:

  2. Vector addition.

Let a and b be two arbitrary vectors. Take an arbitrary point O and construct the vector OA = a; then from the point A lay off the vector AB = b. The vector OB connecting the beginning of the first vector with the end of the second is called the sum of these vectors and is denoted a + b.

The formulated definition of vector addition is equivalent to the parallelogram rule, since the same sum of vectors can be obtained as follows. Lay off from the point O the vectors OA = a and OC = b, and construct the parallelogram OABC on these vectors. Then the vector OB, the diagonal of the parallelogram drawn from the vertex O, is obviously the sum a + b.

    It's easy to check the following properties of vector addition.

  3. Vector difference.

A vector collinear to a given vector a, equal to it in length and oppositely directed, is called the opposite vector for a and is denoted -a. The opposite vector can be considered as the result of multiplying the vector a by the number λ = -1.

A non-zero vector X is called an eigenvector of the matrix A if there is a number λ such that AX = λX.

In this case, the number λ is called the eigenvalue of the operator (matrix A) corresponding to the vector X.

In other words, an eigenvector is a vector that, under the action of a linear operator, is carried into a collinear vector, i.e. is simply multiplied by some number. Vectors that are not eigenvectors are transformed in a more complicated way.

Let's write down the definition of an eigenvector in the form of a system of equations:

Let's move all the terms to the left side:

The latter system can be written in matrix form as follows:

(A − λE)X = O

The resulting system always has the zero solution X = O. Systems in which all free terms are zero are called homogeneous. If the matrix of such a system is square and its determinant is non-zero, then by Cramer's formulas we always get the unique zero solution. It can be proven that the system has non-zero solutions if and only if the determinant of this matrix equals zero, i.e.

|A − λE| = 0

This equation with the unknown λ is called the characteristic equation of the matrix A (linear operator); its left-hand side is the characteristic polynomial.

It can be proven that the characteristic polynomial of a linear operator does not depend on the choice of basis.

For example, let us find the eigenvalues and eigenvectors of the linear operator defined by the matrix A with first row (1; 4) and second row (9; 1).

To do this, we form the characteristic equation |A − λE| = (1 − λ) 2 − 36 = 1 − 2λ + λ 2 − 36 = λ 2 − 2λ − 35 = 0; D = 4 + 140 = 144; the eigenvalues are λ 1 = (2 − 12)/2 = −5 and λ 2 = (2 + 12)/2 = 7.

To find eigenvectors, we solve two systems of equations

(A + 5E)X = O

(A - 7E)X = O

For the first of them, the expanded matrix takes the form

,

whence x 2 = c, x 1 + (2/3)c = 0; x 1 = −(2/3)c, i.e. X (1) = (−(2/3)c; c).

For the second of them, the expanded matrix takes the form

,

from where x 2 = c 1, x 1 − (2/3)c 1 = 0; x 1 = (2/3)c 1, i.e. X (2) = ((2/3)c 1; c 1).

Thus, the eigenvectors of this linear operator are all vectors of the form (−(2/3)c; c) with eigenvalue −5 and all vectors of the form ((2/3)c 1; c 1) with eigenvalue 7.
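The operator's matrix itself is not reproduced above, so the check below assumes a matrix with rows (1; 4) and (9; 1), which is consistent with the characteristic polynomial (1 − λ)² − 36 and the eigenvector ratios found in the solution:

```python
import numpy as np

# The rows (1; 4) and (9; 1) are an assumption inferred from the
# characteristic polynomial (1 - lambda)^2 - 36 and the eigenvector
# ratios -(2/3) and (2/3) in the worked solution.
A = np.array([[1.0, 4.0],
              [9.0, 1.0]])

v1 = np.array([-2.0 / 3.0, 1.0])   # form (-(2/3)c; c) with c = 1
assert np.allclose(A @ v1, -5.0 * v1)

v2 = np.array([2.0 / 3.0, 1.0])    # form ((2/3)c1; c1) with c1 = 1
assert np.allclose(A @ v2, 7.0 * v2)
print("eigenpairs consistent with the worked solution")
```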

It can be proven that the matrix of the operator A in the basis consisting of its eigenvectors is diagonal and has the form:

,

where λ i are the eigenvalues of this matrix.

The converse is also true: if matrix A in some basis is diagonal, then all vectors of this basis will be eigenvectors of this matrix.

It can also be proven that if a linear operator has n pairwise distinct eigenvalues, then the corresponding eigenvectors are linearly independent, and the matrix of this operator in the corresponding basis has a diagonal form.


Let's illustrate this with the previous example. Let's take arbitrary non-zero values ​​c and c 1, but such that the vectors X (1) and X (2) are linearly independent, i.e. would form a basis. For example, let c = c 1 = 3, then X (1) = (-2; 3), X (2) = (2; 3).

Let us verify the linear independence of these vectors:

The determinant is −12 ≠ 0, so the vectors X (1) and X (2) are linearly independent and form a basis. In this new basis, matrix A takes the diagonal form A* with −5 and 7 on the diagonal.

To verify this, let's use the formula A* = C −1 AC. First we find C −1:

C −1 = (−1/12) × (3 −2; −3 −2), i.e. the matrix with rows (−1/4; 1/6) and (1/4; 1/6);
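The change-of-basis formula can be verified numerically. The matrix A below, with rows (1; 4) and (9; 1), is an assumption consistent with the characteristic polynomial (1 − λ)² − 36 used in this example; C holds the eigenvectors X(1) = (−2; 3) and X(2) = (2; 3) as columns.

```python
import numpy as np

# A (rows (1; 4) and (9; 1)) is an assumption consistent with the
# characteristic polynomial (1 - lambda)^2 - 36; C holds the
# eigenvectors X(1) = (-2; 3) and X(2) = (2; 3) as columns.
A = np.array([[1.0, 4.0],
              [9.0, 1.0]])
C = np.array([[-2.0, 2.0],
              [ 3.0, 3.0]])
assert np.isclose(np.linalg.det(C), -12.0)    # non-zero: a valid basis

A_star = np.linalg.inv(C) @ A @ C
assert np.allclose(A_star, np.diag([-5.0, 7.0]))
print(np.round(A_star, 10))
```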

Quadratic forms

A quadratic form f(x 1, x 2, …, x n) of n variables is a sum in which each term is either the square of one of the variables or the product of two different variables, taken with a certain coefficient: f(x 1, x 2, …, x n) = Σ a ij x i x j (a ij = a ji).

The matrix A composed of these coefficients is called the matrix of the quadratic form. It is always a symmetric matrix (i.e. a matrix symmetric about the main diagonal, a ij = a ji).

In matrix notation, the quadratic form is f(X) = X T AX, where

Indeed

For example, let's write the quadratic form in matrix form.

To do this, we find the matrix of the quadratic form. Its diagonal elements equal the coefficients of the squared variables, and the remaining elements equal half of the corresponding cross-term coefficients of the quadratic form. Therefore

Let the matrix-column of variables X be obtained by a non-degenerate linear transformation of the matrix-column Y, i.e. X = CY, where C is a non-singular matrix of nth order. Then the quadratic form f(X) = X T AX = (CY) T A(CY) = (Y T C T)A(CY) = Y T (C T AC)Y.

Thus, with a non-degenerate linear transformation C, the matrix of quadratic form takes the form: A * = C T AC.
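A quick numerical sanity check of A* = C^T AC, using the matrix of the form 2x1² + 4x1x2 − 3x2² from the examples and a hypothetical change of variables C:

```python
import numpy as np

# Matrix of the form 2x1^2 + 4x1x2 - 3x2^2 (from the examples) and a
# hypothetical non-singular change of variables C.
A = np.array([[2.0, 2.0],
              [2.0, -3.0]])
C = np.array([[1.0, 1.0],
              [0.0, 1.0]])
assert np.linalg.det(C) != 0

A_star = C.T @ A @ C

# f evaluated at X = C*y must match y^T A* y for every y.
rng = np.random.default_rng(0)
for _ in range(5):
    y = rng.normal(size=2)
    x = C @ y
    assert np.isclose(x @ A @ x, y @ A_star @ y)
print("A* = C^T A C verified on random points")
```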

For example, let's find the quadratic form f(y 1, y 2), obtained from the quadratic form f(x 1, x 2) = 2x 1 2 + 4x 1 x 2 - 3x 2 2 by linear transformation.

The quadratic form is called canonical (has canonical form) if all its coefficients a ij = 0 for i ≠ j, i.e.
f(x 1, x 2, …, x n) = a 11 x 1 2 + a 22 x 2 2 + … + a nn x n 2.

Its matrix is ​​diagonal.

Theorem(proof not given here). Any quadratic form can be reduced to canonical form using a non-degenerate linear transformation.

For example, let us reduce the quadratic form to canonical form
f(x 1, x 2, x 3) = 2x 1 2 + 4x 1 x 2 - 3x 2 2 - x 2 x 3.

To do this, first select a complete square with the variable x 1:

f(x 1, x 2, x 3) = 2(x 1 2 + 2x 1 x 2 + x 2 2) - 2x 2 2 - 3x 2 2 - x 2 x 3 = 2(x 1 + x 2) 2 - 5x 2 2 - x 2 x 3.

Now we select a complete square with the variable x 2:

f(x 1, x 2, x 3) = 2(x 1 + x 2) 2 - 5(x 2 2 + 2 × x 2 × (1/10)x 3 + (1/100)x 3 2) + (5/100)x 3 2 =
= 2(x 1 + x 2) 2 - 5(x 2 + (1/10)x 3) 2 + (1/20)x 3 2.

Then the non-degenerate linear transformation y 1 = x 1 + x 2, y 2 = x 2 + (1/10)x 3 and y 3 = x 3 brings this quadratic form to the canonical form f(y 1, y 2, y 3) = 2y 1 2 - 5y 2 2 + (1/20)y 3 2 .
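The completion of squares can be confirmed numerically: the original form and the canonical form must agree at every point once y1, y2, y3 are expressed through x1, x2, x3.

```python
import numpy as np

# f and its canonical form from the completion of squares:
# y1 = x1 + x2, y2 = x2 + (1/10)x3, y3 = x3.
def f(x1, x2, x3):
    return 2*x1**2 + 4*x1*x2 - 3*x2**2 - x2*x3

def canonical(x1, x2, x3):
    y1, y2, y3 = x1 + x2, x2 + x3 / 10, x3
    return 2*y1**2 - 5*y2**2 + y3**2 / 20

rng = np.random.default_rng(1)
for _ in range(5):
    x1, x2, x3 = rng.normal(size=3)
    assert np.isclose(f(x1, x2, x3), canonical(x1, x2, x3))
print("canonical form agrees with f")
```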

Note that the canonical form of a quadratic form is determined ambiguously (the same quadratic form can be reduced to canonical form in different ways). However, canonical forms obtained by various methods have a number of common properties. In particular, the number of terms with positive (negative) coefficients of a quadratic form does not depend on the method of reducing the form to this form (for example, in the example considered there will always be two negative and one positive coefficient). This property is called the law of inertia of quadratic forms.

Let us verify this by bringing the same quadratic form to canonical form in a different way. Let's start the transformation with the variable x 2:

f(x 1, x 2, x 3) = 2x 1 2 + 4x 1 x 2 - 3x 2 2 - x 2 x 3 = -3x 2 2 - x 2 x 3 + 4x 1 x 2 + 2x 1 2 = -3(x 2 2 +
+ 2 × x 2 ((1/6)x 3 - (2/3)x 1) + ((1/6)x 3 - (2/3)x 1) 2) + 3((1/6)x 3 - (2/3)x 1) 2 + 2x 1 2 =
= -3(x 2 + (1/6)x 3 - (2/3)x 1) 2 + 3((1/6)x 3 - (2/3)x 1) 2 + 2x 1 2 = f(y 1, y 2, y 3) = -3y 1 2 +
+ 3y 2 2 + 2y 3 2, where y 1 = -(2/3)x 1 + x 2 + (1/6)x 3, y 2 = (1/6)x 3 - (2/3)x 1 and y 3 = x 1. Here there is a negative coefficient -3 at y 1 and two positive coefficients 3 and 2 at y 2 and y 3 (while by the other method we got a negative coefficient -5 at y 2 and two positive ones: 2 at y 1 and 1/20 at y 3).

It should also be noted that the rank of a matrix of quadratic form, called rank of quadratic form, is equal to the number of nonzero coefficients of the canonical form and does not change under linear transformations.

The quadratic form f(X) is called positive (negative) definite if, for all values of the variables not simultaneously equal to zero, it is positive, i.e. f(X) > 0 (negative, i.e. f(X) < 0).

For example, the quadratic form f 1 (X) = x 1 2 + x 2 2 is positive definite, because it is a sum of squares, and the quadratic form f 2 (X) = -x 1 2 + 2x 1 x 2 - x 2 2 is negative definite, because it can be represented as f 2 (X) = -(x 1 - x 2) 2.

In most practical situations, it is somewhat more difficult to establish the definite sign of a quadratic form, so for this we use one of the following theorems (we will formulate them without proof).

Theorem. A quadratic form is positive (negative) definite if and only if all eigenvalues ​​of its matrix are positive (negative).

Theorem(Sylvester criterion). A quadratic form is positive definite if and only if all the leading minors of the matrix of this form are positive.

The leading (corner) minor of order k of an n-th order matrix A is the determinant composed of the first k rows and columns of the matrix A (1 ≤ k ≤ n).

Note that for negative definite quadratic forms the signs of the principal minors alternate, and the first-order minor must be negative.
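A small helper implementing Sylvester's criterion as stated, applied to the matrices of the two definite forms examined below (rows written as (a11; a12), etc.):

```python
import numpy as np

# Sylvester's criterion via leading (corner) minors.
def leading_minors(A):
    return [np.linalg.det(A[:k, :k]) for k in range(1, A.shape[0] + 1)]

def is_positive_definite(A):
    return all(m > 0 for m in leading_minors(A))

def is_negative_definite(A):
    # the k-th minor must have sign (-1)^k: -, +, -, ...
    return all((-1) ** k * m > 0 for k, m in enumerate(leading_minors(A), 1))

A_pos = np.array([[2.0, 2.0], [2.0, 3.0]])    # 2x1^2 + 4x1x2 + 3x2^2
A_neg = np.array([[-2.0, 2.0], [2.0, -3.0]])  # -2x1^2 + 4x1x2 - 3x2^2

assert is_positive_definite(A_pos)      # minors 2 and 2 are positive
assert is_negative_definite(A_neg)      # minors -2 and 2 alternate
print([round(m) for m in leading_minors(A_pos)],
      [round(m) for m in leading_minors(A_neg)])
```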

For example, let us examine the quadratic form f(x 1, x 2) = 2x 1 2 + 4x 1 x 2 + 3x 2 2 for sign definiteness.

Method 1. The matrix of the quadratic form is A with rows (2; 2) and (2; 3). The characteristic equation is (2 - λ)(3 - λ) - 4 = (6 - 2λ - 3λ + λ 2) - 4 = λ 2 - 5λ + 2 = 0; D = 25 - 8 = 17; λ 1,2 = (5 ± √17)/2, and both roots are positive. Therefore, the quadratic form is positive definite.

Method 2. The first-order leading minor of matrix A is D 1 = a 11 = 2 > 0. The second-order leading minor is D 2 = 2 × 3 - 2 × 2 = 6 - 4 = 2 > 0. Therefore, by Sylvester's criterion, the quadratic form is positive definite.

We examine another quadratic form for sign definiteness, f(x 1, x 2) = -2x 1 2 + 4x 1 x 2 - 3x 2 2.

Method 1. The matrix of the quadratic form is A with rows (-2; 2) and (2; -3). The characteristic equation is (-2 - λ)(-3 - λ) - 4 = (6 + 2λ + 3λ + λ 2) - 4 = λ 2 + 5λ + 2 = 0; D = 25 - 8 = 17; λ 1,2 = (-5 ± √17)/2, and both roots are negative. Therefore, the quadratic form is negative definite.

Method 2. The first-order leading minor of matrix A is D 1 = a 11 = -2 < 0. The second-order leading minor is D 2 = 6 - 4 = 2 > 0. Consequently, by Sylvester's criterion, the quadratic form is negative definite (the signs of the leading minors alternate, starting with a minus).

And as another example, let us examine for sign-definiteness the quadratic form f(x 1, x 2) = 2x 1 2 + 4x 1 x 2 - 3x 2 2.

Method 1. The matrix of the quadratic form is A with rows (2; 2) and (2; -3). The characteristic equation is (2 - λ)(-3 - λ) - 4 = (-6 - 2λ + 3λ + λ 2) - 4 = λ 2 + λ - 10 = 0; D = 1 + 40 = 41; λ 1,2 = (-1 ± √41)/2.

One of these numbers is negative and the other is positive. The signs of the eigenvalues ​​are different. Consequently, the quadratic form can be neither negatively nor positively definite, i.e. this quadratic form is not sign-definite (it can take values ​​of any sign).

Method 2. The first-order leading minor of matrix A is D 1 = a 11 = 2 > 0. The second-order leading minor is D 2 = -6 - 4 = -10 < 0. Consequently, by Sylvester's criterion, the quadratic form is not sign-definite (the leading minors have different signs, the first of them being positive).