Makarskaya E. V. In: Days of Student Science. Spring 2011. Moscow: Moscow State University of Economics, Statistics and Informatics, 2011. Pp. 135–139.

The authors consider the practical application of the theory of linear differential equations to the study of economic systems. The paper analyzes the Keynes and Samuelson–Hicks dynamic models and determines the equilibrium states of the corresponding economic systems.
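The equilibrium analysis mentioned above can be made concrete with a small numerical sketch (ours, not taken from the paper); the Samuelson–Hicks model is assumed in its standard multiplier–accelerator form, and all parameter values are purely illustrative.

```python
# Minimal sketch of the Samuelson-Hicks multiplier-accelerator model (standard form assumed):
#   Y_t = (c + v) * Y_{t-1} - v * Y_{t-2} + G,  equilibrium Y* = G / (1 - c).
# All parameter values below are illustrative, not taken from the paper.
def samuelson_hicks(c=0.6, v=0.3, G=100.0, Y0=200.0, Y1=210.0, steps=60):
    path = [Y0, Y1]
    for _ in range(steps):
        path.append((c + v) * path[-1] - v * path[-2] + G)
    return path

path = samuelson_hicks()
print("income after 60 periods:", round(path[-1], 3))
print("equilibrium Y* = G/(1-c) =", 100.0 / (1 - 0.6))
```

With these illustrative values the roots of the characteristic equation $\lambda^2 - (c+v)\lambda + v = 0$ lie inside the unit circle, so the simulated income path converges to the equilibrium $Y^* = G/(1-c) = 250$.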

Ivanov A. I., Isakov I., Demin A. V., et al. Part 5. Moscow: Slovo, 2012.

The manual describes quantitative methods for studying human oxygen consumption during tests with dosed physical activity, performed at the State Scientific Center of the Russian Federation – Institute of Biomedical Problems of the Russian Academy of Sciences (IBMP RAS). The manual is intended for scientists, physiologists, and physicians working in the fields of aerospace, underwater, and sports medicine.

Mikheev A. V. St. Petersburg: Department of Operational Printing, National Research University Higher School of Economics – St. Petersburg, 2012.

This collection contains problems for the differential equations course taught by the author at the Faculty of Economics of the National Research University Higher School of Economics – St. Petersburg. Each topic opens with a brief summary of the main theoretical facts and worked examples of typical problems. Intended for students and participants in programs of higher professional education.

Konakov V. D. STI. WP BRP. Moscow: Publishing House of the Board of Trustees of the Faculty of Mechanics and Mathematics, Moscow State University, 2012. No. 2012.

This textbook is based on an elective special course given by the author at the Faculty of Mechanics and Mathematics of Lomonosov Moscow State University in the 2010–2012 academic years. It acquaints the reader with the parametrix method and its discrete analogue, developed recently by the author and his co-authors, and brings together material that was previously available only in a number of journal articles. Without striving for maximal generality of exposition, the author aims to demonstrate the capabilities of the method in proving local limit theorems on the convergence of Markov chains to a diffusion process and in obtaining two-sided Aronson-type estimates for certain degenerate diffusions.

Issue 20. New York: Springer, 2012.

This publication is a collection of selected papers from the Third International Conference on Information Systems Dynamics, held at the University of Florida on February 16–18, 2011. The purpose of the conference was to bring together scientists and engineers from industry, government, and academia so that they could exchange new findings and results on the theory and practice of the dynamics of information systems. Information System Dynamics: Mathematical Discovery surveys current research and is intended for graduate students and researchers interested in the latest discoveries in information theory and dynamical systems. Scientists in other disciplines can also benefit from applying these new developments in their own research areas.

Palvelev R., Sergeev A. G. Proceedings of the V. A. Steklov Mathematical Institute of the Russian Academy of Sciences. 2012. Vol. 277. Pp. 199–214.

The adiabatic limit in the hyperbolic Ginzburg–Landau equations is studied. Using this limit, a correspondence is established between solutions of the Ginzburg–Landau equations and adiabatic trajectories in the moduli space of static solutions, called vortices. Manton proposed a heuristic adiabatic principle postulating that any solution of the Ginzburg–Landau equations with sufficiently small kinetic energy can be obtained as a perturbation of some adiabatic trajectory. A rigorous proof of this fact was recently found by the first author.

We give an explicit formula for a quasi-isomorphism between the operads Hycomm (the homology of the moduli space of stable genus-0 curves) and BV/Δ (the homotopy quotient of the Batalin–Vilkovisky operad by the BV operator). In other words, we derive an equivalence of Hycomm-algebras and BV-algebras enhanced with a homotopy that trivializes the BV operator. These formulas are given in terms of the Givental graphs and are proved in two different ways. One proof uses the Givental group action, and the other goes through a chain of explicit formulas on resolutions of Hycomm and BV. The second approach gives, in particular, a homological explanation of the Givental group action on Hycomm-algebras.

Edited by A. Mikhailov. Vol. 14. Moscow: Faculty of Sociology, Moscow State University, 2012.

The articles in this collection are based on reports delivered in 2011 at the Faculty of Sociology of Lomonosov Moscow State University, at the meetings of the XIV Interdisciplinary Annual Scientific Seminar "Mathematical Modeling of Social Processes" named after Academician A. A. Samarskii, Hero of Socialist Labor.

The publication is intended for researchers, university teachers and students, and staff of scientific institutions of the Russian Academy of Sciences who are interested in the problems, development, and implementation of the methodology of mathematical modeling of social processes.

Ministry of Education and Science of the Russian Federation
National Research Nuclear University "MEPhI"

T. I. Bukharova, V. L. Kamynin, A. B. Kostin, D. S. Tkachenko
A Course of Lectures on Ordinary Differential Equations
Recommended by the UMO "Nuclear Physics and Technologies" as a textbook for students of higher educational institutions
Moscow, 2011

UDC 517.9. BBK 22.161.6. B94.
Bukharova T. I., Kamynin V. L., Kostin A. B., Tkachenko D. S. A Course of Lectures on Ordinary Differential Equations: Textbook. Moscow: NRNU MEPhI, 2011. 228 p.

The textbook is based on a course of lectures given by the authors at the Moscow Engineering Physics Institute over many years. It is intended for students of all faculties of NRNU MEPhI, as well as for university students with advanced mathematical training. The manual was prepared within the framework of the NRNU MEPhI Creation and Development Program.
Reviewer: Doctor of Physical and Mathematical Sciences N. A. Kudryashov.
ISBN 978-5-7262-1400-9. © National Research Nuclear University "MEPhI", 2011.

Contents

Preface
I. Introduction to the theory of ordinary differential equations
   Basic concepts. The Cauchy problem.
II. Existence and uniqueness of the solution of the Cauchy problem for a first-order equation
   Uniqueness theorem for a first-order ODE. Existence of the solution of the Cauchy problem for a first-order ODE. Continuation of the solution for a first-order ODE.
III. The Cauchy problem for a normal system of the n-th order
   Basic concepts and some auxiliary properties of vector functions. Uniqueness of the solution of the Cauchy problem for a normal system. The concept of a metric space; the contraction mapping principle. Existence and uniqueness theorems for the solution of the Cauchy problem for normal systems.
IV. Some classes of ordinary differential equations solvable in quadratures
   Equations with separable variables. Linear first-order ODEs. Homogeneous equations. The Bernoulli equation. Exact differential equations.
V. First-order equations not resolved with respect to the derivative
   Existence and uniqueness theorem for the solution of an ODE not resolved with respect to the derivative. Singular solutions; the discriminant curve; the envelope. The method of introducing a parameter. The Lagrange equation. The Clairaut equation.
VI. Systems of linear ODEs
   Basic concepts; existence and uniqueness theorem for the solution of the Cauchy problem. Homogeneous systems of linear ODEs. The Wronskian determinant. Complex solutions of a homogeneous system; transition to a real fundamental system of solutions. Inhomogeneous systems of linear ODEs; the method of variation of constants. Homogeneous systems of linear ODEs with constant coefficients. The exponential of a matrix. Inhomogeneous systems of linear ODEs with constant coefficients.
VII. Higher-order linear ODEs
   Reduction to a system of linear ODEs; existence and uniqueness theorem for the solution of the Cauchy problem. Homogeneous higher-order linear ODEs. Properties of complex solutions of homogeneous higher-order linear ODEs; transition from a complex fundamental system of solutions to a real one. Inhomogeneous higher-order linear ODEs; the method of variation of constants. Homogeneous higher-order linear ODEs with constant coefficients. Inhomogeneous higher-order linear ODEs with constant coefficients.
VIII. Stability theory
   Basic concepts and definitions related to stability. Stability of solutions of a linear system. Lyapunov's stability theorems. Stability in the first approximation. Behavior of phase trajectories near a rest point.
IX. First integrals of systems of ODEs
   First integrals of autonomous systems of ordinary differential equations. Non-autonomous systems of ODEs. Symmetric form of systems of ODEs.
X. First-order partial differential equations
   Homogeneous linear first-order partial differential equations. The Cauchy problem for a linear first-order partial differential equation. Quasilinear first-order partial differential equations. The Cauchy problem for a quasilinear first-order partial differential equation.
Bibliography

PREFACE

In preparing this book, the authors set themselves the goal of collecting in one place and presenting in an accessible form information on most questions related to the theory of ordinary differential equations. Therefore, in addition to the material included in the compulsory curriculum of the course on ordinary differential equations taught at NRNU MEPhI (and at other universities), the manual also includes additional questions for which, as a rule, there is not enough time in the lectures, but which will be useful for a better understanding of the subject and to current students in their future professional activities.

Mathematically rigorous proofs are given for all statements in the manual. These proofs, as a rule, are not original, but all of them have been revised in accordance with the style of presentation of mathematical courses at MEPhI. According to an opinion widespread among teachers and scientists, mathematical disciplines should be studied with complete and detailed proofs, moving gradually from the simple to the complex. The authors of this manual share this opinion. The theoretical material in the book is supported by the analysis of a sufficient number of examples, which, we hope, will make it easier for the reader to study the material.

The manual is addressed to university students with advanced mathematical training, above all to students of NRNU MEPhI.
At the same time, it will also be useful to everyone who is interested in the theory of differential equations and uses this branch of mathematics in their work.

Chapter I. Introduction to the theory of ordinary differential equations

1.1. Basic concepts

Throughout the manual, $\langle a, b\rangle$ denotes any one of the sets $(a, b)$, $[a, b]$, $(a, b]$, $[a, b)$.

… we obtain
$$\ln\Bigl[C + \int_{x_0}^{x} u(t)\,v(t)\,dt\Bigr] - \ln C \le \int_{x_0}^{x} v(t)\,dt.$$
Exponentiating the last inequality and applying (2.3), we have
$$u(x) \le C + \int_{x_0}^{x} u(t)\,v(t)\,dt \le C\,\exp\Bigl[\int_{x_0}^{x} v(t)\,dt\Bigr]$$
for all $x$ in the interval under consideration.

Let us estimate the difference:
$$|f(x, y_2) - f(x, y_1)| = |\sin x|\,|y_1 - y_2| \le |y_1 - y_2| \quad \text{for all } (x, y) \in G.$$
Thus $f$ satisfies the Lipschitz condition in $y$ with $L = 1$ (in fact, even with $L = \sin 1$), although the derivative $f'_y$ does not even exist at the points $(x, 0) \ne (0, 0)$.

The following theorem, which is of interest in itself, will allow us to prove the uniqueness of the solution of the Cauchy problem.

Theorem 2.1 (on the estimate of the difference of two solutions). Let $G$ be a domain in $\mathbb{R}^2$, and let $f(x, y) \in C(G)$ satisfy in $G$ the Lipschitz condition in $y$ with constant $L$. If $y_1(x)$, $y_2(x)$ are two solutions of the equation $y' = f(x, y)$ on the segment $[x_0, x_1]$, then for all $x \in [x_0, x_1]$ the following estimate holds:
$$|y_2(x) - y_1(x)| \le |y_2(x_0) - y_1(x_0)|\,e^{L(x - x_0)}.$$

Proof. By Definition 2.2 of a solution of equation (2.1), for every $x \in [x_0, x_1]$ the points $(x, y_1(x))$ and $(x, y_2(x))$ lie in $G$. For all $t \in [x_0, x]$ we have the equalities $y_1'(t) = f(t, y_1(t))$ and $y_2'(t) = f(t, y_2(t))$, which we integrate with respect to $t$ over the segment $[x_0, x]$, where $x \in [x_0, x_1]$. The integration is legitimate, since the right- and left-hand sides are continuous functions. We obtain the system of equalities
$$y_1(x) - y_1(x_0) = \int_{x_0}^{x} f(t, y_1(t))\,dt, \qquad y_2(x) - y_2(x_0) = \int_{x_0}^{x} f(t, y_2(t))\,dt.$$
Subtracting one from the other, we have
$$|y_1(x) - y_2(x)| = \Bigl|y_1(x_0) - y_2(x_0) + \int_{x_0}^{x} \bigl[f(t, y_1(t)) - f(t, y_2(t))\bigr]\,dt\Bigr| \le |y_1(x_0) - y_2(x_0)| + L \int_{x_0}^{x} |y_1(t) - y_2(t)|\,dt.$$
Denote $C = |y_1(x_0) - y_2(x_0)| \ge 0$, $v(t) = L > 0$, $u(t) = |y_1(t) - y_2(t)| \ge 0$. Then, by the Gronwall–Bellman inequality, we obtain the estimate
$$|y_2(x) - y_1(x)| \le |y_2(x_0) - y_1(x_0)|\,e^{L(x - x_0)} \quad \text{for all } x \in [x_0, x_1].$$
The theorem is proved.

As a consequence of the theorem just proved, we obtain a uniqueness theorem for the solution of the Cauchy problem (2.1), (2.2).

Corollary 1. Let the function $f(x, y) \in C(G)$ satisfy in $G$ the Lipschitz condition in $y$, and let the functions $y_1(x)$ and $y_2(x)$ be two solutions of equation (2.1) on the same interval $\langle a, b\rangle$, with $x_0 \in \langle a, b\rangle$. If $y_1(x_0) = y_2(x_0)$, then $y_1(x) \equiv y_2(x)$ on $\langle a, b\rangle$.

Proof. Consider two cases.

1. Let $x \ge x_0$. Then it follows from Theorem 2.1 that $|y_1(x) - y_2(x)| \le 0 \cdot e^{L(x - x_0)}$, that is, $y_1(x) \equiv y_2(x)$ for $x \ge x_0$.

2. Let $x \le x_0$. Make the change of variable $t = -x$ and set $\tilde y_i(t) = y_i(-t)$, $i = 1, 2$. Then, on the corresponding segment of the variable $t$, $\tilde y_1(-x_0) = \tilde y_2(-x_0)$. Let us find out which equation $\tilde y_i(t)$ satisfies. The following chain of equalities is valid:
$$\frac{d}{dt}\tilde y_i(t) = -\frac{d}{dx} y_i(-t) = -f\bigl(-t, y_i(-t)\bigr) = \tilde f\bigl(t, \tilde y_i(t)\bigr), \qquad \tilde f(t, y) := -f(-t, y).$$
Here we used the rule for differentiating a composite function and the fact that the $y_i(x)$ are solutions of equation (2.1). Since the function $\tilde f(t, y)$ is continuous and satisfies the Lipschitz condition in $y$, by case 1 and Theorem 2.1 we have $\tilde y_1(t) \equiv \tilde y_2(t)$, that is, $y_1(x) \equiv y_2(x)$ for $x \le x_0$.

Combining both cases considered, we obtain the assertion of the corollary.
Corollary 2 (on continuous dependence on the initial data). Let the function $f(x, y) \in C(G)$ satisfy in $G$ the Lipschitz condition in $y$ with constant $L$, and let the functions $y_1(x)$ and $y_2(x)$ be solutions of equation (2.1) defined on $[x_0, x_1]$. Denote $l = x_1 - x_0$ and $\delta = |y_1(x_0) - y_2(x_0)|$. Then for all $x \in [x_0, x_1]$ the inequality $|y_1(x) - y_2(x)| \le \delta\,e^{L l}$ is valid.

The proof follows immediately from Theorem 2.1. The inequality of Corollary 2 is called an estimate of the stability of the solution with respect to the initial data. Its meaning is that if at $x = x_0$ the solutions are "close", then on a finite segment they also remain "close".

Theorem 2.1 gives an estimate, important for applications, of the modulus of the difference of two solutions, and Corollary 1 gives the uniqueness of the solution of the Cauchy problem (2.1), (2.2). There are also other sufficient conditions for uniqueness, one of which we now present. As noted above, geometrically the uniqueness of the solution of the Cauchy problem means that at most one integral curve of equation (2.1) can pass through the point $(x_0, y_0)$ of the domain $G$.

Theorem 2.2 (Osgood's uniqueness theorem). Let the function $f(x, y) \in C(G)$, and let the inequality $|f(x, y_1) - f(x, y_2)| \le \varphi(|y_1 - y_2|)$ hold for all $(x, y_1), (x, y_2) \in G$, where $\varphi(u) > 0$ for $u \in (0, \beta]$, $\varphi(u)$ is continuous, and $\int_{\varepsilon}^{\beta} \frac{du}{\varphi(u)} \to +\infty$ as $\varepsilon \to 0+$. Then at most one integral curve of (2.1) passes through the point $(x_0, y_0)$ of the domain $G$.

Proof. Suppose there exist two solutions $y_1(x)$ and $y_2(x)$ of equation (2.1) such that $y_1(x_0) = y_2(x_0) = y_0$; denote $z(x) = y_2(x) - y_1(x)$. Since $\frac{dy_i}{dx} = f(x, y_i)$ for $i = 1, 2$, the equality $\frac{dz}{dx} = f(x, y_2) - f(x, y_1)$ holds for $z(x)$. Then
$$|z|\,\Bigl|\frac{dz}{dx}\Bigr| = |f(x, y_2) - f(x, y_1)|\,|z| \le \varphi(|z|)\,|z|, \quad \text{that is,} \quad \frac{1}{2}\,\Bigl|\frac{d}{dx}|z|^2\Bigr| \le \varphi(|z|)\,|z|,$$
from which, for $|z| \ne 0$, the following double inequality follows:
$$-\int_{x_1}^{x_2} dx \;\le\; \int_{|z_1|}^{|z_2|} \frac{d\,|z|^2}{2\,|z|\,\varphi(|z|)} \;\le\; \int_{x_1}^{x_2} dx, \qquad (2.5)$$
where the integration is carried out over any segment $[x_1, x_2]$ on which $z(x) > 0$, and $z_i = z(x_i)$, $i = 1, 2$. By assumption, $z(x) \not\equiv 0$ and, moreover, it is continuous, so such a segment exists; select it and fix it.

Consider the sets
$$X_1 = \{x:\ x < x_1 \ \text{and}\ z(x) = 0\}, \qquad X_2 = \{x:\ x > x_2 \ \text{and}\ z(x) = 0\}.$$
At least one of these sets is not empty, since $z(x_0) = 0$ and $x_0 \notin [x_1, x_2]$. Let, for example, $X_1 \ne \varnothing$; it is bounded above, therefore there exists $\alpha = \sup X_1$. Note that $z(\alpha) = 0$, that is, $\alpha \in X_1$, since assuming $z(\alpha) > 0$ we would have, by continuity, $z(x) > 0$ on some interval $(\alpha - \delta_1, \alpha + \delta_1)$, and this contradicts the definition $\alpha = \sup X_1$. From the condition $z(\alpha) = 0$ it follows that $\alpha < x_1$. By construction, $z(x) > 0$ for all $x \in (\alpha, x_2]$, and by continuity $z(x) \to 0+$ as $x \to \alpha + 0$.

Repeating the reasoning used in deriving (2.5), integrating over the segment $[\alpha + \delta, x_2]$, where $x_2$ is chosen above and fixed and $\delta \in (0, x_2 - \alpha)$ is arbitrary, we obtain the inequality
$$\int_{|z(\alpha + \delta)|}^{|z_2|} \frac{d\,|z|^2}{2\,|z|\,\varphi(|z|)} \;\le\; \int_{\alpha + \delta}^{x_2} dx.$$
In this inequality let $\delta \to 0+$; then $z(\alpha + \delta) \to z(\alpha) = 0$, and by the condition of the theorem and the continuity of $z(x)$ the integral on the left-hand side tends to $+\infty$, while the right-hand side $\int_{\alpha+\delta}^{x_2} dx = x_2 - \alpha - \delta \le x_2 - \alpha$ is bounded above by a finite quantity; this is impossible simultaneously. The resulting contradiction proves Theorem 2.2.
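Before passing to the existence question, the stability estimate of Corollary 2 can be illustrated numerically; the sketch below is ours and is not part of the manual. It integrates $y' = \sin y$, which is Lipschitz in $y$ with $L = 1$, from two nearby initial values and compares the actual difference with the bound $\delta\,e^{L\,l}$.

```python
import math

# Numeric illustration of Corollary 2: |y2(x) - y1(x)| <= |y2(x0) - y1(x0)| * exp(L*(x - x0))
# for f(x, y) = sin(y), which satisfies the Lipschitz condition in y with L = 1.
def rk4(f, x0, y0, x1, n=1000):
    """Integrate y' = f(x, y) from x0 to x1 with the classical Runge-Kutta scheme."""
    h = (x1 - x0) / n
    x, y = x0, y0
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h / 2, y + h * k1 / 2)
        k3 = f(x + h / 2, y + h * k2 / 2)
        k4 = f(x + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x += h
    return y

f = lambda x, y: math.sin(y)
L, x0, x1 = 1.0, 0.0, 2.0
y1_0, y2_0 = 0.50, 0.51                       # delta = 0.01
diff = abs(rk4(f, x0, y2_0, x1) - rk4(f, x0, y1_0, x1))
bound = abs(y2_0 - y1_0) * math.exp(L * (x1 - x0))
print(f"actual difference {diff:.5f} <= theoretical bound {bound:.5f}")
```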
2.2. Existence of a solution of the Cauchy problem for a first-order ODE

Recall that by the Cauchy problem (2.1), (2.2) we mean the following problem of finding a function $y(x)$:
$$y' = f(x, y), \quad (x, y) \in G; \qquad y(x_0) = y_0, \quad (x_0, y_0) \in G,$$
where $f(x, y) \in C(G)$ and $(x_0, y_0) \in G$; $G$ is a domain in $\mathbb{R}^2$.

Lemma 2.2. Let $f(x, y) \in C(G)$. Then the following assertions hold:
1) any solution $\varphi(x)$ of equation (2.1) on the interval $\langle a, b\rangle$ satisfying (2.2), $x_0 \in \langle a, b\rangle$, is a solution on $\langle a, b\rangle$ of the integral equation
$$y(x) = y_0 + \int_{x_0}^{x} f\bigl(\tau, y(\tau)\bigr)\,d\tau; \qquad (2.6)$$
2) if $\varphi(x) \in C\langle a, b\rangle$ is a solution of the integral equation (2.6) on $\langle a, b\rangle$, where $x_0 \in \langle a, b\rangle$, then $\varphi(x) \in C^1\langle a, b\rangle$ and is a solution of (2.1), (2.2).

Proof. 1. Let $\varphi(x)$ be a solution of (2.1), (2.2) on $\langle a, b\rangle$. Then, by Remark 2.2, $\varphi(x) \in C\langle a, b\rangle$, and for all $\tau \in \langle a, b\rangle$ we have the equality $\varphi'(\tau) = f(\tau, \varphi(\tau))$; integrating it from $x_0$ to $x$, we obtain, for any $x \in \langle a, b\rangle$, $\varphi(x) - \varphi(x_0) = \int_{x_0}^{x} f(\tau, \varphi(\tau))\,d\tau$, with $\varphi(x_0) = y_0$; that is, $\varphi(x)$ is a solution of (2.6).

2. Let $y = \varphi(x) \in C\langle a, b\rangle$ be a solution of (2.6). Since $f(x, \varphi(x))$ is continuous on $\langle a, b\rangle$ by hypothesis, it follows that
$$\varphi(x) \equiv y_0 + \int_{x_0}^{x} f\bigl(\tau, \varphi(\tau)\bigr)\,d\tau \in C^1\langle a, b\rangle$$
as an integral of a continuous function with a variable upper limit. Differentiating the last equality with respect to $x$, we obtain $\varphi'(x) = f(x, \varphi(x))$ for all $x \in \langle a, b\rangle$, and obviously $\varphi(x_0) = y_0$; that is, $\varphi(x)$ is a solution of the Cauchy problem (2.1), (2.2). (As usual, the derivative at an endpoint of the segment is understood as the corresponding one-sided derivative.)

Remark 2.6. Lemma 2.2 is called the lemma on the equivalence of the Cauchy problem (2.1), (2.2) to the integral equation (2.6). If we prove that a solution of equation (2.6) exists, we obtain the solvability of the Cauchy problem (2.1), (2.2). This plan is carried out in the following theorem.

Theorem 2.3 (local existence theorem). Let the rectangle $P = \{(x, y) \in \mathbb{R}^2:\ |x - x_0| \le \alpha,\ |y - y_0| \le \beta\}$ lie entirely in the domain $G$ of definition of the function $f(x, y)$, and let $f(x, y) \in C(G)$ satisfy the Lipschitz condition in $y$ in $G$ with constant $L$. Denote $M = \max_{P} |f(x, y)|$ and $h = \min\{\alpha,\ \beta/M\}$. Then a solution of the Cauchy problem (2.1), (2.2) exists on the segment $[x_0 - h, x_0 + h]$.

Proof. Let us establish the existence of a solution of the integral equation (2.6) on the segment $[x_0 - h, x_0 + h]$. To do this, consider the following sequence of functions:
$$y_0(x) = y_0, \quad y_1(x) = y_0 + \int_{x_0}^{x} f(\tau, y_0(\tau))\,d\tau, \quad \ldots, \quad y_n(x) = y_0 + \int_{x_0}^{x} f(\tau, y_{n-1}(\tau))\,d\tau, \quad \ldots$$

1. Let us show that for all $n \in \mathbb{N}$ the functions $y_n$ (the successive approximations) are defined, i.e., that for all $x \in [x_0 - h, x_0 + h]$ the inequality $|y_n(x) - y_0| \le \beta$ holds for all $n = 1, 2, \ldots$. We use the method of mathematical induction:
a) basis of induction, $n = 1$:
$$|y_1(x) - y_0| = \Bigl|\int_{x_0}^{x} f(\tau, y_0(\tau))\,d\tau\Bigr| \le M_0\,|x - x_0| \le M h \le \beta,$$
where $M_0 = \max |f(x, y_0)|$ for $|x - x_0| \le \alpha$, $M_0 \le M$;
b) assumption and step of induction. Let the inequality be true for $y_{n-1}(x)$; let us prove it for $y_n(x)$:
$$|y_n(x) - y_0| = \Bigl|\int_{x_0}^{x} f(\tau, y_{n-1}(\tau))\,d\tau\Bigr| \le M\,|x - x_0| \le M h \le \beta.$$
Thus, if $|x - x_0| \le h$, then $|y_n(x) - y_0| \le \beta$ for all $n \in \mathbb{N}$.

Our goal is to prove the convergence of the sequence of successive approximations $\{y_k(x)\}_{k=0}^{\infty}$; for this it is convenient to represent it in the form
$$y_n = y_0 + \sum_{k=1}^{n} \bigl[y_k(x) - y_{k-1}(x)\bigr] = y_0 + (y_1 - y_0) + (y_2 - y_1) + \ldots + (y_n - y_{n-1}),$$
i.e., as the sequence of partial sums of a functional series.
2. Let us estimate the terms of this series by proving the following inequalities for all $n \in \mathbb{N}$ and all $x \in [x_0 - h, x_0 + h]$:
$$|y_n(x) - y_{n-1}(x)| \le M_0\,L^{n-1}\,\frac{|x - x_0|^n}{n!} \le M_0\,L^{n-1}\,\frac{h^n}{n!}. \qquad (2.7)$$
We apply the method of mathematical induction:
a) basis of induction, $n = 1$: $|y_1(x) - y_0| \le M_0\,|x - x_0| \le M_0 h$, which was proved above;
b) assumption and step of induction. Let the inequality be true for $n - 1$; let us show it for $n$:
$$|y_n(x) - y_{n-1}(x)| = \Bigl|\int_{x_0}^{x} \bigl[f(\tau, y_{n-1}(\tau)) - f(\tau, y_{n-2}(\tau))\bigr]\,d\tau\Bigr| \le L\,\Bigl|\int_{x_0}^{x} |y_{n-1}(\tau) - y_{n-2}(\tau)|\,d\tau\Bigr| \le L\,\Bigl|\int_{x_0}^{x} M_0\,L^{n-2}\,\frac{|\tau - x_0|^{n-1}}{(n-1)!}\,d\tau\Bigr| = M_0\,L^{n-1}\,\frac{|x - x_0|^{n}}{n!} \le M_0\,L^{n-1}\,\frac{h^n}{n!},$$
where the first inequality is the Lipschitz condition and the second is the induction hypothesis. Here we also used the fact that for the integral $I = \int_{x_0}^{x} |\tau - x_0|^{n-1}\,d\tau$ we have: for $x > x_0$, $I = \int_{x_0}^{x} (\tau - x_0)^{n-1}\,d\tau = \frac{(x - x_0)^n}{n}$, and for $x < x_0$, $I = \int_{x}^{x_0} (x_0 - \tau)^{n-1}\,d\tau = \frac{(x_0 - x)^n}{n}$; that is, $|I| = \frac{|x - x_0|^n}{n}$. Thus inequalities (2.7) are proved.

3. Consider the identity $y_n = y_0 + \sum_{k=1}^{n} [y_k(x) - y_{k-1}(x)]$ and the associated functional series $y_0 + \sum_{k=1}^{\infty} [y_k(x) - y_{k-1}(x)]$. Its partial sums are equal to $y_n(x)$; therefore, having proved its convergence, we obtain the convergence of the sequence $\{y_k(x)\}_{k=0}^{\infty}$. By virtue of inequalities (2.7), the functional series is majorized on the segment $[x_0 - h, x_0 + h]$ by the numerical series $\sum_{k=1}^{\infty} M_0\,L^{k-1}\,\frac{h^k}{k!}$. This numerical series converges by d'Alembert's test, since
$$\frac{a_{k+1}}{a_k} = \frac{M_0\,L^{k}\,h^{k+1}/(k+1)!}{M_0\,L^{k-1}\,h^{k}/k!} = \frac{h\,L}{k+1} \to 0, \qquad k \to \infty.$$
Then, by the Weierstrass test for uniform convergence, the functional series $y_0 + \sum_{k=1}^{\infty} [y_k(x) - y_{k-1}(x)]$ converges absolutely and uniformly on the segment $[x_0 - h, x_0 + h]$; hence the functional sequence $\{y_k(x)\}_{k=0}^{\infty}$ converges uniformly on $[x_0 - h, x_0 + h]$ to some function $\varphi(x)$, and since $y_n(x) \in C[x_0 - h, x_0 + h]$, also $\varphi(x) \in C[x_0 - h, x_0 + h]$ as the limit of a uniformly convergent sequence of continuous functions.

4. Consider the definition of $y_n(x)$:
$$y_n(x) = y_0 + \int_{x_0}^{x} f(\tau, y_{n-1}(\tau))\,d\tau, \qquad (2.8)$$
which is a valid equality for all $n \in \mathbb{N}$ and $x \in [x_0 - h, x_0 + h]$. The left-hand side of (2.8) has a limit as $n \to \infty$, since $y_n(x) \rightrightarrows \varphi(x)$ on $[x_0 - h, x_0 + h]$; therefore the right-hand side of (2.8) also has a limit (the same one). Let us show that it equals the function $y_0 + \int_{x_0}^{x} f(\tau, \varphi(\tau))\,d\tau$, using the following criterion of uniform convergence of a functional sequence:
$$y_n(x) \rightrightarrows \varphi(x) \ \text{on } X, \ n \to \infty \quad \Longleftrightarrow \quad \sup_{x \in X} |y_n(x) - \varphi(x)| \to 0, \ n \to \infty.$$
Recall that the notation $y_n(x) \rightrightarrows \varphi(x)$, $n \to \infty$, is customarily used for the convergence of the functional sequence $\{y_k(x)\}_{k=0}^{\infty}$ to the function $\varphi(x)$ which is uniform on the set $X$.

Let us show that $y_0 + \int_{x_0}^{x} f(\tau, y_{n-1}(\tau))\,d\tau \rightrightarrows y_0 + \int_{x_0}^{x} f(\tau, \varphi(\tau))\,d\tau$ on $X = [x_0 - h, x_0 + h]$. To this end, consider
$$\sup_{x \in X} \Bigl|\int_{x_0}^{x} \bigl[f(\tau, y_{n-1}(\tau)) - f(\tau, \varphi(\tau))\bigr]\,d\tau\Bigr| \le \sup_{x \in X} L\,\Bigl|\int_{x_0}^{x} |y_{n-1}(\tau) - \varphi(\tau)|\,d\tau\Bigr| \le L\,h\,\sup_{\tau \in X} |y_{n-1}(\tau) - \varphi(\tau)| \to 0, \quad n \to \infty,$$
by the Lipschitz condition and by virtue of the uniform convergence $y_n(x) \rightrightarrows \varphi(x)$ as $n \to \infty$. Thus, passing to the limit in (2.8), we obtain for all $x \in [x_0 - h, x_0 + h]$ the valid equality
$$\varphi(x) = y_0 + \int_{x_0}^{x} f(\tau, \varphi(\tau))\,d\tau,$$
in which $\varphi(x) \in C[x_0 - h, x_0 + h]$. By Lemma 2.2 proved above, $\varphi(x)$ is a solution of the Cauchy problem (2.1), (2.2). The theorem is proved.

Remark 2.7. Theorem 2.3 establishes the existence of a solution on the segment $[x_0 - h, x_0 + h]$. By Corollary 1 of Theorem 2.1, this solution is unique in the sense that any other solution of the Cauchy problem (2.1), (2.2) defined on $[x_0 - h, x_0 + h]$ coincides with it on this segment.
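The successive approximations from the proof of Theorem 2.3 are easy to compute symbolically. The sketch below (ours, for illustration) uses SymPy for the model problem $y' = y$, $y(0) = 1$, whose Picard iterates are the partial sums of the series for $e^x$.

```python
import sympy as sp

# Picard successive approximations y_n(x) = y0 + int_{x0}^{x} f(t, y_{n-1}(t)) dt
# for the model problem y' = y, y(0) = 1 (illustration of the scheme of Theorem 2.3).
x, t = sp.symbols("x t")

def picard(f, x0, y0, n):
    y = sp.Integer(y0)
    for _ in range(n):
        y = y0 + sp.integrate(f(t, y.subs(x, t)), (t, x0, x))
    return sp.expand(y)

print(picard(lambda t, y: y, 0, 1, 4))   # 1 + x + x**2/2 + x**3/6 + x**4/24
```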
Remark 2.8. Represent the rectangle $P$ as the union of two (overlapping) rectangles $P = P^- \cup P^+$ (Fig. 2.3), where
$$P^- = \{(x, y):\ x \in [x_0 - \alpha, x_0],\ |y - y_0| \le \beta\}, \qquad P^+ = \{(x, y):\ x \in [x_0, x_0 + \alpha],\ |y - y_0| \le \beta\}.$$
(Fig. 2.3. The union of the rectangles $P^-$ and $P^+$.) Denote $M^- = \max_{P^-} |f(x, y)|$ and $M^+ = \max_{P^+} |f(x, y)|$. Repeating, with obvious changes, the proof of Theorem 2.3 separately for $P^+$ or $P^-$, we obtain the existence (and uniqueness) of a solution on the segment $[x_0, x_0 + h^+]$, where $h^+ = \min\{\alpha,\ \beta/M^+\}$, or, respectively, on $[x_0 - h^-, x_0]$, where $h^- = \min\{\alpha,\ \beta/M^-\}$. Note that, generally speaking, $h^+ \ne h^-$, while the $h$ of Theorem 2.3 is the minimum of $h^+$ and $h^-$.

Remark 2.9. The existence of a solution of problem (2.1), (2.2) is guaranteed by Theorem 2.3 only on some segment $[x_0 - h, x_0 + h]$. In such a case one says that the theorem is local. The question arises: is the local character of Theorem 2.3 a consequence of the method of its proof? Perhaps, using another method of proof, one could establish the existence of a solution on the whole segment $[x_0 - \alpha, x_0 + \alpha]$, i.e., globally, as was the case for the uniqueness of the solution of the Cauchy problem (2.1), (2.2)? The following example shows that the local character of Theorem 2.3 is connected with the "essence" of the problem and not with the method of its proof.

Example 2.1. Consider the Cauchy problem
$$y' = -y^2, \qquad y(0) = 1 \qquad (2.9)$$
in the rectangle $P = \{(x, y):\ |x| \le 2,\ |y - 1| \le 1\}$. The function $f(x, y) = -y^2$ is continuous in $P$ and $f'_y = -2y \in C(P)$, so all the conditions of Theorem 2.3 are satisfied, and $M = \max_{P} |f(x, y)| = 4$. Then $h = \min\{\alpha,\ \beta/M\} = 1/4$, and Theorem 2.3 guarantees the existence of a solution on the segment $[-1/4,\ 1/4]$. Let us solve this Cauchy problem using "separation of variables":
$$-\frac{dy}{y^2} = dx \quad \Longleftrightarrow \quad y(x) = \frac{1}{x + C}.$$
Substituting $x = 0$, we find $C = 1$, and $y(x) = \dfrac{1}{x + 1}$ is the solution of the Cauchy problem (2.9). The graph of the solution is shown in Fig. 2.4 (the local character of the solvability of the Cauchy problem); it shows that the solution leaves the rectangle $P$ for $x < x^* = -1/2$, and for $x \le -1$ it does not exist at all.
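A quick numeric check of Example 2.1 (ours, not from the manual): the exact solution $y(x) = 1/(x + 1)$ indeed stays in $P$ only near $x_0 = 0$ and blows up as $x \to -1^{+}$.

```python
# Example 2.1: y' = -y**2, y(0) = 1 has the exact solution y(x) = 1/(x + 1).
# Theorem 2.3 guarantees it only on [-1/4, 1/4]; it leaves the rectangle
# |y - 1| <= 1 for x < -1/2 and ceases to exist as x -> -1 from the right.
def y(x):
    return 1.0 / (x + 1.0)

for x in [0.25, -0.25, -0.5, -0.9, -0.99]:
    print(f"x = {x:6.2f}   y(x) = {y(x):10.3f}   inside P: {abs(y(x) - 1.0) <= 1.0}")
```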
In this connection the question arises about conditions ensuring the existence of the solution on the whole segment $[x_0 - \alpha, x_0 + \alpha]$. In the example above we see that the solution leaves the rectangle $P$ by crossing its "upper" base, so instead of the rectangle $P$ in Theorem 2.3 one can try to take the strip
$$Q = \{(x, y) \in \mathbb{R}^2:\ A \le x \le B,\ -\infty < y < +\infty\}, \qquad A, B \in \mathbb{R}.$$
It turns out that in this case the solution exists on the whole segment $[A, B]$ provided that $f(x, y)$ satisfies the Lipschitz condition in the variable $y$ in $Q$. Namely, the following theorem, important for applications, holds.

Theorem 2.4. Let the function $f(x, y)$ be defined, continuous, and satisfy the Lipschitz condition in $y$ with constant $L$ in the strip $Q = \{(x, y) \in \mathbb{R}^2:\ A \le x \le B,\ y \in \mathbb{R}\}$, where $A, B \in \mathbb{R}$. Then for any initial data $x_0 \in [A, B]$, $y_0 \in \mathbb{R}$ (i.e., $(x_0, y_0) \in Q$) there exists a unique solution of the Cauchy problem (2.1), (2.2) defined on all of $[A, B]$.

Proof. We shall assume that $x_0 \in (A, B)$. We carry out the reasoning along the scheme of Theorem 2.3 separately for the strips
$$Q^+ = \{(x, y) \in \mathbb{R}^2:\ x \in [x_0, B],\ y \in \mathbb{R}\} \quad \text{and} \quad Q^- = \{(x, y) \in \mathbb{R}^2:\ x \in [A, x_0],\ y \in \mathbb{R}\}.$$
If $x_0 = A$ or $x_0 = B$, then one of the stages of the reasoning (for $Q^-$ or, respectively, for $Q^+$) is absent.

Take the strip $Q^+$ and construct the successive approximations $y_n^+(x)$ as in Theorem 2.3. Since $Q^+$ contains no restriction on the size in $y$, item 1) of the proof of Theorem 2.3 need not be checked. Next, as in the preceding theorem, we pass from the sequence to the series with partial sums $y_n^+(x) = y_0 + \sum_{k=1}^{n} \bigl[y_k^+(x) - y_{k-1}^+(x)\bigr]$, where $x \in [x_0, B]$. Repeating the reasoning, we prove an estimate of the form (2.7):
$$|y_n^+(x) - y_{n-1}^+(x)| \le M_0\,L^{n-1}\,\frac{|x - x_0|^n}{n!} \le M_0\,L^{n-1}\,\frac{(B - x_0)^n}{n!} \qquad (2.10)$$
for all $x \in [x_0, B]$; here $M_0 = \max_{x \in [x_0, B]} |f(x, y_0)|$. As above in Theorem 2.3, we obtain that $y_n^+(x) \rightrightarrows \varphi^+(x)$ as $n \to \infty$, where $\varphi^+(x) \in C[x_0, B]$ and $\varphi^+(x)$ is a solution of the integral equation (2.6) on $[x_0, B]$.

Take the strip $Q^-$ and construct the sequence $y_n^-(x)$. Acting analogously, we obtain that there exists $\varphi^-(x) \in C[A, x_0]$, a solution of the integral equation (2.6) on $[A, x_0]$. Define the function $\varphi(x)$ as the "gluing" of $\varphi^+$ and $\varphi^-$ by continuity, i.e.,
$$\varphi(x) = \begin{cases} \varphi^+(x), & x \in [x_0, B], \\ \varphi^-(x), & x \in [A, x_0]. \end{cases}$$
Note that $\varphi^+(x_0) = \varphi^-(x_0) = y_0$, and therefore $\varphi(x) \in C[A, B]$. By construction the functions $\varphi^{\pm}(x)$ satisfy the integral equation (2.6), i.e.,
$$\varphi^{\pm}(x) = y_0 + \int_{x_0}^{x} f\bigl(\tau, \varphi^{\pm}(\tau)\bigr)\,d\tau,$$
where $x \in [x_0, B]$ for $\varphi^+(x)$ and $x \in [A, x_0]$ for $\varphi^-(x)$, respectively. Consequently, for any $x \in [A, B]$ the function $\varphi(x)$ satisfies the integral equation (2.6). Then, by Lemma 2.2, $\varphi(x) \in C^1[A, B]$ and is a solution of the Cauchy problem (2.1), (2.2). The theorem is proved.

From the proved Theorem 2.4 it is not difficult to obtain a corollary for the interval $(A, B)$ (an open strip).

Corollary. Let the function $f(x, y)$ be defined and continuous in the open strip $Q = \{(x, y) \in \mathbb{R}^2:\ x \in (A, B),\ y \in \mathbb{R}\}$, where $A$ and $B$ may be the symbols $-\infty$ and $+\infty$ respectively. Suppose that $f(x, y)$ satisfies in the strip $Q$ the following condition: there exists $L(x) \in C(A, B)$ such that for all $x \in (A, B)$ and all $y_1, y_2 \in \mathbb{R}$ the inequality $|f(x, y_2) - f(x, y_1)| \le L(x)\,|y_2 - y_1|$ holds. Then for any initial data $x_0 \in (A, B)$, $y_0 \in \mathbb{R}$ (i.e., $(x_0, y_0) \in Q$) there exists a unique solution of the Cauchy problem (2.1), (2.2) defined on all of $(A, B)$.

Proof. For any strip $Q_1 = \{(x, y) \in \mathbb{R}^2:\ x \in [A_1, B_1],\ y \in \mathbb{R}\}$, where $A_1 > A$, $B_1 < B$, lying strictly inside $Q$ and containing $(x_0, y_0)$, Theorem 2.4 is valid, since in proving the estimates of the form (2.10), which are needed to justify the convergence of the sequence $y_n(x)$ uniform on $[A_1, B_1]$, one uses the constants $M_0 = \max_{x \in [A_1, B_1]} |f(x, y_0)|$ and $L = \max_{x \in [A_1, B_1]} L(x)$. These constants do not decrease as $[A_1, B_1]$ expands.

Take a sequence of expanding segments $[A_k, B_k]$ satisfying the conditions: 1) $A < A_{k+1} < A_k$ and $B_k < B_{k+1} < B$ for all $k \in \mathbb{N}$; 2) $x_0 \in [A_k, B_k]$ for all $k \in \mathbb{N}$; 3) $A_k \to A$, $B_k \to B$ as $k \to \infty$. Note at once that $\bigcup_{k} [A_k, B_k] = (A, B)$ and, moreover, for any $x \in (A, B)$ there is a number $N(x) \in \mathbb{N}$ such that $x \in [A_k, B_k]$ for all $k > N(x)$.

Let us prove this auxiliary statement for the case $A, B \in \mathbb{R}$ (i.e., $A$ and $B$ finite; if $A = -\infty$ or $B = +\infty$, the argument is similar). Take an arbitrary $x \in (A, B)$ and put $\delta(x) = \min\bigl\{\tfrac{x - A}{2},\ \tfrac{B - x}{2}\bigr\} > 0$. For this number $\delta$, from the convergences $A_k \to A$ and $B_k \to B$ we obtain that there exists $N_1(\delta) \in \mathbb{N}$ such that $A < A_k < A + \delta < x$ for all $k > N_1$, and there exists $N_2(\delta) \in \mathbb{N}$ such that $x < B - \delta < B_k < B$ for all $k > N_2$. Then for $N = \max\{N_1, N_2\}$ the property being proved holds.

Construct a sequence $Y_k(x)$ of solutions of the Cauchy problem (2.1), (2.2) by applying Theorem 2.4 to the corresponding segment $[A_k, B_k]$. Any two of these solutions coincide on their common domain of definition by Corollary 1 of Theorem 2.1. Thus two consecutive solutions $Y_k(x)$ and $Y_{k+1}(x)$ coincide on $[A_k, B_k]$, but $Y_{k+1}(x)$ is defined on the wider segment $[A_{k+1}, B_{k+1}]$. Let us construct a solution on all of $(A, B)$: take $[A_1, B_1]$ and construct $\varphi(x)$, the solution of problem (2.1), (2.2) on all of $[A_1, B_1]$ (by Theorem 2.4); then extend this solution to $[A_2, B_2]$, $[A_3, B_3]$, and so on. We obtain that the solution $\varphi(x)$ is defined on all of $(A, B)$.

Let us prove its uniqueness. Suppose there exists a solution $\psi(x)$ of the Cauchy problem (2.1), (2.2) also defined on all of $(A, B)$. Let us show that $\varphi(x) \equiv \psi(x)$ at any $x \in (A, B)$. Let $x$ be an arbitrary point of $(A, B)$; there is a number $N(x) \in \mathbb{N}$ such that $x \in [A_k, B_k]$ for all $k > N$. Applying Corollary 1 of Section 2.1 (i.e., the uniqueness theorem), we obtain that $\varphi(t) \equiv \psi(t)$ for all $t \in [A_k, B_k]$ and, in particular, for $t = x$. Since $x$ is an arbitrary point of $(A, B)$, the uniqueness of the solution, and with it the corollary, are proved.
Remark 2.10. In the corollary just proved we encounter for the first time the notion of extending a solution to a wider set. We shall study it in more detail in the next section. Here are a few examples.

Example 2.2. For the equation $y' = e^{|x|}\,\sqrt{x^2 + y^2}$, find out whether its solution exists on all of $(A, B) = (-\infty, +\infty)$. Consider this equation in the "strip" $Q = \mathbb{R}^2$; the function $f(x, y) = e^{|x|}\,\sqrt{x^2 + y^2}$ is defined and continuous in $Q$, and
$$\frac{\partial f}{\partial y} = e^{|x|}\,\frac{y}{\sqrt{x^2 + y^2}}, \qquad \Bigl|\frac{\partial f}{\partial y}\Bigr| \le e^{|x|} = L(x).$$
By Statement 2.1 of Section 2.1, the function $f(x, y)$ satisfies the Lipschitz condition in $y$ with the "constant" $L = L(x)$, $x$ being fixed. Then all the conditions of the corollary are satisfied, and for any initial data $(x_0, y_0) \in \mathbb{R}^2$ the solution of the Cauchy problem exists and is unique on $(-\infty, +\infty)$. Note that the equation itself cannot be solved in quadratures, but approximate solutions can be constructed numerically.

Example 2.3. For the equation $y' = e^{x} y^2$, find out whether there exist solutions defined on all of $\mathbb{R}$. If we again consider this equation in the "strip" $Q = \mathbb{R}^2$, where the function $f(x, y) = e^{x} y^2$ is defined and continuous and $\frac{\partial f}{\partial y} = 2 y\,e^{x}$, then we notice that the condition of the corollary is violated: there is no continuous function $L(x)$ such that $|f(x, y_2) - f(x, y_1)| \le L(x)\,|y_2 - y_1|$ for all $y_1, y_2 \in \mathbb{R}$. Indeed, $|f(x, y_2) - f(x, y_1)| = e^{x}\,|y_2 + y_1|\,|y_2 - y_1|$, and the expression $|y_2 + y_1|$ is not bounded for $y_1, y_2 \in \mathbb{R}$. Thus the corollary is not applicable. Let us solve this equation by "separation of variables"; we obtain the general solution
$$y(x) \equiv 0, \qquad y(x) = -\frac{1}{e^{x} + C}.$$
For definiteness take $x_0 = 0$, $y_0 \in \mathbb{R}$. If $y_0 = 0$, then $y(x) \equiv 0$ is the solution of the Cauchy problem on $\mathbb{R}$. If $y_0 \ne 0$, the solution is defined for all $x \in \mathbb{R}$ when $y_0 \in [-1, 0)$, while for $y_0 \in (-\infty, -1) \cup (0, +\infty)$ the solution cannot be continued through the point $x^* = \ln\frac{y_0 + 1}{y_0}$. More precisely, if $y_0 > 0$, the solution is defined for $x \in (-\infty, x^*)$, and if $y_0 < -1$, it is defined for $x \in (x^*, +\infty)$. In the first case $\lim_{x \to x^* - 0} y(x) = +\infty$, and in the second $\lim_{x \to x^* + 0} y(x) = -\infty$. For clarity, the integral curves for the corresponding values of $y_0$ are drawn in Fig. 2.5 (integral curves of the equation $y' = e^{x} y^2$).

Thus, for the Cauchy problem $y' = e^{x} y^2$, $y(0) = y_0$ we have the following:
1) if $y_0 \in [-1, 0]$, the solution exists for all $x \in \mathbb{R}$;
2) if $y_0 < -1$, the solution exists only for $x \in \bigl(\ln\frac{y_0 + 1}{y_0},\ +\infty\bigr)$;
3) if $y_0 > 0$, the solution exists only for $x \in \bigl(-\infty,\ \ln\frac{y_0 + 1}{y_0}\bigr)$.

This example shows that the restriction on the growth of the function $f(x, y)$ in the corollary of Theorem 2.4 proved above is essential for extending the solution to the whole of $(A, B)$. Examples with the function $f(x, y) = f_1(x)\,y^{1 + \varepsilon}$ for any $\varepsilon > 0$ are obtained similarly; here $\varepsilon = 1$ was taken only for convenience of presentation.
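The case analysis of Example 2.3 can be verified directly (our sketch, not from the manual): for $y_0 \ne 0$ the solution through $(0, y_0)$ is $y(x) = y_0/(1 + y_0 - y_0 e^{x})$, and it ceases to exist exactly where the denominator vanishes, i.e., at $x^* = \ln\frac{y_0 + 1}{y_0}$ whenever $\frac{y_0 + 1}{y_0} > 0$.

```python
import math

# Blow-up point of the solution of y' = exp(x) * y**2, y(0) = y0 (y0 != 0):
# x* = ln((y0 + 1) / y0) if (y0 + 1) / y0 > 0, otherwise the solution is global.
def blow_up_point(y0):
    ratio = (y0 + 1.0) / y0
    return math.log(ratio) if ratio > 0 else None

for y0 in [-2.0, -0.5, 0.5]:
    x_star = blow_up_point(y0)
    print(f"y0 = {y0:5.1f}   cannot be continued through x* = {x_star:.4f}" if x_star is not None
          else f"y0 = {y0:5.1f}   solution exists on all of R")
```

The printed values match items 1)–3) above: $y_0 = -0.5$ gives a global solution, $y_0 = -2$ gives $x^* = \ln(1/2) < 0$, and $y_0 = 0.5$ gives $x^* = \ln 3 > 0$.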
2.3. Continuation of the solution for a first-order ODE

Definition 2.5. Consider the equation $y' = f(x, y)$, and let $y(x)$ be its solution on $\langle a, b\rangle$ and $Y(x)$ its solution on $\langle A, B\rangle$, where $\langle a, b\rangle$ is contained in $\langle A, B\rangle$ and $Y(x) = y(x)$ on $\langle a, b\rangle$. Then $Y(x)$ is called an extension of the solution $y(x)$ to $\langle A, B\rangle$, and the solution $y(x)$ is said to be extendable to $\langle A, B\rangle$.

In Section 2.2 we proved the local existence theorem for the solution of the Cauchy problem (2.1), (2.2). Under what conditions can this solution be continued to a wider interval? The present section is devoted to this question. Its main result is the following.

Theorem 2.5 (on the continuation of the solution in a bounded closed domain). Let the function $f(x, y) \in C(G)$ satisfy the Lipschitz condition in $y$, and let $(x_0, y_0)$ be an interior point of a bounded closed domain $\overline{G} \subset G$. Then the solution of the equation $y' = f(x, y)$ passing through the point $(x_0, y_0)$ can be extended up to the boundary $\partial\overline{G}$ of the domain $\overline{G}$, that is, it can be extended to a segment $[a, b]$ such that the points $(a, y(a))$ and $(b, y(b))$ lie on $\partial\overline{G}$.

Remark 2.11. Recall that if $\frac{\partial f}{\partial y}(x, y)$ is continuous in a bounded, closed domain that is convex in $y$, then the function $f(x, y)$ satisfies in this domain the Lipschitz condition in the variable $y$ (see the corollary of Statement 2.1 in Section 2.1). Therefore the theorem is valid, in particular, if $\frac{\partial f}{\partial y}$ is continuous in $\overline{G}$.

Proof. Since $(x_0, y_0)$ is an interior point of $\overline{G}$, there is a closed rectangle $P = \{(x, y) \in \mathbb{R}^2:\ |x - x_0| \le \alpha,\ |y - y_0| \le \beta\}$ lying entirely in $\overline{G}$. Then, by Theorem 2.3 of Section 2.2, there is $h > 0$ such that on the segment $[x_0 - h, x_0 + h]$ there exists a (unique) solution $y = \varphi(x)$ of the equation $y' = f(x, y)$. We first continue this solution to the right, up to the boundary of the domain $\overline{G}$, splitting the proof into separate steps.

1. Consider the set $E = \{\alpha' > 0:$ the solution $y = \varphi(x)$ is extendable to $[x_0, x_0 + \alpha']\} \subset \mathbb{R}$ and let $\alpha_0 = \sup E$. … there exists a solution $y = \varphi_1(x)$ of the equation $y' = f(x, y)$ satisfying the Cauchy condition $\varphi_1(\tilde b) = \varphi(\tilde b)$. Thus $\varphi(x)$ and $\varphi_1(x)$ are solutions of one and the same equation on the segment $[\tilde b - h_1, \tilde b]$ which coincide at the point $x = \tilde b$; hence they coincide on the whole segment $[\tilde b - h_1, \tilde b]$, and therefore $\varphi_1(x)$ is a continuation of the solution $\varphi(x)$ from the segment $[\tilde b - h_1, \tilde b]$ to $[\tilde b - h_1, \tilde b + h_1]$. Consider the function
$$\psi(x) = \begin{cases} \varphi(x), & x \in [x_0, \tilde b], \\ \varphi_1(x), & x \in [\tilde b, \tilde b + h_1], \end{cases}$$
which is a solution of the equation $y' = f(x, y)$ satisfying the Cauchy condition $\psi(x_0) = y_0$. Then the number $\alpha_0 + h_1 \in E$, and this contradicts the definition $\alpha_0 = \sup E$. Therefore case 2 is impossible. The solution $\varphi(x)$ is extended to the left in a similar way, to a segment $[a, x_0]$ with the point $(a, \varphi(a)) \in \partial\overline{G}$. The theorem is completely proved.

Chapter III. The Cauchy problem for a normal system of the n-th order

3.1. Basic concepts and some auxiliary properties of vector functions

In this chapter we consider a normal system of the $n$-th order of the form
$$\begin{cases} \dot y_1 = f_1(t, y_1, \ldots, y_n), \\ \dot y_2 = f_2(t, y_1, \ldots, y_n), \\ \quad \vdots \\ \dot y_n = f_n(t, y_1, \ldots, y_n), \end{cases} \qquad (3.1)$$
where the unknown (sought) functions are $y_1(t), \ldots, y_n(t)$, the functions $f_i$, $i = 1, \ldots, n$, are known, and the dot over a function denotes the derivative with respect to $t$. It is assumed that all the $f_i$ are defined in a domain $G \subset \mathbb{R}^{n+1}$. It is convenient to write system (3.1) in vector form:
$$\dot y = f(t, y),$$
where $y(t) \equiv (y_1(t), \ldots, y_n(t))$ and $f(t, y) \equiv (f_1(t, y), \ldots, f_n(t, y))$; for brevity we shall not write arrows over vectors. This notation will also be referred to as (3.1).

Let the point $(t_0, y_{10}, \ldots, y_{n0})$ lie in $G$. The Cauchy problem for (3.1) consists in finding a solution $\varphi(t)$ of system (3.1) satisfying the conditions
$$\varphi_1(t_0) = y_{10}, \quad \varphi_2(t_0) = y_{20}, \quad \ldots, \quad \varphi_n(t_0) = y_{n0}, \qquad (3.2)$$
or, in vector form, $\varphi(t_0) = y^0$.
As noted in Chapter I, a solution of system (3.1) on the interval $\langle a, b\rangle$ is a vector function $\varphi(t) = (\varphi_1(t), \ldots, \varphi_n(t))$ satisfying the conditions: 1) for all $t \in \langle a, b\rangle$ the point $(t, \varphi(t))$ lies in $G$; 2) for all $t \in \langle a, b\rangle$ there exists $\frac{d}{dt}\varphi(t)$; 3) for all $t \in \langle a, b\rangle$, $\varphi(t)$ satisfies (3.1). If such a solution additionally satisfies (3.2), where $t_0 \in \langle a, b\rangle$, it is called a solution of the Cauchy problem. Conditions (3.2) are called the initial conditions, or the Cauchy conditions, and the numbers $t_0, y_{10}, \ldots, y_{n0}$ the Cauchy data (initial data).

In the particular case when the vector function $f(t, y)$ of the $(n + 1)$ variables depends on $y_1, \ldots, y_n$ linearly, i.e., has the form $f(t, y) = A(t)\,y + g(t)$, where $A(t) = \bigl(a_{ij}(t)\bigr)$ is an $n \times n$ matrix, system (3.1) is called linear.

In what follows we shall need properties of vector functions, which we present here for convenience of reference. The rules of addition and multiplication by a number for vectors are known from the course of linear algebra; these basic operations are performed coordinatewise. If we introduce in $\mathbb{R}^n$ the scalar product $(x, y) = x_1 y_1 + \ldots + x_n y_n$, we obtain a Euclidean space (also denoted by $\mathbb{R}^n$) with the length (Euclidean norm) of a vector $|x| = \sqrt{(x, x)} = \sqrt{\sum_{k=1}^{n} x_k^2}$. For the scalar product and the length two basic inequalities hold:
1) for all $x, y \in \mathbb{R}^n$: $|x + y| \le |x| + |y|$ (the triangle inequality);
2) for all $x, y \in \mathbb{R}^n$: $|(x, y)| \le |x|\,|y|$ (the Cauchy–Bunyakovsky inequality).

It is known from the second-semester course of mathematical analysis that convergence of a sequence of points (vectors) in a (finite-dimensional) Euclidean space is equivalent to convergence of the sequences of coordinates of these vectors; one says that it is equivalent to coordinatewise convergence. This follows easily from the inequalities
$$\max_{1 \le k \le n} |x_k| \le \sqrt{x_1^2 + \ldots + x_n^2} = |x| \le \sqrt{n}\,\max_{1 \le k \le n} |x_k|.$$
Similarly to the scalar case, the derivative and the integral of a vector function are defined, and their properties are easily proved by passing to coordinates. Let us give some inequalities for vector functions which will be used below.

1. For any vector function $y(t) = (y_1(t), \ldots, y_n(t))$ integrable (for example, continuous) on $[a, b]$, the inequality
$$\Bigl|\int_a^b y(t)\,dt\Bigr| \le \Bigl|\int_a^b |y(t)|\,dt\Bigr| \qquad (3.3)$$
holds, or, in coordinate form,
$$\Bigl|\Bigl(\int_a^b y_1(t)\,dt,\ \int_a^b y_2(t)\,dt,\ \ldots,\ \int_a^b y_n(t)\,dt\Bigr)\Bigr| \le \Bigl|\int_a^b \sqrt{y_1^2(t) + \ldots + y_n^2(t)}\,dt\Bigr|.$$

Proof. Note first that the inequality does not exclude the case $b < a$; it is for this case that the outer modulus sign appears on the right-hand side. By definition, the integral of a vector function is the limit of the integral sums $\sigma_\tau(y) = \sum_{k=1}^{N} y(\xi_k)\,\Delta t_k$ as the characteristic ("fineness") of the partition $\lambda(\tau) = \max_{k} \Delta t_k$ tends to zero. By assumption $\sigma_\tau \to \int_a^b y(t)\,dt$, and by the triangle inequality we obtain
$$|\sigma_\tau| \le \sum_{k=1}^{N} |y(\xi_k)|\,\Delta t_k \to \int_a^b |y(t)|\,dt \quad \text{as } \lambda(\tau) \to 0$$
(here, for definiteness, we assume $a < b$). By the theorem on passage to the limit in an inequality we obtain the assertion. The case $b < a$ reduces to the one already studied, since $\int_a^b = -\int_b^a$.

Analogues of the Rolle and Lagrange theorems do not hold for vector functions; however, one can obtain an estimate reminiscent of the Lagrange theorem.

2. For any vector function $x(t)$ continuously differentiable on $[a, b]$, the "increment" estimate
$$|x(b) - x(a)| \le \max_{t \in [a, b]} |x'(t)|\;(b - a) \qquad (3.4)$$
holds.

Proof. Inequality (3.4) follows immediately from (3.3) with $y(t) = x'(t)$.
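Inequality (3.3) is easy to check numerically; the sketch below (ours, for illustration) uses midpoint Riemann sums for the sample vector function $y(t) = (\cos t,\ \sin 2t)$ on $[0, 2]$.

```python
import math

# Check of inequality (3.3): |integral of y(t) dt| <= integral of |y(t)| dt
# for y(t) = (cos t, sin 2t) on [0, 2], using midpoint Riemann sums.
def riemann(f, a, b, n=10000):
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

y1 = lambda t: math.cos(t)
y2 = lambda t: math.sin(2 * t)

lhs = math.hypot(riemann(y1, 0.0, 2.0), riemann(y2, 0.0, 2.0))   # |(I1, I2)|
rhs = riemann(lambda t: math.hypot(y1(t), y2(t)), 0.0, 2.0)      # integral of |y(t)|
print(lhs, "<=", rhs)
```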
In proving the solvability theorem for linear systems we shall need estimates involving $n \times n$ matrices, which we now consider.

3. Let $A(t) = \bigl(a_{ij}(t)\bigr)$ be an $n \times n$ matrix, and denote the product $A x$ by $y$. How can $|y|$ be estimated in terms of the matrix $A$ and $|x|$? It turns out that the inequality
$$|A x| \le \|A\|_2\,|x| \qquad (3.5)$$
is valid, where $|x| = \sqrt{|x_1|^2 + \ldots + |x_n|^2}$, $\|A\|_2 = \Bigl(\sum_{i,j=1}^{n} a_{ij}^2\Bigr)^{1/2}$, and the elements of the matrix $A$ and the coordinates of the vector $x$ may be complex.

Proof. For any $i = 1, \ldots, n$, let $a_i$ denote the $i$-th row of the matrix $A$; then
$$|y_i|^2 = |a_{i1} x_1 + a_{i2} x_2 + \ldots + a_{in} x_n|^2 = |(a_i, x)|^2 \le |a_i|^2\,|x|^2 = \Bigl(\sum_{k=1}^{n} a_{ik}^2\Bigr)\,|x|^2$$
by the Cauchy–Bunyakovsky inequality; summing these inequalities over $i = 1, \ldots, n$, we obtain
$$|y|^2 \le \Bigl(\sum_{i,k=1}^{n} a_{ik}^2\Bigr)\,|x|^2 = \|A\|_2^2\,|x|^2,$$
whence (3.5) follows.

Definition 3.1. We say that a vector function $f(t, y)$ satisfies the Lipschitz condition with respect to the vector variable $y$ on a set $G$ of the variables $(t, y)$ if there exists $L > 0$ such that for any $(t, y^1), (t, y^2) \in G$ the inequality $|f(t, y^2) - f(t, y^1)| \le L\,|y^2 - y^1|$ holds.

As in the case of a function of two variables (see Statement 2.1), a sufficient condition for the Lipschitz property in a domain $G$ convex in $y$ is the boundedness of the partial derivatives. Let us give a precise definition.

Definition 3.2. A domain $G$ of the variables $(t, y)$ is called convex in $y$ if for any two points $(t, y^1)$ and $(t, y^2)$ lying in $G$ it also contains the whole segment connecting these two points, i.e., the set
$$\bigl\{(t, y):\ y = y^1 + \tau\,(y^2 - y^1),\ \tau \in [0, 1]\bigr\}.$$

Statement 3.1. If a domain $G$ of the variables $(t, y)$ is convex in $y$ and the partial derivatives $\frac{\partial f_i}{\partial y_j}$ are continuous and bounded by a constant $l$ in $G$ for all $i, j = 1, \ldots, n$, then the vector function $f(t, y)$ satisfies in $G$ the Lipschitz condition in $y$ with constant $L = n\,l$.

Proof. Consider arbitrary points $(t, y^1)$ and $(t, y^2)$ in $G$ and the segment connecting them, i.e., the set of points $(t, y(\tau))$, where $y(\tau) = y^1 + \tau\,(y^2 - y^1)$, $t$ is fixed and $\tau \in [0, 1]$. Introduce the vector function of one scalar argument $g(\tau) = f(t, y(\tau))$; then $g(1) - g(0) = f(t, y^2) - f(t, y^1)$, and on the other hand
$$g(1) - g(0) = \int_0^1 \frac{dg(\tau)}{d\tau}\,d\tau = \int_0^1 A(\tau)\,\frac{dy(\tau)}{d\tau}\,d\tau = \int_0^1 A(\tau)\,(y^2 - y^1)\,d\tau,$$
since $y = y^1 + \tau(y^2 - y^1)$; here $A(\tau)$ is the matrix with elements $\frac{\partial f_i}{\partial y_j}$ and $y^2 - y^1$ is the corresponding column. We used the rule of differentiation of a composite function: for all $i = 1, \ldots, n$ and $t$ fixed,
$$g_i'(\tau) = \frac{d}{d\tau} f_i(t, y(\tau)) = \frac{\partial f_i}{\partial y_1}\,\frac{\partial y_1}{\partial \tau} + \frac{\partial f_i}{\partial y_2}\,\frac{\partial y_2}{\partial \tau} + \ldots + \frac{\partial f_i}{\partial y_n}\,\frac{\partial y_n}{\partial \tau} = \Bigl(\frac{\partial f_i}{\partial y_1}, \ldots, \frac{\partial f_i}{\partial y_n}\Bigr) \cdot (y^2 - y^1),$$
which in matrix form reads $g'(\tau) = A(\tau)\,(y^2 - y^1)$ with the $n \times n$ matrix $A(\tau) = \bigl(a_{ij}(\tau)\bigr) \equiv \bigl(\frac{\partial f_i}{\partial y_j}\bigr)$. Using the estimate (3.3) for the integral and inequality (3.5), we obtain
$$|f(t, y^2) - f(t, y^1)| = \Bigl|\int_0^1 g'(\tau)\,d\tau\Bigr| \le \int_0^1 \bigl|A(\tau)\,(y^2 - y^1)\bigr|\,d\tau \le \int_0^1 \|A(\tau)\|_2\,|y^2 - y^1|\,d\tau \le \max_{\tau \in [0, 1]} \|A(\tau)\|_2 \cdot |y^2 - y^1| \le n\,l\,|y^2 - y^1|,$$
since $\|A(\tau)\|_2^2 = \sum_{i,j=1}^{n} \bigl(\frac{\partial f_i}{\partial y_j}\bigr)^2 \le n^2 l^2$ for all $\tau \in [0, 1]$. The statement is proved.
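Inequality (3.5) can likewise be tested numerically; the following sketch (ours) draws a random matrix and vector and compares $|Ax|$ with $\|A\|_2\,|x|$.

```python
import math
import random

# Check of inequality (3.5): |A x| <= ||A||_2 |x|, where ||A||_2 = sqrt(sum a_ij^2)
# is the matrix norm used in the manual.
def matrix_norm_2(A):
    return math.sqrt(sum(a * a for row in A for a in row))

def vec_norm(v):
    return math.sqrt(sum(c * c for c in v))

def matvec(A, x):
    return [sum(a * c for a, c in zip(row, x)) for row in A]

random.seed(0)
n = 4
A = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
x = [random.uniform(-1, 1) for _ in range(n)]
print(vec_norm(matvec(A, x)), "<=", matrix_norm_2(A) * vec_norm(x))
```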
3.2. Uniqueness of the solution of the Cauchy problem for a normal system

Theorem 3.1 (on the estimate of the difference of two solutions). Let $G$ be a domain in $\mathbb{R}^{n+1}$, and let the vector function $f(t, y)$ be continuous in $G$ and satisfy the Lipschitz condition with respect to the vector variable $y$ on the set $G$ with constant $L$. If $y^1(t)$, $y^2(t)$ are two solutions of the normal system (3.1), $\dot y = f(t, y)$, on the segment $[t_0, t_1]$, then the estimate
$$|y^2(t) - y^1(t)| \le |y^2(t_0) - y^1(t_0)|\,e^{L\,(t - t_0)}$$
holds for all $t \in [t_0, t_1]$.

The proof repeats, word for word and with the obvious changes of notation, the proof of Theorem 2.1 of Section 2.1. From it one easily obtains a uniqueness theorem and a theorem on stability of the solution with respect to the initial data.

Corollary 3.1. Let the vector function $f(t, y)$ be continuous in the domain $G$ and satisfy in $G$ the Lipschitz condition in $y$, and let the functions $y^1(t)$ and $y^2(t)$ be two solutions of the normal system (3.1) on the same interval $\langle a, b\rangle$, with $t_0 \in \langle a, b\rangle$. If $y^1(t_0) = y^2(t_0)$, then $y^1(t) \equiv y^2(t)$ on $\langle a, b\rangle$.

Corollary 3.2 (on continuous dependence on the initial data). Let the vector function $f(t, y)$ be continuous in the domain $G$ and satisfy in $G$ the Lipschitz condition in $y$ with constant $L > 0$, and let the vector functions $y^1(t)$ and $y^2(t)$ be solutions of the normal system (3.1) defined on $[t_0, t_1]$. Then for all $t \in [t_0, t_1]$ the inequality $|y^1(t) - y^2(t)| \le \delta\,e^{L\,l}$ is valid, where $\delta = |y^1(t_0) - y^2(t_0)|$ and $l = t_1 - t_0$.

The proofs of the corollaries repeat, word for word and with the obvious changes of notation, the proofs of Corollaries 1 and 2 of Section 2.1.

The investigation of the solvability of the Cauchy problem (3.1), (3.2), as in the one-dimensional case, reduces to the solvability of an integral (vector) equation.

Lemma 3.1. Let $f(t, y) \in C(G;\ \mathbb{R}^n)$. (By $C(G;\ H)$ one customarily denotes the set of all functions continuous in the domain $G$ with values in the space $H$; for example, $C(G;\ \mathbb{R}^n)$ is the set of all continuous vector functions with $n$ components defined on $G$.) Then the following assertions hold:
1) any solution $\varphi(t)$ of equation (3.1) on the interval $\langle a, b\rangle$ satisfying (3.2), $t_0 \in \langle a, b\rangle$, is a continuous solution on $\langle a, b\rangle$ of the integral equation
$$y(t) = y^0 + \int_{t_0}^{t} f\bigl(\tau, y(\tau)\bigr)\,d\tau; \qquad (3.6)$$
2) if the vector function $\varphi(t) \in C\langle a, b\rangle$ is a continuous solution of the integral equation (3.6) on $\langle a, b\rangle$, where $t_0 \in \langle a, b\rangle$, then $\varphi(t)$ has a continuous derivative on $\langle a, b\rangle$ and is a solution of (3.1), (3.2).

Proof. 1. Let the equality $\frac{d\varphi(\tau)}{d\tau} = f(\tau, \varphi(\tau))$ hold for all $\tau \in \langle a, b\rangle$. Then, integrating from $t_0$ to $t$ and taking (3.2) into account, we obtain $\varphi(t) = y^0 + \int_{t_0}^{t} f(\tau, \varphi(\tau))\,d\tau$, i.e., $\varphi(t)$ satisfies equation (3.6).

2. Suppose a continuous vector function $\varphi(t)$ satisfies equation (3.6) on $\langle a, b\rangle$. Then $f(t, \varphi(t))$ is continuous on $\langle a, b\rangle$ by the theorem on the continuity of a composite function, and therefore the right-hand side of (3.6) (and hence the left-hand side) has a continuous derivative with respect to $t$ on $\langle a, b\rangle$. For $t = t_0$, (3.6) gives $\varphi(t_0) = y^0$, i.e., $\varphi(t)$ is a solution of the Cauchy problem (3.1), (3.2). Note that, as usual, the derivative at an endpoint of the segment (if the endpoint belongs to it) is understood as the corresponding one-sided derivative. The lemma is proved.

Remark 3.1. Using the analogy with the one-dimensional case (see Chapter II) and the assertions proved above, one can prove the theorem on the existence and continuation of the solution of the Cauchy problem by constructing an iterative sequence converging to the solution of the integral equation (3.6) on some segment $[t_0 - h, t_0 + h]$. Here we give another proof of the existence (and uniqueness) theorem for the solution, based on the contraction mapping principle.
We do this in order to acquaint the reader with more modern methods of the theory, which will be used later, in the courses on integral equations and on the equations of mathematical physics. To carry out our plan we shall need a number of new concepts and auxiliary assertions, to which we now turn.

3.3. The concept of a metric space. The contraction mapping principle

The most important concept of a limit in mathematics is based on the concept of "proximity" of points, i.e., on the possibility of finding the distance between them. On the number axis the distance is the modulus of the difference of two numbers, in the plane it is the well-known formula for the Euclidean distance, and so on. Many facts of analysis do not use the algebraic properties of the elements but rely only on the concept of the distance between them. The development of this approach, i.e., the separation of the "essence" pertaining to the concept of a limit, leads to the concept of a metric space.

Definition 3.3. Let $X$ be a set of arbitrary nature, and let $\rho(x, y)$ be a real function of two variables $x, y \in X$ satisfying three axioms:
1) $\rho(x, y) \ge 0$ for all $x, y \in X$, and $\rho(x, y) = 0$ only for $x = y$;
2) $\rho(x, y) = \rho(y, x)$ (the axiom of symmetry);
3) $\rho(x, z) \le \rho(x, y) + \rho(y, z)$ (the triangle inequality).
In this case the set $X$ with the given function $\rho(x, y)$ is called a metric space (MS), and the function $\rho(x, y)\colon X \times X \to \mathbb{R}$ satisfying 1)–3) a metric, or distance.

Let us give some examples of metric spaces.

Example 3.1. Let $X = \mathbb{R}$ with the distance $\rho(x, y) = |x - y|$; we obtain the MS $\mathbb{R}$.

Example 3.2. Let $X = \mathbb{R}^n = \{(x_1, \ldots, x_n):\ x_i \in \mathbb{R},\ i = 1, \ldots, n\}$ be the set of ordered collections of $n$ real numbers $x = (x_1, \ldots, x_n)$ with the distance $\rho(x, y) = \sqrt{\sum_{k=1}^{n} (x_k - y_k)^2}$; we obtain the $n$-dimensional Euclidean space $\mathbb{R}^n$.

Example 3.3. Let $X = C([a, b];\ \mathbb{R}^n)$ be the set of all functions continuous on $[a, b]$ with values in $\mathbb{R}^n$, i.e., continuous vector functions, with the distance $\rho(f, g) = \max_{t \in [a, b]} |f(t) - g(t)|$, where $f = f(t) = (f_1(t), \ldots, f_n(t))$, $g = g(t) = (g_1(t), \ldots, g_n(t))$, and $|f - g| = \sqrt{\sum_{k=1}^{n} (f_k(t) - g_k(t))^2}$.

For Examples 3.1–3.3 the axioms of an MS are verified directly; we leave this as an exercise for the conscientious reader. As usual, if to each natural number $n$ an element $x_n \in X$ is assigned, we say that a sequence of points $x_n$ of $X$ is given.

Definition 3.4. A sequence of points $x_n$ of an MS $X$ is said to converge to a point $x \in X$ if $\lim_{n \to \infty} \rho(x_n, x) = 0$.

Definition 3.5. A sequence $x_n$ is called fundamental if for any $\varepsilon > 0$ there exists a natural number $N(\varepsilon)$ such that for all $n > N$ and $m > N$ the inequality $\rho(x_n, x_m) < \varepsilon$ holds.

Definition 3.6. An MS $X$ is called complete (a CMS) if every fundamental sequence in it converges to an element of this space.

The completeness of the spaces of Examples 3.1 and 3.2 is proved in the course of mathematical analysis. Let us prove the completeness of the space $X = C([a, b];\ \mathbb{R}^n)$ of Example 3.3. Let the sequence of vector functions $f_n(t)$ be fundamental in $X$. This means that for every $\varepsilon > 0$ there exists $N(\varepsilon) \in \mathbb{N}$ such that $\max_{t \in [a, b]} |f_m(t) - f_n(t)| < \varepsilon$ for all $m, n > N$. Hence the conditions of the Cauchy criterion for the uniform convergence on $[a, b]$ of a functional sequence are fulfilled, i.e., $f_n(t) \rightrightarrows f(t)$ as $n \to \infty$. As is known, the limit $f(t)$ in this case is a continuous function. Let us prove that $f(t)$ is the limit of $f_n(t)$ in the metric of the space $C([a, b];\ \mathbb{R}^n)$.
From the uniform convergence we obtain that for every ε > 0 there is a number N(ε) such that for all n > N and all t ∈ [a, b] the inequality |fₙ(t) − f(t)| < ε holds; and since the left-hand side is a continuous function of t, also max_{t∈[a,b]} |fₙ(t) − f(t)| ≤ ε. This is precisely convergence in C([a, b]; Rⁿ), so the completeness is established.

In conclusion we give an example of an MS that is not complete.

Example 3.4. Let X = Q be the set of rational numbers with the distance ρ(x, y) = |x − y|, the modulus of the difference of two numbers. If we take the sequence of decimal approximations of the number √2, i.e. x₁ = 1; x₂ = 1.4; x₃ = 1.41; …, then, as is known, lim_{n→∞} xₙ = √2 ∉ Q. Moreover, this sequence converges in R, hence it is fundamental in R, and therefore it is also fundamental in Q. Thus the sequence is fundamental in Q but has no limit lying in Q: the space is not complete.

Definition 3.7. Let X be a metric space. A mapping A: X → X is called a contraction mapping, or a contraction, if there exists α < 1 such that for any two points x, y ∈ X the inequality
  ρ(Ax, Ay) ≤ α ρ(x, y)  (3.7)
holds.

Definition 3.8. A point x* ∈ X is called a fixed point of the mapping A: X → X if Ax* = x*.

Remark 3.2. Every contraction mapping is continuous, i.e. it takes every convergent sequence xₙ → x, n → ∞, into a convergent sequence Axₙ → Ax, n → ∞, and the limit of the sequence into the limit of its image. Indeed, if A is a contraction operator, then putting y = xₙ → x, n → ∞, in (3.7), we obtain Axₙ → Ax as n → ∞.

Theorem 3.2 (contraction mapping principle). Let X be a complete metric space and let the mapping A: X → X be a contraction. Then A has a fixed point, and moreover a unique one. For the proof of this fundamental fact see the references.

Let us give a generalization of Theorem 3.2 that is often encountered in applications.

Theorem 3.3 (contraction mapping principle). Let X be a complete metric space and let the mapping A: X → X be such that the operator B = Aᵐ, for some m ∈ N, is a contraction. Then A has a fixed point, and moreover a unique one.

Proof. For m = 1 we obtain Theorem 3.2. Let m > 1. Consider B = Aᵐ, B: X → X, a contraction. By Theorem 3.2 the operator B has a unique fixed point x*. Since A and B commute, AB = BA, and since Bx* = x*, we have B(Ax*) = A(Bx*) = Ax*, i.e. y = Ax* is also a fixed point of B; and since such a point is unique by Theorem 3.2, y = x*, that is, Ax* = x*. Hence x* is a fixed point of the operator A. Let us prove uniqueness. Suppose x̃ ∈ X and Ax̃ = x̃; then Bx̃ = Aᵐx̃ = Aᵐ⁻¹(Ax̃) = Aᵐ⁻¹x̃ = … = x̃, i.e. x̃ is also a fixed point of B, whence x̃ = x*. The theorem is proved.

A particular case of a metric space is a normed linear space. Let us give a precise definition.

Definition 3.9. Let X be a linear space (real or complex) on which a numerical function ‖x‖ is defined, acting from X into R and satisfying the axioms:
1) for all x ∈ X, ‖x‖ ≥ 0, and ‖x‖ = 0 only for x = θ;
2) for all x ∈ X and all λ ∈ R (or C), ‖λx‖ = |λ| ‖x‖;
3) for all x, y ∈ X, ‖x + y‖ ≤ ‖x‖ + ‖y‖ (the triangle inequality).
Then X is called a normed space, and the function ‖x‖: X → R satisfying 1)-3) is called a norm.

In a normed space one can introduce the distance between elements by the formula ρ(x, y) = ‖x − y‖. The validity of the MS axioms is verified easily. If the resulting metric space is complete, then the corresponding normed space is called a Banach space.
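A small computational aside (an added illustration, not from the original text): the proof of Theorem 3.2 proceeds by iterating the map, xₙ₊₁ = Axₙ. The sketch below, assuming the illustrative contraction A(x) = cos x on a suitable closed subset of the complete space R, shows how quickly such iterations settle on the unique fixed point.

```python
import math

def iterate_to_fixed_point(A, x0, tol=1e-12, max_iter=1000):
    """Iterate x_{n+1} = A(x_n) until two successive points are within tol."""
    x = x0
    for n in range(max_iter):
        x_next = A(x)
        if abs(x_next - x) < tol:
            return x_next, n + 1
        x = x_next
    return x, max_iter

# cos maps [-1, 1] into itself and |sin x| <= sin(1) < 1 there, so it is a contraction
# on that complete set; the iterations converge to the unique x* with cos(x*) = x*.
x_star, steps = iterate_to_fixed_point(math.cos, x0=1.0)
print(x_star, steps)                    # about 0.739085..., reached in a few dozen steps
print(abs(math.cos(x_star) - x_star))   # the residual is tiny
```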
It is often possible to introduce a norm on the same linear space in different ways. In this connection the following concept arises.

Definition 3.10. Let X be a linear space and let ‖·‖₁ and ‖·‖₂ be two norms introduced on it. The norms ‖·‖₁ and ‖·‖₂ are called equivalent if there exist C₁ > 0 and C₂ > 0 such that C₁‖x‖₁ ≤ ‖x‖₂ ≤ C₂‖x‖₁ for all x ∈ X.

Remark 3.3. If ‖·‖₁ and ‖·‖₂ are two equivalent norms on X and the space X is complete in one of them, then it is complete in the other norm as well. This follows easily from the fact that a sequence xₙ ⊂ X that is fundamental with respect to ‖·‖₁ is also fundamental with respect to ‖·‖₂, and converges to the same element x ∈ X.

Remark 3.4. Theorem 3.2 (or 3.3) is often applied with the complete space taken to be a closed ball of a given space, B̄_r(a) = {x ∈ X : ρ(x, a) ≤ r}, where r > 0 and a ∈ X are fixed. Note that a closed ball in a CMS is itself a CMS with the same distance. We leave the proof of this fact to the reader as an exercise.

Remark 3.5. Above we established the completeness of the space from Example 3.3. Note that on the linear space X = C([0, T]; Rⁿ) one can introduce the norm ‖x‖ = max_{t∈[0,T]} |x(t)|, so that the resulting normed space is a Banach space. On the same set of vector functions continuous on [0, T] one can introduce an equivalent norm by the formula ‖x‖_α = max_{t∈[0,T]} |e^{−αt} x(t)| for any α ∈ R. For α > 0 the equivalence follows from the inequalities e^{−αT}|x(t)| ≤ e^{−αt}|x(t)| ≤ |x(t)| for all t ∈ [0, T], whence e^{−αT}‖x‖ ≤ ‖x‖_α ≤ ‖x‖. We shall use this property of equivalent norms to prove a theorem on the unique solvability of the Cauchy problem for linear (normal) systems.

3.4. Existence and uniqueness theorems for the solution of the Cauchy problem for normal systems

Consider the Cauchy problem (3.1)-(3.2), where the initial data (t₀, y⁰) ∈ G, and G ⊂ Rⁿ⁺¹ is the domain of definition of the vector function f(t, y). In this section we assume that G has the form G = [a, b] × D, where D ⊂ Rⁿ is a domain and the ball B̄_R(y⁰) = {y ∈ Rⁿ : |y − y⁰| ≤ R} lies entirely in D. The following theorem holds.

Theorem 3.4. Let the vector function f(t, y) ∈ C(G; Rⁿ), and let there exist M > 0 and L > 0 such that the following conditions hold:
1) |f(t, y)| ≤ M for all (t, y) ∈ G = [a, b] × D;
2) |f(t, y²) − f(t, y¹)| ≤ L |y² − y¹| for all (t, y¹), (t, y²) ∈ G.
Fix a number δ ∈ (0, 1) and let t₀ ∈ (a, b). Then there exists
  h = min{R/M; (1 − δ)/L; t₀ − a; b − t₀} > 0
such that there exists, and moreover a unique, solution y(t) of the Cauchy problem (3.1), (3.2) on the interval J_h = [t₀ − h, t₀ + h], and |y(t) − y⁰| ≤ R for all t ∈ J_h.

Proof. By Lemma 3.1, the Cauchy problem (3.1), (3.2) is equivalent to the integral equation (3.6) on the interval [a, b], and hence on J_h, where h was chosen above. Consider the Banach space X = C(J_h; Rⁿ), the set of continuous vector functions x(t) on the interval J_h with the norm ‖x‖ = max_{t∈J_h} |x(t)|, and introduce a closed set in X:
  S_R(y⁰) = {y(t) ∈ X : |y(t) − y⁰| ≤ R for all t ∈ J_h},
a closed ball in X. The operator A defined by the rule
  Ay = y⁰ + ∫_{t₀}^{t} f(τ, y(τ)) dτ, t ∈ J_h,
maps S_R(y⁰) into itself, since
  ‖Ay − y⁰‖ = max_{t∈J_h} |∫_{t₀}^{t} f(τ, y(τ)) dτ| ≤ hM ≤ R
by condition 1) of the theorem and the definition of h. Let us prove that A is a contraction operator on S_R.
Take y¹(t), y²(t) ∈ S_R(y⁰) arbitrarily and estimate the quantity
  ‖Ay² − Ay¹‖ = max_{t∈J_h} |∫_{t₀}^{t} [f(τ, y²(τ)) − f(τ, y¹(τ))] dτ| ≤ max_{t∈J_h} ∫_{t₀}^{t} |f(τ, y²(τ)) − f(τ, y¹(τ))| dτ ≤ hL ‖y² − y¹‖ = q ‖y² − y¹‖,
where q = hL ≤ 1 − δ < 1 by the conditions of the theorem. Note (see Remark 3.4) that the closed ball S_R(y⁰) in the Banach space X is a CMS. Therefore we may apply the contraction mapping principle (Theorem 3.2), by which there exists a unique solution y(t) ∈ X of the integral equation (3.6) on the interval J_h = [t₀ − h, t₀ + h]. The theorem is proved.

Remark 3.6. If t₀ = a or t₀ = b, the assertion of the theorem is preserved with small changes in the formula for h and in the interval J_h. Let us give these changes for the case t₀ = a. In this case the number h > 0 is chosen by the formula h = min{R/M; (1 − δ)/L; b − a}, and everywhere as the interval J_h one must take J_h = [t₀, t₀ + h] = [a, a + h]. All other conditions of the theorem do not change, and its proof, with the obvious changes of notation, is preserved. For the case t₀ = b, similarly, h = min{R/M; (1 − δ)/L; b − a} and J_h = [b − h, b].

Remark 3.7. In Theorem 3.4 the condition f(t, y) ∈ C(G; Rⁿ), where G = [a, b] × D, can be weakened by replacing it with the requirement that f(t, y) be continuous in the variable t for each y ∈ D, with conditions 1 and 2 preserved. The proof does not change.

Remark 3.8. It suffices that conditions 1 and 2 of Theorem 3.4 hold for all (t, y) ∈ [a, b] × B̄_R(y⁰); the constants M and L then depend, generally speaking, on y⁰ and R.

Under more stringent restrictions on the vector function f(t, y), similarly to Theorem 2.4, the existence and uniqueness theorem for the solution of the Cauchy problem (3.1), (3.2) holds on the whole interval [a, b].

Theorem 3.5. Let the vector function f(t, y) ∈ C(G; Rⁿ), where G = [a, b] × Rⁿ, and let there exist L > 0 such that the condition |f(t, y²) − f(t, y¹)| ≤ L |y² − y¹| holds for all (t, y¹), (t, y²) ∈ G. Then for any t₀ ∈ [a, b] and y⁰ ∈ Rⁿ there exists, and moreover a unique, solution of the Cauchy problem (3.1), (3.2) on [a, b].

Proof. Take arbitrary t₀ ∈ [a, b] and y⁰ ∈ Rⁿ and fix them. Represent the set G = [a, b] × Rⁿ in the form G = G⁻ ∪ G⁺, where G⁻ = [a, t₀] × Rⁿ and G⁺ = [t₀, b] × Rⁿ, assuming that t₀ ∈ (a, b); otherwise one of the stages of the proof will be absent.

Let us carry out the argument for the strip G⁺. On the interval [t₀, b] the Cauchy problem (3.1), (3.2) is equivalent to equation (3.6). Introduce the integral operator A: X → X, where X = C([t₀, b]; Rⁿ), by the formula
  Ay = y⁰ + ∫_{t₀}^{t} f(τ, y(τ)) dτ.
Then the integral equation (3.6) can be written in the form of the operator equation
  Ay = y.  (3.8)
If we prove that the operator equation (3.8) has a solution in the CMS X, then we obtain the solvability of the Cauchy problem on [t₀, b], or on [a, t₀] for G⁻. If this solution is unique, then by the equivalence the solution of the Cauchy problem will also be unique. We give two proofs of the unique solvability of equation (3.8).

Proof 1. Consider arbitrary vector functions y¹, y² ∈ X = C([t₀, b]; Rⁿ). Then for any t ∈ [t₀, b] the estimates hold:
  |Ay²(t) − Ay¹(t)| = |∫_{t₀}^{t} [f(τ, y²(τ)) − f(τ, y¹(τ))] dτ| ≤ L ∫_{t₀}^{t} |y²(τ) − y¹(τ)| dτ ≤ L (t − t₀) max_{τ∈[t₀,t]} |y²(τ) − y¹(τ)| ≤ L (t − t₀) ‖y² − y¹‖.
Recall that the norm in X is introduced as ‖x‖ = max_{τ∈[t₀,b]} |x(τ)|. From the obtained inequality we have
  |A²y²(t) − A²y¹(t)| = |∫_{t₀}^{t} [f(τ, Ay²(τ)) − f(τ, Ay¹(τ))] dτ| ≤ L ∫_{t₀}^{t} |Ay²(τ) − Ay¹(τ)| dτ ≤ L² ∫_{t₀}^{t} (τ − t₀) dτ · ‖y² − y¹‖ = L² (t − t₀)²/2! · ‖y² − y¹‖.
Continuing this process, one proves by induction that for every k ∈ N
  |Aᵏy²(t) − Aᵏy¹(t)| ≤ (L(t − t₀))ᵏ / k! · ‖y² − y¹‖.
Hence, finally, we obtain the estimate
  ‖Aᵏy² − Aᵏy¹‖ = max_{t∈[t₀,b]} |Aᵏy²(t) − Aᵏy¹(t)| ≤ (L(b − t₀))ᵏ / k! · ‖y² − y¹‖.
Since α(k) = (L(b − t₀))ᵏ / k! → 0 as k → ∞, there is k₀ such that α(k₀) < 1. Applying Theorem 3.3 with m = k₀, we obtain that A has a fixed point in X, and moreover a unique one.

Proof 2. In the Banach space X = C([t₀, b]; Rⁿ) we introduce a family of equivalent norms, for α > 0 (see Remark 3.5), by the formula ‖x‖_α = max_{t∈[t₀,b]} |e^{−αt} x(t)|. Let us show that α can be chosen so that the operator A in the space X with the norm ‖·‖_α is a contraction. Indeed, for α ≥ L,
  ‖Ay² − Ay¹‖_α = max_{t} e^{−αt} |∫_{t₀}^{t} [f(τ, y²(τ)) − f(τ, y¹(τ))] dτ|
  ≤ max_{t} e^{−αt} L ∫_{t₀}^{t} e^{ατ} e^{−ατ} |y²(τ) − y¹(τ)| dτ
  ≤ L max_{t} e^{−αt} ∫_{t₀}^{t} e^{ατ} dτ · ‖y² − y¹‖_α
  = L max_{t} e^{−αt} (e^{αt} − e^{αt₀})/α · ‖y² − y¹‖_α
  = (L/α)(1 − e^{−α(b − t₀)}) ‖y² − y¹‖_α.
Since α ≥ L, we have q = (L/α)(1 − e^{−α(b − t₀)}) < 1, and the operator A is a contraction (for example, with α = L). Thus it is proved that there exists, and moreover a unique, vector function ϕ⁺(t), the solution of the Cauchy problem (3.1), (3.2) on [t₀, b].

For the strip G⁻ = [a, t₀] × Rⁿ we reduce the Cauchy problem to the previous one by means of the linear change τ = 2t₀ − t. Indeed, for the vector function ỹ(τ) = y(2t₀ − τ) the Cauchy problem (3.1), (3.2) takes the form
  dỹ(τ)/dτ = −f(2t₀ − τ, ỹ(τ)) ≡ f̃(τ, ỹ(τ)), ỹ(t₀) = y⁰
on the interval τ ∈ [t₀, 2t₀ − a]. Therefore the previous reasoning can be applied with b = 2t₀ − a. Thus there exists, and moreover a unique, solution ỹ(τ) of the Cauchy problem on the whole interval τ ∈ [t₀, 2t₀ − a], and consequently ϕ⁻(t) = ỹ(2t₀ − t) is the solution of the Cauchy problem (3.1), (3.2) on [a, t₀].

Take the "gluing" of the vector functions ϕ⁻(t) and ϕ⁺(t), i.e. the vector function
  ϕ(t) = ϕ⁻(t) for t ∈ [a, t₀]; ϕ(t) = ϕ⁺(t) for t ∈ [t₀, b].
As in the proof of Theorem 2.4, we establish that ϕ(t) is a solution of the Cauchy problem (3.1), (3.2) on [a, b]. Its uniqueness follows from Corollary 3.1. The theorem is proved.

Remark 3.9. Assertion 3.1 gives a sufficient condition for the vector function f(t, y) to satisfy the Lipschitz condition in a domain G convex in y: namely, it suffices that all partial derivatives ∂fᵢ/∂yⱼ be continuous and bounded by some constant in G.

Analogously to the corollary of Theorem 2.4, we obtain the following assertion for normal systems.

Corollary 3.3. Let the vector function f(t, y) be defined and continuous in the open strip Q = {(t, y) : t ∈ (A, B), y ∈ Rⁿ}, where A and B may be the symbols −∞ and +∞ respectively. Suppose that the vector function f(t, y) satisfies in the strip Q the condition: there exists L(t) ∈ C(A, B) such that for all t ∈ (A, B) and all y¹, y² ∈ Rⁿ the inequality |f(t, y²) − f(t, y¹)| ≤ L(t) |y² − y¹| holds. Then for any initial data t₀ ∈ (A, B), y⁰ ∈ Rⁿ there exists, and moreover a unique, solution of the Cauchy problem (3.1), (3.2) on the whole interval (A, B).

The proof is carried out by repeating the corresponding arguments of Section 2.2; we leave it to the conscientious reader.

As further corollaries of the proved Theorem 3.5 we obtain the existence and uniqueness theorem for the solution of the Cauchy problem for a linear system. This is the problem of finding a vector function y(t) = (y₁(t), …, yₙ(t)) from the conditions
  dy(t)/dt = A(t) y(t) + f⁰(t), t ∈ [a, b],  (3.9)
  y(t₀) = y⁰,  (3.10)
where A(t) = (aᵢⱼ(t)) is an n × n matrix and f⁰(t) is a vector function of the variable t; t₀ ∈ [a, b] and y⁰ ∈ Rⁿ are given.

Theorem 3.6. Let aᵢⱼ(t) ∈ C[a, b], f⁰(t) ∈ C([a, b]; Rⁿ), t₀ ∈ [a, b], y⁰ ∈ Rⁿ be given.
Then there exists, and moreover a unique, solution of the Cauchy problem (3.9), (3.10) on the whole interval [a, b].

Proof. Let us check that the conditions of Theorem 3.5 are satisfied for the function f(t, y) = A(t)y + f⁰(t). First, f(t, y) ∈ C(G; Rⁿ), where G = [a, b] × Rⁿ, as the sum of two continuous functions. Second (see inequality (3.5)),
  |f(t, y²) − f(t, y¹)| = |A(t)(y² − y¹)| ≤ ‖A(t)‖₂ |y² − y¹| ≤ L |y² − y¹|,
since ‖A(t)‖₂ = (Σ_{i,j=1}^{n} aᵢⱼ(t)²)^{1/2} is a function continuous on [a, b], and hence bounded there by some constant L. Then by Theorem 3.5 we obtain the assertion being proved.

Theorem 3.7. Let aᵢⱼ(t) ∈ C(R), f⁰(t) ∈ C(R; Rⁿ) be given. Then for any initial data t₀ ∈ R, y⁰ ∈ Rⁿ there exists, and moreover a unique, solution of the Cauchy problem (3.9), (3.10) on the whole real line.

Proof. Let us check that all conditions of the corollary of Theorem 3.5 with A = −∞, B = +∞ are satisfied. The vector function f(t, y) = A(t)y + f⁰(t) is continuous in the strip Q = R × Rⁿ as a function of (n + 1) variables. In addition,
  |f(t, y²) − f(t, y¹)| ≤ ‖A(t)‖₂ |y² − y¹| ≡ L(t) |y² − y¹|,
where L(t) is, by the condition of the theorem, a function continuous on (A, B) = (−∞, +∞). Thus all conditions of the corollary are satisfied, and the theorem is proved.

Chapter IV. Some classes of ordinary differential equations solvable in quadratures

In a number of cases a differential equation can be solved in quadratures, i.e. an explicit formula can be obtained for its solution. In such cases the method of solution is, as a rule, the following.
1. Assuming that a solution exists, one finds a formula by which the solution is expressed.
2. The existence of a solution is then proved by direct verification, i.e. by substituting the formula found into the original equation.
3. Using additional data (for example, prescribing Cauchy initial data), one singles out a specific solution.
4.1. Separable equations

In this section we apply the method already used above to the solution of separable equations, i.e. equations of the form
  y′(x) = f₁(x) f₂(y), x ∈ ⟨a, b⟩, y ∈ ⟨c, d⟩.  (4.1)
We shall assume that f₁(x) ∈ C(⟨a, b⟩), f₂(y) ∈ C(⟨c, d⟩), and f₂(y) ≠ 0 on ⟨c, d⟩; consequently, by the continuity of the function f₂(y), it keeps its sign on ⟨c, d⟩.

So, suppose that in a neighbourhood U(x₀) of a point x₀ ∈ ⟨a, b⟩ there exists a solution y = ϕ(x) of equation (4.1). Then we have the identity dy/dx = f₁(x) f₂(y), y = ϕ(x), x ∈ U(x₀). But then the differentials are equal:
  dy / f₂(y) = f₁(x) dx
(we have taken into account that f₂(y) ≠ 0). From the equality of the differentials follows the equality of the antiderivatives up to a constant term:
  ∫ dy / f₂(y) = ∫ f₁(x) dx + C.  (4.2)
After introducing the notation F₂(y) = ∫ dy / f₂(y), F₁(x) = ∫ f₁(x) dx, we obtain the equality
  F₂(y) = F₁(x) + C.  (4.3)
Note that F₂′(y) = 1/f₂(y) ≠ 0, so the inverse function theorem can be applied to relation (4.3), by virtue of which equality (4.3) can be solved for y, giving the formula
  y(x) = F₂⁻¹(F₁(x) + C),  (4.4)
valid in a neighbourhood of the point x₀.

Let us show that equality (4.4) gives a solution of equation (4.1) in a neighbourhood of x₀. Indeed, using the theorem on differentiation of the inverse function and taking into account the relation F₁′(x) = f₁(x), we obtain
  y′(x) = [dF₂⁻¹(z)/dz]|_{z = F₁(x)+C} · F₁′(x) = [1 / F₂′(y)]|_{y = y(x)} · F₁′(x) = f₂(y(x)) f₁(x),
whence it follows that the function y(x) from (4.4) is a solution of equation (4.1).

Consider now the Cauchy problem for equation (4.1) with the initial condition
  y(x₀) = y₀.  (4.5)
Formula (4.2) can be written in the form
  ∫_{y₀}^{y} dξ / f₂(ξ) = ∫_{x₀}^{x} f₁(x) dx + C.
Substituting the initial condition (4.5) here, we find that C = 0, i.e. the solution of the Cauchy problem is determined from the relation
  ∫_{y₀}^{y} dξ / f₂(ξ) = ∫_{x₀}^{x} f₁(x) dx.  (4.6)
Obviously, it is determined uniquely. Thus, the general solution of equation (4.1) is given by formula (4.4), and the solution of the Cauchy problem (4.1), (4.5) is found from relation (4.6).

Remark 4.1. If f₂(y) = 0 for some y = yⱼ (j = 1, 2, …, s), then, obviously, the functions y(x) ≡ yⱼ, j = 1, 2, …, s, are also solutions of equation (4.1), which is proved by direct substitution of these functions into equation (4.1).

Remark 4.2. For equation (4.1) the general solution is determined from the relation
  F₂(y) − F₁(x) = C.  (4.7)
Thus, the left-hand side of relation (4.7) is constant on every solution of equation (4.1). Relations of the type (4.7) can also be written down when solving other ODEs. Such relations are usually called integrals (general integrals) of the corresponding ODE. Let us give a precise definition.

Definition 4.1. Consider the equation
  y′(x) = f(x, y).  (4.8)
The relation
  Φ(x, y) = C,  (4.9)
where Φ(x, y) is a function of class C¹, is called a general integral of equation (4.8) if this relation is not satisfied identically but is satisfied on every solution of equation (4.8). For each particular value of C ∈ R we obtain a particular integral. The general solution of equation (4.8) is obtained from the general integral (4.9) by means of the implicit function theorem.

Example 4.1. Consider the equation
  y′(x) = x / y  (4.10)
and the initial condition
  y(2) = 4.  (4.11)
Applying to equation (4.10) the method of separation of variables described above, we obtain y dy = x dx, whence we find the general integral of equation (4.10): y² − x² = C. The general solution of equation (4.10) is written by the formula
  y = √(C + x²),
and the solution of the Cauchy problem (4.10), (4.11) by the formula
  y = √(12 + x²).
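As a quick sanity check of Example 4.1 (an added sketch, not part of the original text), the following sympy fragment solves the separable equation symbolically and confirms that y = √(12 + x²) satisfies both (4.10) and the data (4.11).

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# Equation (4.10): y'(x) = x / y(x), with the Cauchy data y(2) = 4 from (4.11).
ode = sp.Eq(y(x).diff(x), x / y(x))
print(sp.dsolve(ode, y(x), ics={y(2): 4}))   # expected: Eq(y(x), sqrt(x**2 + 12))

# Direct verification of the candidate obtained from relation (4.6).
candidate = sp.sqrt(x**2 + 12)
assert sp.simplify(candidate.diff(x) - x / candidate) == 0
assert candidate.subs(x, 2) == 4
```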
4.2. Linear first-order ODEs

A linear first-order ODE is an equation of the form
  y′(x) + p(x) y(x) = q(x), x ∈ ⟨a, b⟩.  (4.12)
If q(x) ≢ 0, the equation is called inhomogeneous. If q(x) ≡ 0, the equation is called homogeneous:
  y′(x) + p(x) y(x) = 0.  (4.12₀)

Theorem 4.1. 1) If y₁(x), y₂(x) are solutions of the homogeneous equation (4.12₀) and α, β are arbitrary numbers, then the function y(x) ≡ αy₁(x) + βy₂(x) is also a solution of equation (4.12₀).
2) For the general solution of the inhomogeneous equation (4.12) the formula
  y_gi = y_gh + y_pi  (4.13)
holds; here y_gi is the general solution of the inhomogeneous equation (4.12), y_pi is a particular solution of the inhomogeneous equation (4.12), and y_gh is the general solution of the homogeneous equation (4.12₀).

Proof. The first assertion of the theorem is proved by direct verification: we have
  y′ ≡ αy₁′ + βy₂′ = −αp(x)y₁ − βp(x)y₂ = −p(x)(αy₁ + βy₂) = −p(x)y.
Let us prove the second assertion. Let y₀ be an arbitrary solution of equation (4.12₀); then y₀′ = −p(x)y₀. On the other hand, y_pi′ = −p(x)y_pi + q(x). Consequently,
  (y₀ + y_pi)′ = −p(x)(y₀ + y_pi) + q(x),
so y ≡ y₀ + y_pi is a solution of equation (4.12). Thus formula (4.13) gives solutions of the inhomogeneous equation (4.12). Let us show that all solutions of equation (4.12) can be obtained by this formula. Indeed, let ŷ(x) be a solution of equation (4.12). Put ỹ(x) = ŷ(x) − y_pi. We have
  ỹ′(x) = ŷ′(x) − y_pi′(x) = −p(x)ŷ(x) + q(x) + p(x)y_pi(x) − q(x) = −p(x)(ŷ(x) − y_pi(x)) = −p(x)ỹ(x).
Thus ỹ(x) is a solution of the homogeneous equation (4.12₀), and we have ŷ(x) = ỹ(x) + y_pi, which corresponds to formula (4.13). The theorem is proved.

Below we consider Cauchy problems for equations (4.12) and (4.12₀) with the initial condition
  y(x₀) = y₀, x₀ ∈ ⟨a, b⟩.  (4.14)
Concerning the functions p(x) and q(x) from (4.12) we shall assume that p(x), q(x) ∈ C(⟨a, b⟩).

Remark 4.3. Put F(x, y) = −p(x)y + q(x). Then, by the conditions imposed above on p(x) and q(x), we have F(x, y), ∂F(x, y)/∂y ∈ C(G), where G = ⟨a, b⟩ × R¹, and consequently for the Cauchy problem (4.12), (4.14) the existence and uniqueness theorems proved in Chapter 2 are valid. In Theorems 4.2 and 4.3 proved below, explicit formulas for the solutions of equations (4.12₀) and (4.12) will be obtained, and it will be shown that these solutions exist on the whole interval ⟨a, b⟩.

Consider first the homogeneous equation (4.12₀).

Theorem 4.2. Let p(x) ∈ C(⟨a, b⟩). Then the following assertions hold:
1) any solution of equation (4.12₀) is defined on the whole interval ⟨a, b⟩;
2) the general solution of the homogeneous equation (4.12₀) is given by the formula
  y(x) = C e^{−∫ p(x) dx},  (4.15)
where C is an arbitrary constant;
3) the solution of the Cauchy problem (4.12₀), (4.14) is given by the formula
  y(x) = y₀ e^{−∫_{x₀}^{x} p(ξ) dξ}.  (4.16)

Proof. Let us derive formula (4.15) in accordance with the method given at the beginning of the chapter. First of all note that the function y ≡ 0 is a solution of equation (4.12₀). Let y(x) be a solution of equation (4.12₀) with y ≢ 0 on ⟨a, b⟩. Then there is x₁ ∈ ⟨a, b⟩ such that y(x₁) = y₀ ≠ 0. Consider equation (4.12₀) in a neighbourhood of the point x₁. It is a separable equation, and y(x) ≠ 0 in some neighbourhood of x₁. Then, following the results of the previous section, we obtain an explicit formula for the solution:
  ∫ dy / y = −∫ p(x) dx, ln|y| = −∫ p(x) dx + c,
whence y(x) = C e^{−∫ p(x) dx}, C ≠ 0, which corresponds to formula (4.15). Moreover, the solution y ≡ 0 is also given by formula (4.15) with C = 0. By direct substitution into equation (4.12₀) we verify that the function y(x) given by formula (4.15) for any C is a solution of (4.12₀), and moreover on the whole interval ⟨a, b⟩.

Let us show that formula (4.15) gives the general solution of equation (4.12₀). Indeed, let ŷ(x) be an arbitrary solution of equation (4.12₀). If ŷ(x) ≠ 0 on ⟨a, b⟩, then, repeating the previous arguments, we obtain that this function is given by formula (4.15) for some C: namely, if ŷ(x₀) = ŷ₀, then ŷ(x) = ŷ₀ e^{−∫_{x₀}^{x} p(ξ) dξ}. If, however, there is x₁ ∈ ⟨a, b⟩ such that ŷ(x₁) = 0, then the Cauchy problem for equation (4.12₀) with the initial condition y(x₁) = 0 has the two solutions ŷ(x) and y(x) ≡ 0. By Remark 4.3 the solution of the Cauchy problem is unique, hence ŷ(x) ≡ 0, and consequently it is given by formula (4.15) with C = 0.

Thus it is proved that the general solution of equation (4.12₀) is defined on the whole ⟨a, b⟩ and is given by formula (4.15). Formula (4.16) is obviously a particular case of formula (4.15), so the function y(x) defined by it is a solution of equation (4.12₀). In addition, y(x₀) = y₀ e^{−∫_{x₀}^{x₀} p(ξ) dξ} = y₀, so formula (4.16) indeed gives the solution of the Cauchy problem (4.12₀), (4.14). Theorem 4.2 is proved.

Consider now the inhomogeneous equation (4.12).

Theorem 4.3. Let p(x), q(x) ∈ C(⟨a, b⟩). Then the following assertions hold:
1) any solution of equation (4.12) is defined on the whole interval ⟨a, b⟩;
2) the general solution of the inhomogeneous equation (4.12) is given by the formula
  y(x) = C e^{−∫ p(x) dx} + e^{−∫ p(x) dx} ∫ q(x) e^{∫ p(x) dx} dx,  (4.17)
where C is an arbitrary constant;
3) the solution of the Cauchy problem (4.12), (4.14) is given by the formula
  y(x) = y₀ e^{−∫_{x₀}^{x} p(ξ) dξ} + ∫_{x₀}^{x} q(ξ) e^{−∫_{ξ}^{x} p(θ) dθ} dξ.  (4.18)

Proof. In accordance with Theorem 4.1 and formula (4.13), y_gi = y_gh + y_pi, it is required to find a particular solution of equation (4.12). To find it we apply the so-called method of variation of an arbitrary constant.
The essence of this method is the following: we take formula (4.15), replace the constant C in it by an unknown function C(x), and look for a particular solution of equation (4.12) in the form
  y_pi(x) = C(x) e^{−∫ p(x) dx}.  (4.19)
Substitute y_pi(x) from (4.19) into equation (4.12) and find C(x) so that this equation is satisfied. We have
  y_pi′(x) = C′(x) e^{−∫ p(x) dx} − C(x) e^{−∫ p(x) dx} p(x).
Substituting into (4.12), we obtain
  C′(x) e^{−∫ p(x) dx} − C(x) p(x) e^{−∫ p(x) dx} + C(x) p(x) e^{−∫ p(x) dx} = q(x),
whence C′(x) = q(x) e^{∫ p(x) dx}. Integrating the last relation and substituting the C(x) found into formula (4.19), we obtain
  y_pi(x) = e^{−∫ p(x) dx} ∫ q(x) e^{∫ p(x) dx} dx.
In addition, by Theorem 4.2, y_gh = C e^{−∫ p(x) dx}. Therefore, using formula (4.13) from Theorem 4.1, we obtain
  y(x) = y_gh + y_pi = C e^{−∫ p(x) dx} + e^{−∫ p(x) dx} ∫ q(x) e^{∫ p(x) dx} dx,
which coincides with formula (4.17). Obviously, formula (4.17) gives a solution on the whole interval ⟨a, b⟩.

Finally, the solution of the Cauchy problem (4.12), (4.14) is given by the formula
  y(x) = y₀ e^{−∫_{x₀}^{x} p(ξ) dξ} + e^{−∫_{x₀}^{x} p(θ) dθ} ∫_{x₀}^{x} q(ξ) e^{∫_{x₀}^{ξ} p(θ) dθ} dξ.  (4.20)
Indeed, formula (4.20) is a particular case of formula (4.17) with C = y₀, so it gives a solution of equation (4.12). In addition,
  y(x₀) = y₀ e^{−∫_{x₀}^{x₀} p(ξ) dξ} + e^{−∫_{x₀}^{x₀} p(θ) dθ} ∫_{x₀}^{x₀} q(ξ) e^{∫_{x₀}^{ξ} p(θ) dθ} dξ = y₀,
so the initial data (4.14) are satisfied. Let us bring formula (4.20) to the form (4.18). Indeed, from (4.20) we have
  y(x) = y₀ e^{−∫_{x₀}^{x} p(ξ) dξ} + ∫_{x₀}^{x} q(ξ) e^{∫_{x₀}^{ξ} p(θ) dθ − ∫_{x₀}^{x} p(θ) dθ} dξ = y₀ e^{−∫_{x₀}^{x} p(ξ) dξ} + ∫_{x₀}^{x} q(ξ) e^{−∫_{ξ}^{x} p(θ) dθ} dξ,
which coincides with formula (4.18). Theorem 4.3 is proved.

Corollary (an estimate for the solution of the Cauchy problem for a linear equation). Let x₀ ∈ ⟨a, b⟩, p(x), q(x) ∈ C(⟨a, b⟩), with |p(x)| ≤ K and |q(x)| ≤ M for all x ∈ ⟨a, b⟩. Then for the solution of the Cauchy problem (4.12), (4.14) the estimate
  |y(x)| ≤ |y₀| e^{K|x − x₀|} + (M/K)(e^{K|x − x₀|} − 1)  (4.21)
holds.

Proof. Let first x ≥ x₀. By virtue of (4.18) we have
  |y(x)| ≤ |y₀| e^{∫_{x₀}^{x} K dξ} + ∫_{x₀}^{x} M e^{∫_{ξ}^{x} K dθ} dξ = |y₀| e^{K(x − x₀)} + M ∫_{x₀}^{x} e^{K(x − ξ)} dξ = |y₀| e^{K(x − x₀)} + (M/K)(e^{K(x − x₀)} − 1) = |y₀| e^{K|x − x₀|} + (M/K)(e^{K|x − x₀|} − 1).
Now let x < x₀. Then, similarly,
  |y(x)| ≤ |y₀| e^{∫_{x}^{x₀} K dξ} + ∫_{x}^{x₀} M e^{∫_{x}^{ξ} K dθ} dξ = |y₀| e^{K(x₀ − x)} + (M/K)(e^{K(x₀ − x)} − 1) = |y₀| e^{K|x − x₀|} + (M/K)(e^{K|x − x₀|} − 1).
Thus the estimate (4.21) holds for all x ∈ ⟨a, b⟩.

Example 4.2. Let us solve the equation
  y′ − y/x = x².
First we solve the homogeneous equation:
  y′ − y/x = 0, dy/y = dx/x, ln|y| = ln|x| + c, y = Cx.
We look for a solution of the inhomogeneous equation by the method of variation of an arbitrary constant:
  y_pi = C(x) x, C′x + C − C = x², C′ = x, C(x) = x²/2,
whence y_pi = x³/2, and the general solution of the original equation is
  y = Cx + x³/2.
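A brief symbolic check of Example 4.2 (an added illustration, not from the original text): sympy reproduces the general solution y = Cx + x³/2 and evaluates formula (4.18) for a concrete, purely illustrative choice of initial data x₀ = 1, y₀ = 1, for which the answer is y = x/2 + x³/2.

```python
import sympy as sp

x, xi, theta = sp.symbols('x xi theta', positive=True)
y = sp.Function('y')

# Example 4.2: y' - y/x = x**2, a linear equation with p(x) = -1/x, q(x) = x**2.
general = sp.dsolve(sp.Eq(y(x).diff(x) - y(x)/x, x**2), y(x))
print(general)                       # expected: y(x) = C1*x + x**3/2

# Formula (4.18) evaluated for the illustrative initial data x0 = 1, y0 = 1.
p = lambda s: -1/s
q = lambda s: s**2
x0, y0 = 1, 1
sol = (y0 * sp.exp(-sp.integrate(p(xi), (xi, x0, x)))
       + sp.integrate(q(xi) * sp.exp(-sp.integrate(p(theta), (theta, xi, x))),
                      (xi, x0, x)))
print(sp.simplify(sol))              # expected: x**3/2 + x/2, so that y(1) = 1 as required
```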
4.3. Homogeneous equations

A homogeneous equation is an equation of the form
  y′ = f(y/x), (x, y) ∈ G,  (4.22)
where G is some domain in R². We shall assume that f(t) is a continuous function and x ≠ 0 for (x, y) ∈ G. A homogeneous equation is reduced to a separable equation by the substitution y = xz, where z(x) is the new unknown function. By this substitution y′ = xz′ + z. Substituting into equation (4.22), we obtain xz′ + z = f(z), whence
  z′(x) = (1/x)(f(z) − z).  (4.23)
Equation (4.23) is a particular case of the separable equation considered in Section 4.1. Let z = ϕ(x) be a solution of equation (4.23). Then the function y = xϕ(x) is a solution of the original equation (4.22). Indeed,
  y′ = xϕ′(x) + ϕ(x) = x · (1/x)(f(ϕ(x)) − ϕ(x)) + ϕ(x) = f(ϕ(x)) = f(xϕ(x)/x) = f(y(x)/x).

Example 4.3. Let us solve the equation
  y′ = y/x − e^{y/x}.
Put y = zx. Then xz′ + z = z − e^{z}, whence xz′ = −e^{z}, e^{−z} dz = −dx/x, e^{−z} = ln|Cx|, C ≠ 0, so z = −ln ln(Cx) and
  y(x) = −x ln ln(Cx), C ≠ 0.

4.4. The Bernoulli equation

A Bernoulli equation is an equation of the form
  y′ = a(x) y + b(x) y^α, α ≠ 0, α ≠ 1, x ∈ ⟨a, b⟩.  (4.24)
For α = 0 or α = 1 we obtain a linear equation, which was considered in Section 4.2. We shall assume that a(x), b(x) ∈ C(⟨a, b⟩).

Remark 4.4. If α > 0, then obviously the function y(x) ≡ 0 is a solution of equation (4.24).

To solve the Bernoulli equation (4.24) with α ≠ 0, α ≠ 1 we divide both sides of the equation by y^α. For α > 0 one must take into account that, by Remark 4.4, the function y(x) ≡ 0 is a solution of equation (4.24) which is lost in such a division; therefore it will have to be added to the general solution later. After the division we obtain the relation
  y^{−α} y′ = a(x) y^{1−α} + b(x).
Introduce the new unknown function z = y^{1−α}; then z′ = (1 − α) y^{−α} y′, and hence we arrive at the equation for z
  z′ = (1 − α) a(x) z + (1 − α) b(x).  (4.25)
Equation (4.25) is a linear equation. Such equations were considered in Section 4.2, where a general solution formula was obtained, by virtue of which the solution z(x) of equation (4.25) is written in the form
  z(x) = C e^{−∫ (α − 1) a(x) dx} + (1 − α) e^{−∫ (α − 1) a(x) dx} ∫ b(x) e^{∫ (α − 1) a(x) dx} dx.  (4.26)
Then the function y(x) = z^{1/(1−α)}(x), where z(x) is defined in (4.26), is a solution of the Bernoulli equation (4.24). Moreover, as noted above, for α > 0 the function y(x) ≡ 0 is also a solution.

Example 4.4. Let us solve the equation
  y′ + 2y = y² eˣ.  (4.27)
Divide equation (4.27) by y² and make the substitution z = 1/y. As a result we obtain the linear inhomogeneous equation
  z′ − 2z = −eˣ.  (4.28)
First we solve the homogeneous equation: z′ − 2z = 0, dz/z = 2 dx, ln|z| = 2x + c, z = Ce^{2x}, C ∈ R¹. We seek a solution of the inhomogeneous equation (4.28) by the method of variation of an arbitrary constant:
  z_pi = C(x) e^{2x}, C′e^{2x} + 2Ce^{2x} − 2Ce^{2x} = −eˣ, C′ = −e^{−x}, C(x) = e^{−x},
whence z_pi = eˣ, and the general solution of equation (4.28) is
  z(x) = Ce^{2x} + eˣ.
Therefore the solution of the Bernoulli equation (4.27) is written in the form
  y(x) = 1 / (eˣ + Ce^{2x}).
In addition, the function y(x) ≡ 0 is also a solution of equation (4.27); we lost this solution when dividing the equation by y².
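As a small symbolic check of Example 4.4 (an added illustration, not from the original text), sympy confirms the final family y = 1/(eˣ + Ce^{2x}); the dsolve call is included only for comparison, since its choice of the arbitrary constant may differ in sign.

```python
import sympy as sp

x, C = sp.symbols('x C')
y = sp.Function('y')

# Bernoulli equation (4.27): y' + 2*y = y**2 * exp(x).
bernoulli = sp.Eq(y(x).diff(x) + 2*y(x), y(x)**2 * sp.exp(x))
print(sp.dsolve(bernoulli, y(x)))    # expected (up to the constant's sign): y = 1/(C1*exp(2*x) + exp(x))

# Direct verification of the family y = 1/(exp(x) + C*exp(2*x)) from the text.
cand = 1 / (sp.exp(x) + C*sp.exp(2*x))
residual = cand.diff(x) + 2*cand - cand**2 * sp.exp(x)
assert sp.simplify(residual) == 0
```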
4.5. Equations in total differentials

We consider an equation in differentials
  M(x, y) dx + N(x, y) dy = 0, (x, y) ∈ G,  (4.29)
where G is some domain in R². Such an equation is called an equation in total differentials if there exists a function F(x, y) ∈ C¹(G), called a potential, such that
  dF(x, y) = M(x, y) dx + N(x, y) dy, (x, y) ∈ G.
For simplicity we assume that M(x, y), N(x, y) ∈ C¹(G) and that the domain G is simply connected. Under these assumptions it is proved in the course of mathematical analysis (see, for example, the references) that a potential F(x, y) for equation (4.29) exists (i.e. (4.29) is an equation in total differentials) if and only if
  M_y(x, y) = N_x(x, y) for all (x, y) ∈ G.
Moreover,
  F(x, y) = ∫_{(x₀, y₀)}^{(x, y)} M(x, y) dx + N(x, y) dy,  (4.30)
where (x₀, y₀) is some fixed point of G, (x, y) is the current point of G, and the curvilinear integral is taken along any curve connecting the points (x₀, y₀) and (x, y) and lying entirely in the domain G. If equation (4.29) is an equation in total differentials, then its general integral is given by the relation F(x, y) = C, where F is the potential (4.30).
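To illustrate the criterion M_y = N_x and formula (4.30) (an added sketch, not part of the original text), the fragment below takes the illustrative exact equation (2xy + 1) dx + (x² + 2y) dy = 0, checks exactness, and recovers a potential by integrating along a straight path from the base point (0, 0), so that the general integral is F(x, y) = C.

```python
import sympy as sp

x, y, t = sp.symbols('x y t')

# An illustrative exact equation: M dx + N dy = 0 with
M = 2*x*y + 1
N = x**2 + 2*y

# Criterion: M_y = N_x on a simply connected domain.
assert sp.simplify(sp.diff(M, y) - sp.diff(N, x)) == 0

# Formula (4.30): integrate along the straight path (t*x, t*y), t in [0, 1],
# from the base point (0, 0) to the current point (x, y).
integrand = M.subs({x: t*x, y: t*y}) * x + N.subs({x: t*x, y: t*y}) * y
F = sp.integrate(integrand, (t, 0, 1))
print(sp.simplify(F))    # expected: x**2*y + x + y**2, so the general integral is F(x, y) = C

# Sanity check that dF = M dx + N dy.
assert sp.simplify(sp.diff(F, x) - M) == 0 and sp.simplify(sp.diff(F, y) - N) == 0
```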

This course of lectures has been delivered for over 10 years to students of theoretical and applied mathematics at the Far Eastern State University. It complies with the second-generation educational standard for these specialties. Recommended for students and master's students of mathematical specialties.

Cauchy's theorem on the existence and uniqueness of a solution to the Cauchy problem for a first-order equation.
In this section, by imposing certain restrictions on the right-hand side of a first-order differential equation, we prove the existence and uniqueness of the solution determined by the initial data (x0, y0). The first proof of the existence of a solution to differential equations is due to Cauchy; the proof below is given by Picard; it is done using the method of successive approximations.

TABLE OF CONTENTS
1. Equations of the first order
1.0. Introduction
1.1. Separable Equations
1.2. Homogeneous equations
1.3. Generalized homogeneous equations
1.4. Linear equations of the first order and reduced to them
1.5. Bernoulli's equation
1.6. Riccati equation
1.7. Total Differential Equation
1.8. Integrating factor. The simplest cases of finding the integrating factor
1.9. Equations not resolved for the derivative
1.10. Cauchy's theorem on the existence and uniqueness of a solution to the Cauchy problem for a first-order equation
1.11. Special points
1.12. Special solutions
2. Equations of higher orders
2.1. Basic concepts and definitions
2.2. Types of n-order equations solvable by quadratures
2.3. Intermediate integrals. Equations admitting order reductions
3. Linear differential equations of the nth order
3.1. Basic concepts
3.2. Linear homogeneous differential equations of the nth order
3.3. Lowering the order of a linear homogeneous equation
3.4. Inhomogeneous linear equations
3.5. Reduction of order in a linear inhomogeneous equation
4. Linear equations with constant coefficients
4.1. Homogeneous linear equation with constant coefficients
4.2. Inhomogeneous linear equations with constant coefficients
4.3. Second-order linear equations with oscillating solutions
4.4. Integration by power series
5. Linear systems
5.1. Inhomogeneous and homogeneous systems. Some properties of solutions to linear systems
5.2. Necessary and sufficient conditions for linear independence of solutions of a linear homogeneous system
5.3. The existence of a fundamental matrix. Construction of a general solution to a linear homogeneous system
5.4. Construction of the entire set of fundamental matrices of a linear homogeneous system
5.5. Inhomogeneous systems. Construction of a general solution by the method of variation of arbitrary constants
5.6. Linear homogeneous systems with constant coefficients
5.7. Some information from the theory of functions of matrices
5.8. Construction of the fundamental matrix of a system of linear homogeneous equations with constant coefficients in the general case
5.9. Existence theorem and theorems on functional properties of solutions of normal systems of first order differential equations
6. Elements of the theory of stability
6.1
6.2. The simplest types of rest points
7. Partial differential equations of the 1st order
7.1. Linear homogeneous partial differential equation of the 1st order
7.2. First order inhomogeneous linear partial differential equation
7.3. System of two partial differential equations with one unknown function
7.4. Pfaff's equation
8. Variants of control tasks
8.1. Examination work No. 1
8.2. Examination work No. 2
8.3. Examination work No. 3
8.4. Examination work No. 4
8.5. Examination work No. 5
8.6. Examination work No. 6
8.7. Examination work No. 7
8.8. Examination work No. 8.



"LECTURES ON ORDINARY DIFFERENTIAL EQUATIONS PART 1. ELEMENTS OF GENERAL THEORY The textbook sets out the provisions that form the basis of the theory of ordinary differential equations: ..."


A. E. Mamontov

LECTURES ON ORDINARY DIFFERENTIAL EQUATIONS

ELEMENTS OF A GENERAL THEORY

The tutorial sets out the provisions that make up the basis of the theory of ordinary differential equations: the concept of solutions, their existence, uniqueness, and dependence on parameters. Also (in § 3) some attention is paid to the "explicit" solution of some classes of equations. The manual is intended for in-depth study of the course "Differential Equations" by students studying at the Faculty of Mathematics of the Novosibirsk State Pedagogical University.

UDC 517.91 ББК В161.61

Preface

The textbook is intended for students of the mathematical faculty of the Novosibirsk State Pedagogical University who want to study the compulsory course "Differential Equations" in an extended volume. The basic concepts and results that form the foundation of the theory of ordinary differential equations are offered to the readers' attention: the concepts of solutions, theorems on their existence, uniqueness, and dependence on parameters. The material is presented in the form of a logically inseparable text in §§ 1, 2, 4, 5. Also (in § 3, which stands somewhat apart and temporarily interrupts the main thread of the course), the most popular methods of "explicitly" finding solutions of some classes of equations are briefly considered. On a first reading § 3 can be skipped without significant damage to the logical structure of the course.

An important role is played by the exercises, which are included in the text in large numbers. The reader is strongly advised to solve them immediately, while the material is fresh, which guarantees its assimilation and serves as a self-check. Moreover, quite often these exercises fill in the logical fabric; that is, without solving them, not all propositions will have been rigorously proved.

In square brackets in the middle of the text there are remarks that play the role of comments (extended or side explanations). Lexically, these fragments interrupt the main text (that is, for a coherent reading they should be "ignored"), but they are still needed as explanations. In other words, these fragments should be perceived as if they were set in the margins.

The text also contains separately marked "notes for the teacher"; they can be omitted by students on reading, but are useful for a teacher who will use the manual, for example when giving lectures, since they help to better understand the logic of the course and indicate the direction of possible improvements (extensions) of the course. However, students' mastering of these remarks can only be welcomed.



A similar role is played by “reasoning for the teacher” - they provide, in an extremely concise form, proof of some of the propositions offered to the reader as exercises.

The most common (key) terms are used in the form of abbreviations, a list of which is given at the end for convenience. There is also a list of mathematical notation found in the text, but not related to the most common (and / or not unambiguously understood in the literature).

A special end-of-statement symbol marks the end of a proof, of the formulation of an assertion, of a remark, and so on (where this is necessary to avoid confusion).

Formulas are numbered independently in each paragraph. When referring to a part of a formula, indices are used; for example, (2)₃ means the third part of formula (2) (the parts of a formula are understood to be fragments separated typographically by blank space, and logically by the connective "and").

This manual cannot completely replace an in-depth study of the subject, which requires independent exercises and reading additional literature, for example, the one listed at the end of the manual. However, the author tried to present the main provisions of the theory in a rather succinct form suitable for a lecture course. In this regard, it should be noted that when reading a lecture course on this manual, it takes about 10 lectures.

It is planned to publish 2 more parts (volumes), continuing this manual and thus completing the cycle of lectures on the subject of "ordinary differential equations": part 2 (linear equations), part 3 (further theory of nonlinear equations, partial differential equations of the first order).

§ 1. Introduction

A differential equation (DE) is a relation of the form
  F(y, u(y), ∂u₁/∂y₁, …, ∂uₙ/∂y_k, …, higher-order derivatives) = 0,  (1)
where y = (y₁, …, y_k) ∈ Rᵏ are the independent variables and u = u(y) are the unknown functions, u = (u₁, …, uₙ). Thus, (1) contains n unknowns, so n equations are required, i.e. F = (F₁, …, Fₙ), so that (1) is, generally speaking, a system of n equations. If there is one unknown function (n = 1), then equation (1) is scalar (a single equation).

So, the function(s) F is (are) given, and u is sought. If k = 1, then (1) is called an ODE; otherwise it is called a PDE. The second case is the subject of a separate course on the equations of mathematical physics, set out in a series of textbooks of the same name. In the present series of textbooks (consisting of 3 parts, or volumes) we shall study only ODEs, with the exception of the last paragraph of the last part (volume), in which we begin to study some special cases of PDEs.

Example. The equation ∂²u/∂y₁² − ∂u/∂y₂ = 0 is a PDE.

The unknown quantities u can be real or complex valued, which is not essential, since this point concerns only the form in which the equations are written: any complex notation can be turned into a real one by separating the real and imaginary parts (though, of course, at the cost of doubling the number of equations and unknowns), and vice versa; in some cases it is convenient to pass to a complex notation.

Example. A pair of relations such as du/dy = uv, u³ d²v/dy² = 2 dv/dy is a system of two ODEs for two unknown functions of the independent variable y.

If k \u003d 1 (ODE), then the "straight" symbol d / dy is used.

Example. The relation du/dy = u(u(y)) for n = 1 is not a differential equation but a functional-differential equation.

Example. The relation
  du/dy = ∫₀^{u(y)} exp(sin z) dz  (2)
is at first sight not an ODE, since it contains an integral involving the unknown function: it resembles not a DE but an integro-differential equation; we shall not study such equations. However, equation (2) is easily reduced to an ODE:

An exercise. Reduce (2) to ODE.

But in general, integral equations are a more complex object (it is partially studied in the course of functional analysis), although, as we will see below, it is with their help that some results for ODEs are obtained.

DEs arise both from intra-mathematical needs (for example, in differential geometry) and in applications (historically for the first time, and now mainly, in physics). The simplest DE is the "main problem of differential calculus": recovering a function from its derivative, du/dy = h(y). As is known from analysis, its solution has the form u(y) = C + ∫ h(s) ds. More general DEs require special methods for their solution. However, as we shall see below, practically all methods of solving ODEs "in explicit form" essentially reduce to this trivial case.

In applications, ODEs most often arise when describing processes developing in time, so that the role of an independent variable is usually played by the time t.

Thus, the meaning of an ODE in such applications is to describe the change of the parameters of a system over time. Therefore, when constructing the general theory of ODEs it is convenient to denote the independent variable by t (and to call it time, with all the ensuing terminological consequences), and the unknown function(s) by x = (x₁, …, xₙ). Thus, the general form of an ODE (a system of ODEs) is the following:
  F(t, x, dx/dt, …, dᵐx/dtᵐ) = 0,  (3)

where F \u003d (F1, ..., Fn) is a system of n ODEs for n functions x, and if n \u003d 1, then one ODE for 1 function x.

Moreover, x \u003d x (t), t R, and x, generally speaking, is complex-valued (this is for convenience, since then some systems are written more compactly).

System (3) is said to be of order m_k with respect to the function x_k.

The derivatives of the highest orders are called the senior derivatives, and the rest (including the x_k themselves) the junior ones. If all m_k = m, then one simply says that the order of the system equals m.

True, the number m₁ + … + mₙ is often also called the order of the system, which is natural as well, as will become clear below.

We will consider the question of the need to study ODEs and their applications to be sufficiently substantiated by other disciplines (differential geometry, mathematical analysis, theoretical mechanics, etc.), and it is partially covered in the course of practical exercises in solving problems (for example, from a problem book). In this course, we will deal exclusively with the mathematical study of systems of the form (3), which implies an answer to the following questions:

1. what does it mean to “solve” the equation (system) (3);

2. how to do it;

3. what properties these solutions have, how to investigate them.

Question 1 is not as obvious as it seems - see below. Note right away that any system (3) can be reduced to a first-order system, denoting the lower derivatives as new unknown functions. The easiest way to explain this procedure is with an example:

The result is a first-order system (5) of 5 equations for 5 unknowns. It is easy to understand that (4) and (5) are equivalent in the sense that a solution of one of them (after the appropriate renaming) is a solution of the other. In this case one only needs to stipulate the question of the smoothness of the solutions; we shall do this later, when we encounter higher-order ODEs (i.e. not of the first order).

But now it is clear that it is sufficient to study only first-order ODEs, while others may be required only for the convenience of notation (such a situation will sometimes arise in our case).

For now let us restrict ourselves to first-order ODEs:
  F(t, x, dx/dt) = 0, dim x = dim F = n.  (6)

The study of the equation (system) (6) is inconvenient because it is not resolved with respect to the derivatives dx/dt. As is known from analysis (from the implicit function theorem), under certain conditions on F equation (6) can be solved with respect to dx/dt and written in the form
  dx/dt = f(t, x),  (7)
where f: Rⁿ⁺¹ → Rⁿ is given and x: R → Rⁿ is sought. One says that (7) is an ODE resolved with respect to the derivatives (a normal ODE). When passing from (6) to (7), difficulties can naturally arise:

Example. The equation exp(dx/dt) = 0 cannot be written in the form (7) and has no solutions at all, since exp has no zeros even in the complex plane.

Example. The equation (dx/dt)² + x² = 1, when resolved, is written as the two normal ODEs dx/dt = ±√(1 − x²). One should solve each of them and then interpret the result.

Comment. When reducing (3) to (6), complexity may arise if (3) has 0 order in some function or part of functions (i.e., this is a functional differential equation). But then these functions must be eliminated by the implicit function theorem.

Example. dx/dt = y, xy = 1 ⟹ dx/dt = 1/x. Find x from the resulting ODE, and then y from the functional equation.

But in any case, the problem of passing from (6) to (7) belongs rather to the field of mathematical analysis than to DEs, and we shall not deal with it. However, when solving an ODE of the form (6), moments that are interesting from the point of view of ODEs may arise, so this question is worth studying when solving problems (as is done, for example, in the literature), and it will be touched upon slightly in § 3. In the rest of the course, however, we deal only with normal systems and equations. So, consider an ODE (a system of ODEs) (7). Let us write it out once in component form:
  dx_i/dt = f_i(t, x₁, …, xₙ), i = 1, …, n.

The concept of "solving (7)" (and, in general, any DE) was long understood as the search for an "explicit formula" for the solution (that is, in terms of elementary functions, their antiderivatives, or special functions, etc.), without emphasis on the smoothness of the solution and on the interval of its definition. However, the current state of the theory of ODEs and of other branches of mathematics (and of the natural sciences in general) shows that this approach is unsatisfactory, if only because the fraction of ODEs that admit such "explicit integration" is extremely small (even for the simplest ODE dx/dt = f(t) it is known that a solution in elementary functions is rare, although an "explicit formula" is available).

Example. The equation dx/dt = t² + x², in spite of its extreme simplicity, has no solutions in elementary functions (and here there is "not even a formula").

Still, it is useful to know those classes of ODEs for which an "explicit" construction of the solution is possible (just as it is useful to be able to "compute integrals" when this is possible, although it rarely is). In this regard the following terms are characteristic: "to integrate an ODE", "an integral of an ODE" (outdated analogues of the modern notions "to solve an ODE", "a solution of an ODE"), which reflect the old notion of a solution. How to understand the modern terms we shall explain presently.

This issue will be considered in § 3 (and, traditionally, much attention is paid to it when solving problems in practical classes), but one should not expect any universality from this approach. As a rule, by the process of solving (7) we shall mean completely different steps.

It should be clarified which function x \u003d x (t) can be called a solution (7).

First of all, we note that a clear formulation of the concept of a solution is impossible without specifying the set on which it is defined, if only because the solution is a function, and any function (according to the school definition) is a law that compares any element of a set (called the domain of definition of this function) some element of another set (function values). Thus, to speak about a function without specifying its scope is by definition absurd. Analytical functions (more broadly - elementary) serve here as an "exception" (misleading) for the reasons indicated below (and some others), but in the case of DE, such liberties are unacceptable.

Likewise, it is absurd to speak of (7) without specifying the sets of definition of all the functions involved in it. As will be clear from what follows, it is advisable to link the concept of a solution rigidly with the set of its definition, and to consider solutions different if the sets of their definition are different, even if the solutions coincide on the intersection of these sets.

Most often, in specific situations this means that if solutions are constructed in the form of elementary functions, so that 2 solutions have "the same formula", then it is still necessary to clarify whether the sets on which these formulas are written coincide. The confusion that prevailed in this issue for a long time was forgivable as long as solutions in the form of elementary functions were considered, since analytic functions unambiguously extend over wider intervals.

Example. x₁(t) = eᵗ on (0, 2) and x₂(t) = eᵗ on (1, 3) are different solutions of the equation dx/dt = x.

In this case, it is natural to take an open interval (maybe infinite) as the definition set for any solution, since this set should be:

1.open, so that at any point it makes sense to talk about a derivative (two-sided);

2. connected so that the solution does not fall apart into disconnected pieces (in this case it is more convenient to talk about several solutions) - see the previous Example.

Thus, a solution of (7) is a pair (φ, (a, b)), where −∞ ≤ a < b ≤ +∞ and φ is defined on (a, b).

Note to the teacher. In some textbooks, it is allowed to include the ends of a segment in the domain of definition of the solution, but this is inappropriate because it only complicates the presentation and does not provide a real generalization (see § 4).

To make the further reasoning easier to understand, it is useful to use the geometric interpretation of (7). In the space Rⁿ⁺¹ = {(t, x)}, at each point (t, x) where f is defined we can consider the vector f(t, x). If we construct in this space the graph of a solution of (7) (it is called an integral curve (IC) of system (7)), then it consists of points of the form (t, x(t)). As t ∈ (a, b) varies, this point moves along the IC. The tangent to the IC at the point (t, x(t)) has the direction (1, ẋ(t)) = (1, f(t, x(t))). Thus, the ICs are exactly those curves in the space Rⁿ⁺¹ which at each of their points (t, x) have a tangent parallel to the vector (1, f(t, x)). On this idea is based the so-called isocline method for the approximate construction of ICs, which is used when sketching the graphs of solutions of specific ODEs (see, e.g., the literature).
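As a computational companion to this direction-field idea (an added sketch, not part of the original text), the following matplotlib fragment draws the field of directions (1, f(t, x)) for the illustrative scalar equation dx/dt = t² + x² mentioned earlier, together with a few numerically computed integral curves.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp

f = lambda t, x: t**2 + x**2          # right-hand side of the normal ODE dx/dt = f(t, x)

# Direction field: at each grid point draw a short segment parallel to (1, f(t, x)).
t_grid, x_grid = np.meshgrid(np.linspace(-1, 1, 21), np.linspace(-1, 1, 21))
slopes = f(t_grid, x_grid)
norm = np.sqrt(1 + slopes**2)
plt.quiver(t_grid, x_grid, 1/norm, slopes/norm, angles='xy', scale=30, width=0.002)

# A few integral curves through different initial points (t0, x0) = (-1, x0).
for x0 in (-1.0, -0.5, 0.0):
    sol = solve_ivp(f, (-1, 1), [x0], max_step=0.01)
    plt.plot(sol.t, sol.y[0], lw=2)

plt.xlabel('t'); plt.ylabel('x'); plt.title('Direction field of dx/dt = t**2 + x**2')
plt.show()
```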

For example, for n = 1 our construction means the following: at each point of an IC its slope α to the t axis satisfies tan α = f(t, x). It is natural to expect that, taking any point from the set of definition of f, we can draw an IC through it. This idea will be rigorously substantiated below. So far we lack a rigorous formulation of the smoothness of solutions; this will be given below.

Now it is necessary to refine the set B on which f is defined. It is natural to take this set:

1. open (so that an IC can be built in a neighbourhood of any point of B), 2. connected (otherwise all connected pieces can be considered separately: in any case an IC, as the graph of a continuous function, cannot jump from one piece to another, so this does not affect the generality of the search for solutions).

We shall consider only classical solutions of (7), i.e. those for which x itself and its derivative ẋ are continuous on (a, b). Then it is natural to require that f ∈ C(B). In what follows this requirement will always be understood. So we finally obtain the

Definition. Let B ⊂ Rⁿ⁺¹ be a domain and f ∈ C(B). A pair (φ, (a, b)), −∞ ≤ a < b ≤ +∞, with φ defined on (a, b), is called a solution of (7) if φ ∈ C(a, b), for each t ∈ (a, b) the point (t, φ(t)) ∈ B and φ̇(t) exists, and φ̇(t) = f(t, φ(t)) (then automatically φ ∈ C¹(a, b)).

It is geometrically clear that (7) will have many solutions (which is easy to understand graphically), since if we draw ICs starting from points of the form (t₀, x₀), where t₀ is fixed, then we obtain different ICs. In addition, changing the interval of definition of the solution gives a different solution according to our definition.

Example. dx/dt = 0. Solution: x = c ≡ const ∈ Rⁿ. However, if we choose some t₀ and fix the value x₀ of the solution at the point t₀, x(t₀) = x₀, then the value is determined uniquely: c = x₀; that is, the solution is unique up to the choice of the interval (a, b) ∋ t₀.

The presence of a "faceless" set of solutions is inconvenient for working with them (for example, for a quadratic equation it is better to write x₁ = …, x₂ = … than x = −b/2 ± …); it is more convenient to "enumerate" them as follows: add additional conditions to (7) so as to single out a unique (in a certain sense) solution, and then, running through these conditions, work with each solution separately (geometrically there may be one solution (IC), but there are many pieces; we shall deal with this inconvenience later).

Definition. The problem for (7) is (7) with additional conditions.

In essence, we have already invented the simplest problem: it is the Cauchy problem, (7) with conditions of the form (Cauchy data, initial data)
  x(t₀) = x₀.  (8)

From the point of view of applications this problem is natural: for example, if (7) describes the change of some parameters x with time t, then (8) means that at some (initial) moment of time the values of the parameters are known. There is a need to study other problems as well; we shall speak of this later, but for now we focus on the Cauchy problem. Naturally, this problem makes sense for (t₀, x₀) ∈ B. Accordingly, a solution of problem (7), (8) is a solution of (7) (in the sense of the definition given above) such that t₀ ∈ (a, b) and (8) holds.
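Purely as an illustration of the Cauchy problem (7), (8) (an added sketch, not part of the original text), the following fragment solves a concrete problem numerically with scipy; the right-hand side f(t, x) = x and the data x(0) = 1 are illustrative choices, for which the exact solution eᵗ is available for comparison.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Cauchy problem: dx/dt = f(t, x) on [0, 2] with the initial condition x(t0) = x0.
f = lambda t, x: x            # illustrative right-hand side; exact solution is x0 * exp(t - t0)
t0, x0 = 0.0, 1.0

sol = solve_ivp(f, (t0, 2.0), [x0], rtol=1e-9, atol=1e-12, dense_output=True)
t = np.linspace(t0, 2.0, 5)
print(sol.sol(t)[0])                                 # numerical values of the solution
print(np.exp(t))                                     # exact values e**t for comparison
print(np.max(np.abs(sol.sol(t)[0] - np.exp(t))))     # the discrepancy is tiny
```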

Our immediate task is to prove, under certain additional assumptions on f, the existence of a solution of the Cauchy problem (7), (8), and its uniqueness in a certain sense.

Comment. We need to clarify the concept of the norm of a vector and of a matrix (although matrices will be needed only in Part 2). Since all norms on a finite-dimensional space are equivalent, the choice of a specific norm does not matter if we are interested only in estimates and not in exact values. For example, for vectors one can use |x|_p = (Σ|x_i|^p)^{1/p}, p ∈ [1, +∞). Further, let the cylinder C = {|t − t₀| ≤ T, |x − x₀| ≤ R} lie in B, put F = max_C |f|, and let the Peano (Picard) segment be IP = [t₀ − T₀, t₀ + T₀] with T₀ = min{T, R/F}. Consider the cone K = {|x − x₀| ≤ F|t − t₀|} and its truncated part K₁ = K ∩ {t ∈ IP}. It is clear that K₁ ⊂ C.

Theorem. (Peano). Let the requirements on f in problem (1), specified in the definition of the solution, be satisfied, i.e.:

f ∈ C(B), where B is a domain in Rⁿ⁺¹. Then, for every (t₀, x₀) ∈ B, there exists a solution of problem (1) on Int(IP).

Proof. Let us take an arbitrary step δ ∈ (0, T₀] and construct the so-called Euler polyline with step δ, namely: it is a polyline in Rⁿ⁺¹ in which each link has a projection of length δ onto the t axis; the first link to the right begins at the point (t₀, x₀) and is such that dx/dt = f(t₀, x₀) on it; the right end (t₁, x¹) of this link serves as the left end of the second one, on which dx/dt = f(t₁, x¹), and so on, and similarly to the left. The resulting polyline defines a piecewise linear function x = φ_δ(t). As long as t ∈ IP, the polyline stays in K₁ (and all the more in C, and hence in B), so the construction is correct; for this, in fact, we made the auxiliary construction before the theorem.

Indeed, everywhere except at the break points the derivative φε' exists, and then φε(s) - φε(t) = ∫ from t to s of φε'(z) dz (with arbitrary values of the derivative taken at the break points).

Moreover (moving along the polyline by induction), |φε(t) - x0| ≤ F|t - t0| in particular.

Thus, on IP the family of functions {φε}:

1. is uniformly bounded;

2. is equicontinuous, being Lipschitz with the common constant F.

Here the reader should, if necessary, refresh their knowledge of such concepts and results as equicontinuity, uniform convergence, the Arzelà-Ascoli theorem, etc.

By the Arzelà-Ascoli theorem there is a sequence εk → 0 such that φεk ⇒ φ on IP, where φ ∈ C(IP). By construction φ(t0) = x0, so it remains to check that φ solves the equation on Int(IP). We will prove this for s > t.

An exercise. Treat the case s < t similarly.

Fix ε > 0 and find δ > 0 so that for all (t1, x1), (t2, x2) ∈ C with |t1 - t2| + |x1 - x2| ≤ δ we have |f(t1, x1) - f(t2, x2)| ≤ ε. This can be done in view of the uniform continuity of f on the compact set C. Find m ∈ N so that εm is small enough for what follows. Fix t ∈ Int(IP) and take any s ∈ Int(IP) such that t < s < t + δ. Then for all z ∈ [t, s] we have |φεk(z) - φεk(t)| ≤ Fδ; therefore, in view of (4), |φεk(z) - φ(t)| ≤ 2Fδ.

Note that φεk'(z) = f(z*, φεk(z*)), where z* is the abscissa of the left endpoint of the polyline segment containing the point (z, φεk(z)). But the point (z*, φεk(z*)) falls into the cylinder with parameters (δ, 2Fδ) built on the point (t, φ(t)) (in fact, even into the truncated cone - see the figure, but this does not matter now), so in view of (3) we obtain |f(z*, φεk(z*)) - f(t, φ(t))| ≤ ε. For the polyline we have, as mentioned above, φεk(s) - φεk(t) = ∫ from t to s of φεk'(z) dz. As k → ∞ this gives (2).

Comment. Let f ∈ C1(B). Then the solution defined on (a, b) is of class C2(a, b). Indeed, on (a, b) we have x''(t) = (f(t, x(t)))' = ft(t, x(t)) + fx(t, x(t)) x'(t) (here fx is the Jacobi matrix), which is a continuous function; hence x'' ∈ C(a, b) as well. The smoothness of the solution increases further if f is smoother. If f is analytic, then one can prove the existence and uniqueness of an analytic solution (this is the so-called Cauchy theorem), although it does not follow from the previous reasoning at all!

Here it is necessary to recall what an analytic function is. Not to be confused with a function representable by a power series (that is only a representation of an analytic function on, generally speaking, part of its domain of definition)!

Comment. For given (t0, x0), one can try to maximize T0 by varying T and R. However, this, as a rule, is not so important, since there are special methods for studying the maximum interval of existence of a solution (see § 4).

Peano's theorem says nothing about the uniqueness of the solution. With our understanding of a solution it is never unique, because if a solution exists, then its restrictions to narrower intervals are other solutions. We will consider this point in more detail later (in § 4); for now, by uniqueness we mean the coincidence of any two solutions on the intersection of their intervals of definition. Even in this sense Peano's theorem says nothing about uniqueness, and this is not accidental: under its conditions uniqueness cannot be guaranteed.

Example. n = 1, f(x) = 2√|x|. The Cauchy problem x(0) = 0 has the trivial solution x1 ≡ 0, and, in addition, x2(t) = t|t|. From these two solutions one can compile a whole 2-parameter family of solutions:

x(t) = -(t - c-)² for t ≤ c-, x(t) = 0 for c- ≤ t ≤ c+, x(t) = (t - c+)² for t ≥ c+, where -∞ ≤ c- ≤ 0 ≤ c+ ≤ +∞ (infinite values of the parameters mean the absence of the corresponding branch). If we regard the whole of R as the domain of definition of all these solutions, there are still infinitely many of them.

Note that if we apply the proof of Peano's theorem in terms of Euler's broken lines in this problem, then we get only the zero solution. On the other hand, if a small error is allowed at each step in the process of constructing Euler polygonal lines, then even after the error parameter tends to zero, all solutions remain. Thus, Peano's theorem and Euler's polygonal lines are natural as a method for constructing solutions and are closely related to numerical methods.
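To make the link with numerical methods concrete, here is a minimal computational sketch of the Euler polyline construction for the example above (the function names and the step size are illustrative assumptions, not part of the text): started exactly from x0 = 0 it reproduces only the zero solution, while an arbitrarily small perturbation of the initial value picks out a nonzero branch.

```python
import math

def euler_polyline(f, t0, x0, t_end, eps):
    """Euler polyline with step eps: on each link dx/dt equals f at the left endpoint."""
    ts, xs = [t0], [x0]
    t, x = t0, x0
    while t < t_end:
        x = x + eps * f(t, x)
        t = t + eps
        ts.append(t)
        xs.append(x)
    return ts, xs

# f(x) = 2*sqrt(|x|) from the non-uniqueness example
f = lambda t, x: 2.0 * math.sqrt(abs(x))

# Starting exactly at x0 = 0: every link has slope 0, so the polyline is the zero solution.
_, xs0 = euler_polyline(f, 0.0, 0.0, 1.0, 1e-3)
print(max(xs0))   # 0.0

# A small "error" in the initial value recovers a nonzero solution close to x = t^2.
_, xs1 = euler_polyline(f, 0.0, 1e-12, 1.0, 1e-3)
print(xs1[-1])    # roughly 1 (the exact solution through a tiny x0 > 0 is close to t^2)
```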

The trouble observed in the example is due to the fact that the function f is nonsmooth in x. It turns out that if we impose additional requirements on the regularity of f in x, then uniqueness can be ensured, and this step is necessary in a certain sense (see below).

Let us recall some concepts from analysis. A function (scalar or vector) g is called Hölder with exponent α ∈ (0, 1] on a set if |g(x) - g(y)| ≤ C|x - y|^α holds there; for α = 1 this is the Lipschitz condition, and for α > 1 it is possible only for constant functions. A nondecreasing continuous function ω on [0, δ] with ω(0) = 0 (the choice of δ > 0 is inessential) is called a modulus of continuity; g is said to satisfy the generalized Hölder condition with modulus ω if |g(x) - g(y)| ≤ ω(|x - y|) (5); in this case ω is called a modulus of continuity of g.

It can be shown that any modulus of continuity is the modulus of continuity of some continuous function.

The converse fact is important for us: any continuous function on a compact set has a modulus of continuity, i.e. satisfies (5) with some ω. Let us prove it. Recall that if K is compact and g ∈ C(K), then g is necessarily uniformly continuous on K, that is,

for every ε > 0 there is δ = δ(ε) > 0 such that |x - y| ≤ δ implies |g(x) - g(y)| ≤ ε. It turns out that this is equivalent to condition (5) with some ω. Indeed, if such a δ(·) exists, it suffices to construct a modulus of continuity ω with ε ≤ ω(δ(ε)); then for |x - y| ≤ δ(ε) we obtain |g(x) - g(y)| ≤ ε ≤ ω(δ(ε)), and since ε (and with it δ) is arbitrary, x and y can be arbitrary.

Conversely, if (5) holds, it suffices to find δ with ω(δ(ε)) ≤ ε; then for |x - y| ≤ δ(ε) we obtain |g(x) - g(y)| ≤ ω(δ(ε)) ≤ ε. It remains to justify the logical transitions:

For monotone ω and δ it is enough to take inverse functions, but in the general case one has to use so-called generalized inverse functions. Their existence requires a separate proof, which we will not give, only the idea (it is useful to accompany the reading with pictures):

for any F one defines a monotone minorant and majorant, F_(x) = min F(y) and F^(x) = max F(y) (the min and max taken over appropriate y); these are monotone functions and they have inverses, and one easily checks inequalities of the form x ≤ F_⁻¹(F_(x)), (F^)⁻¹(F^(x)) ≤ x, F^((F^)⁻¹(x)) ≤ x.

The best modulus of continuity is linear (Lipschitz condition). These are "almost differentiable" functions. It takes some effort to give a rigorous meaning to the last statement, and we limit ourselves to just two remarks:

1. Strictly speaking, not every Lipschitz function is differentiable, as the example g(x) = |x| on R shows;

2. but differentiability implies the Lipschitz property, as the following Assertion shows. Assertion. Any function g all of whose partial derivatives are bounded by M on a convex set satisfies the Lipschitz condition on it.

[For now we consider scalar functions g for brevity.] Proof. For all x, y we have g(x) - g(y) = ∫ from 0 to 1 of ∇g(y + s(x - y)) · (x - y) ds (the segment [x, y] lies in the set by convexity), whence |g(x) - g(y)| ≤ C(M)|x - y|. It is clear that the assertion is also true for vector functions.

Comment. If f = f(t, x) (generally speaking, a vector function), then one can introduce the notion “f is Lipschitz in x”, i.e. |f(t, x) - f(t, y)| ≤ C|x - y|, and also prove that if D is convex in x for all t, then for f to be Lipschitz in x in D it suffices that the derivatives of f with respect to x be bounded. In the Assertion we obtained an estimate of |g(x) - g(y)| through |x - y|. For n = 1 this is usually done using the finite-increment formula g(x) - g(y) = g'(z)(x - y) (if g is a vector function, then z is different for each component). For n > 1 it is convenient to use the following analogue of this formula:

Lemma (Hadamard). Let f ∈ C(D) (generally speaking, a vector function) have derivatives with respect to x continuous in D, where D ∩ {t = const} is convex for every t. Then f(t, x) - f(t, y) = A(t, x, y)(x - y), where A is a continuous rectangular matrix.

Proof. For any fixed t we apply the calculation from the proof of the Assertion to the convex set D ∩ {t = const} and g = fk. We obtain the required representation with A(t, x, y) = ∫ from 0 to 1 of fx(t, y + s(x - y)) ds; this A is indeed continuous.

Let us return to the question of the uniqueness of the solution to problem (1).

Let us pose the question as follows: what should be the modulus of continuity of f with respect to x in order for solution (1) to be unique in the sense that 2 solutions defined on the same interval coincide? The answer is given by the following theorem:

Theorem (Osgood). Let, under the conditions of Peano's theorem, ω be a modulus of continuity of f with respect to x in B, i.e. |f(t, x1) - f(t, x2)| ≤ ω(|x1 - x2|) in B, and let ω satisfy the condition ∫ from 0 of dξ/ω(ξ) = +∞ (7) (we may assume ω ∈ C). Then problem (1) cannot have two different solutions defined on one interval of the form (t0 - a, t0 + b).

Compare with the non-uniqueness example above.

Lemma. If z ∈ C1(α, β), then on all of (α, β):

1. at the points where z ≠ 0, the derivative |z|' exists and ||z|'| ≤ |z'|;

2. at the points where z = 0, the one-sided derivatives |z|'± exist and ||z|'±| = |z'| (in particular, if z' = 0, then |z|' = 0 exists).

Example. n = 1, z(t) = t. At the point t = 0 the derivative of |z| does not exist, but there are one-sided derivatives.

Proof (of the Lemma). At the points where z ≠ 0 we have |z| = √(z · z), so |z|' = (z · z')/|z| exists and ||z|'| ≤ |z'|. At the points t where z(t) = 0 we have:

Case 1: z'(t) = 0. Then the existence of |z|'(t) = 0 is immediate.

Case 2: z'(t) ≠ 0. Then for ε → +0 or ε → -0 we have |z(t + ε)|/|ε| → |z'(t)|, so the one-sided derivatives of |z| exist and their modulus equals |z'(t)|.

Proof (of the theorem). Set F(u) = ∫ from u to u0 of dξ/ω(ξ); by assumption F ∈ C1(0, ∞), F ≥ 0, F is decreasing, and F(+0) = +∞. Let z1,2 be two solutions of (1) defined on (t0 - a, t0 + b). Set z = z1 - z2. We have |z'| = |f(t, z1) - f(t, z2)| ≤ ω(|z|).

Suppose there is t1 (for definiteness t1 > t0) such that z(t1) ≠ 0. The set A = {t ≤ t1 | z(t) = 0} is nonempty (t0 ∈ A) and bounded above; hence it has a supremum τ < t1. By construction z ≠ 0 on (τ, t1), and since z is continuous, z(τ) = 0.

By the Lemma, |z| ∈ C1(τ, t1), and on this interval |z|' ≤ |z'| ≤ ω(|z|), so that integration over (t, t1) (where t ∈ (τ, t1)) gives F(|z(t)|) - F(|z(t1)|) ≤ t1 - t. Letting t → τ + 0 we get a contradiction, since the left-hand side tends to +∞.

Corollary 1. If, under the conditions of Peano's theorem, f is Lipschitz in x in B, then problem (1) has a unique solution in the sense described in Osgood's theorem, since in this case ω(ξ) = Cξ satisfies (7).
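For instance (a direct check of the assumed form of condition (7)): the Lipschitz modulus passes the test, while a Hölder modulus with exponent α < 1 fails it, which is exactly the situation described in the Comment below.

```latex
\omega(\xi)=C\xi:\quad \int_{0}^{\delta}\frac{d\xi}{C\xi}=+\infty;
\qquad
\omega(\xi)=C\xi^{\alpha},\ \alpha<1:\quad
\int_{0}^{\delta}\frac{d\xi}{C\xi^{\alpha}}
=\frac{\delta^{\,1-\alpha}}{C(1-\alpha)}<+\infty .
```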

Corollary 2. If, under the conditions of Peano's theorem, fx ∈ C(B), then the solution of (1) defined on Int(IP) is unique.

Lemma. Any solution of (1) defined on IP must satisfy the estimate |x'| = |f(t, x)| ≤ F, and its graph lies in K1, and all the more in C.

Proof. Suppose there is t1 ∈ IP such that (t1, x(t1)) ∉ C. For definiteness let t1 > t0. Then there is t2 ∈ (t0, t1] such that |x(t2) - x0| = R. Similarly to the argument in the proof of Osgood's theorem we may assume that t2 is the leftmost such point; but for t ∈ [t0, t2) we have (t, x(t)) ∈ C, so |f(t, x(t))| ≤ F, and therefore (t, x(t)) ∈ K1, which contradicts |x(t2) - x0| = R. Hence (t, x(t)) ∈ C on the whole of IP, and then (repeating the calculation) (t, x(t)) ∈ K1.

Proof (of Corollary 2). Since C is compact and fx ∈ C(B), we obtain that f is Lipschitz in x in C, where the graphs of all solutions lie in view of the Lemma. By Corollary 1 we obtain what is required.

Comment. Condition (7) means that the Lipschitz condition on f cannot be weakened significantly. For example, the Hölder condition with exponent α < 1 is no longer sufficient. Only moduli of continuity close to linear work - such as the “worst” admissible one, ω(ξ) = Cξ|ln ξ|:

An exercise. (difficult enough). Prove that if satisfies (7), then there is 1 satisfying (7) such that 1 / is at zero.

In the general case, for uniqueness it is not necessary to require precisely a condition on the modulus of continuity of f in x - various special cases are possible, for example:

Statement. If, under the conditions of Peano's theorem, ..., then any two solutions of (1) defined on a common interval coincide.

The Cauchy problem (1) is equivalent to the integral equation x(t) = x0 + ∫ from t0 to t of f(s, x(s)) ds. (9) Indeed, from (9) it is clear that x ∈ C1(a, b), and then differentiation of (9) gives (1)1, while (1)2 is obvious; the converse is obtained by integrating (1).

Unlike (1), for (9) it is natural to construct a solution on a closed segment.

Picard proposed the following method of successive approximations for solving (1) = (9). Set x0(t) ≡ x0 and, further, by induction xk+1(t) = x0 + ∫ from t0 to t of f(s, xk(s)) ds. (10)

Theorem (Cauchy-Picard). Suppose that, under the conditions of Peano's theorem, the function f is Lipschitz in x on every compact set K ⊂ B convex in x, i.e. |f(t, x) - f(t, y)| ≤ L(K)|x - y| for all (t, x), (t, y) ∈ K. (11)

Then for every (t0, x0) ∈ B the Cauchy problem (1) (aka (9)) has a unique solution on Int(IP), and xk ⇒ x on IP, where the xk are defined by (10).

Comment. It is clear that the theorem remains valid if condition (11) is replaced by fx ∈ C(B), since this condition implies (11).

Note to the teacher. In fact, not all compact sets convex in x are needed, but only cylinders, but the formulation is done in exactly this way, since in § 5 more general compact sets are required, and, moreover, it is with this formulation that the Remark looks most natural.

Proof. We choose (t0, x0) ∈ B arbitrarily and make the same auxiliary construction as before Peano's theorem. Let us prove by induction that all xk are defined and continuous on IP and that their graphs lie in K1, and all the more in C. For x0 this is obvious. If it is true for xk-1, then it is clear from (10) that xk is defined and continuous on IP, and |xk(t) - x0| ≤ F|t - t0|, which is exactly membership in K1.

Now we prove by induction the following estimate on IP:

|xk+1(t) - xk(t)| ≤ F L^k |t - t0|^(k+1) / (k+1)! (12)

(here C is the compact set convex in x in B, and L = L(C) is the constant from (11) for it). For k = 0 this is the already proven estimate (t, x1(t)) ∈ K1. If (12) is true for k - 1, then from (10) we have |xk+1(t) - xk(t)| ≤ |∫ from t0 to t of L|xk(s) - xk-1(s)| ds| ≤ F L^k |t - t0|^(k+1)/(k+1)!, as required. Thus the series x0 + Σ(xk+1 - xk) is majorized on IP by a convergent numerical series and therefore (this is the Weierstrass test) converges uniformly on IP to some function x ∈ C(IP). But this means precisely xk ⇒ x on IP. Then we pass to the limit in (10) on IP and obtain (9) on IP, and hence (1) on Int(IP).

Uniqueness follows immediately from Corollary 1 of Osgood's theorem, but it is useful to prove it in another way, using precisely equation (9). Let x1,2 be two solutions of problem (1) (i.e. of (9)) on Int(IP). As noted above, their graphs then necessarily lie in K1, and all the more in C. Let t ∈ I1 = (t0 - δ, t0 + δ), where δ = 1/(2L(C)). Then from (9), max over I1 of |x1 - x2| ≤ (1/2) max over I1 of |x1 - x2|, whence this maximum is 0. Thus x1 = x2 on I1, and repeating the argument stepwise we obtain coincidence on all of Int(IP).

Note to the teacher. There is also a proof of uniqueness with the help of Gronwall's lemma; it is even more natural, since it works globally at once, but at this point Gronwall's lemma is not very convenient, because it is hard to appreciate adequately before linear ODEs.

Comment. The last proof of uniqueness is instructive in that it once again shows in a different light how local uniqueness leads to global uniqueness (which is not true for existence).

An exercise. Prove the uniqueness on all IPs at once, arguing by contradiction as in the proof of Osgood's theorem.

An important special case of (1) is the linear ODE, i.e. one in which the value f(t, x) is linear in x:

x' = A(t)x + b(t), x(t0) = x0. (13)

In this case, to fall within the conditions of the general theory one should require A, b ∈ C((a, b)). (14) Thus here B is a strip, and the Lipschitz condition (and even differentiability) with respect to x holds automatically: for all t ∈ (a, b), x, y ∈ Rn we have |f(t, x) - f(t, y)| = |A(t)(x - y)| ≤ |A(t)| |x - y|.

If we temporarily restrict to a compact set [α, β] ⊂ (a, b), then on it we obtain |f(t, x) - f(t, y)| ≤ L|x - y|, where L = max |A|.

The Peano and Osgood or Cauchy-Picard theorems imply the unique solvability of problem (13) on some (Peano-Picard) interval containing t0. Moreover, the solution on this interval is the limit of successive Picard approximations.

An exercise. Find this interval.

But it turns out that in this case all these results can be proved immediately globally, that is, on all (a, b):

Theorem. Let (14) be true. Then problem (13) has a unique solution on (a, b); moreover, successive Picard approximations converge to it uniformly on any compact set (a, b).

Proof. Again, as in the Cauchy-Picard theorem, we construct a solution of the integral equation (9) by successive approximations, using formula (10). But now we do not need to check that the graph stays in the cone and cylinder, since

f is defined for all x as long as t ∈ (a, b). It is only necessary to check that all xk are defined and continuous on (a, b), which is obvious by induction.

Instead of (12) we now prove, on a compact set [α, β] ⊂ (a, b), a similar estimate of the form |xk+1(t) - xk(t)| ≤ N (L|t - t0|)^k / k!, where N is some number depending on the choice of [α, β]. The first induction step for this estimate is different (since it is not connected with K1): for k = 0, |x1(t) - x0| ≤ N by the continuity of x1; the subsequent steps are similar to (12).

One may skip spelling this out, since it is obvious. Again we observe that xk ⇒ x on [α, β], and x is a solution of the corresponding (10) (hence of (9), hence of (1)) on [α, β]. But thereby we have constructed a solution on all of (a, b), since the choice of the compact set is arbitrary. Uniqueness follows from the Osgood or Cauchy-Picard theorems (and the above reasoning about global uniqueness).

Comment. As mentioned above, TC-P is formally superfluous due to the presence of Peano and Osgood's theorems, but it is useful for 3 reasons - it is:

1. allows you to relate the Cauchy problem for an ODE with an integral equation;

2. offers a constructive method of successive approximations;

3. makes it easy to prove global existence for linear ODEs.

[although the latter can also be deduced from the reasoning in § 4.] In what follows, we will most often refer to it.

Example. x' = x, x(0) = 1. The successive approximations are xk(t) = 1 + t + t²/2! + ... + t^k/k!. Hence x(t) = e^t is the solution of the original problem on the whole of R.
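A small computational sketch (using sympy; the function name is an illustrative assumption) that carries out iteration (10) for this example and confirms that the approximations are the Taylor partial sums of e^t:

```python
import sympy as sp

t, s = sp.symbols('t s')

def picard_iterates(f, t0, x0, n):
    """First n successive approximations x_{k+1}(t) = x0 + integral from t0 to t of f(s, x_k(s)) ds."""
    xk = sp.Integer(x0)
    out = [xk]
    for _ in range(n):
        xk = x0 + sp.integrate(f(s, xk.subs(t, s)), (s, t0, t))
        out.append(sp.expand(xk))
    return out

# x' = x, x(0) = 1
for xk in picard_iterates(lambda s, x: x, 0, 1, 4):
    print(xk)
# 1, 1 + t, 1 + t + t**2/2, ... : the partial sums of the series for e^t
```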

Most often an explicit series will not be obtained, but a certain constructiveness remains. One can also estimate the error x - xk (see).

Comment. From the theorems of Peano, Osgood, and Cauchy-Picard, it is easy to obtain the corresponding theorems for higher-order ODEs.

An exercise. Formulate the concepts of the Cauchy problem, solutions of the system and the Cauchy problem, all theorems for higher-order ODEs, using the reduction to first-order systems presented in § 1.
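For reference, a sketch of the reduction mentioned here (in the notation of § 1, with g denoting the right-hand side of the n-th order equation):

```latex
y^{(n)} = g\bigl(t, y, y', \dots, y^{(n-1)}\bigr),\qquad
x_1 := y,\; x_2 := y',\; \dots,\; x_n := y^{(n-1)},
\\[2pt]
\dot x_1 = x_2,\ \dots,\ \dot x_{n-1} = x_n,\ \dot x_n = g(t, x_1, \dots, x_n),
\qquad x_k(t_0) = y^{(k-1)}(t_0).
```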

Somewhat violating the logic of the course, but in order to better assimilate and substantiate the methods of solving problems in practical classes, we temporarily interrupt the presentation of the general theory and deal with the technical problem of "explicit solution of ODEs".

§ 3. Some methods of integration

So, consider the scalar equation dx/dt = f(t, x). The simplest particular case we have learned to integrate is the so-called ERP (equation with separable variables), i.e. an equation in which f(t, x) = a(t) b(x). The formal approach to integrating an ERP is to “separate” the variables t and x (hence the name): dx/b(x) = a(t) dt, and then take the integral:

B(x) = A(t) + C, i.e. x = B⁻¹(A(t) + C), where B and A are antiderivatives of 1/b and a. This formal reasoning contains several points that require justification.

1. Division by b(x). We assume f is continuous, so a ∈ C(α, β), b ∈ C(γ, δ), i.e. B is the rectangle (α, β) × (γ, δ) (generally speaking, infinite). The sets {b(x) > 0} and {b(x) < 0} are open and therefore are finite or countable unions of intervals; between these intervals there are points or segments where b = 0. If b(x0) = 0, then the Cauchy problem with x(t0) = x0 has the solution x ≡ x0. Perhaps this solution is not unique; then its domain of definition contains intervals where b(x(t)) ≠ 0, and on them we may divide by b(x(t)). Note in passing that on these intervals the function B is monotone and therefore B⁻¹ exists. If, however, b(x0) ≠ 0, then b(x(t)) ≠ 0 in a neighbourhood of t0 and the procedure is legal. Thus the described procedure should, generally speaking, be applied after dividing the domain of definition of the solution into parts.

2. Integrating the left- and right-hand sides with respect to different variables.

Method I. Suppose we want to find the solution of the Cauchy problem, and let x = φ(t) be a solution. We have: φ'(t) = a(t) b(φ(t)), whence dφ/b(φ) = a(t) dt and, integrating from t0 to t, we obtain the same formula rigorously.

Method II. The equation dx = a(t) b(x) dt (2) is the so-called symmetric notation of the original ODE, i.e. one that does not specify which variable is independent and which is dependent. Such a form makes sense precisely in the case of a single first-order equation, in view of the theorem on the invariance of the form of the first differential.

Here it is appropriate to understand in more detail the concept of a differential, illustrating it by the example of the plane ((t, x)), curves on it, emerging constraints, degrees of freedom, a parameter on a curve.

Thus, equation (2) connects the differentials t and x along the required IC. Then the integration of equation (2) in the manner shown at the beginning is perfectly legal - it means, if you like, integration over any variable chosen as independent.

In Method I we showed this by choosing t as the independent variable. Now let us show it by choosing a parameter s along the IC as the independent variable (since this demonstrates more clearly the equal standing of t and x). Let the value s = s0 correspond to the point (t0, x0).

Then we have: dx(s)/b(x(s)) = a(t(s)) t'(s) ds, which after integration over s gives the same result. Here one should emphasize the universality of the symmetric notation; for example, a circle is written neither as x(t) nor as t(x), but as x(s), t(s).
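A short worked illustration of this scheme (the equation is chosen here only as an example): for x' = t x, separation and integration give

```latex
\frac{dx}{x} = t\,dt \;\Longrightarrow\; \ln|x| = \frac{t^{2}}{2} + \widetilde C
\;\Longrightarrow\; x(t) = C\,e^{t^{2}/2},
\qquad x(t_0)=x_0 \;\Longrightarrow\; C = x_0\,e^{-t_0^{2}/2},
```

plus the solution x ≡ 0 corresponding to the zero of b(x) = x, in line with item 1 above.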

Some other ODEs of the first order are reduced to ERP, which can be seen when solving problems (for example, using a problem book).

Another important case is the linear ODE:

x' = a(t)x + b(t). (3)

Method I. Variation of the constant.

this is a special case of a more general approach, which will be discussed in Part 2. The point is that finding a solution in a special form lowers the order of the equation.

Let us first solve the so-called homogeneous equation:

x' = a(t)x. (3)0

By uniqueness, either x ≡ 0 or x ≠ 0 everywhere. In the latter case (say, for definiteness, x > 0) we get dx/x = a(t) dt, whence x(t) = C1 exp(∫ from t0 to t of a(s) ds), (4) and this formula gives all solutions of (3)0 (including the zero and negative ones, if C1 is allowed to be zero or negative).

Formula (4) contains an arbitrary constant C1.

The method of variation of the constant consists in seeking the solution of (3) in the form (4) with C1 = C1(t); substitution into (3) gives C1(t) = C0 + ∫ from t0 to t of b(s) exp(-∫ from t0 to s of a(σ) dσ) ds. The structure “general solution of the nonhomogeneous equation = particular solution of the nonhomogeneous + general solution of the homogeneous” is visible (as for algebraic linear systems; more on this in Part 2).

If we want to solve the Cauchy problem x (t0) \u003d x0, then we need to find C0 from the Cauchy data - we can easily get C0 \u003d x0.

Method II. Let us find an IM (integrating factor), i.e. a function v by which (3) must be multiplied (written so that all unknowns are collected on the left-hand side: x' - a(t)x = b(t)) so that the left-hand side becomes the derivative of some convenient combination.

We have: vx' - vax = (vx)' if v' = -av, i.e. we may take v(t) = exp(-∫ from t0 to t of a(s) ds). With such a v, (3) is equivalent to the equation (vx)' = vb, which is solved at once and gives (5). If the Cauchy problem is being solved, then in (6) it is convenient to take the definite integral from t0 immediately. Some other ODEs reduce to the linear ODE (3), as can be seen when solving problems (for example, using a problem book). The important case of linear ODEs (at once for any n) will be considered in more detail in Part 2.
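In formulas (a sketch in the notation of (3)): with v(t) = exp(-∫ from t0 to t of a(s) ds) the equation x' - a(t)x = b(t) becomes (vx)' = vb, whence

```latex
x(t) = e^{\int_{t_0}^{t} a(s)\,ds}
\Bigl( x_0 + \int_{t_0}^{t} b(s)\, e^{-\int_{t_0}^{s} a(\sigma)\,d\sigma}\,ds \Bigr),
```

in which the structure “particular solution of the nonhomogeneous + general solution of the homogeneous” is again visible.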

Both situations considered are special cases of the so-called UPD (equation in total differentials). Consider a first-order ODE (for n = 1) in symmetric form:

A(t, x) dt + B(t, x) dx = 0. (7)

As already mentioned, (7) specifies the ICs in the (t, x) plane without specifying which variable is considered independent.

If we multiply (7) by an arbitrary function M (t, x), then we get an equivalent form of writing the same equation:

Thus, one and the same ODE has many symmetric records. Among them a special role is played by the so-called notation in total differentials (the name UPD is unfortunate, since this is a property not of the equation but of the form of its notation), that is, such that the left-hand side of (7) equals dF(t, x) for some F.

It is clear that (7) is a UPD if and only if A = Ft, B = Fx for some F. As is known from analysis, for the latter it is necessary and sufficient that Ax = Bt. (9) We do not justify strictly technical aspects, such as the smoothness of all functions: this § plays a secondary role - it is not needed at all for the other parts of the course, and I would not want to spend excessive effort on its detailed presentation.

Thus, if (9) is satisfied, then there is an F (unique up to an additive constant) such that (7) can be rewritten in the form dF(t, x) = 0 (along the IC), i.e.

F(t, x) = const along the IC; that is, the ICs are the level lines of the function F. We see that integrating a UPD is a trivial problem, since finding F from A and B satisfying (9) is easy. If (9) is not satisfied, then one should look for a so-called IM (integrating factor) M(t, x) such that (8) is a UPD, for which it is necessary and sufficient to fulfill the analogue of (9), which takes the form:

(MA)x = (MB)t. (10)

As follows from first-order PDE theory (which we will discuss in Part 3), equation (10) always has a solution, so an IM exists. Thus any equation of the form (7) has a record in the form of a UPD and therefore admits “explicit” integration. But these arguments do not give a constructive method in the general case, since to solve (10) one, generally speaking, needs to find the solution of (7), which is what we are looking for. Nevertheless, there are a number of techniques for finding an IM, which are traditionally considered in practical classes (see, for example).

Note that the above methods for solving the ERP and linear ODEs are a special case of the IM ideology.

Indeed, the ERP dx/dt = a(t)b(x), written in the symmetric form dx = a(t)b(x) dt, is solved by multiplying by the IM 1/b(x), since after this it turns into the UPD dx/b(x) = a(t) dt, i.e. dB(x) = dA(t). The linear equation dx/dt = a(t)x + b(t), written in the symmetric form dx - a(t)x dt - b(t) dt = 0, is solved by multiplying by the IM exp(-∫ a(t) dt). Almost all methods for solving ODEs “in explicit form”

(with the exception of the large block associated with linear systems) consist in reducing them, by special methods of order reduction and changes of variables, to first-order ODEs, which are then reduced to UPDs and solved by applying the main theorem of differential calculus: dF = 0 ⟺ F = const. The question of lowering the order is traditionally included in the course of practical training (see, for example).
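A minimal illustration of the exactness test (9) (the equation is chosen only as an example): for (2t + x) dt + (t + 2x) dx = 0 we have Ax = 1 = Bt, so this is a UPD, and

```latex
F_t = A = 2t + x,\quad F_x = B = t + 2x
\;\Longrightarrow\; F(t,x) = t^{2} + tx + x^{2},
\qquad \text{IC: } t^{2} + tx + x^{2} = \mathrm{const}.
```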

Let us say a few words about first-order ODEs not resolved with respect to the derivative:

F(t, x, x') = 0. (11)

As mentioned in § 1, one can try to solve (11) for x' and obtain a normal form, but this is not always advisable. It is often more convenient to solve (11) directly.

Consider the space {(t, x, p)}, where p = x' is temporarily treated as an independent variable. Then (11) defines in this space the surface {F(t, x, p) = 0}, which can be written parametrically:

t = f(u, v), x = g(u, v), p = h(u, v). (12)

It is useful to remember what this means, for example with the sphere in R3.

The sought solutions correspond to curves on this surface: t = s, x = x(s), p = x'(s); one degree of freedom is lost because on solutions there is the constraint dx = p dt. Let us write this constraint in terms of the parameters on the surface (12): gu du + gv dv = h (fu du + fv dv), i.e.

(gu - h fu) du + (gv - h fv) dv = 0. (13)

Thus the sought solutions correspond to curves on the surface (12) whose parameters are related by equation (13). The latter is an ODE in symmetric form which can be solved.

Case I. If in some region gu - h fu ≠ 0, then from (13) we can express u = u(v); then t = f(u(v), v), x = g(u(v), v) gives a parametric notation of the required curves in the plane {(t, x)} (i.e. we project onto this plane, since we do not need p).

Case II. Similarly if gv - h fv ≠ 0.

Case III. At some points gu - h fu = gv - h fv = 0 simultaneously. Here a separate analysis is required as to whether this set corresponds to some solutions (they are then called singular).

Example. Clairaut's equation x = tx' + (x')². We have:

x = tp + p². Let us parametrize this surface: t = u, p = v, x = uv + v². Equation (13) takes the form (u + 2v) dv = 0.

Case I. Not realized (gu - h fu ≡ 0 here).

Case II. u + 2v ≠ 0; then dv = 0, i.e. v = C = const.

Hence t = u, x = Cu + C² is a parametric recording of an IC.

It is easy to write it explicitly: x = Ct + C².

Case III. u + 2v = 0, i.e. v = -u/2. Then t = u, x = uv + v² = -u²/4 is a parametric record of a “candidate IC”.

To check whether this is really an IC, we write it explicitly: x = -t²/4. It turns out that this is also a (singular) solution.

An exercise. Prove that the singular solution touches every other solution.

This is a general fact: the graph of any singular solution is the envelope of the family of all other solutions. This is the basis for another definition of a singular solution, precisely as an envelope (see).
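For the Clairaut example this is seen directly: eliminating C from the family of solutions and from its derivative with respect to C,

```latex
x = Ct + C^{2},\qquad 0 = \partial_C\,(Ct + C^{2}) = t + 2C
\;\Longrightarrow\; C = -\tfrac{t}{2},\quad x = -\tfrac{t^{2}}{4},
```

so the singular solution x = -t²/4 is exactly the envelope of the lines x = Ct + C², each line touching the parabola at t = -2C.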

An exercise. Prove that for the more general Clairaut equation x = tx' - φ(x') with a convex function φ, the singular solution has the form x = φ*(t), where φ* is the Legendre transform of φ, i.e. φ*' = (φ')⁻¹, or φ*(t) = max over v of (tv - φ(v)). Similarly for the equation x = tx' + φ(x').

Comment. The content of § 3 is described in more detail and more accurately in the textbook.

Note to the teacher. When reading a course of lectures, it may be useful to expand § 3 by giving it a more rigorous form.

Now let us return to the main outline of the course, continuing the presentation begun in §§ 1, 2.

§ 4. Global solvability of the Cauchy problem In § 2 we proved the local existence of a solution to the Cauchy problem, that is, only on some interval containing the point t0.

Under some additional assumptions on f, we also proved the uniqueness of the solution, understanding it as the coincidence of two solutions defined on one interval. If f is linear in x, we get global existence, i.e., on the entire interval where the coefficients of the equation (system) are defined and continuous. However, as an attempt to apply the general theory to a linear system shows, the Peano-Picard interval, generally speaking, is less than that on which a solution can be constructed. Natural questions arise:

1.How to determine the maximum interval on which one can assert the existence of solution (1)?

2. Does this interval always coincide with the maximum at which the right-hand side (1) 1 still makes sense?

3. How to accurately formulate the concept of uniqueness of a solution without reservations about the interval of its definition?

The fact that the answer to question 2 is, generally speaking, negative (or rather requires great care) is shown by the following Example. x' = x², x(0) = x0. If x0 = 0, then x ≡ 0, and there are no other solutions by Osgood's theorem. If x0 ≠ 0, then we solve by separation of variables (it is useful to make a drawing): x(t) = x0/(1 - x0 t). The interval of existence of a solution cannot be larger than (-∞, 1/x0) or (1/x0, +∞), respectively, for x0 > 0 and x0 < 0 (the second branch of the hyperbola has nothing to do with the solution! - a typical student mistake). At first glance nothing in the original problem “foreshadowed such an outcome”. In § 4 we will find an explanation of this phenomenon.
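For completeness, the computation behind this example (separation of variables, as in § 3):

```latex
\frac{dx}{x^{2}} = dt \;\Longrightarrow\; \frac{1}{x_{0}} - \frac{1}{x(t)} = t
\;\Longrightarrow\; x(t) = \frac{x_{0}}{1 - x_{0}t},
```

which blows up as t → 1/x0, in agreement with the intervals indicated above.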

The example of the equation x' = t² + x² reveals a typical student error concerning the interval of existence of a solution. Here the fact that “the equation is defined everywhere” does not at all imply that the solution can be extended to the whole line. This is clear even from a purely everyday point of view, for example in connection with legal laws and the processes developing under them: even if a law does not explicitly prescribe the termination of a company's existence in 2015, this does not mean that the company will not go bankrupt by that year for internal reasons (although acting within the framework of the law).

In order to answer questions 1-3 (and even to state them precisely), the notion of a non-extendable solution is needed. We will (as agreed above) regard solutions of equation (1)1 as pairs (φ, (tl(φ), tr(φ))).

Definition. A solution (ψ, (tl(ψ), tr(ψ))) is an extension of a solution (φ, (tl(φ), tr(φ))) if (tl(φ), tr(φ)) ⊂ (tl(ψ), tr(ψ)) and ψ|(tl(φ), tr(φ)) = φ.

Definition. A solution (φ, (tl(φ), tr(φ))) is non-extendable if it has no nontrivial (i.e. different from itself) extensions. (See the example above.)

It is clear that it is the NRs that are of particular value, and it is in their terms that existence and uniqueness must be proved. A natural question arises: is it always possible to construct an NR starting from some local solution, or from a Cauchy problem? It turns out that it is. To understand this, let us introduce the following notions:

Definition. A set of solutions {(φα, (tl(φα), tr(φα)))} is called consistent if any two solutions from this set coincide on the intersection of their intervals of definition.

Definition. A consistent set of solutions is called a maximum if it is impossible to add another solution to it so that the new set is consistent and contains new points in the union of the domains of the solutions.

It is clear that the construction of the INN is equivalent to the construction of the IS, namely:

1. If there is an IS, then any INN containing it can only be a set of its restrictions.

An exercise. Verify.

2. If there is an INN, then an IS (φ, (t-, t+)) is constructed from it as follows:

we put φ(t) = φα(t), where φα is any element of the INN defined at the point t. Such a function is well defined on all of (t-, t+) (uniqueness follows from the consistency of the set), and at every point it coincides with all elements of the INN defined at that point. For every t ∈ (t-, t+) there is some φα defined there, and hence in a neighbourhood of t, and since φα is a solution of (1)1 in this neighbourhood, so is φ. Thus φ is a solution of (1)1 on all of (t-, t+). It is non-extendable, since otherwise a nontrivial extension could be added to the INN, contradicting its maximality.

The construction of the INN of problem (1) in the general case (under the conditions of Peano's theorem), when there is no local uniqueness, is possible (see,), but rather cumbersome - it is based on a step-by-step application of Peano's theorem with a lower bound for the length of the continuation interval. Thus, HP always exists. We will justify this only in the case when there is local uniqueness, then the construction of the INN (and therefore also the NR) is trivial. For example, for definiteness, we will act within the framework of the TC-P.

Theorem. Let the conditions of TK-P be satisfied in a domain B ⊂ Rn+1. Then for every (t0, x0) ∈ B problem (1) has a unique IS.

Proof. Consider the set of all solutions of problem (1) (it is nonempty by TK-P). It forms an INN: it is consistent by local uniqueness, and maximal because it is the set of all solutions of the Cauchy problem whatsoever. Hence a non-extendable solution exists. It is unique by local uniqueness.

If one needs to construct an IS extending a given local solution of (1)1 (rather than solving a Cauchy problem), then in the presence of local uniqueness this problem reduces to a Cauchy problem: choose any point on the given IC and consider the corresponding Cauchy problem. Its non-extendable solution will be an extension of the original solution, by uniqueness. If there is no uniqueness, the extension of the given solution is carried out by the procedure indicated above.

Comment. An NR cannot be further defined at the endpoints of its interval of existence (regardless of the uniqueness condition) so as to remain a solution at the endpoints as well. To justify this, one must specify what is meant by a solution of an ODE at the ends of a segment:

1. Approach 1. By a solution of (1)1 on a segment we mean a function satisfying the equation at the endpoints in the sense of one-sided derivatives. Then the possibility of extending the definition of some solution, say, to the right endpoint of its interval of existence (so that it is defined on (t-, t+]) means that the IC has an endpoint inside B and φ ∈ C1((t-, t+]). But then, solving the Cauchy problem x(t+) = φ(t+) for (1) and finding its solution, we obtain an extension of φ beyond t+ (at the point t+ both one-sided derivatives exist and are equal to f(t+, φ(t+)), so there is an ordinary derivative there), i.e. φ was not an NR.

2. Approach 2. If by a solution of (1)1 on a segment we mean a function that is only continuous at the endpoints, but such that the endpoints of the IC lie in B (even if the equation is not required to hold at the endpoints), then the same reasoning goes through, only in terms of the corresponding integral equation (see details).

Thus, immediately limiting ourselves to only open intervals as sets of definitions of solutions, we did not violate the generality (but only avoided unnecessary fiddling with one-sided derivatives, etc.).

As a result, we have answered Question 3 posed at the beginning of § 4: if the uniqueness condition (for example, of Osgood or Cauchy-Picard) holds, then the NR of the Cauchy problem is unique. If the uniqueness condition is violated, there may be many NRs of the Cauchy problem, each with its own interval of existence. Any solution of (1) (or simply of (1)1) can be extended to an NR.

To answer questions 1 and 2 it is necessary to consider not the variable t by itself, but the behaviour of the IC in the space Rn+1. The question of how the IC behaves “near the ends” is answered by the theorem below. Note that the interval of existence has endpoints, but the IC may not have them (an endpoint of the IC inside B never exists - see the Remark above; an endpoint on the boundary of B may also fail to exist - see below).

Theorem (on leaving the compact).

We formulate it under the conditions of local uniqueness, but this is not necessary - see, where the TPK is formulated as a criterion for an NR.

Under the conditions of TK-P, the graph of any NR of equation (1)1 leaves any compact set K ⊂ B, i.e. for every compact K ⊂ B there is τ ∈ (t-, t+) such that (t, φ(t)) ∉ K for t > τ (and similarly near t-).

Example. K = {(t, x) ∈ B | ρ((t, x), ∂B) ≥ ε}.

Comment. Thus, near t± the IC of an NR approaches ∂B: ρ((t, φ(t)), ∂B) → 0 as t → t± - the process of extending the solution cannot terminate strictly inside B.

Here it is useful, as an exercise, to prove that the distance between disjoint closed sets, one of which is compact, is positive.

Proof. Fix K ⊂ B. Take any ε0 ∈ (0, ρ(K, ∂B)) (if B = Rn+1, we set ρ(K, ∂B) = +∞ by definition). The set K1 = {(t, x) | ρ((t, x), K) ≤ ε0/2} is also compact in B, so F = max over K1 of |f| exists. Choose the numbers T and R small enough that any cylinder {|t - t*| ≤ T, |x - x*| ≤ R} with (t*, x*) ∈ K lies in K1; for example, it suffices to take T² + R² ≤ ε0²/4. Then the Cauchy problem (2) with data (t*, x*) ∈ K has, by TK-P, a solution on an interval no narrower than (t* - T0, t* + T0), where T0 = min(T, R/F), for all (t*, x*) ∈ K.

Now we can take τ = t+ - T0 as the required value. Indeed, we must show that if (t*, φ(t*)) ∈ K, then t- + T0 ≤ t* ≤ t+ - T0. Let us show, for example, the second inequality. The solution of the Cauchy problem (2) with x(t*) = φ(t*) exists to the right at least up to the point t* + T0, while φ is an NR of the same problem, which, by uniqueness, is an extension of it; therefore t* + T0 ≤ t+.

Thus, the graph of an NR always “reaches ∂B”, so that the interval of existence of the NR depends on the geometry of the IC.

For example:

Statement. Let B = (a, b) × Rn (the interval finite or infinite), let f satisfy the conditions of TK-P in B, and let φ be the NR of problem (1) with t0 ∈ (a, b). Then either t+ = b or |φ(t)| → +∞ as t → t+ (and similarly for t-).

Proof. So let t+ < b; then t+ < +∞.

Consider the compact set K = [t0, t+] × {|x| ≤ R} ⊂ B. For any R, by the TPK there is τ(R) < t+ such that for t ∈ (τ(R), t+) the point (t, φ(t)) ∉ K. But since t < t+, this is possible only on account of |φ(t)| > R. And this means exactly |φ(t)| → +∞ as t → t+.

In this particular case we see that if f is defined “for all x”, then the interval of existence of an NR can be smaller than the maximal possible (a, b) only because the NR tends to ∞ when approaching the ends of the interval (t-, t+) (in the general case - to the boundary of B).

An exercise. Generalize the last Statement to the case B = (a, b) × Ω, where Ω ⊂ Rn is an arbitrary domain.

Comment. It should be understood that |φ(t)| → +∞ does not mean that each individual component φk(t) tends to infinity.

Thus, we have answered Question 2 (cf. the example at the beginning of § 4): the IC reaches ∂B, but its projection onto the t-axis may not reach the ends of the projection of B onto the t-axis. Question 1 remains: are there signs by which, without solving the ODE, one can judge whether the solution can be extended to the “widest possible interval”? We know that for linear ODEs this extension is always possible, while in the Example at the beginning of § 4 it is impossible.

Let us first consider, for illustration, a particular case: the ERP with n = 1,

x' = a(t) b(x). Note that the convergence of the improper integral ∫ h(s) ds (improper because of an infinite limit or because of a singularity of h at a point) does not depend on the choice of the finite limits of integration within the interval. Therefore in what follows we will simply write ∫ h(s) ds when speaking of the convergence or divergence of this integral.

this could be done already in Osgood's theorem and in related statements.

Statement. Let a ∈ C(α, β), b ∈ C(γ, +∞), both functions positive on their intervals. Let the Cauchy problem x' = a(t) b(x), x(t0) = x0 (where t0 ∈ (α, β), x0 ∈ (γ, +∞)) have an NR x = x(t) on the interval (t-, t+) ⊂ (α, β). Then either t+ = β, or x(t+) = +∞ together with ∫ from x0 to +∞ of dξ/b(ξ) = ∫ from t0 to t+ of a(s) ds < +∞ (and similarly for t-).

Corollary. If a ≡ 1 and ∫ to +∞ of dξ/b(ξ) = +∞, then t+ = +∞.

Proof (of the Statement). Note that x is monotonically increasing.

An exercise. Prove.

Therefore, there exists x(t+) = lim as t → t+ of x(t) ≤ +∞. We consider the possible cases. Case 1: t+ < β, x(t+) < +∞ - impossible by the TPK, since x is an NR.

Both integrals, ∫ from x0 to x(t+) of dξ/b(ξ) and ∫ from t0 to t+ of a(s) ds, are equal, hence either both finite or both infinite.

An exercise. Complete the proof.

Rationale for the teacher. As a result, we get that in case 3: a (s) ds +, and in case 4 (if it is realized at all) the same thing.

Thus, for the simplest ODEs with n = 1 of the form x' = f(x), extendability of solutions up to +∞ is determined by the convergence of the integral ∫ to +∞ of ds/f(s) (on such, so-called

autonomous, equations see also Part 3).

Example. For f(x) = x^α with α ≤ 1 (in particular the linear case α = 1) and for f(x) = x ln x we can guarantee the extension of (positive) solutions up to +∞. For f(x) = x^α and f(x) = x ln^α x with α > 1 the solutions “blow up in finite time”.
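These examples are read off from the integral criterion just stated: for positive solutions of x' = f(x), global existence to the right is governed by the divergence of ∫ to +∞ of ds/f(s), and

```latex
\int^{+\infty}\frac{ds}{s^{\alpha}} = +\infty \iff \alpha \le 1,
\qquad
\int^{+\infty}\frac{ds}{s\ln s} = +\infty,
\qquad
\int^{+\infty}\frac{ds}{s\ln^{\alpha}\! s} < +\infty \ \ (\alpha > 1).
```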

In general the situation is determined by many factors and is not so simple, but the importance of “the rate of growth of f in x” remains. For n > 1 it is difficult to formulate extendability criteria, but sufficient conditions exist. As a rule they are established with the help of so-called a priori estimates of solutions.

Definition. Let h ∈ C(α, β), h ≥ 0. One says that for the solutions of a certain ODE the a priori estimate (AO) |x(t)| ≤ h(t) holds on (α, β) if every solution of this ODE satisfies this estimate on that part of the interval (α, β) where it is defined (i.e. it is not assumed in advance that the solutions are defined on the whole interval (α, β)).

But it turns out that the presence of an AO guarantees that the solutions are nevertheless defined on the whole of (α, β) (and hence satisfy the estimate on the whole interval), so that the a priori estimate turns into an a posteriori one:

Theorem. Let the Cauchy problem (1) satisfy the conditions of TK-P, let its solutions admit an AO on the interval (α, β) with some h ∈ C(α, β), and let the curvilinear cylinder {|x| ≤ h(t), t ∈ (α, β)} lie in B. Then the NR of (1) is defined on all of (α, β) (and hence satisfies the AO).

Proof. Let us prove that t+ ≥ β (for t- the argument is similar). Suppose t+ < β. Consider the compact set K = {|x| ≤ h(t), t ∈ [t0, t+]} ⊂ B. By the TPK, as t → t+ the graph point (t, x(t)) leaves K, which is impossible because of the AO.

Thus, to prove the extendibility of the solution to some interval, it suffices to formally estimate the solution over the entire required interval.

Analogy: the measurability of a function according to Lebesgue and the formal estimate of the integral imply the real existence of the integral.

Here are some examples of situations how this logic works. Let's start with an illustration of the above thesis about "the growth of f in x is rather slow."

Statement. Let B = (α, β) × Rn, let f satisfy the conditions of TK-P in B, and let |f(t, x)| ≤ a(t) b(|x|), where a and b satisfy the conditions of the previous Statement with γ = 0 and ∫ to +∞ of dξ/b(ξ) = +∞. Then the NR of problem (1) exists on (α, β) for all t0 ∈ (α, β), x0 ∈ Rn.

Lemma. If φ and ψ are continuous together with their derivatives, φ(t0) ≤ ψ(t0), and φ' - g(t, φ) < ψ' - g(t, ψ), then φ < ψ for t > t0.

Proof. First note that the inequality holds in some right neighbourhood (t0, t0 + δ): if φ(t0) < ψ(t0) this is immediately obvious, and otherwise (if φ(t0) = ψ(t0)) we have (ψ - φ)'(t0) > g(t0, ψ(t0)) - g(t0, φ(t0)) = 0, which again gives the required.

Now suppose there is t1 > t0 such that φ(t1) ≥ ψ(t1). By an obvious argument we can find t2 ∈ (t0, t1] such that φ(t2) = ψ(t2) and φ < ψ on (t0, t2); but then at the point t2 we have φ' ≥ ψ' while the assumed strict inequality gives φ' < ψ' there - a contradiction.

g is any, and in fact you only need, C, and wherever \u003d, there. But in order not to hammer our heads, we will consider it as in Lemma. Here is a strict inequality, but a nonlinear ODE, and there is also a so-called.

Note to the teacher. Inequalities of this kind, as in the Lemma, are called Chaplygin-type inequalities (NP). It is easy to see that the uniqueness condition was not needed in the Lemma, so such a “strict NP” is true also within the framework of Peano's theorem. A “weak NP” is obviously false without uniqueness, since equality is a special case of weak inequality. Finally, a “non-strict NP” is true within the framework of the uniqueness condition, but it can be proved only locally, with the help of an IM.

Proof (of the Statement). Let us prove that t+ = β (t- = α similarly). Suppose t+ < β; then by the Statement above |x(t)| → +∞ as t → t+, so we may assume x ≠ 0 on [t1, t+). If we prove an AO |x| ≤ h on this interval with a suitable h ∈ C, we obtain a contradiction with the TPK.

Consider now an autonomous system x' = f(x), (4), where f ∈ C1(B(0, R)) and f(0) = 0 (the ball is taken closed for convenience).

The Cauchy problem x(0) = 0 then has the unique NR x ≡ 0 on R.

Let us indicate a sufficient condition on f under which the existence of an NR on R+ can be guaranteed for all sufficiently small x0 = x(0). To do this, assume that (4) has a so-called Lyapunov function, i.e. a function V such that:

1. V ∈ C1(B(0, R));

2. sgn V(x) = sgn |x|;

3. ∇V(x) · f(x) ≤ 0 in B(0, R).

Let us check that the following conditions A and B hold:

A. Consider the Cauchy problem x' = f(x), x(t1) = x1, (5), where |x1| ≤ R/2. Construct the cylinder R × B(0, R) - the domain of definition of the (autonomous) right-hand side, where it is bounded and of class C1, so that F = max |f| exists. By TK-P there is a solution of (5) defined on the interval (t1 - T0, t1 + T0), where T0 = min(T, R/(2F)). By choosing T sufficiently large one can achieve T0 = R/(2F). It is important that T0 does not depend on the choice of (t1, x1), as long as |x1| ≤ R/2.

B. As long as the solution of (5) is defined and remains in the ball B(0, R), we can argue as follows. We have:

(V(x(t)))' = ∇V(x(t)) · f(x(t)) ≤ 0, i.e. V(x(t)) ≤ V(x1) ≤ M(r) := max over |y| ≤ r of V(y), where r = |x1|; put also m(r) = min over r ≤ |y| ≤ R of V(y). It is clear that m and M are non-decreasing and continuous in r, m(0) = M(0) = 0, and outside zero they are positive. Therefore there is R* > 0 such that M(R*) < m(R/2). If |x1| ≤ R*, then V(x(t)) ≤ V(x1) ≤ M(R*) < m(R/2), whence |x(t)| < R/2. Note that R* < R/2.

Now we can formulate a theorem which, from items A and B, deduces the global existence of solutions of (4):

Theorem. If (4) has a Lyapunov function in B(0, R), then for every x0 ∈ B(0, R*) (where R* is defined above) the NR of the Cauchy problem x(t0) = x0 for system (4) (with any t0) is defined up to +∞.

Proof. By item A the solution can be constructed on [t0, t1], where t1 = t0 + T0/2. This solution lies in B(0, R), and we apply item B to it, so that |x(t1)| < R/2. We apply item A again and obtain a solution on [t1, t2], where t2 = t1 + T0/2, i.e. the solution is now constructed on [t0, t2]. We apply item B to this solution and obtain |x(t2)| < R/2, and so on. In a countable number of steps we obtain a solution on [t0, +∞).

§ 5. Dependence of the solution of an ODE on parameters

Consider the Cauchy problem (1) with right-hand side and initial data depending on a parameter μ ∈ Rk. If for some μ, t0(μ), x0(μ) this Cauchy problem has an NR, denote it x(t, μ). The question arises: how to study the dependence of x on μ? This question is important in view of various applications (it will arise especially in Part 3), one of which (though perhaps not the most important) is the approximate solution of ODEs.

Example. Consider a Cauchy problem of the form (2) whose NR exists and is unique, as follows from TK-P, but cannot be expressed in elementary functions. How then to investigate its properties? One way is as follows: note that (2) is “close” to the problem y' = y, y(0) = 1, whose solution is easy to find: y(t) = e^t. We may expect that x(t) ≈ y(t) = e^t. This idea is stated precisely as follows: consider the problem (3) depending on a parameter ε; at ε = 1/100 it is (2), and at ε = 0 it is the problem for y. If we prove that x = x(t, ε) is continuous in ε (in a certain sense), then we obtain that x(t, ε) → y(t) as ε → 0, which means x(t, 1/100) ≈ y(t) = e^t.

True, it remains unclear how close x is to y, but proving the continuity of x with respect to the parameter is the first necessary step, without which further progress is impossible.

Similarly, it is useful to study the dependence on parameters in the initial data. As we shall see later, this dependence can easily be reduced to a dependence on a parameter in the right-hand side of the equation, so for now we restrict ourselves to a problem of the form (4) with fixed initial data. Let f ∈ C(D), where D is a domain in Rn+k+1; let f be Lipschitz in x on every compact set in D convex in x (for example, fx ∈ C(D) suffices). Fix (t0, x0). Denote M = {μ ∈ Rk | (t0, x0, μ) ∈ D} - the set of admissible μ (those for which problem (4) makes sense); note that M is open. We assume (t0, x0) chosen so that M ≠ ∅. By TK-P, for every μ ∈ M there is a unique NR of problem (4) - the function x = φ(t, μ), defined on an interval t ∈ (t-(μ), t+(μ)).

Strictly speaking, since it depends on many variables, it is necessary to write (4) as follows:

where (5)1 holds on the set G = {(t, μ) | μ ∈ M, t ∈ (t-(μ), t+(μ))}. However, the difference between the symbols d/dt and ∂/∂t is purely psychological (their use depends on the equally psychological notion of “fixing” the other variables). Thus the set G is the natural maximal domain of definition of the function φ, and it is precisely on G that the question of continuity should be studied.

We need an auxiliary result:

Lemma (Gronwall). Let the function ψ ∈ C([t0, T]), ψ ≥ 0, satisfy for all t ∈ [t0, T] the estimate ψ(t) ≤ ψ0 + ∫ from t0 to t of (A(s)ψ(s) + B(s)) ds, where A ≥ 0. Then for all t ∈ [t0, T] it is true that ψ(t) ≤ ψ0 exp(∫ from t0 to t of A(s) ds) + ∫ from t0 to t of B(s) exp(∫ from s to t of A(σ) dσ) ds. Note to the teacher. When giving the lecture, you need not memorize this formula in advance; leave space and write it in after the conclusion.

But then keep this formula in plain sight, since it will be necessary in the ToNZ.

Proof. Set h(t) equal to the right-hand side of the hypothesis; then ψ ≤ h, h' = Aψ + B ≤ Ah + B, h(t0) = ψ0, whence we obtain what is required by integrating this differential inequality.
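Spelling out the last step (a sketch, with h as in the proof): multiplying the inequality h' ≤ Ah + B by exp(-∫ from t0 to t of A) and integrating,

```latex
\Bigl(h(t)\,e^{-\int_{t_0}^{t}A(s)\,ds}\Bigr)' \le B(t)\,e^{-\int_{t_0}^{t}A(s)\,ds}
\;\Longrightarrow\;
h(t) \le \psi_{0}\,e^{\int_{t_0}^{t}A(s)\,ds}
 + \int_{t_0}^{t} B(s)\,e^{\int_{s}^{t}A(\sigma)\,d\sigma}\,ds,
```

and since ψ ≤ h, the bound stated in the lemma follows.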

The meaning of this lemma: differential equation and inequality, the relationship between them, integral equation and inequality, the relationship between all of them, differential and integral lemmas of Gronwall and the relationship between them.

Comment. It is possible to prove this lemma under more general assumptions about, A and B, but we do not need this yet, but will be done in the MFM course (for example, it is easy to see that we did not use the continuity of A and B, etc.).

Now we are ready to clearly state the result:

Theorem (ToNZ, on continuous dependence). Under the assumptions made about f and in the notation introduced above, we can assert that G is open and φ ∈ C(G).

Comment. It is clear that the set M, generally speaking, is not connected, so that G may also be disconnected.

Note to the teacher. However, if we included (t0, x0) in the number of parameters, then the connection would be - this is done in.

Proof. Let (t, μ) ∈ G. It is necessary to prove that:

Let, for definiteness, t > t0. We have μ ∈ M, so that φ(·, μ) is defined on (t-(μ), t+(μ)) ∋ t, t0, and hence on some segment [t0, t1] ∋ t, on which the point (t, φ(t, μ), μ) runs through a compact curve in D (parallel to the hyperplane {μ = const}). This means that a set of the following form (its definition must be kept before one's eyes constantly!)

is also compact in D for sufficiently small a and b (convex in x), so that the function f is Lipschitz in x:

[This estimate must be kept in front of one's eyes constantly!] and is uniformly continuous in all variables, so that, all the more, |f(t, x, μ1) - f(t, x, μ2)| ≤ ω(|μ1 - μ2|) for all (t, x, μ1), (t, x, μ2) in this compact set.

[This estimate must be kept in front of one's eyes constantly!] Consider an arbitrary μ1 with |μ1 - μ| ≤ b and the corresponding solution φ(t, μ1). The slice of the compact set by {μ = μ1} is compact in D ∩ {μ = μ1}, and for t = t0 the point (t0, φ(t0, μ1), μ1) = (t0, x0, μ1) = (t0, φ(t0, μ), μ1) belongs to it; by the TPK, as t → t+(μ1) the point (t, φ(t, μ1), μ1) leaves it. Let t2 > t0 (t2 < t+(μ1)) be the very first value at which the said point leaves it.

By construction, t2 ∈ (t0, t1]. Our task is to show that t2 = t1 under additional constraints on |μ1 - μ|. Now let t3 ∈ (t0, t2]. We have (for all such t3 all quantities used below are defined by construction):

φ(t3, μ1) - φ(t3, μ) = ∫ from t0 to t3 of (f(t, φ(t, μ1), μ1) - f(t, φ(t, μ), μ)) dt. Let us try to prove that this quantity is less than a in absolute value.

where the integrand is estimated as follows:

± f (t, (t,),), but not ± f (t, (t,),), since the difference | (t, 1) (t,) | there is no estimate yet, so (t, (t, 1),) is unclear, but for | 1 | is, and (t, (t,), 1) is known.

so that in the end |φ(t3, μ1) - φ(t3, μ)| ≤ ∫ from t0 to t3 of ( K|φ(t, μ1) - φ(t, μ)| + ω(|μ1 - μ|) ) dt.

Thus the function ψ(t3) = |φ(t3, μ1) - φ(t3, μ)| (a continuous function) satisfies the conditions of Gronwall's lemma with A(s) ≡ K ≥ 0, B(s) ≡ ω(|μ1 - μ|), T = t2, ψ0 = 0, so this lemma yields [this estimate must be kept in front of one's eyes at all times!] a bound smaller than a, provided we take |μ1 - μ| ≤ δ1(t1). We will assume δ1(t1) ≤ b. All our reasoning is valid for all t3 ∈ (t0, t2].

Thus, with this choice of μ1, at t3 = t2 we still have |φ(t2, μ1) - φ(t2, μ)| < a and also |μ1 - μ| ≤ b. Hence the exit of (t2, φ(t2, μ1), μ1) from the compact set is possible only because t2 = t1. But this means, in particular, that φ(·, μ1) is defined on the whole segment [t0, t1], i.e. t1 < t+(μ1), and all points of the form (t, μ1) lie in G if t ∈ [t0, t1], |μ1 - μ| ≤ δ1(t1).

That is, although t+ depends on μ, the segment [t0, t1] remains to the left of t+(μ1) for μ1 sufficiently close to μ. In the figure, the existence of numbers t4 < t0 and δ2(t4) is shown in a similar way for t < t0. If t > t0, then the point (t, μ) has a ball neighbourhood contained in G; similarly for t < t0; and if t = t0, then both cases apply, so that (t0, μ) has a ball neighbourhood of radius δ3 = min(δ1, δ2) contained in G. It is important that for a fixed (t, μ) one can find t1 = t1(t, μ) with t1 > t > t0 (or, respectively, t4), and δ1(t1) = δ1(t, μ) > 0 (or, respectively, δ2), so the choice δ0 = δ0(t, μ) is clear (since a ball can be inscribed in the resulting cylindrical neighbourhood).

In fact a more subtle property has been proved: if an NR is defined on a certain segment, then all NRs with sufficiently close parameters are also defined on it (i.e.

all slightly perturbed NRs). Conversely, this property follows from the openness of G, as will be shown below, so these are equivalent formulations.

Thus, we have proved item 1.

If we stay in the indicated cylinder in space, then the estimate |φ(t3, μ1) - φ(t3, μ)| ≤ ε holds for |μ1 - μ| ≤ δ4(ε, t, μ). At the same time |φ(t3, μ) - φ(t, μ)| ≤ ε for |t3 - t| ≤ δ5(ε, t, μ), in view of continuity in t. As a result, for (t3, μ1) ∈ B((t, μ), δ) we have |φ(t3, μ1) - φ(t, μ)| ≤ 2ε, where δ = min(δ4, δ5). This proves item 2.

"REGULATIONS on the new system of intraschool education quality assessment MBOU Kamyshinskaya secondary school 1. General provisions 1.1. The regulation on the intra-school system for assessing the quality of education (hereinafter referred to as the regulation) establishes uniform requirements for the implementation of the intra-school system for assessing the quality of education (hereinafter referred to as SHSOCO) in the municipal budgetary educational institution of the Kamyshin secondary school (hereinafter referred to as the school). 1.2. The practical implementation of SHSOCO is built in accordance with ... "

"MINISTRY OF HEALTH OF THE REPUBLIC OF UZBEKISTAN TASHKENT MEDICAL ACADEMY DEPARTMENT OF HEALTHCARE OPPERS WITH CLINICAL ALLERGOLOGY APPROVED Vice-Rector for Academic Affairs Prof. O.R. Teshaev _ 2012 RECOMMENDATIONS FOR THE COMPOSITION OF TEACHING AND METHODOLOGICAL DEVELOPMENTS FOR PRACTICAL EXERCISES ON A UNIFIED METHODOLOGICAL SYSTEM Methodical instructions for teachers of medical universities Tashkent - 2012 MINISTRY OF HEALTH OF THE REPUBLIC OF UZBEKISTAN MEDICAL CENTER FOR DEVELOPMENT "

"Federal Agency for Education Gorno-Altai State University AP Makoshev POLITICAL GEOGRAPHY AND GEOPOLITICS Study Guide Gorno-Altaisk RIO of Gorno-Altai State University 2006 Published by the decision of the Editorial and Publishing Council of Gorno-Altai State University. Makoshev AP POLITICAL GEOGRAPHY AND GEOPOLITICS. Study guide. - Gorno-Altaysk: RIO GAGU, 2006.-103 p. The teaching aid is developed according to the educational ... "

"A.V. Novitskaya, L.I. Nikolaeva SCHOOL OF THE FUTURE MODERN EDUCATIONAL PROGRAM Stages of life CLASS 1 METHODOLOGICAL GUIDE FOR TEACHERS OF ELEMENTARY CLASSES Moscow 2009 UDC 371 (075.8) LBC 74.00 N 68 Copyright protected legally, reference to the authors is required. Novitskaya A.V., Nikolaeva L.I. Н 68 Modern educational program Stages of life. - M .: Avvallon, 2009 .-- 176 p. ISBN 978 5 94989 141 4 This brochure is addressed primarily to teachers, but undoubtedly for its information ... "

"Educational-methodical complex RUSSIAN BUSINESS LAW 030500 - Jurisprudence Moscow 2013 Author - compiler of the Department of Civil Law Disciplines Reviewer - The educational-methodical complex is considered and approved at a meeting of the Department of Civil Law Disciplines Protocol No. from _2013. Russian business law: educational and methodological ... "

"A. A. Yamashkin V. V. Ruzhenkov Al. A. Yamashkin GEOGRAPHY OF THE REPUBLIC of MORDOVIA Textbook SARANSK PUBLISHING HOUSE OF MORDOVSK UNIVERSITY 2004 UDC 91 (075) (470.345) BBK D9 (2R351-6Mo) Ya549 Reviewers: Department of Physical Geography, Voronezh State Pedagogical University; Doctor of Geographical Sciences, Professor A. M. Nosonov; teacher of the school-complex number 39 of Saransk A. V. Leontyev Published by the decision of the educational and methodological council of the faculty of pre-university training and secondary ... "

Alexander Viktorovich Abrosimov Date of birth: November 16, 1948 (1948 11 16) Place of birth: Kuibyshev Date of death ... Wikipedia

I Differential equations equations containing the desired functions, their derivatives of various orders and independent variables. The theory of D. at. arose at the end of the 17th century. influenced by the needs of mechanics and other natural science disciplines, ... ... Great Soviet Encyclopedia

Ordinary differential equations (ODE) is a differential equation of the form where is an unknown function (possibly a vector function, then, as a rule, also a vector function with values \u200b\u200bin the space of the same dimension; in this ... ... Wikipedia

Wikipedia has articles about other people with this surname, see Yudovich. Victor Iosifovich Yudovich Date of birth: October 4, 1934 (1934 10 04) Place of birth: Tbilisi, USSR Date of death ... Wikipedia

Differential - (Differential) Differential definition, function differential, differential lock Information about differential definition, function differential, differential lock Contents Contents math Informal description ... ... Investor encyclopedia

One of the basic concepts in the theory of partial differential equations. The role of X is manifested in the essential properties of these equations, such as local properties of solutions, solvability of various problems, their correctness, etc. Let ... ... Encyclopedia of Mathematics

An equation in which the unknown is a function of one independent variable, and this equation includes not only the unknown function itself, but also its derivatives of various orders. The term differential equations was proposed by G. ... ... Encyclopedia of Mathematics

Trenogin Vladilen Aleksandrovich VA Trenogin at a lecture at MISiS Date of birth ... Wikipedia

Trenogin, Vladilen Aleksandrovich Trenogin Vladilen Aleksandrovich VA Trenogin at a lecture at MISiS Date of birth: 1931 (1931) ... Wikipedia

Gaussian equation, linear ordinary differential equation of the second order or, in self-adjoint form, Variables and parameters in the general case can take any complex values. After substitution, the reduced form is obtained ... ... Encyclopedia of Mathematics


Close