Analytical Mechanics
A1.2 Some basic results on ordinary differential equations

Let λ_1, λ_2 ∈ C be the eigenvalues of A. We distinguish two cases: Case I: the eigenvalues of A are real; Case II: the eigenvalues of A are complex conjugates.

Case I: All eigenvalues of A are real; we need to distinguish various subcases.

(I.1) λ_1 < λ_2 < 0 (attractive node). Let v_1, v_2 be the eigenvectors corresponding to λ_1, λ_2. Setting y(t) = η_1(t) v_1 + η_2(t) v_2 we find η̇_i = λ_i η_i, i = 1, 2. Therefore

y(t) = c_1 e^{t λ_1} v_1 + c_2 e^{t λ_2} v_2,

and the constants can be determined by decomposing the initial condition y(0) = c_1 v_1 + c_2 v_2 in the basis of R^2 given by v_1 and v_2. When t → ∞ we have y(t) → 0, and the trajectory in the phase plane is tangent at y = 0 to v_2 (except if y(0) = c_1 v_1).

(I.2) 0 < λ_1 < λ_2 (repulsive node). The discussion for case (I.1) can be repeated for the limit t → −∞.
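The behaviour in case (I.1) can be checked numerically. The sketch below uses an illustrative matrix A = [[−2, 0], [1, −1]] with eigenvalues −2 and −1 (a sample choice, not taken from the text): it compares the closed-form solution with a Runge–Kutta integration and verifies that for large t the trajectory aligns with the eigenvector of the slow eigenvalue.

```python
import math

# Illustrative attractive node (not from the text): eigenvalues -2 < -1 < 0.
A = [[-2.0, 0.0], [1.0, -1.0]]
lam1, lam2 = -2.0, -1.0
v1, v2 = (1.0, -1.0), (0.0, 1.0)  # eigenvectors: A v1 = -2 v1, A v2 = -1 v2

def closed_form(c1, c2, t):
    """y(t) = c1 e^{t lam1} v1 + c2 e^{t lam2} v2."""
    e1, e2 = math.exp(lam1 * t), math.exp(lam2 * t)
    return (c1 * e1 * v1[0] + c2 * e2 * v2[0],
            c1 * e1 * v1[1] + c2 * e2 * v2[1])

def rk4(y, t_end, h=1e-3):
    """Integrate y' = A y with the classical 4th-order Runge-Kutta scheme."""
    f = lambda w: (A[0][0] * w[0] + A[0][1] * w[1],
                   A[1][0] * w[0] + A[1][1] * w[1])
    t = 0.0
    while t < t_end - 1e-12:
        k1 = f(y)
        k2 = f((y[0] + 0.5 * h * k1[0], y[1] + 0.5 * h * k1[1]))
        k3 = f((y[0] + 0.5 * h * k2[0], y[1] + 0.5 * h * k2[1]))
        k4 = f((y[0] + h * k3[0], y[1] + h * k3[1]))
        y = (y[0] + h * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6,
             y[1] + h * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6)
        t += h
    return y

c1, c2 = 1.0, 1.0
err = math.dist(rk4(closed_form(c1, c2, 0.0), 1.0), closed_form(c1, c2, 1.0))

# For large t the direction of y(t) approaches v2 = (0, 1):
yT = closed_form(c1, c2, 8.0)
off_axis = abs(yT[0]) / math.hypot(yT[0], yT[1])
```

Here `err` measures the agreement between the two solutions at t = 1, and `off_axis` measures the residual component orthogonal to v_2 at t = 8.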
(I.3) λ_1 < 0 < λ_2 (saddle). The solutions are y(t) = c_1 e^{t λ_1} v_1 + c_2 e^{t λ_2} v_2, and hence are asymptotic to the direction of v_2 for t → +∞ and to the direction of v_1 for t → −∞.

(I.4) λ_1 = λ_2 and A diagonalisable (star node). In this case

A = λ_1 ( 1 0 ; 0 1 ),

every vector of the plane is an eigenvector, and the trajectories are rays of the form y(t) = y(0) e^{λ_1 t}.

(I.5) λ_1 = λ_2 and A non-diagonalisable (Jordan node). By an invertible linear transformation A can be reduced to a Jordan block

( λ_1 k ; 0 λ_1 ), k ≠ 0.

If λ_1 ≠ 0, the trajectories of the resulting system have equation x_1 = (k/λ_1) x_2 log|x_2/c|. The case λ_1 = 0 is trivial.

Case II: Since the eigenvalue equation is λ² − Tr(A) λ + det(A) = 0, setting θ = (1/2) Tr(A) and ω = √(det A − θ²) we obtain λ_1 = θ + iω, λ_2 = θ − iω. Note that in this case det A > θ².
A direct computation shows that

S^{−1} ( θ+iω 0 ; 0 θ−iω ) S = ( θ −ω ; ω θ ), with S = ( 1 i ; i 1 ).

We can therefore reduce to the case

A = ( θ −ω ; ω θ ),

where complex numbers do not appear. The corresponding differential system is

ẋ_1 = θ x_1 − ω x_2, ẋ_2 = ω x_1 + θ x_2.

To study the trajectories in the plane (x_1, x_2) it is convenient to change to polar coordinates (r, ϕ), in which the equations decouple:

ṙ = θ r, ϕ̇ = ω.

Hence we simply obtain dr/dϕ = (θ/ω) r, and finally r = r_0 e^{(θ/ω)ϕ}.
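Both the similarity relation involving S and the spiral law r = r_0 e^{(θ/ω)ϕ} are easy to verify numerically; the values θ = −0.3, ω = 2.0 below are arbitrary sample choices, not taken from the text.

```python
import math

theta, omega = -0.3, 2.0          # arbitrary sample values with theta < 0
l1, l2 = complex(theta, omega), complex(theta, -omega)

S = [[1, 1j], [1j, 1]]
detS = S[0][0] * S[1][1] - S[0][1] * S[1][0]          # = 2
Sinv = [[S[1][1] / detS, -S[0][1] / detS],
        [-S[1][0] / detS, S[0][0] / detS]]
D = [[l1, 0], [0, l2]]

def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

R = matmul(matmul(Sinv, D), S)    # expected: [[theta, -omega], [omega, theta]]

# One revolution of the spiral r = r0 e^{(theta/omega) phi}: the radius
# shrinks when theta < 0 (attractive focus).
r0 = 1.0
r_one_turn = r0 * math.exp((theta / omega) * 2 * math.pi)
```

The matrix R comes out real, equal to the rotation-dilation block, and after one turn the radius has contracted by the factor e^{2πθ/ω}.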
Now the classification is evident.

(II.1) θ = 0 (centre). The trajectories are circles.

(II.2) θ ≠ 0 (focus). The trajectories are spirals, converging towards the centre if θ < 0 (attractive case) and moving away from the centre if θ > 0 (repulsive case).

Remark A1.5 A particularly interesting case for mechanics is when A is Hamiltonian, i.e. a 2 × 2 matrix with zero trace:

A = ( a b ; c −a ),

which corresponds to the quadratic Hamiltonian H = (1/2)(c x_1² − 2a x_1 x_2 − b x_2²). The equation for the eigenvalues is λ² + det A = 0. Therefore we have the following cases:

(1) det A = −(a² + bc) > 0, the eigenvalues are purely imaginary, complex conjugates (centre);
(2) det A < 0, the eigenvalues are real and opposite (saddle point);
(3) det A = 0, the eigenvalues are both zero (finish the discussion as an exercise).

In the n-dimensional case, suppose that u_1, . . . , u_n is a basis of R^n of eigenvectors of A. Exploiting the invariance of the eigenspaces of A under e^{tA}, we can better understand the behaviour of the solutions of (A1.4) by introducing

E^s = { v ∈ R^n : v = Σ_{i=1}^{n_s} v_i u_i, where A u_i = λ_i u_i, λ_i < 0, i = 1, . . . , n_s },
E^u = { v ∈ R^n : v = Σ_{i=n_s+1}^{n_s+n_u} v_i u_i, where A u_i = λ_i u_i, λ_i > 0, i = n_s + 1, . . . , n_s + n_u },
E^c = { v ∈ R^n : v = Σ_{i=n_s+n_u+1}^{n} v_i u_i, where A u_i = λ_i u_i, λ_i = 0, i = n_s + n_u + 1, . . . , n }.

These subspaces of R^n are invariant under e^{tA} and are called, respectively, the stable subspace E^s, the unstable subspace E^u and the central subspace E^c. Clearly R^n = E^s ⊕ E^u ⊕ E^c.
Example A1.4 Assume

A = ( −1 −1 0 ; 1 −1 0 ; 0 0 2 ).

Then E^u = {(0, 0, y_3), y_3 ∈ R}, corresponding to the eigenvalue λ_3 = 2, and E^s = {(y_1, y_2, 0), (y_1, y_2) ∈ R²}, corresponding to the eigenvalues λ_1 = −1 − i, λ_2 = −1 + i. The restriction A|_{E^s} has an attractive focus at the origin (see Fig. A1.2).
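Example A1.4 lends itself to a quick hand-rolled numerical check (no eigenvalue library needed): e_3 is an eigenvector for λ_3 = 2, the (x_1, x_2)-plane is invariant, and the eigenvalues of the restricted 2 × 2 block are the roots of λ² + 2λ + 2 = 0.

```python
import cmath

A = [[-1, -1, 0],
     [ 1, -1, 0],
     [ 0,  0, 2]]

# A e3 = 2 e3, hence E^u = {(0, 0, y3), y3 in R}.
Ae3 = [row[2] for row in A]

# The (x1, x2)-plane is invariant: the third row vanishes on it.
plane_leak = [A[2][0], A[2][1]]

# Eigenvalues of the restriction [[-1, -1], [1, -1]]: roots of l^2 + 2l + 2 = 0.
disc = cmath.sqrt(2 * 2 - 4 * 1 * 2)
lam_plus = (-2 + disc) / 2
lam_minus = (-2 - disc) / 2
```

The roots come out as −1 ± i, with negative real part, consistent with the attractive focus on E^s.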
Fig. A1.2

Definition A1.1 A point x_0 is called singular if X(x_0) = 0.
Theorem A1.4 (Rectification) If x_0 is not a singular point, there exist a neighbourhood V_0 of x_0 and an invertible coordinate transformation y = y(x), defined on V_0 and of class C¹, which transforms the equation (A1.1) into

ẏ_1 = 1, ẏ_i = 0, i = 2, . . . , l. (A1.6)

Remark A1.6 If X is of class C^r, 1 ≤ r ≤ ∞, the transformation y is also of class C^r.

A1.3 Dynamical systems on manifolds

The problem of the global existence of solutions of ordinary differential equations can be formulated in greatest generality in the context of differentiable manifolds. The existence, uniqueness, continuous dependence and rectification theorems are easily extended to the case of differential equations on manifolds.

Let M be a differentiable manifold of dimension l, and let X : M → TM be a C¹ vector field. A curve x : (t_1, t_2) → M is a solution of the differential equation (A1.1) on the manifold M if it is an integral curve of X, and hence if for every t ∈ (t_1, t_2) the vector ẋ(t) ∈ T_{x(t)}M satisfies ẋ(t) = X(x(t)) (note that by definition X(x(t)) ∈ T_{x(t)}M).
Theorem If there exists a compact set C ⊂ M such that X(x) = 0 for every x ∈ M \ C, then every solution of (A1.1) is global.

Remark A1.7 If M = R^l, as is known, the conditions for global existence are less restrictive (see, e.g., Piccinini et al. 1984).

From the previous theorem we easily deduce the following.

Corollary A1.1 If M is a compact manifold, the solutions of (A1.1) are global.

Henceforth we generally assume the global existence of the solutions of (A1.1). Consider the map g : M × R → M which to each point x_0 ∈ M and each time t associates the solution x(t) of (A1.1) satisfying the initial condition x(0) = x_0, and write x(t) = g(x_0, t) = g^t x_0. Clearly
g(x_0, 0) = g^0 x_0 = x_0, (A1.7)

for every x_0 ∈ M, and from the uniqueness theorem it follows that g^t is invertible:

x = g^t x_0 ⇔ x_0 = g^{−t} x. (A1.8)

Hence, for every t ∈ R, g^t is a diffeomorphism of M. In addition, for every t, s ∈ R and for every x_0 ∈ M we have

g^t (g^s x_0) = g^{t+s} x_0. (A1.9)

Definition A1.2 A one-parameter family (g^t)_{t∈R} of diffeomorphisms of M satisfying the properties (A1.7)–(A1.9) is called a one-parameter group of diffeomorphisms.

Remark A1.8 A one-parameter group of diffeomorphisms of M defines an action (cf. Section 1.8) of the additive group R on the manifold M.

The manifold M is called the phase space of the differential equation (A1.1), and the group g^t is called the phase flow of the equation. The integral curve of the field X passing through x_0 at time t = 0 is given by {x ∈ M | x = g^t x_0}, and it is also called the phase curve.¹ We can now give the abstract definition of a dynamical system on a manifold.

¹ The phase curves are therefore the orbits of the points of M under the action of R determined by the phase flow.

Definition A dynamical system on a manifold M is a one-parameter group of diffeomorphisms on M.

Clearly the phase flow associated with a differential equation on a manifold is an example of a dynamical system on a manifold. Indeed, the two notions are equivalent.

Theorem A1.5 Every dynamical system on a manifold M determines a differential equation on M.

Proof
Let g : R × M → M be the given dynamical system; we denote by g^t = g(t, ·) the associated one-parameter group of diffeomorphisms. The vector field

X(x) = (∂g/∂t)(x)|_{t=0} (A1.10)

is called the infinitesimal generator of g^t. Setting x(t) = g^t x_0, it is easy to verify that x(t) is the solution of (A1.1) with initial condition x(0) = x_0, where X is given by (A1.10). Indeed,

ẋ(t) = lim_{∆t→0} (g^{t+∆t} x_0 − g^t x_0)/∆t = lim_{∆t→0} (g^{∆t} x(t) − g^0 x(t))/∆t = X(x(t)). (A1.11)

Remark A1.9 An interesting notion connected to the ones just discussed is that of a discrete dynamical system, obtained by substituting t ∈ Z for t ∈ R in the definition of a one-parameter group of diffeomorphisms. For example, if f : M → M is a diffeomorphism, setting f^0 = id_M (the identity on M),

f^n = f ∘ · · · ∘ f (n times), f^{−n} = f^{−1} ∘ · · · ∘ f^{−1} (n times),

we see that (f^n)_{n∈Z} is a discrete dynamical system. The study of discrete dynamical systems is as interesting as that of ordinary differential equations (see Hirsch and Smale 1974, Arrowsmith and Place 1990, and Giaquinta and Modica 1999).

Besides the singular points, i.e. the fixed points of the infinitesimal generator, particularly important orbits of a dynamical system are the periodic orbits

x(t) = g^t x_0 = g^{t+T} x_0 = x(t + T) for every t ∈ R.

The period is min{T > 0 such that x(t + T) = x(t), ∀ t ∈ R}.

In the case of dynamical systems on the plane or on the sphere, the dynamics are described asymptotically by periodic orbits or by singular points. To make this idea more precise we introduce the ω-limit set of a point x_0 (cf. Problem 15 of Section 13.13 for the notion of an ω-limit set in the discrete case):

ω(x_0) = ⋂_{t_0 > 0} closure{ g^t x_0 , t ≥ t_0 }.

It is immediate to verify that x ∈ ω(x_0) if and only if there exists a sequence t_n → ∞ such that g^{t_n} x_0 → x for n → ∞.

Fig. A1.3 (a) singular point; (b) periodic orbit; (c) polycycle; (d) polycycle.

Theorem A1.6 (Poincaré–Bendixson) Assume that the orbit {g^t x_0 , t ≥ 0} of a dynamical system on the plane (or on the two-dimensional sphere) is contained in a bounded open set. Then the ω-limit set of x_0 is necessarily a singular point, a periodic orbit or a polycycle, i.e. a union of singular points and of phase curves each tending, for t → ±∞, to a singular point (not necessarily the same for all); see Fig. A1.3.

Only in dimension greater than two can the behaviour of a dynamical system be significantly more complex, including the possibility of chaotic motions, whose study employs the ideas of ergodic theory introduced in Chapter 13.
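The Poincaré–Bendixson picture can be illustrated on a standard example (a textbook system, not taken from this appendix): the planar flow ṙ = r(1 − r), ϕ̇ = 1, whose ω-limit set for every x_0 ≠ 0 is the periodic orbit r = 1. A minimal numerical sketch of the radial equation:

```python
def flow_r(r0, t_end, h=1e-3):
    """RK4 for the radial equation r' = r(1 - r); the angle phi decouples."""
    f = lambda r: r * (1.0 - r)
    r, t = r0, 0.0
    while t < t_end - 1e-12:
        k1 = f(r)
        k2 = f(r + 0.5 * h * k1)
        k3 = f(r + 0.5 * h * k2)
        k4 = f(r + h * k3)
        r += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return r

# Starting inside or outside the unit circle, g^t x0 approaches r = 1,
# so the omega-limit set is the periodic orbit r = 1.
r_inside = flow_r(0.1, 20.0)
r_outside = flow_r(2.5, 20.0)
```

Both orbits are trapped in a bounded region and spiral towards the circle r = 1, in agreement with the theorem.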
APPENDIX 2: ELLIPTIC INTEGRALS AND ELLIPTIC FUNCTIONS

The elliptic integrals owe their name to the fact that Wallis (in 1655) first introduced them in the calculation of the length of an arc of an ellipse. In their most general form they are given by

∫ R(x, y) dx, (A2.1)

where R is a rational function of its arguments and y = √P(x), with P a fourth-degree polynomial. Legendre showed in 1793 that every elliptic integral (A2.1) can be expressed as the sum of elementary functions plus a combination of integrals of the following three kinds:

(1) F(ϕ, k) = ∫_0^ϕ dψ/√(1 − k² sin²ψ) = ∫_0^z dx/√((1 − x²)(1 − k²x²)), (A2.2)

(2) E(ϕ, k) = ∫_0^ϕ √(1 − k² sin²ψ) dψ = ∫_0^z √((1 − k²x²)/(1 − x²)) dx, (A2.3)

(3) Π(ϕ, k, n) = ∫_0^ϕ dψ/[(1 + n sin²ψ)√(1 − k² sin²ψ)] = ∫_0^z dx/[(1 + nx²)√((1 − x²)(1 − k²x²))], (A2.4)

where z = sin ϕ; ϕ is called the amplitude, the number k ∈ [0, 1] is called the modulus, and n is the parameter (for elliptic integrals of the third kind).

When ϕ = π/2 the elliptic integrals are called complete; we then have the complete integral of the first kind:

K(k) = F(π/2, k) = ∫_0^{π/2} dψ/√(1 − k² sin²ψ). (A2.5)

It is easy to check that K(k) is a strictly increasing function of k, with K(0) = π/2 and lim_{k→1⁻} K(k) = +∞. In addition, it admits the series expansion

K(k) = (π/2) [ 1 + Σ_{n=1}^∞ ((2n − 1)!!/(2n)!!)² k^{2n} ]. (A2.6)

Indeed, expanding (1 − k² sin²ψ)^{−1/2} as a series we find

∫_0^{π/2} dψ/√(1 − k² sin²ψ) = (π/2) [ 1 + (2/π) Σ_{n=1}^∞ ((2n − 1)!!/(2^n n!)) k^{2n} ∫_0^{π/2} (sin ψ)^{2n} dψ ],

from which equation (A2.6) follows, taking into account that

∫_0^{π/2} (sin ψ)^{2n} dψ = (1/2^{2n}) (2n choose n) (π/2)

and the identity

(2n − 1)!! (2n)! / (2^{3n} (n!)³) = ((2n − 1)!!/(2n)!!)²,

which can be proved by induction. Similarly we introduce the complete integral of the second kind:

E(k) = E(π/2, k) = ∫_0^{π/2} √(1 − k² sin²ψ) dψ.
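The expansion (A2.6) can be checked against a direct quadrature of (A2.5). The sketch below is self-contained (midpoint rule, with term and sample counts chosen only for illustration); for serious use one would call a library routine such as scipy.special.ellipk, which takes m = k² as its argument.

```python
import math

def K_series(k, terms=200):
    """K(k) from the expansion (A2.6); the double-factorial ratio is built
    up incrementally: (2n-1)!!/(2n)!! = prod_{j<=n} (2j-1)/(2j)."""
    total, ratio = 1.0, 1.0
    for n in range(1, terms + 1):
        ratio *= (2 * n - 1) / (2 * n)
        total += ratio ** 2 * k ** (2 * n)
    return math.pi / 2 * total

def K_quad(k, m=200_000):
    """K(k) by midpoint-rule quadrature of the integral (A2.5)."""
    h = (math.pi / 2) / m
    return h * sum(1.0 / math.sqrt(1.0 - (k * math.sin((j + 0.5) * h)) ** 2)
                   for j in range(m))
```

The two computations agree to high accuracy for moderate k, and the series values reproduce K(0) = π/2 and the strict monotonicity in k.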
(A2.7)

Setting u = F(ϕ, k), the problem of the inversion of the elliptic integral consists of finding the unknown function ϕ(u, k), and hence the amplitude as a function of u for fixed k:

ϕ = am(u). (A2.8)

This is possible because ∂F/∂ϕ ≠ 0. The sine and cosine of ϕ are called the sine amplitude and cosine amplitude of u, and are denoted by sn and cn:

sn(u) = sin am(u), cn(u) = cos am(u). (A2.9)

When it is necessary to stress the dependence on k we write sn(k, u), etc. We also set

dn(u) = √(1 − k² sn²(u)), (A2.10)

and the function dn(u) is called the delta amplitude. The functions sn(u), cn(u) and dn(u) are the Jacobi elliptic functions, and as we have seen, they appear in the solution of the equation of motion in various problems of mechanics (see Chapters 3 and 7). The functions sn and cn are periodic with period 4K(k), while dn is periodic with period 2K(k). In addition, sn is odd, while cn and dn are even functions; sn and cn take values in the interval [−1, 1], while dn takes values in the interval [√(1 − k²), 1].

The following are important identities:

sn²(u) + cn²(u) = 1, dn²(u) + k² sn²(u) = 1,
sn(0) = sn(2K) = 0, sn(K) = −sn(3K) = 1, (A2.11)

and differentiation formulas:

(d/du) sn(u) = cn(u) dn(u), (d/du) cn(u) = −sn(u) dn(u), (d/du) dn(u) = −k² sn(u) cn(u).
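The definitions (A2.8)–(A2.10) translate directly into a slow but self-contained numerical sketch: invert u = F(ϕ, k) by bisection, then read off sn, cn, dn. The quadrature resolution and tolerances below are illustrative choices; in practice one would use a library routine such as scipy.special.ellipj.

```python
import math

def F(phi, k, m=4000):
    """Incomplete elliptic integral of the first kind (A2.2), midpoint rule."""
    h = phi / m
    return h * sum(1.0 / math.sqrt(1.0 - (k * math.sin((j + 0.5) * h)) ** 2)
                   for j in range(m))

def am(u, k):
    """Amplitude (A2.8): invert u = F(phi, k) by bisection (F increases in phi)."""
    lo, hi = 0.0, math.pi / 2
    while F(hi, k) < u:               # widen the bracket for u > K(k)
        hi += math.pi / 2
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if F(mid, k) < u:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def sn_cn_dn(u, k):
    """Jacobi elliptic functions via (A2.9) and (A2.10)."""
    phi = am(u, k)
    sn, cn = math.sin(phi), math.cos(phi)
    return sn, cn, math.sqrt(1.0 - (k * sn) ** 2)

k = 0.7
sn, cn, dn = sn_cn_dn(0.9, k)
```

By construction the identities (A2.11) hold, the inversion round-trips am(F(ϕ, k), k) = ϕ, and sn(K) = 1 since am(K) = π/2.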