- #1


If x(t) is considered to be small, so that higher powers (greater than 2) can be neglected in a calculation, does that also imply that the time derivative of x(t) can be considered small, with its powers greater than 2 neglected as well?

Thanks


- #2


In terms of mathematics, no. Take the function ##f(x) = x\sin(\frac 1 x)##. The function is bounded for small ##x##, but ##f'(x) = \sin(\frac 1 x) - \frac 1 x \cos (\frac 1 x)## which is unbounded for small ##x##.


In terms of physics, the physical constraints of the system may exclude this type of badly behaved function.
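The counterexample is easy to check numerically. A minimal Python sketch (sampling at the points ##x_n = \frac{1}{2\pi n}##, where ##\sin(1/x) = 0## and ##\cos(1/x) = 1##, so the derivative formula simplifies):

```python
import math

def f(x):
    # f(x) = x*sin(1/x): bounded by |x| near 0
    return x * math.sin(1.0 / x)

def fprime(x):
    # f'(x) = sin(1/x) - (1/x)*cos(1/x): unbounded near 0
    return math.sin(1.0 / x) - math.cos(1.0 / x) / x

# At x_n = 1/(2*pi*n), f(x_n) is tiny but f'(x_n) is approximately -2*pi*n
for n in (1, 10, 100, 1000):
    x = 1.0 / (2.0 * math.pi * n)
    print(f"x = {x:.2e}   f(x) = {f(x): .2e}   f'(x) = {fprime(x): .2e}")
```

The printed values show ##f## shrinking with ##x## while ##f'## grows without bound.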

- #3


The Lagrangian contains the term ##( A^{2} - a^{2}\dot{r}^{2} - 2ga)r##, where ##\dot{r}## is the time derivative of the displacement ##r##. For small displacements ##r \ll 1## the Lagrangian is then simplified to ##( A^{2} - 2ga)r##.

Why does the term involving the square of the time derivative drop out?

- #4


When the simplification that the angles ##\alpha## and ##\theta## are both small is made, the ##\dot{\alpha}\dot{\theta}\sin(\alpha - \theta)## term is then neglected. Why is this term neglected?

- #5

Vanadium 50


This: in terms of mathematics, no.

If position is small, does that mean velocity is also small?

- #6


No, velocity could be large even for a small position.

But why are those terms neglected in #3 and #4?

- #7

Office_Shredder


- #8

pasmith


The Lagrangian contains the term ##( A^{2} - a^{2}\dot{r}^{2} - 2ga)r##, where ##\dot{r}## is the time derivative of the displacement ##r##. For small displacements ##r \ll 1## the Lagrangian is then simplified to ##( A^{2} - 2ga)r##. Why does the term involving the square of the time derivative drop out?

If you reduce your problem to a linear problem by neglecting terms which are not constant or linear in ##r## and its derivatives, then the long-term behaviour of the system is of two possible types:

(1) It decays to an equilibrium state, in which case ##\dot r \to 0## and is eventually small.

(2) It goes off to infinity.

If you find yourself in the first case, then your neglect of the higher-order terms was justified. If you find yourself in the second case, then your neglect is not justified, and the effect of those terms is to steer the system to a different attractor (or it might still head off to infinity).


- #9

ergospherical


In small oscillations problems, after determining the exact equations of motion using the Euler-Lagrange equation, one can obtain the so-called linearised equations of motion by discarding all terms beyond first order in the displacement and its time derivative.

In your example, that turns out to be equivalent to simply dropping the middle term ##-a^2 \dot{r}^2 r## from the Lagrangian.

(This approach will give you the same results as if you were to instead use the "approximate" quadratic forms ##T = \dfrac{1}{2} a_{ij}(\boldsymbol{q}_0) \dot{q}_i \dot{q}_j## and ##U = \dfrac{1}{2} \partial_i \partial_j U \bigg{|}_{\boldsymbol{q}_0} q_i q_j## to derive the equations of motion, with ##\boldsymbol{q}_0## the equilibrium point).
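The bookkeeping behind dropping that term can be made explicit. A sketch, assuming the solution of the linearised problem is an oscillation of small amplitude ##\varepsilon## and frequency ##\omega## of order unity:

\begin{align*}
r(t) = \varepsilon \cos(\omega t + \phi)
\quad\Longrightarrow\quad
\dot{r}(t) = -\varepsilon \omega \sin(\omega t + \phi),
\end{align*}

so ##\dot{r} = O(\varepsilon)##, the retained terms in ##( A^2 - 2ga)r## are ##O(\varepsilon)##, and the discarded term ##a^2 \dot{r}^2 r## is ##O(\varepsilon^3)##, which is consistently negligible. Note the assumption that ##\omega## is not large does real work here: small displacement alone does not bound the velocity, which is the point of the ##x\sin(1/x)## counterexample in #2.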

- #10


If I have a simple pendulum with zero friction then it would oscillate forever. Is that an equilibrium state? How does that imply that ##\dot{r}## is small?

I have come across the following in some lecture notes: "for small amplitude oscillations about ##\theta = 0##, ##\dot{\theta}## is also small". But why? Can a harmonic oscillator not oscillate at a fast speed?

- #11

Office_Shredder


- #12


- #13

Office_Shredder


- #14

ergospherical


\begin{align*}

\boldsymbol{F}(\mathbf{x}) = \boldsymbol{F}(\mathbf{x}_0) + \boldsymbol{J}(\mathbf{x}_0)(\mathbf{x} - \mathbf{x}_0) + O(\mathbf{x}^2)

\end{align*}with ##\boldsymbol{J} = \left( \dfrac{\partial F_i}{\partial x_j} \right)## the Jacobian matrix of ##\boldsymbol{F}(\mathbf{x})##. The linearised equation is ##\dfrac{d\mathbf{x}}{dt} = \boldsymbol{F}(\mathbf{x}_0) + \boldsymbol{J}(\mathbf{x}_0)(\mathbf{x} - \mathbf{x}_0)##.

At the bottom of page 100 of Arnold: if ##\mathbf{x}_L(t)## is a solution to the linearised equation and ##\mathbf{x}_E(t)## is a solution to the exact equation, then for any ##\varepsilon > 0## there is a ##\delta > 0## such that if ##|\mathbf{x}_E(0)| < \delta## then ##|\mathbf{x}_E(t) - \mathbf{x}_L(t)| < \varepsilon \delta## for all times ##0 < t < t_{\mathrm{end}}##.

This is why you can neglect non-linear terms in problems of small oscillations about stable equilibrium positions (in the sense of Liapunov!).
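A numerical sketch of this, in pure Python (classical RK4; the time horizon and step size are arbitrary choices, not from Arnold): integrate the exact pendulum ##\ddot\theta = -\sin\theta## and its linearisation ##\ddot\theta = -\theta## from the same small initial offset ##\delta## and compare.

```python
import math

def rk4(deriv, y, t_end, dt=1e-3):
    """Classical 4th-order Runge-Kutta for a 2-component system y' = deriv(y)."""
    for _ in range(int(round(t_end / dt))):
        k1 = deriv(y)
        k2 = deriv([y[i] + 0.5 * dt * k1[i] for i in range(2)])
        k3 = deriv([y[i] + 0.5 * dt * k2[i] for i in range(2)])
        k4 = deriv([y[i] + dt * k3[i] for i in range(2)])
        y = [y[i] + dt * (k1[i] + 2*k2[i] + 2*k3[i] + k4[i]) / 6 for i in range(2)]
    return y

# State is y = (theta, theta_dot)
exact  = lambda y: [y[1], -math.sin(y[0])]  # full pendulum equation
linear = lambda y: [y[1], -y[0]]            # linearised about theta = 0

T = 10.0  # fixed time horizon
for delta in (0.1, 0.01, 0.001):
    theta_exact  = rk4(exact,  [delta, 0.0], T)[0]
    theta_linear = rk4(linear, [delta, 0.0], T)[0]
    ratio = abs(theta_exact - theta_linear) / delta
    print(f"delta = {delta:<6}  |theta_E - theta_L| / delta = {ratio:.2e}")
```

The printed ratio plays the role of ##\varepsilon## in the quoted statement: over the fixed interval it shrinks as ##\delta## does, consistent with the error being ##o(\delta)##.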

- #15


I thought maximum velocity occurs at zero displacement?

- #16

Office_Shredder

Staff Emeritus

Science Advisor

Gold Member

- 4,766

- 737

- #17


That still assumes that ##F(x)## is "well-behaved". The result can't possibly hold without some constraints on ##F##: having bounded derivatives would do it; or, possibly, having a Taylor series with non-zero radius of convergence about ##x_0## would be sufficient.


- #18

S.G. Janssens


You are correct, and the statement from post #14 that you quoted is indeed incorrect, and not only because of missing smoothness conditions on ##F##.

(It is not that I think you need my confirmation, I just want to stress the point.)

- #19

ergospherical


- #20


Does Arnold say anything about the properties of ##F## he's assuming? There must be conditions on ##F## for his analysis to hold. E.g. putting a maximum bound on all derivatives of ##F## clearly does the trick.

- #21

ergospherical


- #22


I found a PDF of Arnold's Mechanics. I don't believe that theorem on page 100. It doesn't look right at all. What it says is:

For any duration ##T## (no matter how large) and for any precision ##\epsilon## (no matter how small), simply by choosing a small enough initial offset ##\delta## from equilibrium, the exact and linearised solutions remain within ##\epsilon \delta## of each other. That can't be right.

The subtlety in the theorem as stated is that the smaller you make your initial offset ##\delta##, the smaller you make the allowable error ##\epsilon \delta##.

The loophole is that the sum of the higher-order terms in the Taylor series, such as ##\frac{f''(x_0)x^2}{2} + \dots##, needn't converge as ##x^2## for small ##x##. That's why you need the derivatives to be bounded, not simply the Taylor series to converge.
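Concretely, the standard Taylor (Lagrange) remainder bound makes the role of the bounded derivative explicit:

\begin{align*}
\left| f(x) - f(x_0) - f'(x_0)(x - x_0) \right| \;\le\; \frac{M}{2} (x - x_0)^2,
\qquad M = \sup_{s \text{ between } x_0 \text{ and } x} |f''(s)|,
\end{align*}

so the ##O((x - x_0)^2)## control holds precisely when ##f''## is bounded near ##x_0##. For ##f(x) = x\sin(1/x)## near ##0##, no such ##M## exists, and the bound fails.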

PS And, of course, when you write

\begin{align*}
\boldsymbol{F}(\mathbf{x}) = \boldsymbol{F}(\mathbf{x}_0) + \boldsymbol{J}(\mathbf{x}_0)(\mathbf{x} - \mathbf{x}_0) + O(\mathbf{x}^2)
\end{align*}

then bounded derivatives are exactly what you are assuming.

