Numerical methods

Symplectic schemes for separable Hamiltonian systems

posted May 14, 2016, 5:56 PM by François Mauger   [ updated May 14, 2016, 6:18 PM ]


A canonical Hamiltonian system is a dynamical system whose dynamics is given by Hamilton's equations

dq/dt = ∂H/∂p,    dp/dt = −∂H/∂q,    (1)

for some function H(q, p) called the Hamiltonian. Eqs. (1) form an ordinary differential system and can thus be solved numerically, with arbitrary precision, using standard schemes such as Euler or Runge-Kutta methods.

Hamiltonian systems are generally built up from first principles such as energy conservation. Indeed, using Hamilton's equations (1), one easily shows that the Hamiltonian is a conserved quantity (dH/dt = ∂H/∂q · dq/dt + ∂H/∂p · dp/dt = ∂H/∂q · ∂H/∂p − ∂H/∂p · ∂H/∂q = 0). More than the conservation of the Hamiltonian, (canonical) Hamiltonian systems are equipped with a symplectic structure which is preserved by the dynamics; the conservation of the Hamiltonian is only a consequence of this symplectic structure.

On the other hand, the discrete map produced by a numerical scheme generally does not preserve the symplectic structure (standard Euler or Runge-Kutta methods, for instance). This can be seen by looking at the evolution of the Hamiltonian with time, which typically exhibits a drift, characteristic of the breaking of the symplectic structure. To overcome this difficulty, a class of numerical schemes, called symplectic integrators, has been designed so that the discrete map of the numerical integration preserves the symplectic structure.
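The drift is easy to observe numerically. Below is a minimal sketch (not the posted program) comparing the explicit Euler scheme with its symplectic variant on the harmonic oscillator H(q, p) = (p² + q²)/2; the step size and integration length are illustrative choices.

```python
# Compare explicit Euler with symplectic Euler on the harmonic oscillator
# H(q, p) = (p^2 + q^2)/2, starting from (q, p) = (1, 0) with energy 0.5.

def explicit_euler(q, p, dt):
    # Both variables are updated from the old state: not symplectic.
    return q + dt * p, p - dt * q

def symplectic_euler(q, p, dt):
    # Update p first, then use the new p to update q: symplectic.
    p_new = p - dt * q
    return q + dt * p_new, p_new

def energy(q, p):
    return 0.5 * (p * p + q * q)

dt, steps = 0.05, 2000
qe, pe = 1.0, 0.0   # explicit Euler trajectory
qs, ps = 1.0, 0.0   # symplectic Euler trajectory
for _ in range(steps):
    qe, pe = explicit_euler(qe, pe, dt)
    qs, ps = symplectic_euler(qs, ps, dt)

drift_explicit = abs(energy(qe, pe) - 0.5)     # grows steadily (large)
drift_symplectic = abs(energy(qs, ps) - 0.5)   # stays bounded (small)
```

The explicit Euler energy grows at every step, while the symplectic Euler energy only oscillates within a band of width of order dt around its initial value.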

Here we propose a symplectic scheme for a class of canonical systems where the Hamiltonian is separable, that is,

H(q, p) = T(p) + V(q),    (2)

as is frequently encountered in dynamical systems. Typically, T(p) corresponds to the kinetic energy and V(q) to the potential.

If the system is not autonomous, the symplectic scheme can still be used provided the explicit time dependence is contained in the potential term only:

H(q, p, t) = T(p) + V(q, t).
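As an illustration, a standard second-order symplectic scheme for such separable Hamiltonians is the Störmer-Verlet (leapfrog) method, sketched below for H = p²/2 + V(q); the posted program is not reproduced here, and the pendulum example and step size are illustrative choices.

```python
import math

def verlet_step(q, p, grad_V, dt):
    """One Stormer-Verlet step for the separable H(q, p) = p^2/2 + V(q)."""
    p_half = p - 0.5 * dt * grad_V(q)          # half kick from the potential
    q_new = q + dt * p_half                    # full drift with the kinetic part
    p_new = p_half - 0.5 * dt * grad_V(q_new)  # second half kick
    return q_new, p_new

# Illustration: the pendulum, V(q) = -cos(q), so grad_V = sin.
q, p = 1.0, 0.0
for _ in range(1000):
    q, p = verlet_step(q, p, math.sin, 0.01)
energy = 0.5 * p * p - math.cos(q)  # remains close to the initial -cos(1.0)
```

For a time-dependent potential V(q, t), the same composition applies, with the two kicks evaluated at times t and t + dt respectively.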

Reference and program


  • December 17, 2013
    • Allow installable output function (with output selection indices)
    • Allow event detection

Simulated annealing algorithm for finding periodic orbits

posted May 14, 2016, 5:04 PM by François Mauger   [ updated May 14, 2016, 5:53 PM ]

Why look for periodic orbits?

Periodic orbits play a central role in the description and analysis of dynamical systems, as they represent the skeleton of the dynamics. This means that some important dynamical properties can be deduced from these simple orbits. A fair amount of information is provided by the linear stability properties of these invariant structures. As an example, cycle expansions ordered by the length, stability, or action of these orbits are carried out to describe long-time behavior such as averages of observables.

Various algorithms have been developed for finding periodic orbits. Among them, some are commonly used in atomic systems with a small number of electrons: the Newton-Raphson algorithm, or modified versions of it such as the damped Newton-Raphson algorithm, offers fast convergence provided one has a sufficiently accurate initial guess for the periodic orbit. The precision of this initial guess is a thorny problem, since the basin of attraction shrinks exponentially with the instability and the length of the orbit. Overall, the goal is to set up a deterministic dissipative dynamical system which has the periodic orbit as its attractor.

Independently of the chosen algorithm, the chance that the algorithm converges strongly depends on the basin of attraction of the periodic orbit in the deterministic dynamical system. Often there is a trade-off between the size of the basin of attraction and the speed of convergence of the algorithm. This is in particular the case for the Newton-Raphson method.

In [CNSNS 16, 2845 (2011)] we proposed a method which extends this basin of attraction by means of a systematized trial-and-error convergence procedure. To do so, we combine a deterministic method with a simulated annealing algorithm to refine initial guesses for the periodic orbit search. In other words, the goal of this stochastic method is to produce initial guesses accurate enough to lie in the basin of attraction of a fast converging algorithm such as the Newton-Raphson algorithm. As a consequence of the underlying stochastic nature of the algorithm, launching it several times enables one to determine several different periodic orbits of the considered dynamical system.

Simulated annealing algorithm

The simulated annealing algorithm is a metaheuristic designed for constrained optimization which generalizes ideas developed from the Metropolis algorithm. The main idea of the algorithm is to automate a trial-and-error search for a better solution within a controlled neighborhood whose diameter is successively reduced.

For a given function f, called the cost function, the goal is to approximate the global minimum of f (within a given set). First a random initial condition x is chosen to initiate the simulated annealing algorithm, and we define two parameters: T, the initial temperature, and ε, which serves as an error tolerance. The simulated annealing algorithm proceeds in successive stages of annealing and cooling steps.
  • During the annealing process, a random trial guess x' is chosen in the neighborhood of x (the size of the neighborhood is controlled by the temperature T; for simplicity one usually chooses a ball with diameter T). It corresponds to a melting of the system at the controlled temperature T.
    • If f(x') < f(x), then x' replaces x in the following step.
    • If not, x' replaces x with a probability exp(−(f(x') − f(x))/T).
    • Otherwise, the same x is kept in the next step.
The process of melting at a fixed temperature is iterated until the system reaches a steady state.
  • The cooling step, performed at the end of the annealing process, consists in decreasing the temperature T to a new temperature T' < T. Then a new iteration of annealing, with the updated temperature T', is performed.
The succession of annealing and cooling steps is iterated until the system reaches a global steady state, which is the resulting approximation of the global minimum. Since the size of the neighborhood where the perturbation is performed is controlled by the temperature, it shrinks with the iterations of cooling. In the limit of zero temperature, the neighborhood of control is reduced to a single point. We give a draft of the simulated annealing algorithm in the following figure.
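The loop above can be sketched as follows; this is a minimal one-dimensional version, and the variable names, the geometric cooling schedule, and the quadratic test cost are illustrative choices, not those of the posted programs.

```python
# Minimal simulated annealing sketch: annealing (inner loop) and cooling
# (temperature update), with Metropolis-style acceptance of worse trials.
import math
import random

def simulated_annealing(f, x0, T0=1.0, cooling=0.9, tol=1e-6,
                        inner_steps=200, seed=0):
    rng = random.Random(seed)
    x, T = x0, T0
    while T > tol:
        # Annealing: trial-and-error search in a neighborhood of size ~T.
        for _ in range(inner_steps):
            x_trial = x + T * rng.uniform(-1.0, 1.0)
            delta = f(x_trial) - f(x)
            if delta < 0 or rng.random() < math.exp(-delta / T):
                x = x_trial  # always accept if better, sometimes if worse
        # Cooling: shrink the temperature, and with it the neighborhood.
        T *= cooling
    return x

# Usage on a simple convex cost (in practice the point of the method is
# rugged landscapes with many local minima).
x_min = simulated_annealing(lambda x: (x - 3.0) ** 2, x0=5.0)
```

Accepting occasional uphill moves with probability exp(−Δf/T) is what lets the search escape local minima while the temperature is still high.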

Simulated annealing algorithm for finding periodic orbits

Periodic orbits correspond to loops in phase space. As a consequence, taking the function f to be the distance between the initial and final points of a given trajectory, vanishing global minima of f correspond to periodic orbits and can thus be sought with the simulated annealing algorithm. In the following figure, we give a draft of the simulated annealing algorithm adapted to the search for periodic orbits (user-defined constants are labeled by "#" and "%" starts comments; the function ReturnDist plays the role of the function f; finally, the function rand is a random generator uniformly distributed over [0, 1]).
To improve the convergence, it is more efficient to combine the simulated annealing algorithm with a second method, based for instance on a Newton-Raphson method. In the programs we use the function fsolve of Matlab, a Gauss-Newton-type minimizing scheme.

The parameter #DistError is used to stop the simulated annealing algorithm once a sufficiently accurate approximation of the periodic orbit is reached (i.e., the cost function is smaller than #DistError). It should be noted that even when the simulated annealing algorithm has not converged (the cost function never reached #DistError), the final solution of the algorithm is worth investigating with a fast converging method, as it may nevertheless lie in the basin of attraction of this method.
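The construction of the cost function can be sketched as follows. This is a stripped-down stand-in, not the posted Matlab program: the pendulum system, the RK4 integrator, the target period T = 7, and the improvements-only ("zero-temperature") acceptance rule are all illustrative simplifications of the full simulated annealing + fsolve combination.

```python
# ReturnDist-style cost for a periodic-orbit search: integrate the
# trajectory for a time T and measure how far it is from closing on itself.
import math
import random

def rk4_flow(q, p, T, n=1000):
    """Integrate the pendulum dq/dt = p, dp/dt = -sin(q) over time T (RK4)."""
    dt = T / n
    def rhs(q, p):
        return p, -math.sin(q)
    for _ in range(n):
        k1q, k1p = rhs(q, p)
        k2q, k2p = rhs(q + 0.5 * dt * k1q, p + 0.5 * dt * k1p)
        k3q, k3p = rhs(q + 0.5 * dt * k2q, p + 0.5 * dt * k2p)
        k4q, k4p = rhs(q + dt * k3q, p + dt * k3p)
        q += dt * (k1q + 2 * k2q + 2 * k3q + k4q) / 6
        p += dt * (k1p + 2 * k2p + 2 * k3p + k4p) / 6
    return q, p

def return_dist(p0, T):
    """Phase-space distance between the initial and final points."""
    qT, pT = rk4_flow(0.0, p0, T)
    return math.hypot(qT - 0.0, pT - p0)

# Look for a pendulum oscillation whose period is exactly T = 7, searching
# over the initial momentum p0 with a shrinking random neighborhood.
T = 7.0
rng = random.Random(1)
p0, best = 1.0, return_dist(1.0, T)
radius = 0.5
for _ in range(400):
    trial = p0 + radius * rng.uniform(-1.0, 1.0)
    d = return_dist(trial, T)
    if d < best:           # keep improvements only (zero-temperature limit)
        p0, best = trial, d
    radius *= 0.99         # shrink the search neighborhood ("cooling")
# best is now small: the orbit from (0, p0) nearly closes on itself, and
# p0 is a good initial guess to hand to a Newton-type refinement.
```

Once best drops below a #DistError-like threshold, the resulting p0 would be handed to a fast converging method (fsolve in the posted programs) for final refinement.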

In [CNSNS 16, 2845 (2011)] we give evidence of the feasibility of the method by applying it to the helium atom in the ground state for one to three spatial dimensions.

Reference and program
