The JiTCODE module

Note that some relevant information can be found in the common JiTC*DE documentation. This includes installation instructions, compiler issues and optimisation, general performance considerations, how to implement network dynamics and conditional statements, and a small FAQ.

Overview

JiTCODE (just-in-time compilation for ordinary differential equations) is an extension of SciPy’s ODE (scipy.integrate.ode) or Solve IVP (scipy.integrate.solve_ivp). Where the latter take a Python function as an argument, JiTCODE takes an iterable (or generator function or dictionary) of symbolic expressions, which it translates to C code, compiles on the fly, and uses as the function to feed into SciPy’s ODE or Solve IVP. Symbolic expressions are mostly handled by SymEngine, SymPy’s compiled-backend-to-be (see SymPy vs. SymEngine for details).

This design has the following advantages:

  • Speed boost through compilation There are two main computational burdens when numerically solving ODEs. The first is the simple vector operations of the integration procedure, which can be handled efficiently by NumPy or similar. The second is calculating the derivative (and the Jacobian, if desired), which can be handled efficiently with just-in-time compilation. Note that for smaller ODEs and short integration times, the Python overhead may still take a considerable toll in comparison to pure compiled code.

  • Speed boost through symbolic optimisation If your derivative is automatically generated by some routine, you can simplify it symbolically to boost the speed. In fact, blatant optimisations such as \(y·(x-x)=0\) are done on the fly by SymEngine. This is interesting, for example, if you want to simulate dynamics on a sparse network, as non-existent links are then not taken into account when calculating the derivative.

  • Automatically calculated Jacobian and Lyapunov exponents As the derivative is provided symbolically, SymEngine’s automatic differentiation routines can be employed to calculate the Jacobian desired by some integrators as well as the ODE for the tangent vector required for calculating the Lyapunov exponents (see Calculating Lyapunov exponents with jitcode_lyap).

  • Symbolic interface You can enter your differential equations almost like you would on paper. Also, if you are working with SymPy or SymEngine anyway – e.g., to calculate fixed points –, you do not need to bother much with translating your equations.
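
To illustrate the last two points, here is a minimal sketch (not part of any integration yet) of what the symbolic expressions allow; subs and the on-the-fly simplification are SymEngine features:

from jitcode import y, t
import symengine

# an expression entered almost like on paper:
expr = y(0)*(1-y(0)) - y(1) + symengine.sin(t)

# blatant simplifications happen on the fly:
print( y(1)*(y(0)-y(0)) )   # 0

# substitutions, e.g. for fixed-point considerations, work directly:
print( expr.subs(y(0),0) )  # -y(1) + sin(t), up to ordering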

If compilation fails to work for whatever reason, pure Python functions can be employed as a fallback.

For most applications, the only difference to SciPy’s ODE in terms of handling is that you have to supply the derivative in the correct format – JiTCODE automatically takes care of the compilation. However, you can also tweak this step by step, if you desire (see Details of the building process and the notes on compilation).

As with SciPy’s ODE, this documentation assumes that the differential equation you want to solve is:

\[\dot{y} = f(t,y)\]

A quick example

This example is particularly intended for readers familiar with SciPy’s ODE. If you aren’t, skip to the next section.

Suppose our differential equation is \(\dot{y} = f(y)\) with \(y∈ℝ^4\),

\[\begin{split}f(y) = \left( \begin{matrix} y_0 ( a-y_0 ) ( y_0-1) - y_1 + k (y_2 - y_0) \\ b_1 y_0 - c y_1 \\ y_2 ( a-y_2 ) ( y_2-1 ) - y_3 + k (y_0 - y_2)\\ b_2 y_2 - c y_3 \end{matrix} \right),\end{split}\]

and \(a = -0.025794\), \(b_1 = 0.0065\), \(b_2 = 0.0135\), \(c = 0.02\), and \(k = 0.128\). Then the following code integrates the above for 100000 time units (after discarding 2000 time units of transients), with \(y(t=0) = (1,2,3,4)\), and writes the results to timeseries.dat:

from jitcode import jitcode, y
import numpy as np

a  = -0.025794
b1 =  0.0065
b2 =  0.0135
c  =  0.02
k  =  0.128

f = [
	y(0) * ( a-y(0) ) * ( y(0)-1.0 ) - y(1) + k * (y(2) - y(0)),
	b1*y(0) - c*y(1),
	y(2) * ( a-y(2) ) * ( y(2)-1.0 ) - y(3) + k * (y(0) - y(2)),
	b2*y(2) - c*y(3)
	]

initial_state = np.array([1.,2.,3.,4.])

ODE = jitcode(f)
ODE.set_integrator("dopri5")
ODE.set_initial_value(initial_state,0.0)

times = 2000+np.arange(100000)
data = []
for time in times:
	data.append(ODE.integrate(time))

np.savetxt("timeseries.dat", data)

An explicit example

This example is particularly intended for readers who are using a numerical integrator for the first time.

Suppose we want to implement the Lotka–Volterra model, which is described by the following equations:

\[\begin{split}\begin{alignat*}{3} \dot{B} &=& γ · B &- φ · R · B\\ \dot{R} &=&\, -ω · R &+ ν · R · B \end{alignat*}\end{split}\]

with \(γ = 0.6\), \(φ = 1.0\), \(ω = 0.5\), and \(ν = 0.5\).

We start with a few imports:

from jitcode import y, jitcode
import numpy as np

… and defining the control parameters:

γ = 0.6
φ = 1.0
ω = 0.5
ν = 0.5

The y that we imported from jitcode has to be used for the dynamical variables. However, to make our code use the same notation as the above equation, we can rename them:

R,B = dynvars = [ y(i) for i in range(2) ]

We here might as well have written R,B = y(0),y(1), but the above is more handy for larger systems. Note that the following code is written such that the order of our dynamical variables does not matter.

We implement the differential equation as a dictionary, mapping each of the dynamical variables to its derivative:

lotka_volterra_diff = {
		B:  γ*B - φ*R*B,
		R: -ω*R + ν*R*B,
	}

Note that there are other ways to define the derivative, e.g., as used in the previous and following examples.

Now we have everything we need to instantiate jitcode, i.e., to initialise the integration object:

ODE = jitcode(lotka_volterra_diff)

Before we start integrating, we have to choose an integrator and define initial conditions. Here we choose the 5th-order Dormand–Prince method. Moreover, the initial \(B\) shall be 0.5, the initial \(R\) shall be 0.2, and the starting time shall be \(t=0\).

ODE.set_integrator("dopri5")
initial_state = { R: 0.2, B: 0.5 }
ODE.set_initial_value(initial_state,0.0)

We then define an array of time points where we want to observe the system, and a dictionary of empty lists that shall be filled with our results.

times = np.arange(0.0,100,0.1)
values = { R: [], B: [] }

Finally, we perform the actual integration by calling ODE.integrate for each of our times. After each call, we use ODE.y_dict to access the current state as a dictionary and append its values to the respective lists.

for time in times:
	ODE.integrate(time)
	for dynvar in dynvars:
		values[dynvar].append(ODE.y_dict[dynvar])

We can now plot or analyse our data, but that’s beyond the scope of JiTCODE. So we just save it:

np.savetxt("timeseries.dat",np.vstack((values[R],values[B])).T)

Taking everything together, our code is:

from jitcode import y, jitcode
import numpy as np

γ = 0.6
φ = 1.0
ω = 0.5
ν = 0.5

R,B = dynvars = [ y(i) for i in range(2) ]

lotka_volterra_diff = {
		B:  γ*B - φ*R*B,
		R: -ω*R + ν*R*B,
	}

ODE = jitcode(lotka_volterra_diff)
ODE.set_integrator("dopri5")
initial_state = { R: 0.2, B: 0.5 }
ODE.set_initial_value(initial_state,0.0)

times = np.arange(0.0,100,0.1)
values = { R: [], B: [] }
for time in times:
	ODE.integrate(time)
	for dynvar in dynvars:
		values[dynvar].append(ODE.y_dict[dynvar])

np.savetxt("timeseries.dat",np.vstack((values[R],values[B])).T)

Details of the building process

To generate the functions needed by SciPy’s ODE or Solve IVP, JiTCODE executes a series of distinct processing steps, each of which can be invoked through a command and tweaked as needed. These commands execute previous steps as needed, i.e., if they have not been performed yet. If you are happy with the default options, however, you do not need to bother with this and can just use the commands at the very end of the chain, namely set_integrator, set_initial_value, or save_compiled.

The following list details which command calls which other command when needed:

  • set_initial_value → set_integrator
  • set_integrator → generate_functions
  • generate_functions → compile_C and, as a fallback, generate_lambdas
  • compile_C → generate_f_C and, if wanted, generate_jac_C
  • generate_f_C → generate_helpers_C
  • generate_jac_C → generate_jac_sym and generate_helpers_C
  • generate_lambdas → generate_f_lambda and, if wanted, generate_jac_lambda
  • generate_jac_lambda → generate_jac_sym
  • save_compiled → compile_C
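
For instance, if you want to tweak the code generation, you can invoke parts of this chain manually before handing over to set_integrator. A minimal sketch, reusing f from A quick example (all methods are documented below):

ODE = jitcode(f)
ODE.generate_f_C(do_cse=True)   # translate the derivative to C code, with CSE
ODE.compile_C(verbose=True)     # compile it, showing the compiler commands
ODE.set_integrator("dopri5")    # finds everything already prepared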

A more complicated example

This example showcases several advanced features of JiTCODE that are relevant for an efficient integration of more complex systems as well as how to deal with some special situations. Therefore it is pretty bizarre from a dynamical perspective.

Suppose we want to integrate a system of \(N=500\) Rössler oscillators, with the \(i\)-th oscillator being described by the following differential equations (note that we used \(v\) instead of \(y\) for the second dynamical variable to avoid overloading symbols):

\[\begin{split}\begin{alignedat}{1} \dot{x}_i &= -ω_i v_i - z_i + k \sin(t) \sum_{j=0}^{N-1} A_{ij} (x_j-x_i) \\ \dot{v}_i &= ω_i x_i + a v_i \\ \dot{z}_i &= b + z_i (x_i -c) + k \sum_{j=0}^{N-1} (z_j-z_i) \end{alignedat}\end{split}\]

The control parameters shall be \(a = 0.165\), \(b = 0.2\), \(c = 10.0\), and \(k = 0.01\). The (frequency) parameter \(ω_i\) shall be picked randomly from the uniform distribution on \([0.8,1.0]\) for each \(i\). \(A∈ℝ^{N×N}\) shall be the adjacency matrix of a one-dimensional small-world network (which shall be provided by a function small_world_network in the following example code). So, the \(x\) components are coupled diffusively with a small-world coupling topology, with the coupling term being modulated by \(\sin(t)\), while the \(z\) components are coupled diffusively to their mean field.

Without further ado, here is the example code (complete running example); selected lines will be commented upon below:

 1from jitcode import jitcode, y, t
 2import numpy as np
 3import symengine
 4
 5# parameters
 6# ----------
 7
 8N = 500
 9ω = np.random.uniform(0.8,1.0,N)
10a = 0.165
11b = 0.2
12c = 10.0
13k = 0.01
14
15# get adjacency matrix of a small-world network
16A = small_world_network(
17	number_of_nodes = N,
18	nearest_neighbours = 20,
19	rewiring_probability = 0.1
20	)
21
22# generate differential equations
23# -------------------------------
24
25sum_z = symengine.Symbol("sum_z")
26helpers = [( sum_z, sum( y(3*j+2) for j in range(N) ) )]
27
28def f():
29	for i in range(N):
30		coupling_sum = sum( y(3*j)-y(3*i) for j in range(N) if A[i,j] )
31		coupling_term = k * symengine.sin(t) * coupling_sum
32		yield -ω[i] * y(3*i+1) - y(3*i+2) + coupling_term
33		yield  ω[i] * y(3*i) + a*y(3*i+1)
34		coupling_term_2 = k * (sum_z-N*y(3*i+2))
35		yield b + y(3*i+2) * (y(3*i) - c) + coupling_term_2
36
37# integrate
38# ---------
39
40initial_state = np.random.random(3*N)
41
42ODE = jitcode(f, helpers=helpers, n=3*N)
43ODE.generate_f_C(simplify=False, do_cse=False, chunk_size=150)
44ODE.set_integrator('dopri5')
45ODE.set_initial_value(initial_state,0.0)
46
 47# data structure: x[0], v[0], z[0], x[1], …, x[N-1], v[N-1], z[N-1]
48data = np.vstack([ODE.integrate(T) for T in range(10,100000,10)])

Explanation of selected features and choices:

  • The values of \(ω\) are initialised globally (line 9). We cannot just generate them inside the generator function, because each parameter is used twice per oscillator. Moreover, if we were to calculate Lyapunov exponents or the Jacobian, the generator function would be called multiple times, and thus the values of the parameters would not be consistent (which would be disastrous).

  • Since we need \(\sum_{j=0}^{N-1} z_j\) to calculate the derivative of \(z_i\) for every oscillator, it is prudent to calculate this sum only once. Therefore we define a helper symbol for it in lines 25 and 26, which we employ in line 34. (See the arguments of jitcode for details.)

  • In line 31, we implement \(\sin(t)\). For this purpose we had to import t in line 1. Also, we need to use symengine.sin – in contrast to math.sin or numpy.sin.

  • As this is a large system, we use a generator function instead of a list to define \(f\) (lines 28-35) and have the code automatically split into chunks of 150 lines, corresponding to the equations of fifty oscillators, in line 43. (For this system, any reasonably sized multiple of 3 is a good choice for chunk_size; for other systems, the precise choice of the value may be more relevant.) See Handling very large differential equations for the rationale.

Calculating Lyapunov exponents with jitcode_lyap

jitcode_lyap is a simple extension of jitcode that almost automatically handles calculating Lyapunov exponents by evolving tangent vectors [BGGS80]. It works just like jitcode, except that it generates and integrates additional differential equations for the tangent vectors. After every call of integrate, the tangent vectors are orthonormalised, and the “local” Lyapunov exponents for this integration step are returned alongside the system’s state. These can then be further processed to obtain the Lyapunov exponents. The tangent vectors are initialised with random vectors, and you have to take care of the preiterations that the tangent vectors require to align themselves. You also have to take care not to integrate for too long at once to avoid a numerical overflow in the tangent vectors.

Estimates for the Lyapunov vectors are returned as well.

For instance, we can calculate and print the Lyapunov exponents for the system from A quick example as follows:

from jitcode import jitcode_lyap, y
from scipy.stats import sem
import numpy as np

a  = -0.025794
b1 =  0.0065
b2 =  0.0135
c  =  0.02
k  =  0.128

f = [
	y(0) * ( a-y(0) ) * ( y(0)-1.0 ) - y(1) + k * (y(2) - y(0)),
	b1*y(0) - c*y(1),
	y(2) * ( a-y(2) ) * ( y(2)-1.0 ) - y(3) + k * (y(0) - y(2)),
	b2*y(2) - c*y(3)
	]

initial_state = np.array([1.,2.,3.,4.])

n = len(f)
ODE = jitcode_lyap(f, n_lyap=n)
ODE.set_integrator("vode")
ODE.set_initial_value(initial_state,0.0)

times = range(10,100000,10)
lyaps = []
for time in times:
	lyaps.append(ODE.integrate(time)[1])

# converting to NumPy array for easier handling
lyaps = np.vstack(lyaps)

for i in range(n):
	lyap = np.average(lyaps[1000:,i])
	stderr = sem(lyaps[1000:,i]) # Note that this is only an estimate
	print("%i. Lyapunov exponent: % .4f ± %.4f" % (i+1,lyap,stderr))

Calculating transversal Lyapunov exponents with jitcode_transversal_lyap

jitcode_transversal_lyap is a variant of jitcode_lyap that calculates Lyapunov exponents transversal to a user-defined synchronisation manifold. It automatically conflates the differential equations of a group of synchronised components into one equation on the synchronisation manifold. Moreover, it transforms the equations for the tangent vectors such that the tangent vector is automatically orthogonal to the synchronisation manifold. If you are interested in the mathematical details, please read the accompanying paper.

For instance, let’s interpret the system from A quick example as two oscillators (which is what it is), one consisting of the first and second and one of the third and fourth component. Furthermore, let’s change the control parameters a bit to make the two oscillators identical. We can then calculate the transversal Lyapunov exponents to the synchronisation manifold as follows:

 1from jitcode import jitcode_transversal_lyap, y
 2from scipy.stats import sem
 3import numpy as np
 4
 5a = -0.025794
 6b =  0.01
 7c =  0.02
 8k =  0.128
 9
10f = [
11	y(0) * ( a-y(0) ) * ( y(0)-1.0 ) - y(1) + k * (y(2) - y(0)),
12	b*y(0) - c*y(1),
13	y(2) * ( a-y(2) ) * ( y(2)-1.0 ) - y(3) + k * (y(0) - y(2)),
14	b*y(2) - c*y(3)
15	]
16
17initial_state = np.array([1.,2.])
18
19groups = [ [0,2], [1,3] ]
20ODE = jitcode_transversal_lyap(f, groups=groups)
21ODE.set_integrator("lsoda")
22ODE.set_initial_value(initial_state,0.0)
23
24times = range(10,100000,10)
25lyaps = []
26for time in times:
27	lyaps.append(ODE.integrate(time)[1])
28
29lyap = np.average(lyaps[1000:])
30stderr = sem(lyaps[1000:]) # Note that this is only an estimate
31print("transversal Lyapunov exponent: % .4f ± %.4f" % (lyap,stderr))

Note that the initial state (line 17) is reduced in dimensionality and there is only one component for each synchronisation group.

Other things you may want to do

  • JiTCDDE is like JiTCODE for delay differential equations.

  • JiTCSDE is like JiTCODE for stochastic differential equations.

  • If you want to have input that cannot be expressed in a simple function, you can use jitcdde_input or use a callback (see the next point).

  • If you want to call a Python function within the derivative, use the callback_functions argument.

  • The zvode integrator and complex arithmetic in general do not work, as they are not that easily implementable in C.

Command reference

y

the symbol for the state that must be used to define the differential equation. It is a function, and the integer argument denotes the component. You may just as well define an analogous function directly with SymEngine or SymPy, but using this function is the best way to get the most out of future versions of JiTCODE, in particular avoiding incompatibilities. You can import a SymPy variant from the submodule sympy_symbols instead (see SymPy vs. SymEngine for details).

t

the symbol for time for defining the differential equation. If your differential equation has no explicit time dependency (“autonomous system”), you do not need this. You may just as well define an analogous symbol directly with SymEngine or SymPy, but using this symbol is the best way to get the most out of future versions of JiTCODE, in particular avoiding incompatibilities. You can import a SymPy variant from the submodule sympy_symbols instead (see SymPy vs. SymEngine for details).

test(omp=True, sympy=True)

Runs a quick simulation to test whether:

  • a compiler is available and can be interfaced by Setuptools,

  • OMP libraries are available and can be accessed,

  • SymPy is available.

The latter two tests can be deactivated with the respective argument. This is not a full software test but rather a quick sanity check of your installation. If successful, this function just finishes without any message.
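
For instance, a quick check of an installation without the OpenMP test might look like this (a sketch, assuming test is imported from the module’s top level):

from jitcode import test
test(omp=False)   # finishes silently if compiler and SymPy work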

class UnsuccessfulIntegration

This exception is raised when the integrator cannot meet the accuracy and step-size requirements. If you want to know the exact state of your system before the integration fails or similar, catch this exception.
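
A sketch of how one might use this (assuming UnsuccessfulIntegration can be imported from jitcode’s top level; ODE and time are taken from a setup as in the examples above):

from jitcode import UnsuccessfulIntegration

try:
	state = ODE.integrate(time)
except UnsuccessfulIntegration:
	# inspect or store the last successful state here
	print("Integration failed.")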

The main class

class jitcode(f_sym=(), *, helpers=None, wants_jacobian=False, n=None, control_pars=(), callback_functions=(), verbose=True, module_location=None)
Parameters:
f_sym: iterable of symbolic expressions, generator function yielding symbolic expressions, or dictionary

If an iterable or generator function, the i-th element is the i-th component of the value of the ODE’s derivative \(f(t,y)\). If a dictionary, it has to map the dynamical variables to their derivatives, and the dynamical variables must be y(0), y(1), …

helpers: list of length-two iterables, each containing a symbol and a symbolic expression

Each helper is a variable that will be calculated before evaluating the derivative and can be used in the latter’s computation. The first component of the tuple is the helper’s symbol as referenced in the derivative or other helpers, the second component describes how to compute it from t, y and other helpers. This is for example useful to realise a mean-field coupling, where the helper could look like (mean, sum(y(i) for i in range(100))/100). (See A more complicated example for an example.)

wants_jacobian: boolean

Tell JiTCODE to calculate and compile the Jacobian. For vanilla use, you do not need to bother about this as this is automatically set to True if the selected method of integration desires the Jacobian. However, it is sometimes useful if you want to manually apply some code-generation steps (e.g., to apply some tweaks).

n: integer

Length of f_sym. While JiTCODE can easily determine this itself (and will, if necessary), this may take some time if f_sym is a generator function and n is large. Take care that this value is correct – if it isn’t, you will not get a helpful error message.

control_pars: iterable of symbols

Each symbol corresponds to a control parameter that can be used when defining the equations and set after compilation using set_parameters (in the same order as given here). Using this makes sense if you need to do a parameter scan with short integrations for each parameter and you spend a considerable amount of time compiling.

callback_functions: iterable

Python functions that should be called at integration time (callback) when evaluating the derivative. Each element of the iterable represents one callback function as a tuple containing (in that order):

  • A SymEngine function object used in f_sym to represent the function call. If you want to use any JiTCODE features that need the derivative, this must have a properly defined f_diff method with the derivative being another callback function (or constant).

  • The Python function to be called. This function will receive the state array (y) as the first argument. All further arguments are whatever you use as arguments of the SymEngine function in f_sym. These can be any expression that you might use in the definition of the derivative and contain, e.g., dynamical variables, time, control parameters, and helpers. The only restriction is that the arguments are floats (and not vectors or similar). The return value must also be a float (or something castable to float). It is your responsibility to ensure that this function adheres to these criteria, is deterministic, and is sufficiently smooth with respect to its arguments; expect nasty errors otherwise.

  • The number of arguments, excluding the state array as mandatory first argument. This means if you have a variadic Python function, you cannot just call it with different numbers of arguments in f_sym, but you have to define separate callbacks for each number of arguments.

See this example (for JiTCDDE) for how to use this.
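
As a minimal hypothetical sketch of the expected structure (the names g and my_callback are made up for illustration; with an explicit integrator, no derivative of the callback is needed):

import symengine
import numpy as np
from jitcode import jitcode, y

g = symengine.Function("g")   # symbolic stand-in for the callback

def my_callback(y_state,arg):
	# receives the state array first; must return a float
	return float(np.tanh(arg))

f = [ y(1), -y(0) + g(y(0)) ]
ODE = jitcode( f, callback_functions=[(g,my_callback,1)] )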

verbose: boolean

Whether JiTCODE shall give progress reports on the processing steps.

module_location: string

location of a module file from which functions are to be loaded (see save_compiled). If you use this, you need not give f_sym as an argument, but in this case you must give n. If you used control_pars or callback_functions, you have to provide them again. Depending on the arguments you provide, functionalities such as recompiling may not be available; but then the entire point of this option is to avoid these.

generate_jac_sym(simplify=True)

generates the Jacobian using SymEngine’s differentiation.

Parameters:
simplify: boolean

Whether the resulting Jacobian should be simplified (with ratio=1.0). This is almost always a good thing.

generate_f_C(simplify=None, do_cse=False, chunk_size=100)

translates the derivative to C code using SymEngine’s C-code printer.

Parameters:
simplify: boolean or None

Whether the derivative should be simplified (with ratio=1.0) before translating to C code. The main reason to enable this is if you expect your derivative not to be optimised yet and not to be so large that simplifying takes a considerable amount of time. If None, this will be automatically disabled for n>10.

do_cse: boolean

Whether SymPy’s common-subexpression detection should be applied before translating to C code. It is almost always better to let the compiler do this (unless you want to set the compiler optimisation to -O2 or lower): For simple differential equations this should not make any difference to the compiler’s optimisations. For large ones, it may make a difference but also take long. As this requires all entries of f at once, it may void advantages gained from using generator functions as an input. Also, this feature uses SymPy and not SymEngine.

chunk_size: integer

If the number of instructions in the final C code exceeds this number, it will be split into chunks of this size. See Handling very large differential equations on why this is useful and how to best choose this value. If smaller than 1, no chunking will happen.

generate_jac_C(do_cse=False, chunk_size=100, sparse=True)

translates the symbolic Jacobian to C code using SymEngine’s C-code printer. If the symbolic Jacobian has not been generated, it generates it by calling generate_jac_sym.

Parameters:
do_cse: boolean

Whether SymPy’s common-subexpression detection should be applied before translating to C code. It is almost always better to let the compiler do this (unless you want to set the compiler optimisation to -O2 or lower): For simple differential equations this should not make any difference to the compiler’s optimisations. For large ones, it may make a difference but also take long. As this requires the entire Jacobian at once, it may void advantages gained from using generator functions as an input. Also, this feature uses SymPy and not SymEngine.

chunk_size: integer

If the number of instructions in the final C code exceeds this number, it will be split into chunks of this size. See Handling very large differential equations on why this is useful and how to best choose this value. If smaller than 1, no chunking will happen.

sparse: boolean

Whether a sparse Jacobian should be assumed for optimisation. Note that this does not mean that the Jacobian is stored, parsed or handled as a sparse matrix. This kind of optimisation would require ode or solve_ivp to be able to handle sparse matrices without structure in the sparseness.

generate_helpers_C(chunk_size=100)

translates the helpers to C code using SymEngine’s C-code printer.

Parameters:
chunk_size: integer

If the number of instructions in the final C code exceeds this number, it will be split into chunks of this size. See Handling very large differential equations on why this is useful.

If there is an obvious grouping of your helpers, the group size suggests itself for chunk_size.

If smaller than 1, no chunking will happen.

compile_C(extra_compile_args=None, extra_link_args=None, verbose=False, modulename=None, omp=False)

compiles the C code (using Setuptools) and loads the compiled functions. If no C code exists, it is generated by calling generate_f_C and generate_jac_C. For detailed information on the arguments and other ways to tweak the compilation, read these notes.

Parameters:
extra_compile_args: iterable of strings
extra_link_args: iterable of strings

Arguments to be handed to the C compiler or linker, respectively.

verbose: boolean

Whether the compiler commands shall be shown. This is the same as Setuptools’ verbose setting.

modulename: string or None

The name used for the compiled module.

omp: pair of iterables of strings or boolean

What compiler arguments shall be used for multiprocessing (using OpenMP). If True, they will be selected automatically. If empty or False, no compilation for multiprocessing will happen (unless you supply the relevant compiler arguments otherwise).

generate_f_lambda(simplify=None, do_cse=False)

translates the symbolic derivative to a function using SymEngine’s Lambdify or LambdifyCSE.

Parameters:
simplify: boolean or None

Whether the derivative should be simplified (with ratio=1.0) before lambdification. The main reason to enable this is if you expect your derivative not to be optimised yet and not to be so large that simplifying takes a considerable amount of time. If None, this will be automatically disabled for n>10.

do_cse: boolean

Whether a common-subexpression detection, namely LambdifyCSE, should be used.

generate_jac_lambda(do_cse=False)

translates the symbolic Jacobian to a function using SymEngine’s Lambdify or LambdifyCSE. If the symbolic Jacobian has not been generated, it is generated by calling generate_jac_sym.

Parameters:
do_cse: boolean

Whether a common-subexpression detection, namely LambdifyCSE, should be used.

generate_lambdas()

If they do not already exist, this generates lambdified functions by calling generate_f_lambda and, if wanted, generate_jac_lambda.

generate_functions()

The central function-generating function. Tries to compile the derivative and, if wanted, the Jacobian. If this fails, it generates lambdified functions as a fallback.

property y_dict

The current state of the system as a dictionary mapping dynamical variables to their current value. Note that if you use this often, you may want to use self.y instead for efficiency.

set_initial_value(initial_value, time=0.0)

Same as the analogous function in SciPy’s ODE, except that it also accepts the initial value in the form of a dictionary that maps dynamical variables to their initial value.

set_integrator(name, nsteps=1000000, interpolate=True, **integrator_params)

Analogous to the function in SciPy’s ODE with the same name. This automatically generates the derivative and Jacobian if they do not exist yet and are needed. You can also choose integrators from scipy.integrate.solve_ivp.

Parameters:
name: name of the integrator

One of the following (or a new method supported by either backend):

  • "dopri5" – Dormand’s and Prince’s explicit fifth-order method via ode

  • "RK45" – Dormand’s and Prince’s explicit fifth-order method via solve_ivp

  • "dop853" – DoP853 (explicit) via ode

  • "DOP853" – DoP853 (explicit) via solve_ivp

  • "RK23" – Bogacki’s and Shampine’s explicit third-order method via solve_ivp

  • "BDF" – Implicit backward-differentiation formula via solve_ivp

  • "lsoda" – LSODA (implicit) via ode

  • "LSODA" – LSODA (implicit) via solve_ivp

  • "Radau" – The implicit Radau method via solve_ivp

  • "vode" – VODE (implicit) via ode

The solve_ivp methods are usually slightly faster for large differential equations, but they come with a massive overhead that makes them considerably slower for small differential equations. Implicit solvers are slower than explicit ones, except for stiff problems. If you don’t know what to choose, start with "dopri5".

nsteps: integer

Same as the respective parameter of the ode solvers, but with a higher default value to avoid annoying errors when getting rid of transients.

interpolate: boolean

Whether the sampled solutions for solve_ivp solvers shall be obtained using interpolation. If your sampling step is small, this may make things faster; otherwise it depends. This may also make the results slightly less accurate.

integrator_params

Parameters passed to the respective integrator. See its documentation for more.
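
For example, tolerances and other backend parameters are simply passed through (a sketch; rtol, atol, and method belong to the respective backends, not to JiTCODE):

# a solve_ivp method with custom tolerances:
ODE.set_integrator("RK45", rtol=1e-6, atol=1e-9)
# or an implicit ode method, for which JiTCODE supplies the Jacobian:
ODE.set_integrator("vode", method="bdf")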

set_parameters(*args)

Same as set_f_params and set_jac_params for SciPy’s ODE (both sets of parameters are set simultaneously, because they should be the same anyway).

The parameters can be passed as different arguments or as a list or other sequence.
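
A sketch of a parameter scan using control_pars and set_parameters (the symbol a and the equations are made up for illustration):

import symengine
from jitcode import jitcode, y

a = symengine.Symbol("a")
f = [ y(1), -a*y(0) ]   # harmonic oscillator with tunable frequency

ODE = jitcode(f, control_pars=[a])
ODE.set_integrator("dopri5")        # compiles once …

for a_value in [0.5, 1.0, 2.0]:     # … scans cheaply
	ODE.set_initial_value([1.0,0.0], 0.0)
	ODE.set_parameters(a_value)
	ODE.integrate(10.0)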

check(fail_fast=True)

Performs a series of checks that may not be feasible at runtime (usually due to their length). Whenever you run into an error that you cannot make sense of, try running this. It checks for the following mistakes:

  • negative arguments of y

  • arguments of y that are equal to or higher than the system’s dimension n

  • unused variables

Parameters:
fail_fast: boolean

whether to abort on the first failure. If false, an error is raised only after all problems are printed.

render_and_write_code(expressions, name, chunk_size=100, arguments=(), omp=True)

Writes expressions to code.

Parameters:
expressions: iterator

expressions to be written

name: string

unique name of what is computed

chunk_size: integer

size of chunks. If smaller than 1, no chunking happens.

arguments: list of tuples

Each tuple contains the name, type, and size (optional, for arrays) of an argument needed by the code. This is so the arguments can be passed to chunked functions.

omp: boolean

whether OMP pragmas should be included

save_compiled(destination='', overwrite=False)

saves the module file with the compiled functions for later use (see the module_location argument). If no compiled derivative exists, it tries to compile it first using compile_C. In most circumstances, you should not rename this file, as the filename is needed to determine the module name.

Parameters:
destination: string specifying a path

If this specifies only a directory (don’t forget the trailing / or similar), the module will be saved to that directory. If empty (default), the module will be saved to the current working directory. Otherwise, the functions will be (re)compiled to match that filename (which will ignore the accomplishments and settings of any previous call of compile_C). A file ending will be appended if needed.

overwrite: boolean

Whether to overwrite the specified target if it already exists.

Returns:
filename: string

The destination that was actually used.
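
A sketch of the intended round trip, reusing f (with n=4) from A quick example:

# first run: compile and save
ODE = jitcode(f)
filename = ODE.save_compiled(overwrite=True)

# later run: load the compiled functions instead of recompiling
ODE = jitcode(n=4, module_location=filename)
ODE.set_integrator("dopri5")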

Lyapunov exponents

class jitcode_lyap(f_sym=(), n_lyap=-1, simplify=None, **kwargs)

The handling is the same as that for jitcode except for:

Parameters:
n_lyap: integer

Number of Lyapunov exponents to calculate. If negative or larger than the dimension of the system, all Lyapunov exponents are calculated.

simplify: boolean

Whether the differential equations for the tangent vector shall be subjected to SymEngine’s simplify. Doing so may speed up the time evolution but may slow down the generation of the code (considerably for large differential equations). If None, this will be automatically disabled for n>10.

set_initial_value(y, time=0.0)

Same as the analogous function in SciPy’s ODE, except that it also accepts the initial value in the form of a dictionary that maps dynamical variables to their initial value.

set_integrator(name, interpolate=None, **kwargs)

Same as for jitcode except that interpolate defaults to False for RK45 and Radau to avoid an accumulation error through the nature of the interpolation algorithm. Using LSODA as an integrator is discouraged.

integrate(*args, **kwargs)

Like SciPy’s ODE’s integrate, except for orthonormalising the tangent vectors and:

Returns:
y: one-dimensional NumPy array

The state of the system. Same as the output of jitcode’s integrate and ode’s integrate.

lyaps: one-dimensional NumPy array

The “local” Lyapunov exponents as estimated from the growth or shrinking of the tangent vectors during the integration time of this very integrate command, i.e., \(\frac{\ln (α_i^{(p)})}{s_i}\) in the notation of [BGGS80].

lyap_vectors: list of one-dimensional NumPy arrays

The Lyapunov vectors (normalised) after integration.

property y_dict

The current state of the system as a dictionary mapping dynamical variables to their current value. Note that if you use this often, you may want to use self.y instead for efficiency.

class jitcode_transversal_lyap(f_sym=(), groups=(), simplify=None, average_dynamics=False, **kwargs)

Calculates the largest Lyapunov exponent in the direction orthogonal to a predefined synchronisation manifold, i.e., the projection of the tangent vector onto that manifold vanishes. In contrast to jitcode_restricted_lyap, this performs some transformations tailored to this specific application that may strongly reduce the number of differential equations and ensure that the dynamics stays on the synchronisation manifold. If you are interested in the mathematical details, please read the accompanying paper.

The handling is the same as that for jitcode except for:

Parameters:
groups: iterable of iterables of integers

each group is an iterable of indices that identify dynamical variables that are synchronised on the synchronisation manifold.

simplify: boolean

Whether the transformed differential equations shall be subjected to SymEngine’s simplify. Doing so may speed up the time evolution but may slow down the generation of the code (considerably for large differential equations). If None, this will be automatically disabled for n>10.

average_dynamics: boolean

Whether the dynamics of synchronised variables shall be averaged over all variables in a synchronisation group. Otherwise, the dynamics of the first variable of a group is taken as representative (the default). Activating this option may be useful if the investigated synchrony is not complete – otherwise it has no effect. Using this voids the advantages of generator functions and may lead to problems with very large dynamics.

set_initial_value(y, time=0.0)

Like the analogous function of jitcode/scipy.integrate.ode, except that only one initial value per group of synchronised components has to be provided (in the same order as the groups argument of the constructor).

set_integrator(name, interpolate=False, **kwargs)

Same as for jitcode except that interpolate defaults to False for RK45 and Radau to avoid an accumulation error through the nature of the interpolation algorithm. Using LSODA as an integrator is discouraged.

integrate(*args, **kwargs)

Like SciPy’s ODE’s integrate, except for orthonormalising the tangent vector and:

Returns:
y: one-dimensional NumPy array

The state of the system. Only one value per group of synchronised components is returned (in the same order as the groups argument of the constructor).

lyaps: one-dimensional NumPy array

The “local” Lyapunov exponent as estimated from the growth or shrinking of the tangent vector during the integration time of this very integrate command, i.e., \(\frac{\ln (α_i^{(p)})}{s_i}\) in the notation of [BGGS80].

property y_dict

The current state of the system as a dictionary mapping dynamical variables to their current value. Note that if you use this often, you may want to use self.y instead for efficiency.

class jitcode_restricted_lyap(f_sym=(), vectors=(), **kwargs)

Calculates the largest Lyapunov exponent in the direction orthogonal to a predefined plane, i.e., the projection of the tangent vector onto that plane vanishes. See this test for an example of usage. The handling is the same as that for jitcode_lyap except for:

Parameters:
vectors: iterable of pairs of NumPy arrays

A basis of the plane, whose projection shall be removed.

References

[BGGS80]
  G. Benettin, L. Galgani, A. Giorgilli, and J.-M. Strelcyn: Lyapunov Characteristic Exponents for smooth dynamical systems and for Hamiltonian systems; a method for computing all of them. Meccanica 15, pp. 9–30 (1980), doi:10.1007/BF02128236.