scipy 1.8.0
scipy.optimize.root(fun, x0, args=(), method='hybr', jac=None, tol=None, callback=None, options=None)

Notes

This section describes the available solvers that can be selected by the 'method' parameter. The default method is hybr.

Method hybr uses a modification of the Powell hybrid method as implemented in MINPACK.

Method lm solves the system of nonlinear equations in a least squares sense using a modification of the Levenberg-Marquardt algorithm as implemented in MINPACK.

Method df-sane is a derivative-free spectral method.

Methods broyden1, broyden2, anderson, linearmixing, diagbroyden, excitingmixing, and krylov are inexact Newton methods, with backtracking or full line searches. Each method corresponds to a particular Jacobian approximation.

Warning

The algorithms implemented for methods diagbroyden, linearmixing and excitingmixing may be useful for specific problems, but whether they will work may depend strongly on the problem.

New in version 0.11.0.
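As a rough illustration of switching solvers (a minimal sketch; the small two-equation system below is invented for this example, and whether each method converges depends on the problem and the starting point):

>>> import numpy as np
>>> from scipy import optimize
>>> def f(x):
...     # illustrative system: x0*cos(x1) = 4 and x0*x1 - x1 = 5
...     return [x[0] * np.cos(x[1]) - 4,
...             x[0] * x[1] - x[1] - 5]
>>> sol_hybr = optimize.root(f, [1, 1])                   # default 'hybr' (MINPACK hybrid method)
>>> sol_lm = optimize.root(f, [1, 1], method='lm')        # Levenberg-Marquardt, least-squares sense
>>> sol_df = optimize.root(f, [1, 1], method='df-sane')   # derivative-free spectral method
>>> sol_kry = optimize.root(f, [1, 1], method='krylov')   # inexact Newton, Krylov Jacobian approximation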

Parameters

fun : callable

A vector function to find a root of.

x0 : ndarray

Initial guess.

args : tuple, optional

Extra arguments passed to the objective function and its Jacobian.
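For instance (an illustrative sketch; the parameterized residual and the values of a and b below are made up), extra parameters are forwarded unchanged to fun via args:

>>> from scipy import optimize
>>> def f(x, a, b):
...     # hypothetical residual with parameters a and b: solve x**3 - a*x = b
...     return [x[0]**3 - a * x[0] - b]
>>> sol = optimize.root(f, [1.0], args=(2.0, 1.0))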

method : str, optional

Type of solver. Should be one of

  • 'hybr'

  • 'lm'

  • 'broyden1'

  • 'broyden2'

  • 'anderson'

  • 'linearmixing'

  • 'diagbroyden'

  • 'excitingmixing'

  • 'krylov'

  • 'df-sane'

jac : bool or callable, optional

If jac is a Boolean and is True, fun is assumed to return the value of the Jacobian along with the objective function. If False, the Jacobian will be estimated numerically. jac can also be a callable returning the Jacobian of fun. In this case, it must accept the same arguments as fun.
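As a sketch (reusing the two-equation system from the Examples section below; the name fun_and_jac is arbitrary), a single callable can return both the residual and its Jacobian when jac=True:

>>> import numpy as np
>>> from scipy import optimize
>>> def fun_and_jac(x):
...     # return (residual, Jacobian) as a pair, for use with jac=True
...     f = [x[0] + 0.5 * (x[0] - x[1])**3 - 1.0,
...          0.5 * (x[1] - x[0])**3 + x[1]]
...     J = np.array([[1 + 1.5 * (x[0] - x[1])**2, -1.5 * (x[0] - x[1])**2],
...                   [-1.5 * (x[1] - x[0])**2, 1 + 1.5 * (x[1] - x[0])**2]])
...     return f, J
>>> sol = optimize.root(fun_and_jac, [0, 0], jac=True, method='lm')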

tol : float, optional

Tolerance for termination. For detailed control, use solver-specific options.

callback : function, optional

Optional callback function. It is called on every iteration as callback(x, f), where x is the current solution and f the corresponding residual. Supported by all methods except 'hybr' and 'lm'.
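A minimal sketch of monitoring progress with a callback (the monitor function and the one-equation problem are invented here), using a method that supports callbacks:

>>> import numpy as np
>>> from scipy import optimize
>>> iterates = []
>>> def monitor(x, f):
...     # hypothetical callback: record each iterate and its residual norm
...     iterates.append((x.copy(), np.linalg.norm(f)))
>>> sol = optimize.root(lambda x: [np.cos(x[0]) - x[0]], [0.5],
...                     method='broyden1', callback=monitor)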

options : dict, optional

A dictionary of solver options, e.g. xtol or maxiter; see show_options() for details.
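For example (a minimal sketch; the one-equation problem and the specific tolerance values are arbitrary), 'hybr' accepts options such as xtol and maxfev, and optimize.show_options(solver='root', method='hybr') prints the full list for that solver:

>>> from scipy import optimize
>>> sol = optimize.root(lambda x: [x[0]**2 - 2.0], [1.0], method='hybr',
...                     options={'xtol': 1e-12, 'maxfev': 200})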

Returns

sol : OptimizeResult

The solution represented as an OptimizeResult object. Important attributes are: x, the solution array; success, a Boolean flag indicating if the algorithm exited successfully; and message, which describes the cause of the termination. See OptimizeResult for a description of other attributes.
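A brief sketch of inspecting the returned object (the one-equation problem is invented for illustration):

>>> from scipy import optimize
>>> sol = optimize.root(lambda x: [x[0]**3 - 1.0], [2.0])
>>> sol.success
True
>>> sol.x.round(6)
array([1.])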

Find a root of a vector function.

See Also

show_options

Additional options accepted by the solvers

Examples

The following functions define a system of nonlinear equations and its Jacobian.

>>> import numpy as np
>>> def fun(x):
...     return [x[0] + 0.5 * (x[0] - x[1])**3 - 1.0,
...             0.5 * (x[1] - x[0])**3 + x[1]]
>>> def jac(x):
...     return np.array([[1 + 1.5 * (x[0] - x[1])**2,
...                       -1.5 * (x[0] - x[1])**2],
...                      [-1.5 * (x[1] - x[0])**2,
...                       1 + 1.5 * (x[1] - x[0])**2]])

A solution can be obtained as follows.

>>> from scipy import optimize
>>> sol = optimize.root(fun, [0, 0], jac=jac, method='hybr')
>>> sol.x
array([ 0.8411639,  0.1588361])

Large problem

Suppose that we needed to solve the following integrodifferential equation on the square $[0,1]\times[0,1]$ :

$$\nabla^2 P = 10 \left(\int_0^1\int_0^1\cosh(P)\,dx\,dy\right)^2$$

with $P(x,1) = 1$ and $P=0$ elsewhere on the boundary of the square.

The solution can be found using the method='krylov' solver:

>>> from scipy import optimize
>>> # parameters
>>> nx, ny = 75, 75
>>> hx, hy = 1./(nx-1), 1./(ny-1)

>>> P_left, P_right = 0, 0
>>> P_top, P_bottom = 1, 0

>>> def residual(P):
...     d2x = np.zeros_like(P)
...     d2y = np.zeros_like(P)
...
...     d2x[1:-1] = (P[2:]   - 2*P[1:-1] + P[:-2]) / hx/hx
...     d2x[0]    = (P[1]    - 2*P[0]    + P_left)/hx/hx
...     d2x[-1]   = (P_right - 2*P[-1]   + P[-2])/hx/hx
...
...     d2y[:,1:-1] = (P[:,2:] - 2*P[:,1:-1] + P[:,:-2])/hy/hy
...     d2y[:,0]    = (P[:,1]  - 2*P[:,0]    + P_bottom)/hy/hy
...     d2y[:,-1]   = (P_top   - 2*P[:,-1]   + P[:,-2])/hy/hy
...
...     return d2x + d2y - 10*np.cosh(P).mean()**2
>>> guess = np.zeros((nx, ny), float)
>>> sol = optimize.root(residual, guess, method='krylov')
>>> print('Residual: %g' % abs(residual(sol.x)).max())
Residual: 5.7972e-06  # may vary
>>> import matplotlib.pyplot as plt
>>> x, y = np.mgrid[0:1:(nx*1j), 0:1:(ny*1j)]
>>> plt.pcolormesh(x, y, sol.x, shading='gouraud')
>>> plt.colorbar()
>>> plt.show()

Back References

The following pages refer to this document either explicitly or contain code examples using it.

scipy.optimize._nonlin.linearmixing scipy.optimize._nonlin.BroydenSecond scipy.optimize._root._root_broyden1_doc scipy.optimize._root_scalar.root_scalar scipy.optimize._nonlin.KrylovJacobian scipy.optimize._nonlin.newton_krylov scipy.optimize._optimize.show_options scipy.optimize._root.root scipy.optimize._nonlin.excitingmixing scipy.optimize._nonlin.broyden1 scipy.optimize._nonlin.BroydenFirst scipy.optimize._minpack_py.fsolve scipy.optimize._nonlin.broyden2 scipy.optimize._nonlin.diagbroyden scipy.optimize scipy.optimize._zeros_py.newton



GitHub : /scipy/optimize/_root.py#24
type: <class 'function'>