Python scipy optimize root

scipy.optimize.root

args tuple, optional

Extra arguments passed to the objective function and its Jacobian.

method str, optional

Type of solver. Should be one of

  • ‘hybr’
  • ‘lm’
  • ‘broyden1’
  • ‘broyden2’
  • ‘anderson’
  • ‘linearmixing’
  • ‘diagbroyden’
  • ‘excitingmixing’
  • ‘krylov’
  • ‘df-sane’

jac bool or callable, optional

If jac is a Boolean and is True, fun is assumed to return the value of the Jacobian along with the objective function. If False, the Jacobian will be estimated numerically. jac can also be a callable returning the Jacobian of fun. In this case, it must accept the same arguments as fun.
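For instance, a minimal sketch of the jac=True calling convention, using a small illustrative system (fun_and_jac and its equations are not from the original text; they only demonstrate a function returning both the residuals and their Jacobian):

import numpy as np
from scipy import optimize

def fun_and_jac(x):
    # Residuals of a small illustrative 2x2 system (not from the original text)
    f = np.array([x[0]**2 + x[1] - 1.0,
                  x[0] - x[1]**2])
    # Analytic Jacobian of the residuals, returned together with f
    J = np.array([[2.0 * x[0],  1.0],
                  [1.0,        -2.0 * x[1]]])
    return f, J

# jac=True tells root() that fun_and_jac returns (residuals, Jacobian)
sol = optimize.root(fun_and_jac, [0.5, 0.5], jac=True, method='hybr')
print(sol.x)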

tol float, optional

Tolerance for termination. For detailed control, use solver-specific options.

callback function, optional

Optional callback function. It is called on every iteration as callback(x, f), where x is the current solution and f the corresponding residual. Supported by all methods except ‘hybr’ and ‘lm’.
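As a hedged illustration (the system and the monitor function below are made up for this sketch), a callback can be used with one of the methods that supports it, such as ‘broyden1’, to watch the residual norm at each iteration:

import numpy as np
from scipy import optimize

def fun(x):
    # Small illustrative nonlinear system
    return [x[0] + 0.5 * (x[0] - x[1])**3 - 1.0,
            0.5 * (x[1] - x[0])**3 + x[1]]

def monitor(x, f):
    # Called once per iteration with the current iterate x and residual f
    print("||f|| =", np.linalg.norm(f))

# 'broyden1' is one of the methods that accepts a callback
sol = optimize.root(fun, [0.0, 0.0], method='broyden1', callback=monitor)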

options dict, optional

A dictionary of solver options. E.g., xtol or maxiter, see show_options() for details.
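For example, assuming the default ‘hybr’ solver (whose MINPACK options include xtol and maxfev), options can be passed as a dict, and show_options lists what a given method accepts; the small system below is only illustrative:

from scipy import optimize

def fun(x):
    # Illustrative residuals; any well-posed system would do here
    return [x[0] + 0.5 * (x[0] - x[1])**3 - 1.0,
            0.5 * (x[1] - x[0])**3 + x[1]]

# 'hybr' (MINPACK hybrd) accepts, among others, xtol and maxfev
sol = optimize.root(fun, [0, 0], method='hybr',
                    options={'xtol': 1e-10, 'maxfev': 2000})

# Print the documented options for a given solver/method combination
optimize.show_options(solver='root', method='hybr')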

Returns: sol OptimizeResult

The solution represented as an OptimizeResult object. Important attributes are: x, the solution array; success, a Boolean flag indicating whether the algorithm exited successfully; and message, which describes the cause of the termination. See OptimizeResult for a description of other attributes.

Additional options accepted by the solvers

This section describes the available solvers that can be selected by the ‘method’ parameter. The default method is hybr.

Method hybr uses a modification of the Powell hybrid method as implemented in MINPACK [1].

Method lm solves the system of nonlinear equations in a least squares sense using a modification of the Levenberg-Marquardt algorithm as implemented in MINPACK [1].

Method df-sane is a derivative-free spectral method [3].

Methods broyden1, broyden2, anderson, linearmixing, diagbroyden, excitingmixing and krylov are inexact Newton methods, with backtracking or full line searches [2]. Each method corresponds to a particular Jacobian approximation.

  • Method broyden1 uses Broyden’s first Jacobian approximation; it is known as Broyden’s good method.
  • Method broyden2 uses Broyden’s second Jacobian approximation; it is known as Broyden’s bad method.
  • Method anderson uses (extended) Anderson mixing.
  • Method krylov uses a Krylov approximation for the inverse Jacobian. It is suitable for large-scale problems.
  • Method diagbroyden uses a diagonal Broyden Jacobian approximation.
  • Method linearmixing uses a scalar Jacobian approximation.
  • Method excitingmixing uses a tuned diagonal Jacobian approximation.

The algorithms implemented for methods diagbroyden, linearmixing and excitingmixing may be useful for specific problems, but whether they will work may depend strongly on the problem.
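As a rough sketch of how these methods are selected (the system is the same small one used in the examples further below, repeated here so the snippet is self-contained; whether each method converges is problem dependent, as noted above):

import numpy as np
from scipy import optimize

def fun(x):
    # The same small system used in the examples below
    return [x[0] + 0.5 * (x[0] - x[1])**3 - 1.0,
            0.5 * (x[1] - x[0])**3 + x[1]]

# Switching the Jacobian approximation is just a matter of changing `method`
for method in ['broyden1', 'broyden2', 'anderson', 'krylov', 'df-sane']:
    sol = optimize.root(fun, [0.0, 0.0], method=method)
    print(method, sol.success, np.max(np.abs(fun(sol.x))))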

[1] More, Jorge J., Burton S. Garbow, and Kenneth E. Hillstrom. 1980. User Guide for MINPACK-1.

[2] C. T. Kelley. 1995. Iterative Methods for Linear and Nonlinear Equations. Society for Industrial and Applied Mathematics.

[3] W. La Cruz, J. M. Martinez, and M. Raydan. 2006. Spectral Residual Method without Gradient Information for Solving Large-Scale Nonlinear Systems of Equations. Mathematics of Computation, 75, 1429-1448.

The following functions define a system of nonlinear equations and its Jacobian.

>>> import numpy as np
>>> def fun(x):
...     return [x[0] + 0.5 * (x[0] - x[1])**3 - 1.0,
...             0.5 * (x[1] - x[0])**3 + x[1]]

>>> def jac(x):
...     return np.array([[1 + 1.5 * (x[0] - x[1])**2,
...                       -1.5 * (x[0] - x[1])**2],
...                      [-1.5 * (x[1] - x[0])**2,
...                       1 + 1.5 * (x[1] - x[0])**2]])

A solution can be obtained as follows.

>>> from scipy import optimize
>>> sol = optimize.root(fun, [0, 0], jac=jac, method='hybr')
>>> sol.x
array([ 0.8411639,  0.1588361])
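The returned OptimizeResult can also be inspected to confirm convergence; continuing the session above, the following checks are expected to pass on a typical run:

>>> sol.success
True
>>> np.allclose(fun(sol.x), 0.0, atol=1e-8)
True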

Large problem

Suppose that we needed to solve the following integrodifferential equation on the square \([0,1]\times[0,1]\):

\[\nabla^2 P = 10 \left(\int_0^1 \int_0^1 \cosh(P)\, dx\, dy\right)^2\]

with \(P(x,1) = 1\) and \(P=0\) elsewhere on the boundary of the square.

The solution can be found using the method='krylov' solver:

>>> from scipy import optimize
>>> # parameters
>>> nx, ny = 75, 75
>>> hx, hy = 1./(nx-1), 1./(ny-1)

>>> P_left, P_right = 0, 0
>>> P_top, P_bottom = 1, 0

>>> def residual(P):
...     d2x = np.zeros_like(P)
...     d2y = np.zeros_like(P)
...
...     d2x[1:-1] = (P[2:]   - 2*P[1:-1] + P[:-2]) / hx/hx
...     d2x[0]    = (P[1]    - 2*P[0]    + P_left)/hx/hx
...     d2x[-1]   = (P_right - 2*P[-1]   + P[-2])/hx/hx
...
...     d2y[:,1:-1] = (P[:,2:] - 2*P[:,1:-1] + P[:,:-2])/hy/hy
...     d2y[:,0]    = (P[:,1]  - 2*P[:,0]    + P_bottom)/hy/hy
...     d2y[:,-1]   = (P_top   - 2*P[:,-1]   + P[:,-2])/hy/hy
...
...     return d2x + d2y - 10*np.cosh(P).mean()**2

>>> guess = np.zeros((nx, ny), float)
>>> sol = optimize.root(residual, guess, method='krylov')
>>> print('Residual: %g' % abs(residual(sol.x)).max())
Residual: 5.7972e-06  # may vary

>>> import matplotlib.pyplot as plt
>>> x, y = np.mgrid[0:1:(nx*1j), 0:1:(ny*1j)]
>>> plt.pcolormesh(x, y, sol.x, shading='gouraud')
>>> plt.colorbar()
>>> plt.show()


Optimization and root finding (scipy.optimize)

SciPy optimize provides functions for minimizing (or maximizing) objective functions, possibly subject to constraints. It includes solvers for nonlinear problems (with support for both local and global optimization algorithms), linear programming, constrained and nonlinear least-squares, root finding, and curve fitting.

Common functions and objects, shared across different solvers, are:

show_options

Show documentation for additional options of optimization solvers.

OptimizeResult

Represents the optimization result.

Optimization

Scalar functions optimization

minimize_scalar

Minimization of scalar function of one variable.

The minimize_scalar function supports the following methods:

  • minimize_scalar(method=’brent’)
  • minimize_scalar(method=’bounded’)
  • minimize_scalar(method=’golden’)
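A minimal sketch of minimize_scalar (the quadratic objective is purely illustrative):

from scipy import optimize

def f(x):
    # Simple quadratic with its minimum at x = 2
    return (x - 2.0)**2 + 1.0

# Unconstrained 1-D minimization (Brent's method by default)
res = optimize.minimize_scalar(f)
print(res.x)    # close to 2.0

# The 'bounded' method restricts the search to an interval
res_b = optimize.minimize_scalar(f, bounds=(0.0, 10.0), method='bounded')
print(res_b.x)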

Local (multivariate) optimization

minimize (fun, x0[, args, method, jac, hess, . ])

Minimization of scalar function of one or more variables.

The minimize function supports the following methods:

  • minimize(method=’Nelder-Mead’)
  • minimize(method=’Powell’)
  • minimize(method=’CG’)
  • minimize(method=’BFGS’)
  • minimize(method=’Newton-CG’)
  • minimize(method=’L-BFGS-B’)
  • minimize(method=’TNC’)
  • minimize(method=’COBYLA’)
  • minimize(method=’SLSQP’)
  • minimize(method=’trust-constr’)
  • minimize(method=’dogleg’)
  • minimize(method=’trust-ncg’)
  • minimize(method=’trust-krylov’)
  • minimize(method=’trust-exact’)
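As a hedged sketch of the minimize API, here is one of the listed methods (BFGS) applied to the built-in Rosenbrock test function rosen; the starting point and options are illustrative choices:

import numpy as np
from scipy.optimize import minimize, rosen, rosen_der

x0 = np.array([1.3, 0.7, 0.8, 1.9, 1.2])

# Gradient-based local minimization of the Rosenbrock test function
res = minimize(rosen, x0, method='BFGS', jac=rosen_der, options={'gtol': 1e-8})
print(res.x)                 # expected to approach the all-ones minimizer
print(res.success, res.nit)  # convergence flag and iteration count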

Constraints are passed to the minimize function as a single object or as a list of objects from the following classes:

NonlinearConstraint

Nonlinear constraint on the variables.

LinearConstraint

Linear constraint on the variables.

Simple bound constraints are handled separately and there is a special class for them:

Bounds

Bounds constraint on the variables.
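A hedged sketch of passing these constraint objects to minimize with the ‘trust-constr’ method; the objective, constraint data, and starting point are illustrative only:

import numpy as np
from scipy.optimize import minimize, LinearConstraint, NonlinearConstraint, Bounds

def objective(x):
    # Simple convex objective, used only for illustration
    return (x[0] - 1.0)**2 + (x[1] - 2.5)**2

# x[0] + x[1] <= 3, written as -inf <= x[0] + x[1] <= 3
lin_con = LinearConstraint([[1.0, 1.0]], -np.inf, 3.0)

# x[0] * x[1] >= 0.5, written as 0.5 <= x[0] * x[1] <= inf
nl_con = NonlinearConstraint(lambda x: x[0] * x[1], 0.5, np.inf)

# 0 <= x[i] <= 4 for both variables
bounds = Bounds([0.0, 0.0], [4.0, 4.0])

res = minimize(objective, [2.0, 1.0], method='trust-constr',
               constraints=[lin_con, nl_con], bounds=bounds)
print(res.x)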

Quasi-Newton strategies implementing the HessianUpdateStrategy interface can be used to approximate the Hessian in the minimize function (available only for the ‘trust-constr’ method). Available quasi-Newton methods implementing this interface are:

BFGS

Broyden-Fletcher-Goldfarb-Shanno (BFGS) Hessian update strategy.

SR1

Symmetric-rank-1 Hessian update strategy.
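A hedged sketch of supplying a quasi-Newton Hessian approximation to the ‘trust-constr’ method; the Rosenbrock test problem and starting point are illustrative:

import numpy as np
from scipy.optimize import minimize, rosen, rosen_der, BFGS, SR1

x0 = np.array([1.3, 0.7, 0.8, 1.9, 1.2])

# Exact gradient, Hessian approximated by a symmetric-rank-1 update
res_sr1 = minimize(rosen, x0, method='trust-constr', jac=rosen_der, hess=SR1())

# The same call with a BFGS update strategy instead
res_bfgs = minimize(rosen, x0, method='trust-constr', jac=rosen_der, hess=BFGS())

print(res_sr1.x)
print(res_bfgs.x)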

Global optimization

basinhopping

Find the global minimum of a function using the basin-hopping algorithm.

brute (func, ranges[, args, Ns, full_output, . ])

Minimize a function over a given range by brute force.

differential_evolution

Finds the global minimum of a multivariate function.

shgo (func, bounds[, args, constraints, n, . ])

Finds the global minimum of a function using SHG optimization.

dual_annealing

Find the global minimum of a function using Dual Annealing.

direct (func, bounds, *[, args, eps, maxfun, . ])

Finds the global minimum of a function using the DIRECT algorithm.
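As a hedged sketch of the global solvers’ shared calling pattern (objective plus bounds), here is a 2-D Ackley-style test function, which is illustrative and not from the original text, handed to two of the solvers:

import numpy as np
from scipy.optimize import differential_evolution, dual_annealing

def ackley(x):
    # 2-D Ackley-style multimodal test function with its global minimum at (0, 0)
    return (-20.0 * np.exp(-0.2 * np.sqrt(0.5 * (x[0]**2 + x[1]**2)))
            - np.exp(0.5 * (np.cos(2*np.pi*x[0]) + np.cos(2*np.pi*x[1])))
            + np.e + 20.0)

bounds = [(-5.0, 5.0), (-5.0, 5.0)]

res_de = differential_evolution(ackley, bounds, seed=1)
res_da = dual_annealing(ackley, bounds, seed=1)
print(res_de.x)   # expected to land near (0, 0)
print(res_da.x)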

Least-squares and curve fitting

Nonlinear least-squares

least_squares

Solve a nonlinear least-squares problem with bounds on the variables.
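A hedged sketch of a bounded nonlinear least-squares fit; the exponential-decay model and the synthetic data are illustrative:

import numpy as np
from scipy.optimize import least_squares

def residuals(p, t, y):
    # Residuals of an exponential-decay model y ~ p[0] * exp(-p[1] * t)
    return p[0] * np.exp(-p[1] * t) - y

rng = np.random.default_rng(0)
t = np.linspace(0.0, 4.0, 50)
y = 2.5 * np.exp(-1.3 * t) + 0.05 * rng.standard_normal(t.size)

# Both parameters are constrained to the box [0, 10]
res = least_squares(residuals, [1.0, 1.0], bounds=([0.0, 0.0], [10.0, 10.0]),
                    args=(t, y))
print(res.x)    # expected to be close to (2.5, 1.3)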

Linear least-squares

nnls

Solve argmin_x || Ax - b ||_2 for x >= 0.

lsq_linear

Solve a linear least-squares problem with bounds on the variables.

Curve fitting

curve_fit

Use non-linear least squares to fit a function, f, to data.
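A hedged sketch of curve_fit on synthetic data (the exponential model and noise level are illustrative):

import numpy as np
from scipy.optimize import curve_fit

def model(t, a, b):
    # Two-parameter exponential model fitted to the synthetic data below
    return a * np.exp(-b * t)

rng = np.random.default_rng(1)
t = np.linspace(0.0, 4.0, 50)
y = 2.5 * np.exp(-1.3 * t) + 0.05 * rng.standard_normal(t.size)

popt, pcov = curve_fit(model, t, y, p0=[1.0, 1.0])
print(popt)                    # expected to be close to (2.5, 1.3)
print(np.sqrt(np.diag(pcov)))  # one-standard-deviation parameter uncertainties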

Root finding

Scalar functions

root_scalar

Find a root of a scalar function.

brentq (f, a, b[, args, xtol, rtol, maxiter, . ])

Find a root of a function in a bracketing interval using Brent’s method.

brenth (f, a, b[, args, xtol, rtol, maxiter, . ])

Find a root of a function in a bracketing interval using Brent’s method with hyperbolic extrapolation.

ridder (f, a, b[, args, xtol, rtol, maxiter, . ])

Find a root of a function in an interval using Ridder’s method.

bisect (f, a, b[, args, xtol, rtol, maxiter, . ])

Find root of a function within an interval using bisection.

newton

Find a root of a real or complex function using the Newton-Raphson (or secant or Halley’s) method.

toms748

Find a root using TOMS Algorithm 748 method.

RootResults

Represents the root finding result.

The root_scalar function supports the following methods:

  • root_scalar(method=’bisect’)
  • root_scalar(method=’brentq’)
  • root_scalar(method=’brenth’)
  • root_scalar(method=’ridder’)
  • root_scalar(method=’toms748’)
  • root_scalar(method=’newton’)
  • root_scalar(method=’secant’)
  • root_scalar(method=’halley’)

Appropriate methods differ by situation, as do their asymptotic convergence rates per iteration (and per function evaluation) for successful convergence to a simple root. Bisection is the slowest of them all, adding one bit of accuracy for each function evaluation, but it is guaranteed to converge. The other bracketing methods all (eventually) increase the number of accurate bits by about 50% for every function evaluation. The derivative-based methods, all built on newton, can converge quite quickly if the initial value is close to the root. They can also be applied to functions defined on (a subset of) the complex plane.
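As a short sketch of the trade-off described above (the cubic function is illustrative), a bracketing method and a derivative-based method can both be called through root_scalar:

from scipy import optimize

def f(x):
    # Simple cubic with a single real root at x = 1
    return x**3 - 1.0

def fprime(x):
    return 3.0 * x**2

# Bracketing: guaranteed to converge once a sign change is bracketed
res_brent = optimize.root_scalar(f, bracket=[0.0, 2.0], method='brentq')

# Derivative-based: fast near the root, but needs a reasonable starting point
res_newton = optimize.root_scalar(f, x0=1.5, fprime=fprime, method='newton')

print(res_brent.root, res_brent.iterations)
print(res_newton.root, res_newton.iterations)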
