line_search_wolfe2(f, myfprime, xk, pk, gfk=None, old_fval=None, old_old_fval=None, args=(), c1=0.0001, c2=0.9, amax=None, extra_condition=None, maxiter=10)
Uses the line search algorithm to enforce strong Wolfe conditions. See Wright and Nocedal, 'Numerical Optimization', 1999, pp. 59-61.
Parameters
----------
f : callable f(x, *args)
    Objective function.
myfprime : callable f'(x, *args)
    Objective function gradient.
xk : ndarray
    Starting point.
pk : ndarray
    Search direction.
gfk : ndarray, optional
    Gradient value for x=xk (xk being the current parameter estimate).
    Will be recomputed if omitted.
old_fval : float, optional
    Function value for x=xk. Will be recomputed if omitted.
old_old_fval : float, optional
    Function value for the point preceding x=xk.
args : tuple, optional
    Additional arguments passed to objective function.
c1 : float, optional
    Parameter for Armijo condition rule.
c2 : float, optional
    Parameter for curvature condition rule.
amax : float, optional
    Maximum step size.
extra_condition : callable, optional
    A callable of the form ``extra_condition(alpha, x, f, g)``
    returning a boolean. Arguments are the proposed step ``alpha``
    and the corresponding ``x``, ``f`` and ``g`` values. The line
    search accepts the value of ``alpha`` only if this callable
    returns ``True``. If the callable returns ``False`` for the step
    length, the algorithm will continue with new iterates. The
    callable is only called for iterates satisfying the strong Wolfe
    conditions.
maxiter : int, optional
    Maximum number of iterations to perform.
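As a sketch of how ``extra_condition`` can be used, the snippet below adds an illustrative (hypothetical, not part of SciPy) acceptance rule that caps the step length on top of the strong Wolfe conditions:

```python
import numpy as np
from scipy.optimize import line_search

def obj_func(x):
    return x[0]**2 + x[1]**2

def obj_grad(x):
    return np.array([2.0 * x[0], 2.0 * x[1]])

# Illustrative extra condition (an assumption for this sketch): only
# accept steps shorter than 2. It receives the proposed step alpha,
# the trial point x, and the function value f and gradient g at x,
# and is called only for iterates already satisfying the strong
# Wolfe conditions.
def step_cap(alpha, x, f, g):
    return alpha < 2.0

alpha, fc, gc, new_fval, old_fval, new_slope = line_search(
    obj_func, obj_grad,
    np.array([1.8, 1.7]), np.array([-1.0, -1.0]),
    extra_condition=step_cap)
```

If ``step_cap`` kept returning ``False``, the search would continue generating new iterates until ``maxiter`` is exhausted and then return ``alpha=None``.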
Returns
-------
alpha : float or None
    Alpha for which ``x_new = x0 + alpha * pk``, or None if the line
    search algorithm did not converge.
fc : int
    Number of function evaluations made.
gc : int
    Number of gradient evaluations made.
new_fval : float or None
    New function value ``f(x_new)=f(x0+alpha*pk)``, or None if the
    line search algorithm did not converge.
old_fval : float
    Old function value ``f(x0)``.
new_slope : float or None
    The local slope along the search direction at the new value
    ``<myfprime(x_new), pk>``, or None if the line search algorithm
    did not converge.
Examples
--------
>>> import numpy as np
>>> from scipy.optimize import line_search

An objective function and its gradient are defined.

>>> def obj_func(x):
...     return (x[0])**2 + (x[1])**2
>>> def obj_grad(x):
...     return [2*x[0], 2*x[1]]

We can find alpha that satisfies strong Wolfe conditions.

>>> start_point = np.array([1.8, 1.7])
>>> search_gradient = np.array([-1.0, -1.0])
>>> line_search(obj_func, obj_grad, start_point, search_gradient)
(1.0, 2, 1, 1.1300000000000001, 6.13, [1.6, 1.4])
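To make the Wolfe conditions concrete, the returned values from the example above can be checked directly against the Armijo (sufficient decrease) and strong curvature inequalities. This is a sketch using SciPy's default ``c1`` and ``c2``; the slope at the new point is recomputed from the gradient rather than taken from the return tuple:

```python
import numpy as np
from scipy.optimize import line_search

def obj_func(x):
    return x[0]**2 + x[1]**2

def obj_grad(x):
    return np.array([2.0 * x[0], 2.0 * x[1]])

x0 = np.array([1.8, 1.7])
pk = np.array([-1.0, -1.0])
c1, c2 = 1e-4, 0.9  # SciPy defaults

alpha, fc, gc, new_fval, old_fval, new_slope = line_search(
    obj_func, obj_grad, x0, pk, c1=c1, c2=c2)

x_new = x0 + alpha * pk
slope0 = obj_grad(x0) @ pk      # directional derivative at x0
slope_new = obj_grad(x_new) @ pk  # directional derivative at x_new

# Armijo condition: f(x0 + alpha*pk) <= f(x0) + c1*alpha*<grad(x0), pk>
assert new_fval <= old_fval + c1 * alpha * slope0
# Strong curvature condition: |<grad(x_new), pk>| <= c2*|<grad(x0), pk>|
assert abs(slope_new) <= c2 * abs(slope0)
```

For this quadratic, the first trial step ``alpha = 1.0`` already satisfies both inequalities, which is why the search returns after 2 function and 1 gradient evaluations.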