mystic module documentation
abstract_ensemble_solver module
This module contains the base class for launching several mystic solver instances – utilizing a parallel map function to enable parallel computing. This module describes the ensemble solver interface. As with the AbstractSolver, the _Step method must be overwritten with the derived solver’s optimization algorithm. Similar to AbstractMapSolver, a call to map is required. In addition to the class interface, a simple function interface for a derived solver class is often provided. For an example, see the following.
The default map API settings are provided within mystic, while distributed and parallel computing maps can be obtained from the pathos package (http://dev.danse.us/trac/pathos).
Examples
A typical call to an ‘ensemble’ solver will roughly follow this example:
>>> # the function to be minimized and the initial values
>>> from mystic.models import rosen
>>> lb = [0.0, 0.0, 0.0]
>>> ub = [2.0, 2.0, 2.0]
>>>
>>> # get monitors and termination condition objects
>>> from mystic.monitors import Monitor
>>> stepmon = Monitor()
>>> from mystic.termination import CandidateRelativeTolerance as CRT
>>>
>>> # select the parallel launch configuration
>>> from pyina.launchers import Mpi as Pool
>>> NNODES = 4
>>> nbins = [4,4,4]
>>>
>>> # instantiate and configure the solver
>>> from mystic.solvers import NelderMeadSimplexSolver
>>> from mystic.solvers import LatticeSolver
>>> solver = LatticeSolver(len(nbins), nbins)
>>> solver.SetNestedSolver(NelderMeadSimplexSolver)
>>> solver.SetStrictRanges(lb, ub)
>>> solver.SetMapper(Pool(NNODES).map)
>>> solver.SetGenerationMonitor(stepmon)
>>> solver.SetTermination(CRT())
>>> solver.Solve(rosen)
>>>
>>> # obtain the solution
>>> solution = solver.Solution()
Handler
All solvers packaged with mystic include a signal handler that provides the following options:
sol: Print current best solution.
cont: Continue calculation.
call: Executes sigint_callback, if provided.
exit: Exits with current best solution.
Handlers are enabled with the enable_signal_handler method, and are configured through the solver’s Solve method. Handlers trigger when a signal interrupt (usually, Ctrl-C) is given while the solver is running.
Notes
The handler is currently disabled when the solver is run in parallel.
- class AbstractEnsembleSolver(dim, **kwds)
Bases:
AbstractMapSolver
AbstractEnsembleSolver base class for mystic optimizers that are called within a parallel map. This allows pseudo-global coverage of parameter space using non-global optimizers.
Takes one initial input:
dim -- dimensionality of the problem.
Additional inputs:
npop -- size of the trial solution population. [default = 1]
nbins -- tuple of number of bins in each dimension. [default = [1]*dim]
npts -- number of solver instances. [default = 1]
rtol -- size of radial tolerance for sparsity. [default = None]
Important class members:
nDim, nPop = dim, npop
generations - an iteration counter.
evaluations - an evaluation counter.
bestEnergy - current best energy.
bestSolution - current best parameter set. [size = dim]
popEnergy - set of all trial energy solutions. [size = npop]
population - set of all trial parameter solutions. [size = dim*npop]
solution_history - history of bestSolution status. [StepMonitor.x]
energy_history - history of bestEnergy status. [StepMonitor.y]
signal_handler - catches the interrupt signal. [***disabled***]
- Collapse(disp=False)
if solver has terminated by collapse, apply the collapse
- Parameters:
disp (bool, default=False) – print details about the solver state at collapse
- Returns:
a dict of {info: collapse}, where info is collapse termination info
Notes
collapse must be triggered by calling this method, unless both the “collapse” termination and a “stop” termination are simultaneously satisfied.
Note
* this method is not implemented and returns False *
- Collapsed(disp=False, info=False, all=None)
check if the solver meets the given collapse conditions
- Parameters:
- Returns:
information about the state of the solver collapse (see Notes)
Notes
all can be one of {True, False, all, any}
if all is True, check if all solvers have collapsed
if all is any (or None), check if any solver has collapsed
if all is all, check if all solvers have the same collapse
if all is False, check if the ‘best’ solver collapsed
if info is False, return a bool regarding the collapse
if info is True, return an informative string about the collapse, or a list of strings (depending on the value of all)
- SetDistribution(dist=None)
Set the distribution used for determining solver starting points
- Inputs:
dist: a mystic.math.Distribution instance
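For illustration, a minimal sketch of selecting a sampling distribution (this assumes mystic.math.Distribution accepts a numpy.random generator name plus its arguments, and a configured solver as in the module example above):
>>> # sketch: draw ensemble starting points from a normal distribution
>>> from mystic.math import Distribution
>>> solver.SetDistribution(Distribution('numpy.random.normal', 0.5, 0.1))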
- SetInitialPoints(x0, radius=0.05)
Set Initial Points with Guess (x0)
- Parameters:
- Returns:
None
Note
* this method must be overwritten *
- SetMultinormalInitialPoints(mean, var=None)
Generate Initial Points from Multivariate Normal.
- Parameters:
- Returns:
None
Notes
var must be symmetric, positive-semidefinite, and length self.nDim
if var is None, then var becomes the Identity matrix, I
if var is a scalar, then var is set to var * I
Note
* this method must be overwritten *
- SetNestedSolver(solver)
set the nested solver
- Parameters:
solver (solver) – a solver instance (e.g. NelderMeadSimplexSolver(3))
- Returns:
None
- SetRandomInitialPoints(min=None, max=None)
Generate Random Initial Points within given Bounds
- Parameters:
- Returns:
None
Notes
each min[i] must be less than or equal to the corresponding max[i]
Note
* this method must be overwritten *
- SetSampledInitialPoints(dist=None)
Generate Random Initial Points from Distribution (dist)
- Parameters:
dist (Distribution, default=None) – a mystic.math.Distribution instance
- Returns:
None
Notes
if dist is None, use a uniform distribution in the interval [0, 1)
Note
* this method must be overwritten *
- Terminated(disp=False, info=False, termination=None, all=None)
check if the solver meets the given termination conditions
- Parameters:
- Returns:
information about the state of the solver termination (see Notes)
Notes
all can be one of {True, False, None}
if all is True, return a list checking termination for each solver
if all is None, check whether all solvers have terminated
if all is False, check if the ‘best’ solver terminated
if info is False, return a bool regarding the termination
if info is True, return an informative string about the termination, or a list of strings (depending on the value of all)
if termination is None, the solver’s stored termination is used
- _InitialPoints()
Generate a grid of starting points for the ensemble of optimizers
Note
* this method must be overwritten *
- _Solve(cost, ExtraArgs, **settings)
Run the optimizer to termination, using the given settings.
- Parameters:
- Returns:
None
Notes
Optimizer settings for ensemble solvers include the step keyword, which causes the Step method to be used. By default, ensemble solvers run to completion (i.e. Solve), for efficiency. Using Step enables the ensemble to communicate state between members of the ensemble at each iteration, which may slow execution.
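A hedged sketch of the distinction (assuming step is accepted among the solver settings passed through Solve, per _process_inputs):
>>> # default: each ensemble member runs to completion independently
>>> solver.Solve(rosen)
>>> # with step=True, members advance one iteration at a time, sharing
>>> # state across the ensemble at each iteration (possibly slower)
>>> solver.Solve(rosen, step=True)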
- _Step(cost=None, ExtraArgs=None, **kwds)
perform a single optimization iteration
- Parameters:
cost (func, default=None) – objective, of form y = cost(x, *ExtraArgs), where x is a candidate solution vector
ExtraArgs (tuple, default=None) – tuple of positional arguments required to evaluate the objective
- Returns:
None
Notes
This method accepts additional kwds that are specific for the current solver, as detailed in the _process_inputs method.
- __all_bestEnergy()
get bestEnergy from all solvers
- __all_bestSolution()
get bestSolution from all solvers
- __all_evals()
count of all function calls
- __all_iters()
count of all iterations
- __get_solver_instance(reset=False)
ensure the solver is a solver instance
- Parameters:
reset (bool, default=False) – reset the monitor instances
- Returns:
a nested solver instance
- __init_allSolvers(reset=False)
populate NestedSolver state to allSolvers
- Parameters:
reset (bool, default=False) – reset the monitor instances
- Returns:
returns a list of solver instances
- __total_evals()
total number of function calls
- __total_iters()
total number of iterations
- __update_allSolvers(results)
replace allSolvers with solvers found in results
- __update_bestSolver()
update _bestSolver from _allSolvers
- __update_state()
update solver state from _bestSolver
- property _all_bestEnergy
get bestEnergy from all solvers
- property _all_bestSolution
get bestSolution from all solvers
- property _all_evals
count of all function calls
- property _all_iters
count of all iterations
- _is_best()
get the id of the bestSolver
- _is_new()
determine if solver has been run or not
- _process_inputs(kwds)
process and activate input settings
- Parameters:
callback (func, default=None) – function to call after each iteration. The interface is callback(xk), with xk the current parameter vector.
disp (bool, default=False) – if True, print convergence messages.
EvaluationMonitor (monitor, default=None) – a monitor instance to capture each evaluation of cost.
StepMonitor (monitor, default=None) – a monitor instance to capture each iteration’s best results.
penalty (penalty, default=None) – function of the form: y' = penalty(xk), with y = cost(xk) + y', where xk is the current parameter vector.
constraints (constraint, default=None) – function of the form: xk' = constraints(xk), where xk is the current parameter vector.
step (bool, default=False) – if True, enable Step within the ensemble.
Notes
callback and disp are ‘sticky’, in that once they are given, they remain set until they are explicitly changed. Conversely, the other inputs are not sticky, and are thus set for a one-time use.
- property _total_evals
total number of function calls
- property _total_iters
total number of iterations
abstract_launcher module
This module contains the base classes for pathos pool and pipe objects, and describes the map and pipe interfaces. A pipe is defined as a connection between two ‘nodes’, where a node is something that does work. A pipe may be a one-way or two-way connection. A map is defined as a one-to-many connection between nodes. In both map and pipe connections, results from the connected nodes can be returned to the calling node. There are several variants of pipe and map, such as whether the connection is blocking, or ordered, or asynchronous. For pipes, derived methods must overwrite the ‘pipe’ method, while maps must overwrite the ‘map’ method. Pipes and maps are available from worker pool objects, where the work is done by any of the workers in the pool. For more specific point-to-point connections (such as a pipe between two specific compute nodes), use the pipe object directly.
Usage
A typical call to a pathos map will roughly follow this example:
>>> # instantiate and configure the worker pool
>>> from pathos.pools import ProcessPool
>>> pool = ProcessPool(nodes=4)
>>>
>>> # do a blocking map on the chosen function
>>> results = pool.map(pow, [1,2,3,4], [5,6,7,8])
>>>
>>> # do a non-blocking map, then extract the results from the iterator
>>> results = pool.imap(pow, [1,2,3,4], [5,6,7,8])
>>> print("...")
>>> results = list(results)
>>>
>>> # do an asynchronous map, then get the results
>>> import time
>>> results = pool.amap(pow, [1,2,3,4], [5,6,7,8])
>>> while not results.ready():
...     time.sleep(5); print(".", end=' ')
...
>>> results = results.get()
Notes
Each of the pathos worker pools relies on a different transport protocol (e.g. threads, multiprocessing, etc.), where the use of each pool comes with a few caveats. See the usage documentation and examples for each worker pool for more information.
- class AbstractPipeConnection(*args, **kwds)
Bases:
object
AbstractPipeConnection base class for pathos pipes.
- Required input:
???
- Additional inputs:
???
- Important class members:
???
- Other class members:
???
- __repr__()
Return repr(self).
- class AbstractWorkerPool(*args, **kwds)
Bases:
object
AbstractWorkerPool base class for pathos pools.
- Important class members:
nodes - number (and potentially description) of workers
ncpus - number of worker processors
servers - list of worker servers
scheduler - the associated scheduler
workdir - associated $WORKDIR for scratch calculations/files
- Other class members:
scatter - True, if uses ‘scatter-gather’ (instead of ‘worker-pool’)
source - False, if minimal use of TemporaryFiles is desired
timeout - number of seconds to wait for return value from scheduler
- __enter__()
- __exit__(*args)
- __get_nodes()
get the number of nodes in the pool
- __imap(f, *args, **kwds)
default filter for imap inputs
- __init(*args, **kwds)
default filter for __init__ inputs
- __map(f, *args, **kwds)
default filter for map inputs
- __nodes = 1
- __pipe(f, *args, **kwds)
default filter for pipe inputs
- __repr__()
Return repr(self).
- __set_nodes(nodes)
set the number of nodes in the pool
- _serve(*args, **kwds)
Create a new server if one isn’t already initialized
- amap(f, *args, **kwds)
run a batch of jobs with an asynchronous map
Returns a results object which contains the results of applying the function f to the items of the argument sequence(s). If more than one sequence is given, the function is called with an argument list consisting of the corresponding item of each sequence. To retrieve the results, call the get() method on the returned results object. The call to get() is blocking, until all results are retrieved. Use the ready() method on the result object to check if all results are ready. Some maps accept the chunksize keyword, which causes the sequence to be split into tasks of approximately the given size.
- apipe(f, *args, **kwds)
submit a job asynchronously to a queue
Returns a results object which contains the result of calling the function f on a selected worker. To retrieve the results, call the get() method on the returned results object. The call to get() is blocking, until the result is available. Use the ready() method on the results object to check if the result is ready.
- clear()
Remove server with matching state
- imap(f, *args, **kwds)
run a batch of jobs with a non-blocking and ordered map
Returns a list iterator of results of applying the function f to the items of the argument sequence(s). If more than one sequence is given, the function is called with an argument list consisting of the corresponding item of each sequence. Some maps accept the chunksize keyword, which causes the sequence to be split into tasks of approximately the given size.
- map(f, *args, **kwds)
run a batch of jobs with a blocking and ordered map
Returns a list of results of applying the function f to the items of the argument sequence(s). If more than one sequence is given, the function is called with an argument list consisting of the corresponding item of each sequence. Some maps accept the chunksize keyword, which causes the sequence to be split into tasks of approximately the given size.
- pipe(f, *args, **kwds)
submit a job and block until results are available
Returns result of calling the function f on a selected worker. This function will block until results are available.
- uimap(f, *args, **kwds)
run a batch of jobs with a non-blocking and unordered map
Returns a list iterator of results of applying the function f to the items of the argument sequence(s). If more than one sequence is given, the function is called with an argument list consisting of the corresponding item of each sequence. The order of the resulting sequence is not guaranteed. Some maps accept the chunksize keyword, which causes the sequence to be split into tasks of approximately the given size.
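Continuing the pool example from the Usage section above, a brief sketch of the pipe and unordered map variants:
>>> # submit a single job and block until the result is available
>>> result = pool.pipe(pow, 2, 3)
>>>
>>> # do a non-blocking, unordered map; results arrive as workers finish
>>> results = list(pool.uimap(pow, [1,2,3,4], [5,6,7,8]))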
abstract_map_solver module
This module contains the base class for mystic solvers that utilize a parallel map function to enable parallel computing. This module describes the map solver interface. As with the AbstractSolver, the _Step method must be overwritten with the derived solver’s optimization algorithm. Additionally, for the AbstractMapSolver, a call to map is required. In addition to the class interface, a simple function interface for a derived solver class is often provided. For an example, see the following.
The default map API settings are provided within mystic, while distributed and parallel computing maps can be obtained from the pathos package (http://dev.danse.us/trac/pathos).
Examples
A typical call to a ‘map’ solver will roughly follow this example:
>>> # the function to be minimized and the initial values
>>> from mystic.models import rosen
>>> lb = [0.0, 0.0, 0.0]
>>> ub = [2.0, 2.0, 2.0]
>>>
>>> # get monitors and termination condition objects
>>> from mystic.monitors import Monitor
>>> stepmon = Monitor()
>>> from mystic.termination import CandidateRelativeTolerance as CRT
>>>
>>> # select the parallel launch configuration
>>> from pyina.launchers import Mpi as Pool
>>> NNODES = 4
>>> npts = 20
>>>
>>> # instantiate and configure the solver
>>> from mystic.solvers import BuckshotSolver
>>> solver = BuckshotSolver(len(lb), npts)
>>> solver.SetMapper(Pool(NNODES).map)
>>> solver.SetGenerationMonitor(stepmon)
>>> solver.SetTermination(CRT())
>>> solver.Solve(rosen)
>>>
>>> # obtain the solution
>>> solution = solver.Solution()
Handler
All solvers packaged with mystic include a signal handler that provides the following options:
sol: Print current best solution.
cont: Continue calculation.
call: Executes sigint_callback, if provided.
exit: Exits with current best solution.
Handlers are enabled with the enable_signal_handler method, and are configured through the solver’s Solve method. Handlers trigger when a signal interrupt (usually, Ctrl-C) is given while the solver is running.
Notes
The handler is currently disabled when the solver is run in parallel.
- class AbstractMapSolver(dim, **kwds)
Bases:
AbstractSolver
AbstractMapSolver base class for mystic optimizers that utilize parallel map.
Takes one initial input:
dim -- dimensionality of the problem.
Additional inputs:
npop -- size of the trial solution population. [default = 1]
Important class members:
nDim, nPop = dim, npop
generations - an iteration counter.
evaluations - an evaluation counter.
bestEnergy - current best energy.
bestSolution - current best parameter set. [size = dim]
popEnergy - set of all trial energy solutions. [size = npop]
population - set of all trial parameter solutions. [size = dim*npop]
solution_history - history of bestSolution status. [StepMonitor.x]
energy_history - history of bestEnergy status. [StepMonitor.y]
signal_handler - catches the interrupt signal. [***disabled***]
- SetMapper(map, **kwds)
Set the map and any mapping keyword arguments.
Sets a callable map to enable parallel and/or distributed evaluations. Any kwds given are passed to the map.
- Inputs:
map – the callable map instance [DEFAULT: python_map]
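For example, a minimal sketch enabling parallel evaluation with a pathos process pool:
>>> # sketch: use a pathos process pool as the parallel map
>>> from pathos.pools import ProcessPool
>>> solver.SetMapper(ProcessPool(nodes=4).map)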
abstract_sampler module
This module contains the base class for mystic samplers, and describes the mystic sampler interface.
- class AbstractSampler(bounds, model, npts=None, **kwds)
Bases:
object
AbstractSampler base class for optimizer-directed samplers
- Parameters:
bounds (list[tuples]) – (lower, upper) bound for each model input
model (function) – y = model(x), where x is an input parameter vector
npts (int, default=None) – number of points to sample the model
**kwds (dict, default={}) – keywords for the underlying ensemble of solvers; (evalmon, stepmon, maxiter, maxfun, dist, saveiter, state, termination, constraints, penalty, reducer, solver, id, map, tightrange, cliprange) are available for use. See mystic.ensemble for more details.
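As an illustrative sketch (this assumes a concrete sampler exported from mystic.samplers implements the interface; the class name below is an assumption):
>>> # sketch: sample the Rosenbrock model with an optimizer-directed sampler
>>> from mystic.models import rosen
>>> from mystic.samplers import SparsitySampler  # assumed concrete sampler
>>> bounds = [(0., 2.)] * 3
>>> s = SparsitySampler(bounds, rosen, npts=4)
>>> s.sample_until(terminated=all)
>>> s.evals(), s.iters()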
- _init_solver()
initialize the ensemble solver
- _reset_sampler()
reinitialize the ensemble solver/sampler
- _reset_solved()
reinitialize all terminated solvers in the ensemble
- _sample(reset=False)
collect a sample for each member in the ensemble
- Parameters:
reset (bool, default=False) – reset all solvers before sampling; alternately, if reset is None, then only reset the terminated solvers
- Returns:
None
- evals(all=False)
get the total number of function evaluations
- Parameters:
all (bool, default=False) – report the evals for each ensemble member
- iters(all=False)
get the total number of solver iterations
- Parameters:
all (bool, default=False) – report the iters for each ensemble member
- sample(if_terminated=None, reset_all=True)
sample npts using vectorized solvers
- Parameters:
- Returns:
None
Notes
When if_terminated is None, reset regardless of termination.
When if_terminated is True, reset if the best solver is terminated.
When if_terminated is False, reset if no solvers are terminated.
When if_terminated is all, reset if all solvers are terminated.
When if_terminated is any, reset if any solvers are terminated.
If reset_all is None, never reset.
If reset_all is True, reset all solvers if if_terminated is met.
If reset_all is False, similarly reset only the terminated solvers.
- sample_until(iters=None, evals=None, terminated=None, **kwds)
sample until one of the stop conditions are met
- Possible stop conditions are:
solver iterations iters() equals or exceeds iters
solver evaluations evals() equals or exceeds evals
number of terminated solvers equals or exceeds terminated
- Parameters:
iters (int, default=inf) – maximum number of iterations
evals (int, default=inf) – maximum number of evaluations
terminated (int, default=inf) – maximum number of terminated solvers
if_terminated (bool, default=None) – the amount of termination; must be one of {all, True, any, False, None}
reset_all (bool, default=True) – action to take when if_terminated is met; must be one of {True, False, None}
Notes
The default sampler configuration is to always reset (reset_all=True).
If termination != None, the default is never reset (reset_all=None).
A limit for at least one of (iters, evals, termination) must be set.
terminated may also be one of {all, True, any, False, None}, where {all: 'all', True: 'best', any: '1', False: '0', None: 'inf', N: 'N'}.
if_terminated may be one of {all, True, any, False, None}, where {all: 'all', True: 'best', any: '1', False: '0', None: 'always'}.
reset_all may be one of {True, False, None}, where {True: 'reset all', False: 'reset solved', None: 'never reset'}.
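For example, a short sketch (reusing the sampler s from the sketch above):
>>> # sample until all solvers terminate, or until 10000 evaluations pass
>>> s.sample_until(evals=10000, terminated=all)
>>> # sample until at least 2 ensemble members have terminated
>>> s.sample_until(terminated=2)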
- terminated(*args, **kwds)
check if termination conditions have been met
- Parameters:
Notes
disp expects a bool, but can also take 'verbose' for more verbosity
all, by default (i.e. None), will show only the terminated solvers
abstract_solver module
This module contains the base class for mystic solvers, and describes the mystic solver interface. The _Step method must be overwritten with the derived solver’s optimization algorithm. In addition to the class interface, a simple function interface for a derived solver class is often provided. For an example, see mystic.scipy_optimize, and the following.
Examples
A typical call to a solver will roughly follow this example:
>>> # the function to be minimized and the initial values
>>> from mystic.models import rosen
>>> x0 = [0.8, 1.2, 0.7]
>>>
>>> # get monitors and termination condition objects
>>> from mystic.monitors import Monitor
>>> stepmon = Monitor()
>>> evalmon = Monitor()
>>> from mystic.termination import CandidateRelativeTolerance as CRT
>>>
>>> # instantiate and configure the solver
>>> from mystic.solvers import NelderMeadSimplexSolver
>>> solver = NelderMeadSimplexSolver(len(x0))
>>> solver.SetInitialPoints(x0)
>>> solver.SetEvaluationMonitor(evalmon)
>>> solver.SetGenerationMonitor(stepmon)
>>> solver.enable_signal_handler()
>>> solver.SetTermination(CRT())
>>> solver.Solve(rosen)
>>>
>>> # obtain the solution
>>> solution = solver.Solution()
An equivalent, but less flexible, call using the function interface is:
>>> # the function to be minimized and the initial values
>>> from mystic.models import rosen
>>> x0 = [0.8, 1.2, 0.7]
>>>
>>> # configure the solver and obtain the solution
>>> from mystic.solvers import fmin
>>> solution = fmin(rosen,x0)
Handler
All solvers packaged with mystic include a signal handler that provides the following options:
sol: Print current best solution.
cont: Continue calculation.
call: Executes sigint_callback, if provided.
exit: Exits with current best solution.
Handlers are enabled with the enable_signal_handler method, and are configured through the solver’s Solve method. Handlers trigger when a signal interrupt (usually, Ctrl-C) is given while the solver is running.
- class AbstractSolver(dim, **kwds)
Bases:
object
AbstractSolver base class for mystic optimizers.
Takes one initial input:
dim -- dimensionality of the problem.
Additional inputs:
npop -- size of the trial solution population. [default = 1]
Important class members:
nDim, nPop = dim, npop
generations - an iteration counter.
evaluations - an evaluation counter.
bestEnergy - current best energy.
bestSolution - current best parameter set. [size = dim]
popEnergy - set of all trial energy solutions. [size = npop]
population - set of all trial parameter solutions. [size = dim*npop]
solution_history - history of bestSolution status. [StepMonitor.x]
energy_history - history of bestEnergy status. [StepMonitor.y]
signal_handler - catches the interrupt signal.
- Collapse(disp=False)
if solver has terminated by collapse, apply the collapse
- Parameters:
disp (bool, default=False) – print details about the solver state at collapse
- Returns:
a dict of {info: collapse}, where info is collapse termination info
Notes
updates the solver’s termination conditions and constraints
collapse must be triggered by calling this method, unless both the “collapse” termination and a “stop” termination are simultaneously satisfied.
- Collapsed(disp=False, info=False)
check if the solver meets the given collapse conditions
- Parameters:
- Returns:
information about the state of the solver collapse (see Notes)
Notes
if info is False, return a bool regarding the collapse
if info is True, return an informative string about the collapse
- Finalize()
cleanup upon exiting the main optimization loop
- SaveSolver(filename=None, **kwds)
save solver state to a restart file
- input::
filename: string of full filepath for the restart file
- note::
any additional keyword arguments are passed to dill.dump
- SetConstraints(constraints)
apply a constraints function to the optimization
- input::
a constraints function of the form: xk' = constraints(xk), where xk is the current parameter vector. Ideally, this function is constructed so the parameter vector it passes to the cost function will satisfy the desired (i.e. encoded) constraints.
- SetEvaluationLimits(generations=None, evaluations=None, new=False, **kwds)
set limits for generations and/or evaluations
- input::
generations: maximum number of solver iterations (i.e. steps)
evaluations: maximum number of function evaluations
new: bool, if True, the above limits apply only to new evaluations and iterations; otherwise, the limits refer to total evaluations and iterations.
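For example, a minimal sketch capping the total iterations and evaluations:
>>> # sketch: stop after 100 generations or 1000 evaluations, whichever is first
>>> solver.SetEvaluationLimits(generations=100, evaluations=1000)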
- SetEvaluationMonitor(monitor, new=False)
select a callable to monitor (x, f(x)) after each cost function evaluation
- input::
a monitor instance or monitor type used to track (x, f(x)). Any data collected in an existing evaluation monitor will be prepended, unless new is True.
- SetGenerationMonitor(monitor, new=False)
select a callable to monitor (x, f(x)) after each solver iteration
- input::
a monitor instance or monitor type used to track (x, f(x)). Any data collected in an existing generation monitor will be prepended, unless new is True.
- SetInitialPoints(x0, radius=0.05)
Set Initial Points with Guess (x0)
- SetMultinormalInitialPoints(mean, var=None)
Generate Initial Points from Multivariate Normal.
- Parameters:
- Returns:
None
Notes
var must be symmetric, positive-semidefinite, and length self.nDim
if var is None, then var becomes the Identity matrix, I
if var is a scalar, then var is set to var * I
- SetObjective(cost, ExtraArgs=None)
set the cost function for the optimization
- input::
cost is the objective function, of the form y = cost(x, *ExtraArgs), where x is a candidate solution, and ExtraArgs is the tuple of positional arguments required to evaluate the objective.
- note::
this method decorates the objective with bounds, penalties, monitors, etc
- SetPenalty(penalty)
apply a penalty function to the optimization
- input::
a penalty function of the form: y' = penalty(xk), with y = cost(xk) + y', where xk is the current parameter vector. Ideally, this function is constructed so a penalty is applied when the desired (i.e. encoded) constraints are violated. Equality constraints should be considered satisfied when the penalty condition evaluates to zero, while inequality constraints are satisfied when the penalty condition evaluates to a non-positive number.
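As an illustration, a minimal sketch of a quadratic exterior penalty for the inequality constraint x[0] + x[1] <= 2 (the weight of 100 is an arbitrary assumption):
>>> # sketch: penalty is positive only when x[0] + x[1] > 2 is violated
>>> def penalty(x):
...     return 100. * max(0., x[0] + x[1] - 2.)**2
...
>>> solver.SetPenalty(penalty)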
- SetRandomInitialPoints(min=None, max=None)
Generate Random Initial Points within given Bounds
- Parameters:
- Returns:
None
Notes
each min[i] must be less than or equal to the corresponding max[i]
- SetReducer(reducer, arraylike=False)
apply a reducer function to the cost function
- input::
a reducer function of the form: y' = reducer(yk), where yk is a results vector and y' is a single value. Ideally, this method is applied to a cost function with a multi-value return, to reduce the output to a single value. If arraylike, the reducer provided should take a single array as input and produce a scalar; otherwise, the reducer provided should meet the requirements of python's builtin 'reduce' method (e.g. lambda x,y: x+y), taking two scalars and producing a scalar.
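For example, a brief sketch reducing a multi-valued cost to a single value:
>>> # sketch: reduce a multi-valued cost to its largest component
>>> import numpy as np
>>> solver.SetReducer(np.max, arraylike=True)
>>> # or equivalently, with a pairwise (builtin 'reduce' style) reducer
>>> solver.SetReducer(lambda x,y: max(x,y))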
- SetSampledInitialPoints(dist=None)
Generate Random Initial Points from Distribution (dist)
- Parameters:
dist (Distribution, default=None) – a mystic.math.Distribution instance
- Returns:
None
Notes
if dist is None, use a uniform distribution in the interval [0, 1)
- SetSaveFrequency(generations=None, filename=None)
set frequency for saving solver restart file
- input::
generations = number of solver iterations before next save of state
filename = name of file in which to save solver state
- note::
SetSaveFrequency(None) will disable saving solver restart file
- SetStrictRanges(min=None, max=None, **kwds)
ensure solution is within bounds
- input::
min, max: must be a sequence of length self.nDim
each min[i] should be <= the corresponding max[i]
- additional input::
tight (bool): if True, apply bounds concurrent with other constraints
clip (bool): if True, bounding constraints will clip exterior values
- note::
SetStrictRanges(None) will remove strict range constraints
- note::
By default, the bounds are coupled to the other constraints with a coupler (e.g. mystic.coupler.outer), and not applied concurrently (i.e. with mystic.constraints.and_). Using a coupler favors speed over robustness, and relies on the user to formulate the constraints so they do not conflict with imposing the bounds.
- note::
The keyword clip controls the clipping behavior for the bounding constraints. The default is to rely on _clipGuessWithinRangeBoundary when ensuring the bounds are respected, and to not take action when the other constraints are being imposed. However, when tight=True, the default is that the bounds constraints clip at the bounds. By default, bounds constraints are applied with a symbolic solver, as the symbolic solver is generally faster than mystic.constraints.impose_bounds. All of the above default behaviors are active when clip=None.
- note::
If clip=False, impose_bounds will be used to map the candidate solution inside the bounds, while clip=True will use impose_bounds to clip the candidate solution at the bounds. Note that clip=True is not the same as the default (clip=None, which uses a symbolic solver). If clip is specified while tight is not, then tight will be set to True.
- SetTermination(termination)
set the termination conditions
- input::
termination = termination conditions to check against
- note::
terminates a solver due to ‘termination’ or the inherent EvaluationLimits
- note::
SetTermination(None) sets termination to the inherent EvaluationLimits
- Solution()
return the best solution
- Solve(cost=None, termination=None, ExtraArgs=None, **kwds)
Minimize a ‘cost’ function with given termination conditions.
Uses an optimization algorithm to find the minimum of a function of one or more variables.
- Parameters:
cost (func, default=None) – the function to be minimized: y = cost(x).
termination (termination, default=None) – termination conditions.
ExtraArgs (tuple, default=None) – extra arguments for cost.
sigint_callback (func, default=None) – callback function for signal handler.
callback (func, default=None) – function to call after each iteration. The interface is callback(xk), with xk the current parameter vector.
disp (bool, default=False) – if True, print convergence messages.
- Returns:
None
- Step(cost=None, termination=None, ExtraArgs=None, **kwds)
Take a single optimization step using the given ‘cost’ function.
Uses an optimization algorithm to take one ‘step’ toward the minimum of a function of one or more variables.
- Parameters:
cost (func, default=None) – the function to be minimized: y = cost(x).
termination (termination, default=None) – termination conditions.
ExtraArgs (tuple, default=None) – extra arguments for cost.
callback (func, default=None) – function to call after each iteration. The interface is callback(xk), with xk the current parameter vector.
disp (bool, default=False) – if True, print convergence messages.
- Returns:
None
Notes
To run the solver until termination, call Solve(). Alternately, use Terminated() as the stop condition in a while loop over Step.
If the algorithm does not meet the given termination conditions after the call to Step, the solver may be left in an “out-of-sync” state. When abandoning a non-terminated solver, one should call Finalize() to make sure the solver is fully returned to a “synchronized” state.
This method accepts additional args that are specific for the current solver, as detailed in the _process_inputs method.
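The looping pattern described above, as a minimal sketch:
>>> # sketch: use Terminated() as the stop condition in a loop over Step
>>> while not solver.Terminated():
...     solver.Step(rosen)
...
>>> # if abandoning a non-terminated solver early, re-synchronize with:
>>> # solver.Finalize()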
- Terminated(disp=False, info=False, termination=None, **kwds)
check if the solver meets the given termination conditions
- Parameters:
- Returns:
information about the state of the solver termination (see Notes)
Notes
if info is False, return a bool regarding the termination
if info is True, return an informative string about the termination
if termination is None, the solver’s stored termination is used
- _SetEvaluationLimits(iterscale=None, evalscale=None)
set the evaluation limits
- Parameters:
Notes
iterscale and evalscale are used to set the maximum iteration and evaluation limits, respectively. The new limit is defined as limit = (nDim * nPop * scale) + count, where count is the number of existing iterations or evaluations, respectively.
- _Solve(cost, ExtraArgs, **settings)
Run the optimizer to termination, using the given settings.
- _Step(cost=None, ExtraArgs=None, **kwds)
perform a single optimization iteration
- Parameters:
cost (func, default=None) – objective, of form y = cost(x, *ExtraArgs), where x is a candidate solution vector
ExtraArgs (tuple, default=None) – tuple of positional arguments required to evaluate the objective
- Returns:
None
Notes
This method accepts additional kwds that are specific for the current solver, as detailed in the _process_inputs method.
Note
* this method must be overwritten *
- __bestEnergy()
get the bestEnergy (default: bestEnergy = popEnergy[0])
- __bestSolution()
get the bestSolution (default: bestSolution = population[0])
- __collapse_constraints(state, collapses)
get updated constraints for the given state and collapses
- __collapse_termination(collapses)
get (initial state, resulting termination) for the given collapses
- __copy__()
return a shallow copy of the solver
- __deepcopy__(memo)
return a deep copy of the solver
- __energy_history()
get the energy_history (default: energy_history = _stepmon._y)
- __evaluations()
get the number of function calls
- __generations()
get the number of iterations
- __get_collapses(disp=False)
get dict of {collapse termination info: collapse}
- input::
disp: if True, print details about the solver state at collapse
- __load_state(solver, **kwds)
load solver.__dict__ into self.__dict__; override with kwds
- input::
solver is a solver instance, while kwds are a dict of solver state
- __save_state(force=False)
save the solver state, if chosen save frequency is met
- input::
if force is True, save the solver state regardless of save frequency
- __set_bestEnergy(energy)
set the bestEnergy (energy=None will sync with popEnergy[0])
- __set_bestSolution(params)
set the bestSolution (params=None will sync with population[0])
- __set_energy_history(energy)
set the energy_history (energy=None will sync with _stepmon._y)
- __set_solution_history(params)
set the solution_history (params=None will sync with _stepmon.x)
- __solution_history()
get the solution_history (default: solution_history = _stepmon.x)
- _bootstrap_objective(cost=None, ExtraArgs=None)
HACK to enable not explicitly calling _decorate_objective
- input::
cost is the objective function, of the form y = cost(x, *ExtraArgs), where x is a candidate solution, and ExtraArgs is the tuple of positional arguments required to evaluate the objective.
- _boundsconstraints(**kwds)
if _useStrictRange, build a constraint from (_strictMin, _strictMax)
symbolic: bool, if True, use symbolic constraints [default: None]
clip: bool, if True, clip exterior values to the bounds [default: None]
NOTE: By default, the bounds and constraints are imposed sequentially with a coupler. Using a coupler chooses speed over robustness, and relies on the user to formulate the constraints so that they do not conflict with imposing the bounds. Hence, if no keywords are provided, the bounds and constraints are applied sequentially.
NOTE: If any of the keyword arguments are used, then the bounds and constraints are imposed concurrently. This is slower but more robust than applying the bounds and constraints sequentially (the default). When the bounds and constraints are applied concurrently, the defaults for the keywords (symbolic and clip) are set to True, unless otherwise specified.
NOTE: If symbolic=True, use symbolic constraints to impose the bounds; otherwise use mystic.constraints.impose_bounds. Using clip=False will set symbolic=False unless symbolic is specified otherwise.
- _clipGuessWithinRangeBoundary(x0, at=True)
ensure that initial guess is set within bounds
- input::
x0: must be a sequence of length self.nDim
at: bool, if True, then clip at the bounds
- _decorate_objective(cost, ExtraArgs=None)
decorate the cost function with bounds, penalties, monitors, etc
- Parameters:
cost (func) – objective, of form y = cost(x, *ExtraArgs), where x is a candidate solution vector
ExtraArgs (tuple, default=None) – tuple of positional arguments required to evaluate the objective
- Returns:
decorated objective function
- _is_new()
determine if solver has been run or not
- _process_inputs(kwds)
process and activate input settings
- Parameters:
callback (func, default=None) – function to call after each iteration. The interface is callback(xk), with xk the current parameter vector.
disp (bool, default=False) – if True, print convergence messages.
EvaluationMonitor (monitor, default=None) – a monitor instance to capture each evaluation of cost.
StepMonitor (monitor, default=None) – a monitor instance to capture each iteration’s best results.
penalty (penalty, default=None) – function of the form: y' = penalty(xk), with y = cost(xk) + y', where xk is the current parameter vector.
constraints (constraint, default=None) – function of the form: xk' = constraints(xk), where xk is the current parameter vector.
Notes
callback and disp are ‘sticky’, in that once they are given, they remain set until they are explicitly changed. Conversely, the other inputs are not sticky, and are thus set for a one-time use.
- _update_objective()
decorate the cost function with bounds, penalties, monitors, etc
- property bestEnergy
get the bestEnergy (default: bestEnergy = popEnergy[0])
- property bestSolution
get the bestSolution (default: bestSolution = population[0])
- disable_signal_handler()
disable workflow interrupt handler while solver is running
- enable_signal_handler()
enable workflow interrupt handler while solver is running
- property energy_history
get the energy_history (default: energy_history = _stepmon._y)
- property evaluations
get the number of function calls
- property generations
get the number of iterations
- property solution_history
get the solution_history (default: solution_history = _stepmon.x)
bounds module
cartesian bounds and measure bounds instances
- class Bounds(*bounds, **kwds)
Bases:
object
create a bounds instance
bounds is a tuple of (lower, upper) bound
- additionally, one can specify:
xlb: lower bound
xub: upper bound
n: repeat
Examples
>>> b = Bounds(7, 8, n=2)
>>> b.xlb, b.xub
(7, 8)
>>> b()
[(7, 8), (7, 8)]
>>> b.lower
[7, 7]
>>> b.upper
[8, 8]
>>> c = Bounds((0,1),(3,4), n=2)
>>> c()
[(0, 3), (0, 3), (1, 4), (1, 4)]
- __add__(other)
add the contents of self and the given other bounds
- __call__()
get list of tuples of (lower, upper) bounds
- __len__()
get length of list of tuples of (lower, upper) bounds
- __lower()
get list of lower bounds
- __none()
- __repr__()
Return repr(self).
- __set_lower(lb)
- __set_upper(ub)
- __upper()
get list of upper bounds
- property lower
get list of lower bounds
- property upper
get list of upper bounds
- property wlower
- property wupper
- property xlower
get list of lower bounds
- property xupper
get list of upper bounds
- class MeasureBounds(*bounds, **kwds)
Bases:
Bounds
create a measure bounds instance
bounds is a tuple of (lower, upper) bound
- additionally, one can specify:
wlb: weight lower bound
wub: weight upper bound
xlb: lower bound
xub: upper bound
n: repeat
Examples
>>> b = MeasureBounds(7, 8, n=2)
>>> b.wlb, b.wub
(0, 1)
>>> b.xlb, b.xub
(7, 8)
>>> b()
[(0, 1), (0, 1), (7, 8), (7, 8)]
>>> b.lower
[0, 0, 7, 7]
>>> b.upper
[1, 1, 8, 8]
>>> c = MeasureBounds((0,1),(4,5), n=1, wlb=(0,1), wub=(2,3))
>>> c.lower
[0, 0, 1, 1]
>>> c.upper
[2, 4, 3, 5]
>>> c()
[(0, 2), (0, 4), (1, 3), (1, 5)]
>>> c = MeasureBounds((0,1),(4,5), n=2)
>>> c()
[(0, 1), (0, 1), (0, 4), (0, 4), (0, 1), (0, 1), (1, 5), (1, 5)]
- __add__(other)
add the contents of self and the given other bounds
- __len__()
get length of list of tuples of (lower, upper) bounds
- __lower()
get list of lower bounds
- __repr__()
Return repr(self).
- __set_lower(lb)
- __set_upper(ub)
- __upper()
get list of upper bounds
- __wlower()
- __wupper()
- __xlower()
- __xupper()
- property lower
get list of lower bounds
- property upper
get list of upper bounds
- property wlower
- property wupper
- property xlower
get list of lower bounds
- property xupper
get list of upper bounds
cache module
decorators for caching function outputs, with function inputs as the keys, and interactors for reading and writing to databases of functions and data.
several klepto.cache strategies are included here for backward compatibility; please also see the klepto package for available archives (i.e. dir_archive, file_archive, hdf_archive, sql_archive), cache strategies, filters, encodings, serialization, and other configuration options
- cached(**kwds)
build a caching archive for an objective function
- Input:
type: the type of klepto.cache [default: inf_cache]
archive: the archive (str name for a new archive, or archive instance)
maxsize: maximum cache size [default: None]
keymap: cache key encoder [default: klepto.keymap.keymap()]
ignore: function argument names to ignore [default: '**']
tol: int tolerance for rounding [default: None]
deep: bool rounding depth [default: False]
purge: bool for purging cache to archive [default: False]
multivalued: bool if multivalued return of objective [default: False]
- Returns:
cached objective function
Notes
inverse (y = -objective(x)) is at objective.__inverse__
cache of objective is at objective.__cache__
inverse and objective cache to the same archive
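A hedged usage sketch, treating cached as a decorator factory per the signature and notes above (the archive name is an arbitrary assumption):
>>> # sketch: wrap an objective with a caching archive
>>> from mystic.cache import cached
>>> from mystic.models import rosen
>>> objective = cached(archive='rosen_cache')(rosen)
>>> y = objective([1., 1., 1.])
>>> inverse = objective.__inverse__  # y = -objective(x), per the notes
>>> cache = objective.__cache__      # the underlying cache of objective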
- class inf_cache(maxsize=None, cache=None, keymap=None, ignore=None, tol=None, deep=False, purge=False)
Bases:
object
infinitely-growing (INF) cache decorator.
This decorator memoizes a function’s return value each time it is called. If called later with the same arguments, the cached value is returned, and not re-evaluated. This cache will grow without bound. To avoid memory issues, it is suggested to frequently dump and clear the cache. This decorator takes an integer tolerance ‘tol’, equal to the number of decimal places to which it will round off floats, and a bool ‘deep’ for whether the rounding on inputs will be ‘shallow’ or ‘deep’. Note that rounding is not applied to the calculation of new results, but rather as a simple form of cache interpolation. For example, with tol=0 and a cached value for f(3.0), f(3.1) will lookup f(3.0) in the cache while f(3.6) will store a new value; however if tol=1, both f(3.1) and f(3.6) will store new values.
maxsize = maximum cache size [fixed at maxsize=None]
cache = storage hashmap (default is {})
keymap = cache key encoder (default is keymaps.hashmap(flat=True))
ignore = function argument names and indices to ‘ignore’ (default is None)
tol = integer tolerance for rounding (default is None)
deep = boolean for rounding depth (default is False, i.e. ‘shallow’)
purge = boolean for purge cache to archive at maxsize (fixed at False)
If keymap is given, it will replace the hashing algorithm for generating cache keys. Several hashing algorithms are available in ‘keymaps’. The default keymap requires arguments to the cached function to be hashable.
If the keymap retains type information, then arguments of different types will be cached separately. For example, f(3.0) and f(3) will be treated as distinct calls with distinct results. Cache typing has a memory penalty, and may also be ignored by some ‘keymaps’.
If ignore is given, the keymap will ignore the arguments with the names and/or positional indices provided. For example, if ignore=(0,), then the key generated for f(1,2) will be identical to that of f(3,2) or f(4,2). If ignore=(‘y’,), then the key generated for f(x=3,y=4) will be identical to that of f(x=3,y=0) or f(x=3,y=10). If ignore=(‘*’,’**’), all varargs and varkwds will be ‘ignored’. Ignored arguments never trigger a recalculation (they only trigger cache lookups), and thus are ‘ignored’. When caching class methods, it may be useful to ignore=(‘self’,).
View cache statistics (hit, miss, load, maxsize, size) with f.info(). Clear the cache and statistics with f.clear(). Replace the cache archive with f.archive(obj). Load from the archive with f.load(), and dump from the cache to the archive with f.dump().
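A minimal usage sketch, based on the tolerance behavior and statistics interface described above:
>>> # sketch: memoize with rounding-based cache interpolation
>>> from mystic.cache import inf_cache
>>> @inf_cache(tol=0)
... def f(x):
...     return x*x
...
>>> f(3.0)   # computed and stored in the cache
>>> f(3.1)   # with tol=0, looks up the cached value for f(3.0)
>>> f(3.6)   # rounds to a new key, so a new value is stored
>>> f.info() # cache statistics: hit, miss, load, maxsize, size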
- __call__(user_function)
Call self as a function.
- __get__(obj, objtype)
support instance methods
- __reduce__()
Helper for pickle.
- class lfu_cache(*args, **kwds)
Bases:
object
least-frequently-used (LFU) cache decorator.
This decorator memoizes a function’s return value each time it is called. If called later with the same arguments, the cached value is returned, and not re-evaluated. To avoid memory issues, a maximum cache size is imposed. For caches with an archive, the full cache dumps to archive upon reaching maxsize. For caches without an archive, the LFU algorithm manages the cache. Caches with an archive will use the latter behavior when ‘purge’ is False. This decorator takes an integer tolerance ‘tol’, equal to the number of decimal places to which it will round off floats, and a bool ‘deep’ for whether the rounding on inputs will be ‘shallow’ or ‘deep’. Note that rounding is not applied to the calculation of new results, but rather as a simple form of cache interpolation. For example, with tol=0 and a cached value for f(3.0), f(3.1) will lookup f(3.0) in the cache while f(3.6) will store a new value; however if tol=1, both f(3.1) and f(3.6) will store new values.
maxsize = maximum cache size
cache = storage hashmap (default is {})
keymap = cache key encoder (default is keymaps.hashmap(flat=True))
ignore = function argument names and indices to ‘ignore’ (default is None)
tol = integer tolerance for rounding (default is None)
deep = boolean for rounding depth (default is False, i.e. ‘shallow’)
purge = boolean for purge cache to archive at maxsize (default is False)
If maxsize is None, this cache will grow without bound.
If keymap is given, it will replace the hashing algorithm for generating cache keys. Several hashing algorithms are available in ‘keymaps’. The default keymap requires arguments to the cached function to be hashable.
If the keymap retains type information, then arguments of different types will be cached separately. For example, f(3.0) and f(3) will be treated as distinct calls with distinct results. Cache typing has a memory penalty, and may also be ignored by some ‘keymaps’.
If ignore is given, the keymap will ignore the arguments with the names and/or positional indices provided. For example, if ignore=(0,), then the key generated for f(1,2) will be identical to that of f(3,2) or f(4,2). If ignore=(‘y’,), then the key generated for f(x=3,y=4) will be identical to that of f(x=3,y=0) or f(x=3,y=10). If ignore=(‘*’,’**’), all varargs and varkwds will be ‘ignored’. Ignored arguments never trigger a recalculation (they only trigger cache lookups), and thus are ‘ignored’. When caching class methods, it may be useful to ignore=(‘self’,).
View cache statistics (hit, miss, load, maxsize, size) with f.info(). Clear the cache and statistics with f.clear(). Replace the cache archive with f.archive(obj). Load from the archive with f.load(), and dump from the cache to the archive with f.dump().
See: http://en.wikipedia.org/wiki/Cache_algorithms#Least_Frequently_Used
- __call__(user_function)
Call self as a function.
- __get__(obj, objtype)
support instance methods
- static __new__(cls, *args, **kwds)
- __reduce__()
Helper for pickle.
- class lru_cache(*args, **kwds)
Bases:
object
least-recently-used (LRU) cache decorator.
This decorator memoizes a function’s return value each time it is called. If called later with the same arguments, the cached value is returned, and not re-evaluated. To avoid memory issues, a maximum cache size is imposed. For caches with an archive, the full cache dumps to archive upon reaching maxsize. For caches without an archive, the LRU algorithm manages the cache. Caches with an archive will use the latter behavior when ‘purge’ is False. This decorator takes an integer tolerance ‘tol’, equal to the number of decimal places to which it will round off floats, and a bool ‘deep’ for whether the rounding on inputs will be ‘shallow’ or ‘deep’. Note that rounding is not applied to the calculation of new results, but rather as a simple form of cache interpolation. For example, with tol=0 and a cached value for f(3.0), f(3.1) will lookup f(3.0) in the cache while f(3.6) will store a new value; however if tol=1, both f(3.1) and f(3.6) will store new values.
maxsize = maximum cache size
cache = storage hashmap (default is {})
keymap = cache key encoder (default is keymaps.hashmap(flat=True))
ignore = function argument names and indices to ‘ignore’ (default is None)
tol = integer tolerance for rounding (default is None)
deep = boolean for rounding depth (default is False, i.e. ‘shallow’)
purge = boolean for purge cache to archive at maxsize (default is False)
If maxsize is None, this cache will grow without bound.
If keymap is given, it will replace the hashing algorithm for generating cache keys. Several hashing algorithms are available in ‘keymaps’. The default keymap requires arguments to the cached function to be hashable.
If the keymap retains type information, then arguments of different types will be cached separately. For example, f(3.0) and f(3) will be treated as distinct calls with distinct results. Cache typing has a memory penalty, and may also be ignored by some ‘keymaps’.
If ignore is given, the keymap will ignore the arguments with the names and/or positional indices provided. For example, if ignore=(0,), then the key generated for f(1,2) will be identical to that of f(3,2) or f(4,2). If ignore=(‘y’,), then the key generated for f(x=3,y=4) will be identical to that of f(x=3,y=0) or f(x=3,y=10). If ignore=(‘*’,’**’), all varargs and varkwds will be ‘ignored’. Ignored arguments never trigger a recalculation (they only trigger cache lookups), and thus are ‘ignored’. When caching class methods, it may be useful to ignore=(‘self’,).
View cache statistics (hit, miss, load, maxsize, size) with f.info(). Clear the cache and statistics with f.clear(). Replace the cache archive with f.archive(obj). Load from the archive with f.load(), and dump from the cache to the archive with f.dump().
See: http://en.wikipedia.org/wiki/Cache_algorithms#Least_Recently_Used
- __call__(user_function)
Call self as a function.
- __get__(obj, objtype)
support instance methods
- static __new__(cls, *args, **kwds)
- __reduce__()
Helper for pickle.
- class mru_cache(*args, **kwds)
Bases:
object
most-recently-used (MRU) cache decorator.
This decorator memoizes a function’s return value each time it is called. If called later with the same arguments, the cached value is returned, and not re-evaluated. To avoid memory issues, a maximum cache size is imposed. For caches with an archive, the full cache dumps to archive upon reaching maxsize. For caches without an archive, the MRU algorithm manages the cache. Caches with an archive will use the latter behavior when ‘purge’ is False. This decorator takes an integer tolerance ‘tol’, equal to the number of decimal places to which it will round off floats, and a bool ‘deep’ for whether the rounding on inputs will be ‘shallow’ or ‘deep’. Note that rounding is not applied to the calculation of new results, but rather as a simple form of cache interpolation. For example, with tol=0 and a cached value for f(3.0), f(3.1) will lookup f(3.0) in the cache while f(3.6) will store a new value; however if tol=1, both f(3.1) and f(3.6) will store new values.
maxsize = maximum cache size
cache = storage hashmap (default is {})
keymap = cache key encoder (default is keymaps.hashmap(flat=True))
ignore = function argument names and indices to ‘ignore’ (default is None)
tol = integer tolerance for rounding (default is None)
deep = boolean for rounding depth (default is False, i.e. ‘shallow’)
purge = boolean for purge cache to archive at maxsize (default is False)
If maxsize is None, this cache will grow without bound.
If keymap is given, it will replace the hashing algorithm for generating cache keys. Several hashing algorithms are available in ‘keymaps’. The default keymap requires arguments to the cached function to be hashable.
If the keymap retains type information, then arguments of different types will be cached separately. For example, f(3.0) and f(3) will be treated as distinct calls with distinct results. Cache typing has a memory penalty, and may also be ignored by some ‘keymaps’.
If ignore is given, the keymap will ignore the arguments with the names and/or positional indices provided. For example, if ignore=(0,), then the key generated for f(1,2) will be identical to that of f(3,2) or f(4,2). If ignore=(‘y’,), then the key generated for f(x=3,y=4) will be identical to that of f(x=3,y=0) or f(x=3,y=10). If ignore=(‘*’,’**’), all varargs and varkwds will be ‘ignored’. Ignored arguments never trigger a recalculation (they only trigger cache lookups), and thus are ‘ignored’. When caching class methods, it may be useful to ignore=(‘self’,).
View cache statistics (hit, miss, load, maxsize, size) with f.info(). Clear the cache and statistics with f.clear(). Replace the cache archive with f.archive(obj). Load from the archive with f.load(), and dump from the cache to the archive with f.dump().
See: http://en.wikipedia.org/wiki/Cache_algorithms#Most_Recently_Used
- __call__(user_function)
Call self as a function.
- __get__(obj, objtype)
support instance methods
- static __new__(cls, *args, **kwds)
- __reduce__()
Helper for pickle.
- class no_cache(maxsize=0, cache=None, keymap=None, ignore=None, tol=None, deep=False, purge=True)
Bases:
object
empty (NO) cache decorator.
Unlike other cache decorators, this decorator does not cache. It is a dummy that collects statistics and conforms to the caching interface. This decorator takes an integer tolerance ‘tol’, equal to the number of decimal places to which it will round off floats, and a bool ‘deep’ for whether the rounding on inputs will be ‘shallow’ or ‘deep’. Note that rounding is not applied to the calculation of new results, but rather as a simple form of cache interpolation. For example, with tol=0 and a cached value for f(3.0), f(3.1) will lookup f(3.0) in the cache while f(3.6) will store a new value; however if tol=1, both f(3.1) and f(3.6) will store new values.
maxsize = maximum cache size [fixed at maxsize=0]
cache = storage hashmap (default is {})
keymap = cache key encoder (default is keymaps.hashmap(flat=True))
ignore = function argument names and indices to ‘ignore’ (default is None)
tol = integer tolerance for rounding (default is None)
deep = boolean for rounding depth (default is False, i.e. ‘shallow’)
purge = boolean for purge cache to archive at maxsize (fixed at True)
If keymap is given, it will replace the hashing algorithm for generating cache keys. Several hashing algorithms are available in ‘keymaps’. The default keymap requires arguments to the cached function to be hashable.
If the keymap retains type information, then arguments of different types will be cached separately. For example, f(3.0) and f(3) will be treated as distinct calls with distinct results. Cache typing has a memory penalty, and may also be ignored by some ‘keymaps’. Here, the keymap is only used to look up keys in an associated archive.
If ignore is given, the keymap will ignore the arguments with the names and/or positional indices provided. For example, if ignore=(0,), then the key generated for f(1,2) will be identical to that of f(3,2) or f(4,2). If ignore=(‘y’,), then the key generated for f(x=3,y=4) will be identical to that of f(x=3,y=0) or f(x=3,y=10). If ignore=(‘*’,’**’), all varargs and varkwds will be ‘ignored’. Ignored arguments never trigger a recalculation (they only trigger cache lookups), and thus are ‘ignored’. When caching class methods, it may be useful to use ignore=(‘self’,).
View cache statistics (hit, miss, load, maxsize, size) with f.info(). Clear the cache and statistics with f.clear(). Replace the cache archive with f.archive(obj). Load from the archive with f.load(), and dump from the cache to the archive with f.dump().
- __call__(user_function)
Call self as a function.
- __get__(obj, objtype)
support instance methods
- __reduce__()
Helper for pickle.
- class rr_cache(*args, **kwds)
Bases:
object
random-replacement (RR) cache decorator.
This decorator memoizes a function’s return value each time it is called. If called later with the same arguments, the cached value is returned and the function is not re-evaluated. To avoid memory issues, a maximum cache size is imposed. For caches with an archive, the full cache is dumped to the archive upon reaching maxsize. For caches without an archive, the RR algorithm manages the cache. Caches with an archive will use the latter behavior when ‘purge’ is False. This decorator takes an integer tolerance ‘tol’, equal to the number of decimal places to which it will round off floats, and a bool ‘deep’ for whether the rounding on inputs will be ‘shallow’ or ‘deep’. Note that rounding is not applied to the calculation of new results, but rather serves as a simple form of cache interpolation. For example, with tol=0 and a cached value for f(3.0), f(3.1) will look up f(3.0) in the cache, while f(3.6) will store a new value; however if tol=1, both f(3.1) and f(3.6) will store new values.
maxsize = maximum cache size
cache = storage hashmap (default is {})
keymap = cache key encoder (default is keymaps.hashmap(flat=True))
ignore = function argument names and indices to ‘ignore’ (default is None)
tol = integer tolerance for rounding (default is None)
deep = boolean for rounding depth (default is False, i.e. ‘shallow’)
purge = boolean for purge cache to archive at maxsize (default is False)
If maxsize is None, this cache will grow without bound.
If keymap is given, it will replace the hashing algorithm for generating cache keys. Several hashing algorithms are available in ‘keymaps’. The default keymap requires arguments to the cached function to be hashable.
If the keymap retains type information, then arguments of different types will be cached separately. For example, f(3.0) and f(3) will be treated as distinct calls with distinct results. Cache typing has a memory penalty, and may also be ignored by some ‘keymaps’.
If ignore is given, the keymap will ignore the arguments with the names and/or positional indices provided. For example, if ignore=(0,), then the key generated for f(1,2) will be identical to that of f(3,2) or f(4,2). If ignore=(‘y’,), then the key generated for f(x=3,y=4) will be identical to that of f(x=3,y=0) or f(x=3,y=10). If ignore=(‘*’,’**’), all varargs and varkwds will be ‘ignored’. Ignored arguments never trigger a recalculation (they only trigger cache lookups), and thus are ‘ignored’. When caching class methods, it may be useful to use ignore=(‘self’,).
View cache statistics (hit, miss, load, maxsize, size) with f.info(). Clear the cache and statistics with f.clear(). Replace the cache archive with f.archive(obj). Load from the archive with f.load(), and dump from the cache to the archive with f.dump().
See: http://en.wikipedia.org/wiki/Cache_algorithms#Random_Replacement
- __call__(user_function)
Call self as a function.
- __get__(obj, objtype)
support instance methods
- static __new__(cls, *args, **kwds)
- __reduce__()
Helper for pickle.
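For illustration, a hedged sketch of the decorator interface described above, using rr_cache; the import path is an assumption (the class is documented in this module), and the workload is illustrative:
from mystic.cache import rr_cache   # assumed import path

@rr_cache(maxsize=100, tol=0)
def squared(x):
    return x**2

squared(3.0)           # miss: computed and stored in the cache
squared(3.1)           # hit: with tol=0, looks up the cached f(3.0)
squared(3.6)           # miss: rounds to 4.0, so a new value is stored
print(squared.info())  # statistics: hit, miss, load, maxsize, size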
collapse module
- _index_selector(mask)
generate a selector for a mask of indices
- _pair_selector(mask)
generate a selector for a mask of tuples (pairs)
- _position_filter(mask)
generate a filter for a position mask (dict, set, or where)
- _split_mask(mask)
separate a mask into a list of ints and a list of tuples (pairs). mask should be composed of indices and pairs of indices
- _weight_filter(mask)
generate a filter for a weight mask (dict, set, or where)
- collapse_as(stepmon, offset=False, tolerance=0.005, generations=50, mask=None)
return a set of pairs of indices where the parameters exhibit a dimensional collapse. Dimensional collapse is defined by: max(pairwise(parameters)) <= tolerance over N generations (offset=False), ptp(pairwise(parameters)) <= tolerance over N generations (offset=True).
collapse will be ignored at any pairs of indices specified in the mask. If single indices are provided, all pairs with the given indices are ignored.
- collapse_at(stepmon, target=None, tolerance=0.005, generations=50, mask=None)
return a set of indices where the parameters exhibit a dimensional collapse at the specified target. Dimensional collapse is defined by: change(param[i]) <= tolerance over N generations, where: change(param[i]) = max(param[i]) - min(param[i]) if target = None, or change(param[i]) = abs(param[i] - target) otherwise.
target can be None, a single value, or a list of values of param length
collapse will be ignored at any indices specified in the mask
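As a hedged sketch, collapse can be detected after a solver run by feeding the solver’s generation monitor to collapse_at (the solver, model, and import path here are illustrative, not part of this function’s documented interface):
from mystic.monitors import Monitor
from mystic.solvers import DifferentialEvolutionSolver
from mystic.models import rosen
from mystic.collapse import collapse_at   # assumed import path

stepmon = Monitor()
solver = DifferentialEvolutionSolver(3, 40)
solver.SetRandomInitialPoints([-5.]*3, [5.]*3)
solver.SetGenerationMonitor(stepmon)
solver.Solve(rosen)

# indices i where change(param[i]) <= tolerance over the last N generations
fixed = collapse_at(stepmon, tolerance=0.005, generations=50)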
- collapse_cost(stepmon, clip=False, limit=1.0, samples=50, mask=None)
return a dict of {index:bounds} where the parameters exhibit a collapse in bounds for regions of parameter space with a comparably high cost value. Bounds are provided by an interval (min,max), or a list of intervals. Bounds collapse will occur when: cost(param) - min(cost) >= limit, for all N samples within an interval.
if clip is True, then clip beyond the space sampled by stepmon
if mask is provided, the intersection of bounds and mask is returned. mask is a dict of {index:bounds}, formatted same as the return value.
- collapse_position(stepmon, tolerance=0.005, generations=50, mask=None)
return a dict of {measure: pairs_of_indices} where the product_measure exhibits a dimensional collapse in position. Dimensional collapse in position is defined by: max(pairwise(positions)) <= tolerance over N generations.
collapse will be ignored at (measure,pairs) as specified in the mask. Format of mask will determine the return value for this function. Default mask format is dict of {measure: pairs_of_indices}, with alternate formatting available as a set of tuples of (measure,pair).
- collapse_weight(stepmon, tolerance=0.005, generations=50, mask=None)
return a dict of {measure:indices} where the product_measure exhibits a dimensional collapse in weight. Dimensional collapse in weight is defined by: max(weight[i]) <= tolerance over N generations.
collapse will be ignored at (measure,indices) as specified in the mask. Format of mask will determine the return value for this function. Default mask format is dict of {measure: indices}, with alternate formatting available as a set of tuples of (measure,index).
- collapsed(message)
extract collapse result from collapse message
- selector(mask)
generate a selector for a mask of pairs and/or indices
constraints module
Tools for building and applying constraints and penalties.
- and_(*constraints, **settings)
combine several constraints into a single constraint
- Inputs:
constraints – constraint functions
- Additional Inputs:
maxiter – maximum number of iterations to attempt to solve [default: 100]
onexit – function x’ = f(x) to call on success [default: None]
onfail – function x’ = f(x) to call on failure [default: None]
Note
If a repeating cycle is detected, some of the inputs may be randomized.
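For example, a minimal sketch of combining two constraints with and_, following the decorator pattern used in the examples elsewhere in this module:
from mystic.constraints import and_, impose_bounds, with_mean

cons = and_(impose_bounds((0, 10))(lambda x: x),
            with_mean(5.0)(lambda x: x))
x = cons([1., 2., 3., 4.])   # x is bounded in [0,10] and has mean 5.0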
- as_constraint(penalty, *args, **kwds)
Convert a penalty function to a constraints solver.
- Inputs:
penalty – a penalty function
- Additional Inputs:
lower_bounds – list of lower bounds on solution values.
upper_bounds – list of upper bounds on solution values.
nvars – number of parameter values.
solver – the mystic solver to use in the optimization.
termination – the mystic termination to use in the optimization.
tightrange – if True, impose bounds and constraints concurrently.
cliprange – if True, bounding constraints clip exterior values.
- NOTE: The default solver is ‘diffev’, with npop=min(40, ndim*5). The default termination is ChangeOverGeneration(), and the default guess is randomly selected points between the upper and lower bounds.
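A hedged sketch of the conversion (the condition is illustrative; per the NOTE above, the resulting solver runs an optimization internally):
from mystic.penalty import quadratic_equality
from mystic.constraints import as_constraint

@quadratic_equality(lambda x: x[0] + x[1] - 1.0)
def penalty(x):
    return 0.0

constrain = as_constraint(penalty, nvars=2)
x = constrain([0.8, 0.8])   # solved so that x[0] + x[1] is near 1.0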
- as_penalty(constraint, ptype=None, *args, **kwds)
Convert a constraints solver to a penalty function.
- Inputs:
constraint – a constraints solver
ptype – penalty function type [default: quadratic_equality]
- Additional Inputs:
args – arguments for the constraints solver [default: ()]
kwds – keyword arguments for the constraints solver [default: {}]
k – penalty multiplier
h – iterative multiplier
- bounded(seq, bounds, index=None, clip=True, nearest=True)
bound a sequence by bounds = [min,max]
Examples
>>> sequence = [0.123, 1.244, -4.755, 10.731, 6.207]
>>>
>>> bounded(sequence, (0,5))
array([0.123, 1.244, 0.   , 5.   , 5.   ])
>>>
>>> bounded(sequence, (0,5), index=(0,2,4))
array([ 0.123,  1.244,  0.   , 10.731,  5.   ])
>>>
>>> bounded(sequence, (0,5), clip=False)
array([0.123     , 1.244     , 3.46621839, 1.44469038, 4.88937466])
>>>
>>> bounds = [(0,5),(7,10)]
>>> bounded(sequence, bounds)
array([ 0.123,  1.244,  0.   , 10.   ,  7.   ])
>>>
>>> bounded(sequence, bounds, nearest=False)
array([ 0.123,  1.244,  7.   , 10.   ,  5.   ])
>>>
>>> bounded(sequence, bounds, nearest=False, clip=False)
array([0.123     , 1.244     , 0.37617154, 8.79013111, 7.40864242])
>>>
>>> bounded(sequence, bounds, clip=False)
array([0.123     , 1.244     , 2.38186577, 7.41374049, 9.14662911])
- boundsconstrain(min, max, **kwds)
build a constraint from the bounds (min,max)
- Input:
min: list of floats specifying the lower bounds
max: list of floats specifying the upper bounds
- Additional Input:
symbolic: bool, if True, use symbolic constraints [default: True]
clip: bool, if True, clip exterior values to the bounds [default: True]
Notes
Prepares a constraint function that imposes the bounds. The intent is that the bounds and other constraints can be imposed concurrently (e.g. using mystic.constraints.and_). This is slower but more robust than applying the bounds sequentially with the other constraints (the default for a solver).
For entries where there is no upper bound, use either inf or None; similarly for entries where there is no lower bound. When no upper bound exists for any of the entries, max should be an iterable with all entries of inf or None; again, similarly for lower bounds. If symbolic=True, use symbolic constraints to impose the bounds; otherwise use mystic.constraints.impose_bounds. Using clip=False requires symbolic=False.
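A minimal sketch of the interface (the bounds values are illustrative):
from mystic.constraints import boundsconstrain

cons = boundsconstrain([0., 0.], [5., 5.])
x = cons([-1., 7.])   # with clip=True, exterior values clip to [0., 5.]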
- discrete(samples, index=None)
impose a discrete set of input values for the selected function
The function’s input will be mapped to the given discrete set
Examples
>>> @discrete([1.0, 2.0])
... def identity(x):
...     return x
...
>>> identity([0.123, 1.789, 4.000])
[1.0, 2.0, 2.0]

>>> @discrete([1,3,5,7], index=(0,3))
... def squared(x):
...     return [i**2 for i in x]
...
>>> squared([0,2,4,6,8,10])
[1, 4, 16, 25, 64, 100]
- has_unique(x)
check for uniqueness of the members of x
- impose_as(mask, offset=None)
generate a function, where some input tracks another input
mask should be a set of tuples of positional index and tracked index, where the tuple should contain two different integers. The mask will be applied to the input, before the decorated function is called.
The offset is applied to the second member of the tuple, and can accumulate.
Examples
>>> @impose_as([(0,1),(3,1),(4,5),(5,6),(5,7)])
... def same(x):
...     return x
...
>>> same([9,8,7,6,5,4,3,2,1])
[9, 9, 7, 9, 5, 5, 5, 5, 1]
>>> same([0,1,0,1])
[0, 0, 0, 0]
>>> same([-1,-2,-3,-4,-5,-6,-7])
[-1, -1, -3, -1, -5, -5, -5]

>>> @impose_as([(0,1),(3,1),(4,5),(5,6),(5,7)], 10)
... def doit(x):
...     return x
...
>>> doit([9,8,7,6,5,4,3,2,1])
[9, 19, 7, 9, 5, 15, 25, 25, 1]
>>> doit([0,1,0,1])
[0, 10, 0, 0]
>>> doit([-1,-2,-3,-4,-5,-6])
[-1, 9, -3, -1, -5, 5]
>>> doit([-1,-2,-3,-4,-5,-6,-7])
[-1, 9, -3, -1, -5, 5, 15]
- impose_at(index, target=0.0)
generate a function, where some input is set to the target
index should be a set of indices to be fixed at the target. The target can either be a single value (e.g. float), or a list of values.
Examples
>>> @impose_at([1,3,4,5,7], -99)
... def same(x):
...     return x
...
>>> same([1,1,1,1,1,1,1])
[1, -99, 1, -99, -99, -99, 1]
>>> same([1,1,1,1])
[1, -99, 1, -99]
>>> same([1,1])
[1, -99]

>>> @impose_at([1,3,4,5,7], [0,2,4,6])
... def doit(x):
...     return x
...
>>> doit([1,1,1,1,1,1,1])
[1, 0, 1, 2, 4, 6, 1]
>>> doit([1,1,1,1])
[1, 0, 1, 2]
>>> doit([1,1])
[1, 0]
- impose_bounds(bounds, index=None, clip=True, nearest=True)
generate a function where bounds=[min,max] on a sequence are imposed
Examples
>>> sequence = [0.123, 1.244, -4.755, 10.731, 6.207]
>>>
>>> @impose_bounds((0,5))
... def simple(x):
...     return x
...
>>> simple(sequence)
[0.123, 1.244, 0.0, 5.0, 5.0]
>>>
>>> @impose_bounds((0,5), index=(0,2,4))
... def double(x):
...     return [i*2 for i in x]
...
>>> double(sequence)
[0.246, 2.488, 0.0, 21.462, 10.0]
>>>
>>> @impose_bounds((0,5), index=(0,2,4), clip=False)
... def square(x):
...     return [i*i for i in x]
...
>>> square(sequence)
[0.015129, 1.547536, 14.675791119810688, 115.154361, 1.399551896073788]
>>>
>>> @impose_bounds([(0,5),(7,10)])
... def simple(x):
...     return x
...
>>> simple(sequence)
[0.123, 1.244, 0.0, 10.0, 7.0]
>>>
>>> @impose_bounds([(0,5),(7,10)], nearest=False)
... def simple(x):
...     return x
...
>>> simple(sequence)
[0.123, 1.244, 0.0, 5.0, 5.0]
>>> simple(sequence)
[0.123, 1.244, 7.0, 10.0, 5.0]
>>>
>>> @impose_bounds({0:(0,5), 2:(0,5), 4:[(0,5),(7,10)]})
... def simple(x):
...     return x
...
>>> simple(sequence)
[0.123, 1.244, 0.0, 10.731, 7.0]
>>>
>>> @impose_bounds({0:(0,5), 2:(0,5), 4:[(0,5),(7,10)]}, index=(0,2))
... def simple(x):
...     return x
...
>>> simple(sequence)
[0.123, 1.244, 0.0, 10.731, 6.207]
- impose_measure(npts, tracking={}, noweight={})
generate a function, that constrains measure positions and weights
npts is a tuple of the product_measure dimensionality
tracking is a dict of collapses, or a tuple of dicts of collapses. a tracking collapse is a dict of {measure: {pairs of indices}}, where the pairs of indices are where the positions will be constrained to have the same value, and the weight from the second index in the pair will be removed and added to the weight of the first index
noweight is a dict of collapses, or a tuple of dicts of collapses. a noweight collapse is a dict of {measure: {indices}}, where the indices are where the measure will be constrained to have zero weight
Examples
>>> pos = {0: {(0,1)}, 1:{(0,2)}}
>>> wts = {0: {1}, 1: {1, 2}}
>>> npts = (3,3)
>>>
>>> @impose_measure(npts, pos)
... def same(x):
...     return x
...
>>> same([.5, 0., .5, 2., 4., 6., .25, .5, .25, 6., 4., 2.])
[0.5, 0.0, 0.5, 2.0, 2.0, 6.0, 0.5, 0.5, 0.0, 5.0, 3.0, 5.0]
>>> same([1./3, 1./3, 1./3, 1., 2., 3., 1./3, 1./3, 1./3, 1., 2., 3.])
[0.6666666666666666, 0.0, 0.3333333333333333, 1.3333333333333335, 1.3333333333333335, 3.3333333333333335, 0.6666666666666666, 0.3333333333333333, 0.0, 1.6666666666666667, 2.666666666666667, 1.6666666666666667]
>>>
>>> @impose_measure(npts, {}, wts)
... def doit(x):
...     return x
...
>>> doit([.5, 0., .5, 2., 4., 6., .25, .5, .25, 6., 4., 2.])
[0.5, 0.0, 0.5, 2.0, 4.0, 6.0, 1.0, 0.0, 0.0, 4.0, 2.0, 0.0]
>>> doit([1./3, 1./3, 1./3, 1., 2., 3., 1./3, 1./3, 1./3, 1., 2., 3.])
[0.5, 0.0, 0.5, 1.0, 2.0, 3.0, 1.0, 0.0, 0.0, 2.0, 3.0, 4.0]
>>>
>>> @impose_measure(npts, pos, wts)
... def both(x):
...     return x
...
>>> both([1./3, 1./3, 1./3, 1., 2., 3., 1./3, 1./3, 1./3, 1., 2., 3.])
[0.66666666666666663, 0.0, 0.33333333333333331, 1.3333333333333335, 1.3333333333333335, 3.3333333333333335, 1.0, 0.0, 0.0, 2.0, 3.0, 2.0]
- impose_position(npts, tracking)
generate a function, that constrains measure positions
npts is a tuple of the product_measure dimensionality
tracking is a dict of collapses, or a tuple of dicts of collapses. a tracking collapse is a dict of {measure: {pairs of indices}}, where the pairs of indices are where the positions will be constrained to have the same value, and the weight from the second index in the pair will be removed and added to the weight of the first index
Examples
>>> pos = {0: {(0,1)}, 1:{(0,2)}}
>>> npts = (3,3)
>>>
>>> @impose_position(npts, pos)
... def same(x):
...     return x
...
>>> same([.5, 0., .5, 2., 4., 6., .25, .5, .25, 6., 4., 2.])
[0.5, 0.0, 0.5, 2.0, 2.0, 6.0, 0.5, 0.5, 0.0, 5.0, 3.0, 5.0]
>>> same([1./3, 1./3, 1./3, 1., 2., 3., 1./3, 1./3, 1./3, 1., 2., 3.])
[0.6666666666666666, 0.0, 0.3333333333333333, 1.3333333333333335, 1.3333333333333335, 3.3333333333333335, 0.6666666666666666, 0.3333333333333333, 0.0, 1.6666666666666667, 2.666666666666667, 1.6666666666666667]
- impose_unique(seq=None)
ensure all values are unique and found in the given set
Examples
>>> @impose_unique(range(11))
... def doit(x):
...     return x
...
>>> doit([1,2,3,1,2,4])
[1, 2, 3, 9, 8, 4]
>>> doit([1,2,3,1,2,10])
[1, 2, 3, 8, 5, 10]
>>> try:
...     doit([1,2,3,1,2,13])
...     raise Exception
... except ValueError:
...     pass
...
- impose_weight(npts, noweight)
generate a function, that constrains measure weights
npts is a tuple of the product_measure dimensionality
noweight is a dict of collapses, or a tuple of dicts of collapses. a noweight collapse is a dict of {measure: {indices}}, where the indices are where the measure will be constrained to have zero weight
Examples
>>> wts = {0: {1}, 1: {1, 2}}
>>> npts = (3,3)
>>>
>>> @impose_weight(npts, wts)
... def doit(x):
...     return x
...
>>> doit([.5, 0., .5, 2., 4., 6., .25, .5, .25, 6., 4., 2.])
[0.5, 0.0, 0.5, 2.0, 4.0, 6.0, 1.0, 0.0, 0.0, 4.0, 2.0, 0.0]
>>> doit([1./3, 1./3, 1./3, 1., 2., 3., 1./3, 1./3, 1./3, 1., 2., 3.])
[0.5, 0.0, 0.5, 1.0, 2.0, 3.0, 1.0, 0.0, 0.0, 2.0, 3.0, 4.0]
- integers(ints=True, index=None)
impose the set of integers (by rounding) for the given function
- The function’s input will be mapped to the ints, where:
if ints is True, return results as ints; otherwise, use floats
if index tuple provided, only round at the given indices
Examples
>>> @integers()
... def identity(x):
...     return x
...
>>> identity([0.123, 1.789, 4.000])
[0, 2, 4]

>>> @integers(ints=float, index=(0,3,4))
... def squared(x):
...     return [i**2 for i in x]
...
>>> squared([0.12, 0.12, 4.01, 4.01, 8, 8])
[0.0, 0.0144, 16.080099999999998, 16.0, 64.0, 64.0]
- issolution(constraints, guess, tol=0.001)
Returns whether the guess is a solution to the constraints
- Input:
constraints – a constraints solver function or a penalty function
guess – list of parameter values proposed to solve the constraints
tol – residual error magnitude for which constraints are considered solved
Examples
>>> @normalized()
... def constraint(x):
...     return x
...
>>> constraint([.5,.5])
[0.5, 0.5]
>>> issolution(constraint, [.5,.5])
True

>>> from mystic.penalty import quadratic_inequality
>>> @quadratic_inequality(lambda x: x[0] - x[1] + 10)
... def penalty(x):
...     return 0.0
...
>>> penalty([-10,.5])
0.0
>>> issolution(penalty, [-10,.5])
True
- monotonic(ascending=True, outer=False, index=None)
impose monotonicity on the given function
- The function will become monotonic, where:
ascending is bool, increasing with index if True and decreasing if False
outer is bool, applied to the outputs if True and to the inputs if False
if index is given, only apply at the given sequence defined by index
Examples
>>> @monotonic(outer=True)
... def negative(x):
...     return [-i for i in x]
...
>>> negative([5.34, -.121, -4.11, 9.01, 11.3, -16.4])
[-5.34, 0.121, 4.11, 4.11, 4.11, 16.4]
>>>
>>> @monotonic()
... def negative(x):
...     return [-i for i in x]
...
>>> negative([5.34, -.121, -4.11, 9.01, 11.3, -16.4])
[-5.34, -5.34, -5.34, -9.01, -11.3, -11.3]
>>>
>>> @monotonic(outer=True)
... def squared(x):
...     return [i*i for i in x]
...
>>> squared([5.34, -.121, -4.11, 9.01, 11.3, -16.4])
[28.5156, 28.5156, 28.5156, 81.1801, 127.69000000000001, 268.96]
>>>
>>> @monotonic()
... def squared(x):
...     return [i*i for i in x]
...
>>> squared([5.34, -.121, -4.11, 9.01, 11.3, -16.4])
[28.5156, 28.5156, 28.5156, 81.1801, 127.69000000000001, 127.69000000000001]
>>>
>>> @monotonic(index=(0,1))
... def squared(x):
...     return [i*i for i in x]
...
>>> squared([5.34, -.121, -4.11, 9.01, 11.3, -16.4])
[28.5156, 28.5156, 16.892100000000003, 81.1801, 127.69000000000001, 268.96]
- near_integers(x)
the sum of all deviations from int values
- normalized(mass=1.0)
bind a normalization constraint to a given constraints function.
- Inputs:
mass – the target sum of normalized weights
A constraints function takes an iterable x as input, returning a modified x. This function is an “outer” coupling of “normalize” onto another constraints function c(x), such that: x’ = normalize(c(x), mass).
Examples
>>> @normalized()
... def constraint(x):
...     return x
...
>>> constraint([1,2,3])
[0.16666666666666666, 0.33333333333333331, 0.5]
- not_(constraint, **settings)
invert the region where the given constraints are valid, then solve
- Inputs:
constraint – constraint function
- Additional Inputs:
maxiter – maximum number of iterations to attempt to solve [default: 100]
onexit – function x’ = f(x) to call on success [default: None]
onfail – function x’ = f(x) to call on failure [default: None]
Note
If a repeating cycle is detected, some of the inputs may be randomized.
- or_(*constraints, **settings)
create a constraint that is satisfied if any constraints are satisfied
- Inputs:
constraints – constraint functions
- Additional Inputs:
maxiter – maximum number of iterations to attempt to solve [default: 100]
onexit – function x’ = f(x) to call on success [default: None]
onfail – function x’ = f(x) to call on failure [default: None]
Note
If a repeating cycle is detected, some of the inputs may be randomized.
- precision(digits=None, index=None)
rounded outputs for the given function
- The function’s output will be mapped to the given precision, where:
digits is an int that sets the rounding precision (can be negative)
if index tuple provided, only round at the given indices
Examples
>>> @precision()
... def identity(x):
...     return x
...
>>> identity([0.123, 1.789, 4.000])
[0.0, 2.0, 4.0]

>>> @precision(digits=1, index=(0,3,4))
... def squared(x):
...     return [i**2 for i in x]
...
>>> squared([123.45, 123.45, 4.01, 4.01, 0.012, 0.012])
[15239.9, 15239.9025, 16.080099999999998, 16.1, 0.0, 0.000144]

>>> @precision(digits=-1, index=(0,3,4))
... def square(x):
...     return [i**2 for i in x]
...
>>> square([123.45, 123.45, 4.01, 4.01, 0.012, 0.012])
[15240.0, 15239.9025, 16.080099999999998, 20.0, 0.0, 0.000144]
- rounded(digits=None, index=None)
rounded inputs for the given function
- The function’s input will be mapped to the given precision, where:
digits is an int that sets the rounding precision (can be negative)
if index tuple provided, only round at the given indices
Examples
>>> @rounded()
... def identity(x):
...     return x
...
>>> identity([0.123, 1.789, 4.000])
[0.0, 2.0, 4.0]

>>> @rounded(digits=1, index=(0,3,4))
... def squared(x):
...     return [i**2 for i in x]
...
>>> squared([123.45, 123.45, 4.01, 4.01, 0.012, 0.012])
[15227.560000000001, 15239.9025, 16.080099999999998, 16.0, 0.0, 0.000144]

>>> @rounded(digits=-1, index=(0,3,4))
... def square(x):
...     return [i**2 for i in x]
...
>>> square([123.45, 123.45, 4.01, 4.01, 0.012, 0.012])
[14400.0, 15239.9025, 16.080099999999998, 0.0, 0.0, 0.000144]
- solve(constraints, guess=None, nvars=None, solver=None, lower_bounds=None, upper_bounds=None, termination=None, tightrange=None, cliprange=None)
Use optimization to find a solution to a set of constraints.
- Inputs:
constraints – a constraints solver function or a penalty function
- Additional Inputs:
guess – list of parameter values proposed to solve the constraints.
lower_bounds – list of lower bounds on solution values.
upper_bounds – list of upper bounds on solution values.
nvars – number of parameter values.
solver – the mystic solver to use in the optimization.
termination – the mystic termination to use in the optimization.
tightrange – if True, impose bounds and constraints concurrently.
cliprange – if True, bounding constraints clip exterior values.
- NOTE: The resulting constraints will likely be more expensive to evaluate and less accurate than writing the constraints solver from scratch.
- NOTE: The ensemble solvers are available, using the default NestedSolver, where the keyword ‘guess’ can be used to set the number of solvers.
- NOTE: The default solver is ‘diffev’, with npop=min(40, ndim*5). The default termination is ChangeOverGeneration(), and the default guess is randomly selected points between the upper and lower bounds.
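A hedged sketch of solve applied to a constraints function (the constraint here is illustrative):
from mystic.constraints import with_mean, solve

constraint = with_mean(5.0)(lambda x: x)
x = solve(constraint, guess=[1., 2., 3.])   # a point with mean 5.0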
- sorting(ascending=True, outer=False, index=None)
impose sorting on the given function
- The function will use sorting, where:
ascending is bool, increasing with index if True and decreasing if False
outer is bool, applied to the outputs if True and to the inputs if False
if index is given, only apply at the given sequence defined by index
Examples
>>> @sorting(outer=True)
... def squared(x):
...     return [i*i for i in x]
...
>>> squared([5.34, -.121, -4.11, 9.01, 11.3, -16.4])
[0.014641, 16.892100000000003, 28.5156, 81.1801, 127.69000000000001, 268.96]
>>>
>>> @sorting()
... def squared(x):
...     return [i*i for i in x]
...
>>> squared([5.34, -.121, -4.11, 9.01, 11.3, -16.4])
[268.96, 16.892100000000003, 0.014641, 28.5156, 81.1801, 127.69000000000001]
>>>
>>> @sorting(index=(0,1,2))
... def squared(x):
...     return [i*i for i in x]
...
>>> squared([5.34, -.121, -4.11, 9.01, 11.3, -16.4])
[16.892100000000003, 0.014641, 28.5156, 81.1801, 127.69000000000001, 268.96]
- unique(seq, full=None)
replace the duplicate values with unique values in ‘full’
If full is a type (int or float), then unique values of the given type are selected from range(min(seq),max(seq)). If full is a dict of {‘min’:min, ‘max’:max}, then unique floats are selected from range(min,max). If full is a sequence (list or set), then unique values are selected from the given sequence.
Examples
>>> unique([1,2,3,1,2,4], range(11))
[1, 2, 3, 9, 8, 4]
>>> unique([1,2,3,1,2,9], range(11))
[1, 2, 3, 8, 5, 9]
>>> try:
...     unique([1,2,3,1,2,13], range(11))
...     raise Exception
... except ValueError:
...     pass
...
>>> unique([1,2,3,1,2,4], {'min':0, 'max':11})
[1, 2, 3, 4.175187820357143, 2.5407265707465716, 4]
>>> unique([1,2,3,1,2,4], {'min':0, 'max':11, 'type':int})
[1, 2, 3, 6, 8, 4]
>>> unique([1,2,3,1,2,4], float)
[1, 2, 3, 1.012375036824941, 3.9821250727509905, 4]
>>> unique([1,2,3,1,2,10], int)
[1, 2, 3, 9, 6, 10]
>>> try:
...     unique([1,2,3,1,2,4], int)
...     raise Exception
... except ValueError:
...     pass
...
- vectorize(constraint, axis=1)
vectorize a 1D constraint function x' = c(x) for 2D input and output
- Parameters:
constraint (function) – a function c, where x' = c(x) with x and x' lists of the same length
axis (int, default=1) – index of the axis of x to apply constraints; must be either 0 or 1
- Returns:
a function k, where x' = k(x) with x and x' 2D arrays of the same shape
Notes
Produces a transform function that is of the form required by sklearn.preprocessing.FunctionTransformer(func=transform), and takes a numpy array of shape (samples, features) as input.
Examples
>>> from mystic.constraints import (impose_bounds, integers,
...     with_mean, and_)
>>> cons = and_(impose_bounds([(0,5),(7,10)])(lambda x:x),
...     integers()(lambda x:x), with_mean(6.0)(lambda x:x))
>>> import numpy as np
>>> data = np.random.randn(6,4)
>>> c = vectorize(cons, axis=0)
>>> c(data)
array([[ 3,  9,  3, 10],
       [ 7,  8,  7,  4],
       [ 9,  3,  7,  7],
       [ 7,  3,  8,  7],
       [ 3,  5,  4,  4],
       [ 7,  8,  7,  4]])
>>> _.mean(axis=0)
array([6., 6., 6., 6.])
>>> c = vectorize(cons, axis=1)
>>> c(data)
array([[ 3,  3,  9,  9],
       [ 8,  3,  4,  9],
       [ 5, 10,  4,  5],
       [ 7,  8,  7,  2],
       [ 2,  4,  8, 10],
       [ 7, 10,  5,  2]])
>>> _.mean(axis=1)
array([6., 6., 6., 6., 6., 6.])
>>> from sklearn.preprocessing import FunctionTransformer
>>> k = FunctionTransformer(func=c)
>>> k.fit(data).transform(data).mean(axis=1)
array([6., 6., 6., 6., 6., 6.])
- with_constraint(ctype, *args, **kwds)
convert a set transformation to a constraints solver of the chosen type
transformation f(x) is a mapping between x and x’, where x’ = f(x). ctype is a mystic.coupler type [inner, outer, inner_proxy, outer_proxy].
Examples
>>> @with_constraint(inner, kwds={'target':5.0})
... def constraint(x, target):
...     return impose_mean(target, x)
...
>>> x = constraint([1,2,3,4,5])
>>> print(x)
[3.0, 4.0, 5.0, 6.0, 7.0]
>>> mean(x)
5.0
- with_mean(target)
bind a mean constraint to a given constraints function.
- Inputs:
target – the target mean
A constraints function takes an iterable x as input, returning a modified x. This function is an “outer” coupling of “impose_mean” onto another constraints function c(x), such that: x’ = impose_mean(target, c(x)).
Examples
>>> @with_mean(5.0)
... def constraint(x):
...     x[-1] = x[0]
...     return x
...
>>> x = constraint([1,2,3,4])
>>> print(x)
[4.25, 5.25, 6.25, 4.25]
>>> mean(x)
5.0
- with_penalty(ptype, *args, **kwds)
convert a condition to a penalty function of the chosen type
condition f(x) is satisfied when f(x) == 0.0 for equality constraints and f(x) <= 0.0 for inequality constraints. ptype is a mystic.penalty type.
Examples
>>> @with_penalty(quadratic_equality, kwds={'target':5.0})
... def penalty(x, target):
...     return mean(x) - target
...
>>> penalty([1,2,3,4,5])
400.0
>>> penalty([3,4,5,6,7])
7.8886090522101181e-29
- with_spread(target)
bind a range constraint to a given constraints function.
- Inputs:
target – the target range
A constraints function takes an iterable x as input, returning a modified x. This function is an “outer” coupling of “impose_spread” onto another constraints function c(x), such that: x’ = impose_spread(target, c(x)).
Examples
>>> @with_spread(10.0)
... def constraint(x):
...     return [i**2 for i in x]
...
>>> x = constraint([1,2,3,4])
>>> print(x)
[3.1666666666666665, 5.1666666666666661, 8.5, 13.166666666666666]
>>> spread(x)
10.0
- with_std(target)
bind a standard deviation constraint to a given constraints function.
- Inputs:
target – the target standard deviation
A constraints function takes an iterable x as input, returning a modified x. This function is an “outer” coupling of “impose_std” onto another constraints function c(x), such that: x’ = impose_std(target, c(x)).
Examples
>>> @with_std(1.0)
... def constraint(x):
...     x[-1] = x[0]
...     return x
...
>>> x = constraint([1,2,3])
>>> print(x)
[0.6262265521467858, 2.747546895706428, 0.6262265521467858]
>>> std(x)
0.99999999999999956
- with_variance(target)
bind a variance constraint to a given constraints function.
- Inputs:
target – the target variance
A constraints function takes an iterable x as input, returning a modified x. This function is an “outer” coupling of “impose_variance” onto another constraints function c(x), such that: x’ = impose_variance(target, c(x)).
Examples
>>> @with_variance(1.0)
... def constraint(x):
...     x[-1] = x[0]
...     return x
...
>>> x = constraint([1,2,3])
>>> print(x)
[0.6262265521467858, 2.747546895706428, 0.6262265521467858]
>>> variance(x)
0.99999999999999956
coupler module
Function Couplers
These methods can be used to couple two functions together, and represent some common patterns found in applying constraints and penalty methods.
For example, the “outer” method called on y = f(x), with outer=c(x), will convert y = f(x) to y’ = c(f(x)). Similarly, the “inner” method called on y = f(x), with inner=c(x), will convert y = f(x) to y’ = f(c(x)).
- additive(penalty=<function <lambda>>, args=None, kwds=None)
penalize a function with another function: y = f(x) to y’ = f(x) + p(x)
This is useful, for example, in penalizing a cost function when the constraints are violated; thus, satisfying the constraints will be preferred at every cost function evaluation.
Examples
>>> def squared(x):
...     return x**2
...
>>> # equivalent to: (x+1) + (x**2)
>>> @additive(squared)
... def constrain(x):
...     return x+1
...
>>> from numpy import array
>>> x = array([1,2,3,4,5])
>>> constrain(x)
array([ 3,  7, 13, 21, 31])
- additive_proxy(penalty=<function <lambda>>, args=None, kwds=None)
penalize a function with another function: y = f(x) to y’ = f(x) + p(x)
This is useful, for example, in penalizing a cost function when the constraints are violated; thus, satisfying the constraints will be preferred at every cost function evaluation.
This function does not preserve the decorated function’s signature, but passes args and kwds to the penalty function.
- and_(*penalties, **settings)
combine several penalties into a single penalty function by summation
- Inputs:
penalties – penalty functions
- Additional Inputs:
ptype – penalty function type [default: linear_equality]
args – arguments for the penalty function [default: ()]
kwds – keyword arguments for the penalty function [default: {}]
k – penalty multiplier [default: 1]
h – iterative multiplier [default: 5]
- NOTE: The defaults provide a linear combination of the individual penalties without any scaling. A different ptype (from ‘mystic.penalty’) will apply a nonlinear scaling to the combined penalty, while a different k will apply a linear scaling.
- NOTE: This function is also useful for combining constraints solvers into a single constraints solver; however, it cannot do so directly. Constraints solvers must first be converted to penalty functions (i.e. with ‘as_penalty’), then combined, and then converted back to a constraints solver (i.e. with ‘as_constraint’). The resulting constraints will likely be more expensive to evaluate and less accurate than writing the constraints solver from scratch.
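A hedged sketch of summing two penalties (the conditions are illustrative):
from mystic.penalty import quadratic_inequality
from mystic.coupler import and_

@quadratic_inequality(lambda x: x[0] - 1.0)   # prefers x[0] <= 1
def p1(x):
    return 0.0

@quadratic_inequality(lambda x: -x[1])        # prefers x[1] >= 0
def p2(x):
    return 0.0

p = and_(p1, p2)
p([0.5, 1.0])    # 0.0: both conditions are satisfied
p([2.0, -1.0])   # > 0.0: the violated penalties are summed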
- inner(inner=<function <lambda>>, args=None, kwds=None)
nest a function within another function: convert y = f(x) to y’ = f(c(x))
This is a useful function for nesting one constraint in another constraint. A constraints function takes an iterable x as input, returning a modified x. The “inner” coupler is utilized by mystic.solvers to bind constraints to a cost function; thus the constraints are imposed every cost function evaluation.
Examples
>>> def squared(x):
...     return x**2
...
>>> # equivalent to: ((x**2)+1)
>>> @inner(squared)
... def constrain(x):
...     return x+1
...
>>> from numpy import array
>>> x = array([1,2,3,4,5])
>>> constrain(x)
array([ 2,  5, 10, 17, 26])
- inner_proxy(inner=<function <lambda>>, args=None, kwds=None)
nest a function within another function: convert y = f(x) to y’ = f(c(x))
This is a useful function for nesting one constraint in another constraint. A constraints function takes an iterable x as input, returning a modified x.
This function applies the “inner” coupler pattern. However, it does not preserve the decorated function’s signature – it passes args and kwds to the inner function instead of the decorated function.
- not_(penalty, **settings)
invert the given penalty, so that the region where the penalty is valid is penalized
- Inputs:
penalty – a penalty function
- Additional Inputs:
ptype – penalty function type [default: linear_equality]
args – arguments for the penalty function [default: ()]
kwds – keyword arguments for the penalty function [default: {}]
k – penalty multiplier [default: 1]
h – iterative multiplier [default: 5]
- or_(*penalties, **settings)
create a single penalty that selects the minimum of several penalties
- Inputs:
penalties – penalty functions
- Additional Inputs:
ptype – penalty function type [default: linear_equality]
args – arguments for the penalty function [default: ()]
kwds – keyword arguments for the penalty function [default: {}]
k – penalty multiplier [default: 1]
h – iterative multiplier [default: 5]
- NOTE: The defaults provide a linear combination of the individual penalties without any scaling. A different ptype (from ‘mystic.penalty’) will apply a nonlinear scaling to the combined penalty, while a different k will apply a linear scaling.
- NOTE: This function is also useful for combining constraints solvers into a single constraints solver; however, it cannot do so directly. Constraints solvers must first be converted to penalty functions (i.e. with ‘as_penalty’), then combined, and then converted back to a constraints solver (i.e. with ‘as_constraint’). The resulting constraints will likely be more expensive to evaluate and less accurate than writing the constraints solver from scratch.
- outer(outer=<function <lambda>>, args=None, kwds=None)
wrap a function around another function: convert y = f(x) to y’ = c(f(x))
This is a useful function for nesting one constraint in another constraint. A constraints function takes an iterable x as input, returning a modified x.
Examples
>>> def squared(x):
...     return x**2
...
>>> # equivalent to: ((x+1)**2)
>>> @outer(squared)
... def constrain(x):
...     return x+1
...
>>> from numpy import array
>>> x = array([1,2,3,4,5])
>>> constrain(x)
array([ 4,  9, 16, 25, 36])
- outer_proxy(outer=<function <lambda>>, args=None, kwds=None)
wrap a function around another function: convert y = f(x) to y’ = c(f(x))
This is a useful function for nesting one constraint in another constraint. A constraints function takes an iterable x as input, returning a modified x.
This function applies the “outer” coupler pattern. However, it does not preserve the decorated function’s signature – it passes args and kwds to the outer function instead of the decorated function.
differential_evolution module
Solvers
This module contains a collection of optimization routines based on Storn and Price’s differential evolution algorithm. The core solver algorithm was adapted from Phillips’s DETest.py. An alternate solver is provided that follows the logic in Price, Storn, and Lampinen – in that both a current generation and a trial generation are maintained, and all vectors for creating difference vectors and mutations draw from the current generation, which remains invariant until the end of the iteration.
A minimal interface that mimics a scipy.optimize interface has also been implemented, and functionality from the mystic solver API has been added with reasonable defaults.
- Minimal function interface to optimization routines::
diffev – Differential Evolution (DE) solver
diffev2 – Price & Storn’s Differential Evolution solver
- The corresponding solvers built on mystic’s AbstractSolver are::
DifferentialEvolutionSolver – a DE solver
DifferentialEvolutionSolver2 – Storn & Price’s DE solver
- Mystic solver behavior activated in diffev and diffev2::
EvaluationMonitor = Monitor()
StepMonitor = Monitor()
strategy = Best1Bin
- termination = ChangeOverGeneration(ftol,gtol), if gtol provided
‘’ = VTRChangeOverGenerations(ftol), otherwise
Storn & Price’s DE Solver has also been implemented to use the “map” interface. Mystic enables the user to override the standard python map function with their own ‘map’ function, or one of the maps provided by the pathos package (see http://dev.danse.us/trac/pathos) for distributed and high-performance computing.
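A hedged sketch of overriding the map with a process-based pool from pathos (the pool choice, termination, and settings are illustrative):
from pathos.pools import ProcessPool
from mystic.solvers import DifferentialEvolutionSolver2
from mystic.termination import VTR
from mystic.models import rosen

solver = DifferentialEvolutionSolver2(3, 40)
solver.SetRandomInitialPoints([-5.]*3, [5.]*3)
solver.SetMapper(ProcessPool(4).map)   # replace the builtin map
solver.SetTermination(VTR(1e-7))
solver.Solve(rosen)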
Usage
Practical advice for how to configure the Differential Evolution Solver for your own objective function can be found on R. Storn’s web page (http://www.icsi.berkeley.edu/~storn/code.html), and is reproduced here:
First try the following classical settings for the solver configuration:
Choose a crossover strategy (e.g. Rand1Bin), set the number of parents
NP to 10 times the number of parameters, select ScalingFactor=0.8, and
CrossProbability=0.9.
It has been found recently that selecting ScalingFactor from the interval
[0.5, 1.0] randomly for each generation or for each difference vector,
a technique called dither, improves convergence behaviour significantly,
especially for noisy objective functions.
It has also been found that setting CrossProbability to a low value,
e.g. CrossProbability=0.2 helps optimizing separable functions since
it fosters the search along the coordinate axes. On the contrary,
this choice is not effective if parameter dependence is encountered,
something which is frequently occurring in real-world optimization
problems rather than artificial test functions. So for parameter
dependence the choice of CrossProbability=0.9 is more appropriate.
Another interesting empirical finding is that raising NP above, say, 40
does not substantially improve the convergence, independent of the
number of parameters. It is worthwhile to experiment with these suggestions.
Make sure that you initialize your parameter vectors by exploiting
their full numerical range, i.e. if a parameter is allowed to exhibit
values in the range [-100, 100] it's a good idea to pick the initial
values from this range instead of unnecessarily restricting diversity.
Keep in mind that different problems often require different settings
for NP, ScalingFactor and CrossProbability (see Ref 1, 2). If you
experience misconvergence, you typically can increase the value for NP,
but often you only have to adjust ScalingFactor to be a little lower or
higher than 0.8. If you increase NP and simultaneously lower ScalingFactor
a little, convergence is more likely to occur but generally takes longer,
i.e. DE is getting more robust (a convergence speed/robustness tradeoff).
If you still get misconvergence you might want to instead try a different
crossover strategy. The most commonly used are Rand1Bin, Rand1Exp,
Best1Bin, and Best1Exp. The crossover strategy is not so important a
choice, although K. Price claims that binomial (Bin) is never worse than
exponential (Exp).
In case of continued misconvergence, check the choice of objective function.
There might be a better one to describe your problem. Any knowledge that
you have about the problem should be worked into the objective function.
A good objective function can make all the difference.
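A hedged sketch applying the classical settings above with diffev2 (NP at 10 times the number of parameters, ScalingFactor=0.8, CrossProbability=0.9; the objective and range are illustrative):
from mystic.solvers import diffev2
from mystic.models import rosen

ndim = 3
result = diffev2(rosen, x0=[(-100., 100.)]*ndim, npop=10*ndim,
                 cross=0.9, scale=0.8, full_output=True, disp=False)
xopt, fopt = result[0], result[1]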
See mystic.examples.test_rosenbrock for an example of using DifferentialEvolutionSolver. DifferentialEvolutionSolver2 has the identical interface and usage.
All solvers included in this module provide the standard signal handling. For more information, see mystic.mystic.abstract_solver.
References
Storn, R. and Price, K. Differential Evolution - A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces. Journal of Global Optimization 11: 341-359, 1997.
Price, K., Storn, R., and Lampinen, J. - Differential Evolution, A Practical Approach to Global Optimization. Springer, 1st Edition, 2005.
- class DifferentialEvolutionSolver(dim, NP=4)
Bases:
AbstractSolver
Differential Evolution optimization.
- Takes two initial inputs:
dim – dimensionality of the problem
NP – size of the trial solution population. [requires: NP >= 4]
All important class members are inherited from AbstractSolver.
- SetConstraints(constraints)
apply a constraints function to the optimization
- Parameters:
constraints (function) – function of the form: xk' = constraints(xk), where xk is the current parameter vector. Ideally, this function is constructed so the parameter vector it passes to the cost function will satisfy the desired (i.e. encoded) constraints.
- Solve(cost=None, termination=None, ExtraArgs=None, **kwds)
Minimize a function using differential evolution.
Uses a differential evolution algorithm to find the minimum of a function of one or more variables.
- Parameters:
cost (function, default=None) – function to be minimized: y = cost(x).
termination (termination, default=None) – termination conditions.
ExtraArgs (tuple, default=None) – extra arguments for cost.
strategy (strategy, default=Best1Bin) – the mutation strategy for generating new trial solutions.
CrossProbability (float, default=0.9) – the probability of cross-parameter mutations.
ScalingFactor (float, default=0.8) – multiplier for mutations on the trial solution.
sigint_callback (function, default=None) – signal handler callback function.
callback (function, default=None) – function called after each iteration. The interface is callback(xk), with xk the current parameter vector.
disp (bool, default=False) – if True, print convergence messages.
- Returns:
None
- UpdateGenealogyRecords(id, newchild)
create an in-memory log of the genealogy of the population
- _Step(cost=None, ExtraArgs=None, **kwds)
perform a single optimization iteration
- Parameters:
cost (func, default=None) – objective, of form y = cost(x, *ExtraArgs), where x is a candidate solution vector
ExtraArgs (tuple, default=None) – tuple of positional arguments required to evaluate the objective
- Returns:
None
Notes
This method accepts additional kwds that are specific to the current solver, as detailed in the _process_inputs method.
- _decorate_objective(cost, ExtraArgs=None)
decorate the cost function with bounds, penalties, monitors, etc
- Parameters:
cost (func) – objective, of form y = cost(x, *ExtraArgs), where x is a candidate solution vector
ExtraArgs (tuple, default=None) – tuple of positional arguments required to evaluate the objective
- Returns:
decorated objective function
- _process_inputs(kwds)
process and activate input settings
- Parameters:
callback (function, default=None) – function called after each iteration. The interface is callback(xk), with xk the current parameter vector.
disp (bool, default=False) – if True, print convergence messages.
EvaluationMonitor (monitor, default=None) – a monitor instance to capture each evaluation of cost.
StepMonitor (monitor, default=None) – a monitor instance to capture each iteration’s best results.
penalty (penalty, default=None) – function of the form: y' = penalty(xk), with y = cost(xk) + y', where xk is the current parameter vector.
constraints (constraint, default=None) – function of the form: xk' = constraints(xk), where xk is the current parameter vector.
strategy (strategy, default=Best1Bin) – the mutation strategy for generating new trial solutions.
CrossProbability (float, default=0.9) – the probability of cross-parameter mutations.
ScalingFactor (float, default=0.8) – multiplier for mutations on the trial solution.
Notes
callback and disp are ‘sticky’, in that once they are given, they remain set until they are explicitly changed. Conversely, the other inputs are not sticky, and are thus set for one-time use.
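A hedged sketch of the class interface (the termination and settings are illustrative):
from mystic.solvers import DifferentialEvolutionSolver
from mystic.termination import VTR
from mystic.models import rosen

solver = DifferentialEvolutionSolver(3, 30)   # dim=3, NP=30
solver.SetRandomInitialPoints([-5.]*3, [5.]*3)
solver.SetTermination(VTR(1e-7))
solver.Solve(rosen, CrossProbability=0.9, ScalingFactor=0.8)
print(solver.bestSolution, solver.bestEnergy)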
- class DifferentialEvolutionSolver2(dim, NP=4)
Bases:
AbstractMapSolver
Differential Evolution optimization, using Storn and Price’s algorithm.
- Alternate implementation:
utilizes a map-reduce interface, extensible to parallel computing
both a current and a next generation are kept, while the current generation is invariant during the main DE logic
- Takes two initial inputs:
dim – dimensionality of the problem
NP – size of the trial solution population. [requires: NP >= 4]
- SetConstraints(constraints)
apply a constraints function to the optimization
- Parameters:
constraints (function) – function of the form: xk' = constraints(xk), where xk is the current parameter vector. Ideally, this function is constructed so the parameter vector it passes to the cost function will satisfy the desired (i.e. encoded) constraints.
- Solve(cost=None, termination=None, ExtraArgs=None, **kwds)
Minimize a function using differential evolution.
Uses a differential evolution algorithm to find the minimum of a function of one or more variables. This implementation holds the current generation invariant until the end of each iteration.
- Parameters:
cost (function, default=None) – function to be minimized: y = cost(x).
termination (termination, default=None) – termination conditions.
ExtraArgs (tuple, default=None) – extra arguments for cost.
strategy (strategy, default=Best1Bin) – the mutation strategy for generating new trial solutions.
CrossProbability (float, default=0.9) – the probability of cross-parameter mutations.
ScalingFactor (float, default=0.8) – multiplier for mutations on the trial solution.
sigint_callback (function, default=None) – signal handler callback function.
callback (function, default=None) – function called after each iteration. The interface is callback(xk), with xk the current parameter vector.
disp (bool, default=False) – if True, print convergence messages.
- Returns:
None
- UpdateGenealogyRecords(id, newchild)
create an in-memory log of the genealogy of the population
- _Step(cost=None, ExtraArgs=None, **kwds)
perform a single optimization iteration
- Parameters:
cost (func, default=None) – objective, of form y = cost(x, *ExtraArgs), where x is a candidate solution vector
ExtraArgs (tuple, default=None) – tuple of positional arguments required to evaluate the objective
- Returns:
None
Notes
This method accepts additional kwds that are specific to the current solver, as detailed in the _process_inputs method.
- _decorate_objective(cost, ExtraArgs=None)
decorate the cost function with bounds, penalties, monitors, etc
- Parameters:
cost (func) – objective, of form y = cost(x, *ExtraArgs), where x is a candidate solution vector
ExtraArgs (tuple, default=None) – tuple of positional arguments required to evaluate the objective
- Returns:
decorated objective function
- _process_inputs(kwds)
process and activate input settings
- Parameters:
callback (function, default=None) – function called after each iteration. The interface is callback(xk), with xk the current parameter vector.
disp (bool, default=False) – if True, print convergence messages.
EvaluationMonitor (monitor, default=None) – a monitor instance to capture each evaluation of cost.
StepMonitor (monitor, default=None) – a monitor instance to capture each iteration’s best results.
penalty (penalty, default=None) – function of the form: y' = penalty(xk), with y = cost(xk) + y', where xk is the current parameter vector.
constraints (constraint, default=None) – function of the form: xk' = constraints(xk), where xk is the current parameter vector.
strategy (strategy, default=Best1Bin) – the mutation strategy for generating new trial solutions.
CrossProbability (float, default=0.9) – the probability of cross-parameter mutations.
ScalingFactor (float, default=0.8) – multiplier for mutations on the trial solution.
Notes
callback and disp are ‘sticky’, in that once they are given, they remain set until they are explicitly changed. Conversely, the other inputs are not sticky, and are thus set for one-time use.
- diffev(cost, x0, npop=4, args=(), bounds=None, ftol=0.005, gtol=None, maxiter=None, maxfun=None, cross=0.9, scale=0.8, full_output=0, disp=1, retall=0, callback=None, **kwds)
Minimize a function using differential evolution.
Uses a differential evolution algorithm to find the minimum of a function of one or more variables. Mimics a scipy.optimize style interface.
- Parameters:
cost (function) – the function or method to be minimized: y = cost(x).
x0 (ndarray) – the initial guess parameter vector x if the desired start is a single point, otherwise a list of (min,max) bounds that define a region from which random initial points are drawn.
npop (int, default=4) – size of the trial solution population.
args (tuple, default=()) – extra arguments for cost.
bounds (list(tuple), default=None) – list of pairs of bounds (min,max), one for each parameter.
ftol (float, default=5e-3) – acceptable relative error in cost(xopt) for convergence.
gtol (int, default=None) – maximum iterations to run without improvement.
maxiter (int, default=None) – the maximum number of iterations to perform.
maxfun (int, default=None) – the maximum number of function evaluations.
cross (float, default=0.9) – the probability of cross-parameter mutations.
scale (float, default=0.8) – multiplier for mutations on the trial solution.
full_output (bool, default=False) – True if fval and warnflag are desired.
disp (bool, default=True) – if True, print convergence messages.
retall (bool, default=False) – True if allvecs is desired.
callback (function, default=None) – function called after each iteration. The interface is callback(xk), with xk the current parameter vector.
handler (bool, default=False) – if True, enable handling interrupt signals.
id (int, default=None) – the id of the solver used in logging.
strategy (strategy, default=None) – override the default mutation strategy.
itermon (monitor, default=None) – override the default GenerationMonitor.
evalmon (monitor, default=None) – override the default EvaluationMonitor.
constraints (function, default=None) – function xk' = constraints(xk), where xk is the current parameter vector, and xk' is a parameter vector that satisfies the encoded constraints.
penalty (function, default=None) – function y' = penalty(xk), where xk is the current parameter vector, and y' == 0 when the encoded constraints are satisfied (and y' > 0 otherwise).
tightrange (bool, default=None) – impose bounds and constraints concurrently.
cliprange (bool, default=None) – bounding constraints clip exterior values.
map (function, default=None) – a (parallel) map instance y = map(f, x).
- Returns:
(xopt, {fopt, iter, funcalls, warnflag}, {allvecs})
Notes
xopt (ndarray): the minimizer of the cost function
fopt (float): value of cost function at minimum: fopt = cost(xopt)
iter (int): number of iterations
funcalls (int): number of function calls
- warnflag (int): warning flag:
1 : Maximum number of function evaluations
2 : Maximum number of iterations
allvecs (list): a list of solutions at each iteration
- diffev2(cost, x0, npop=4, args=(), bounds=None, ftol=0.005, gtol=None, maxiter=None, maxfun=None, cross=0.9, scale=0.8, full_output=0, disp=1, retall=0, callback=None, **kwds)
Minimize a function using Storn & Price’s differential evolution.
Uses Storn & Price’s differential evolution algorithm to find the minimum of a function of one or more variables. Mimics a scipy.optimize style interface.
- Parameters:
cost (function) – the function or method to be minimized: y = cost(x).
x0 (ndarray) – the initial guess parameter vector x if the desired start is a single point, otherwise a list of (min,max) bounds that define a region from which random initial points are drawn.
npop (int, default=4) – size of the trial solution population.
args (tuple, default=()) – extra arguments for cost.
bounds (list(tuple), default=None) – list of pairs of bounds (min,max), one for each parameter.
ftol (float, default=5e-3) – acceptable relative error in cost(xopt) for convergence.
gtol (int, default=None) – maximum iterations to run without improvement.
maxiter (int, default=None) – the maximum number of iterations to perform.
maxfun (int, default=None) – the maximum number of function evaluations.
cross (float, default=0.9) – the probability of cross-parameter mutations.
scale (float, default=0.8) – multiplier for mutations on the trial solution.
full_output (bool, default=False) – True if fval and warnflag are desired.
disp (bool, default=True) – if True, print convergence messages.
retall (bool, default=False) – True if allvecs is desired.
callback (function, default=None) – function called after each iteration. The interface is callback(xk), with xk the current parameter vector.
handler (bool, default=False) – if True, enable handling interrupt signals.
id (int, default=None) – the id of the solver used in logging.
strategy (strategy, default=None) – override the default mutation strategy.
itermon (monitor, default=None) – override the default GenerationMonitor.
evalmon (monitor, default=None) – override the default EvaluationMonitor.
constraints (function, default=None) – function xk' = constraints(xk), where xk is the current parameter vector, and xk' is a parameter vector that satisfies the encoded constraints.
penalty (function, default=None) – function y' = penalty(xk), where xk is the current parameter vector, and y' == 0 when the encoded constraints are satisfied (and y' > 0 otherwise).
tightrange (bool, default=None) – impose bounds and constraints concurrently.
cliprange (bool, default=None) – bounding constraints clip exterior values.
map (function, default=None) – a (parallel) map instance y = map(f, x).
- Returns:
(xopt, {fopt, iter, funcalls, warnflag}, {allvecs})
Notes
xopt (ndarray): the minimizer of the cost function
fopt (float): value of cost function at minimum: fopt = cost(xopt)
iter (int): number of iterations
funcalls (int): number of function calls
- warnflag (int): warning flag:
1 : Maximum number of function evaluations
2 : Maximum number of iterations
allvecs (list): a list of solutions at each iteration
ensemble module
Solvers
This module contains a collection of optimization routines that use “map” to distribute several optimizer instances over parameter space. Each solver accepts an imported solver object as the “nested” solver, which becomes the target of the callable map instance.
- The set of solvers built on mystic’s AbstractEnsembleSolver are:
LatticeSolver – start from the centers of N grid points
BuckshotSolver – start from N random points in parameter space
SparsitySolver – start from N points sampled in sparse regions of space
Usage
See mystic.examples.buckshot_example06 for an example of using BuckshotSolver. See mystic.examples.lattice_example06 for an example of using LatticeSolver.
All solvers included in this module provide the standard signal handling. For more information, see mystic.abstract_solver.
- class BuckshotSolver(dim, npts=8)
Bases:
AbstractEnsembleSolver
parallel mapped optimization starting from N uniformly sampled random points
- Takes two initial inputs:
dim – dimensionality of the problem
npts – number of parallel solver instances
All important class members are inherited from AbstractEnsembleSolver.
- _InitialPoints()
Generate a grid of starting points for the ensemble of optimizers
- class LatticeSolver(dim, nbins=8)
Bases:
AbstractEnsembleSolver
parallel mapped optimization starting from the centers of N grid points
- Takes two initial inputs:
dim – dimensionality of the problem
nbins – tuple of number of bins in each dimension
All important class members are inherited from AbstractEnsembleSolver.
- _InitialPoints()
Generate a grid of starting points for the ensemble of optimizers
- class SparsitySolver(dim, npts=8, rtol=None)
Bases:
AbstractEnsembleSolver
parallel mapped optimization starting from N points sampled from sparse regions
- Takes three initial inputs:
dim – dimensionality of the problem
npts – number of parallel solver instances
rtol – size of radial tolerance for sparsity
All important class members are inherited from AbstractEnsembleSolver.
- _InitialPoints()
Generate a grid of starting points for the ensemble of optimizers
- buckshot(cost, ndim, npts=8, args=(), bounds=None, ftol=0.0001, maxiter=None, maxfun=None, full_output=0, disp=1, retall=0, callback=None, **kwds)
Minimize a function using the buckshot ensemble solver.
Uses a buckshot ensemble algorithm to find the minimum of a function of one or more variables. Mimics the scipy.optimize.fmin interface. Starts npts solver instances at random points in parameter space.
- Parameters:
cost (func) – the function or method to be minimized: y = cost(x).
ndim (int) – dimensionality of the problem.
npts (int, default=8) – number of solver instances.
args (tuple, default=()) – extra arguments for cost.
bounds (list(tuple), default=None) – list of pairs of bounds (min,max), one for each parameter.
ftol (float, default=1e-4) – acceptable relative error in cost(xopt) for convergence.
gtol (int, default=10) – maximum iterations to run without improvement.
maxiter (int, default=None) – the maximum number of iterations to perform.
maxfun (int, default=None) – the maximum number of function evaluations.
full_output (bool, default=False) – True if fval and warnflag are desired.
disp (bool, default=True) – if True, print convergence messages.
retall (bool, default=False) – True if allvecs is desired.
callback (func, default=None) – function to call after each iteration. The interface is callback(xk), with xk the current parameter vector.
solver (solver, default=None) – override the default nested Solver instance.
handler (bool, default=False) – if True, enable handling interrupt signals.
id (int, default=None) – the id of the solver used in logging.
itermon (monitor, default=None) – override the default GenerationMonitor.
evalmon (monitor, default=None) – override the default EvaluationMonitor.
constraints (func, default=None) – a function xk' = constraints(xk), where xk is the current parameter vector, and xk' is a parameter vector that satisfies the encoded constraints.
penalty (func, default=None) – a function y' = penalty(xk), where xk is the current parameter vector, and y' == 0 when the encoded constraints are satisfied (and y' > 0 otherwise).
tightrange (bool, default=None) – impose bounds and constraints concurrently.
cliprange (bool, default=None) – bounding constraints clip exterior values.
map (func, default=None) – a (parallel) map instance y = map(f, x).
dist (mystic.math.Distribution, default=None) – generate randomness in ensemble starting position using the given distribution.
step (bool, default=False) – if True, enable Step within the ensemble.
- Returns:
(xopt, {fopt, iter, funcalls, warnflag, allfuncalls}, {allvecs})
Notes
xopt (ndarray): the minimizer of the cost function
fopt (float): value of cost function at minimum: fopt = cost(xopt)
iter (int): number of iterations
funcalls (int): number of function calls
warnflag (int): warning flag:
1 : Maximum number of function evaluations
2 : Maximum number of iterations
allfuncalls (int): total function calls (for all solver instances)
allvecs (list): a list of solutions at each iteration
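For example, a minimal sketch of the function interface (results vary with the random start points):
>>> from mystic.solvers import buckshot
>>> from mystic.models import rosen
>>> # launch 8 solver instances from random points within the given bounds
>>> xopt = buckshot(rosen, 3, npts=8, bounds=[(0,2)]*3, disp=0)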
- lattice(cost, ndim, nbins=8, args=(), bounds=None, ftol=0.0001, maxiter=None, maxfun=None, full_output=0, disp=1, retall=0, callback=None, **kwds)
Minimize a function using the lattice ensemble solver.
Uses a lattice ensemble algorithm to find the minimum of a function of one or more variables. Mimics the scipy.optimize.fmin interface. Starts N solver instances at regular intervals in parameter space, determined by nbins (N = numpy.prod(nbins); len(nbins) == ndim).
- Parameters:
cost (func) – the function or method to be minimized: y = cost(x).
ndim (int) – dimensionality of the problem.
nbins (tuple(int), default=8) – total bins, or # of bins in each dimension.
args (tuple, default=()) – extra arguments for cost.
bounds (list(tuple), default=None) – list of pairs of bounds (min,max), one for each parameter.
ftol (float, default=1e-4) – acceptable relative error in cost(xopt) for convergence.
gtol (int, default=10) – maximum iterations to run without improvement.
maxiter (int, default=None) – the maximum number of iterations to perform.
maxfun (int, default=None) – the maximum number of function evaluations.
full_output (bool, default=False) – True if fval and warnflag are desired.
disp (bool, default=True) – if True, print convergence messages.
retall (bool, default=False) – True if allvecs is desired.
callback (func, default=None) – function to call after each iteration. The interface is callback(xk), with xk the current parameter vector.
solver (solver, default=None) – override the default nested Solver instance.
handler (bool, default=False) – if True, enable handling interrupt signals.
id (int, default=None) – the id of the solver used in logging.
itermon (monitor, default=None) – override the default GenerationMonitor.
evalmon (monitor, default=None) – override the default EvaluationMonitor.
constraints (func, default=None) – a function xk' = constraints(xk), where xk is the current parameter vector, and xk' is a parameter vector that satisfies the encoded constraints.
penalty (func, default=None) – a function y' = penalty(xk), where xk is the current parameter vector, and y' == 0 when the encoded constraints are satisfied (and y' > 0 otherwise).
tightrange (bool, default=None) – impose bounds and constraints concurrently.
cliprange (bool, default=None) – bounding constraints clip exterior values.
map (func, default=None) – a (parallel) map instance y = map(f, x).
dist (mystic.math.Distribution, default=None) – generate randomness in ensemble starting position using the given distribution.
step (bool, default=False) – if True, enable Step within the ensemble.
- Returns:
(xopt, {fopt, iter, funcalls, warnflag, allfuncalls}, {allvecs})
Notes
xopt (ndarray): the minimizer of the cost function
fopt (float): value of cost function at minimum: fopt = cost(xopt)
iter (int): number of iterations
funcalls (int): number of function calls
warnflag (int): warning flag:
1 : Maximum number of function evaluations
2 : Maximum number of iterations
allfuncalls (int): total function calls (for all solver instances)
allvecs (list): a list of solutions at each iteration
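For example, a minimal sketch using a per-dimension grid:
>>> from mystic.solvers import lattice
>>> from mystic.models import rosen
>>> # launch 2*2*2 = 8 solver instances from the centers of a regular grid
>>> xopt = lattice(rosen, 3, nbins=[2,2,2], bounds=[(0,2)]*3, disp=0)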
- sparsity(cost, ndim, npts=8, args=(), bounds=None, ftol=0.0001, maxiter=None, maxfun=None, full_output=0, disp=1, retall=0, callback=None, **kwds)
Minimize a function using the sparsity ensemble solver.
Uses a sparsity ensemble algorithm to find the minimum of a function of one or more variables. Mimics the scipy.optimize.fmin interface. Starts npts solver instances at points in parameter space where existing points are sparse.
- Parameters:
cost (func) – the function or method to be minimized: y = cost(x).
ndim (int) – dimensionality of the problem.
npts (int, default=8) – number of solver instances.
args (tuple, default=()) – extra arguments for cost.
bounds (list(tuple), default=None) – list of pairs of bounds (min,max), one for each parameter.
ftol (float, default=1e-4) – acceptable relative error in cost(xopt) for convergence.
gtol (int, default=10) – maximum iterations to run without improvement.
rtol (float, default=None) – minimum acceptable distance from other points.
maxiter (int, default=None) – the maximum number of iterations to perform.
maxfun (int, default=None) – the maximum number of function evaluations.
full_output (bool, default=False) – True if fval and warnflag are desired.
disp (bool, default=True) – if True, print convergence messages.
retall (bool, default=False) – True if allvecs is desired.
callback (func, default=None) – function to call after each iteration. The interface is callback(xk), with xk the current parameter vector.
solver (solver, default=None) – override the default nested Solver instance.
handler (bool, default=False) – if True, enable handling interrupt signals.
id (int, default=None) – the id of the solver used in logging.
itermon (monitor, default=None) – override the default GenerationMonitor.
evalmon (monitor, default=None) – override the default EvaluationMonitor.
constraints (func, default=None) – a function xk' = constraints(xk), where xk is the current parameter vector, and xk' is a parameter vector that satisfies the encoded constraints.
penalty (func, default=None) – a function y' = penalty(xk), where xk is the current parameter vector, and y' == 0 when the encoded constraints are satisfied (and y' > 0 otherwise).
tightrange (bool, default=None) – impose bounds and constraints concurrently.
cliprange (bool, default=None) – bounding constraints clip exterior values.
map (func, default=None) – a (parallel) map instance y = map(f, x).
dist (mystic.math.Distribution, default=None) – generate randomness in ensemble starting position using the given distribution.
step (bool, default=False) – if True, enable Step within the ensemble.
- Returns:
(xopt, {fopt, iter, funcalls, warnflag, allfuncalls}, {allvecs})
Notes
xopt (ndarray): the minimizer of the cost function
fopt (float): value of cost function at minimum: fopt = cost(xopt)
iter (int): number of iterations
funcalls (int): number of function calls
warnflag (int): warning flag:
1 : Maximum number of function evaluations
2 : Maximum number of iterations
allfuncalls (int): total function calls (for all solver instances)
allvecs (list): a list of solutions at each iteration
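For example, a minimal sketch:
>>> from mystic.solvers import sparsity
>>> from mystic.models import rosen
>>> # launch 8 solver instances from sparsely-populated regions of the box
>>> xopt = sparsity(rosen, 3, npts=8, bounds=[(0,2)]*3, disp=0)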
filters module
Input/output ‘filters’
- component(n, multiplier=1.0)
component filter, F, where F(x) yields x[n]
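For example, a minimal sketch (assuming multiplier scales the extracted component):
>>> F = component(1, multiplier=2.0)
>>> F([3, 4, 5])  # 2.0 * x[1] = 8.0, assuming multiplier scales x[n]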
- generate_filter(mask)
generate a data filter from a data masking function
mask: masking function (built with generate_mask) or a boolean mask
returns a function that filters (x,y) or a monitor, based on the given mask: x’,y’ = filter(x,y), where filter removes values where the mask is False
Examples
>>> from mystic.monitors import Monitor
>>> from mystic.constraints import impose_bounds, integers
>>> mon = Monitor()
>>> mon([0.0,0.5,1.0],2)
>>> mon([2.0,3.0,4.0],3)
>>> mon([4.0,5.0,6.0],4)
>>> mon([5.0,5.5,6.5],6)
>>>
>>> @impose_bounds((0,5))
... def inputs(x):
...   return x
...
>>> m = generate_filter(generate_mask(inputs))(mon)
>>> m._x
[[0.0, 0.5, 1.0], [2.0, 3.0, 4.0]]
>>> m._y
[2, 3]
>>>
>>> @integers()
... def identity(x):
...   return x
...
>>> m = generate_filter(generate_mask(identity))(mon)
>>> m._x
[[2.0, 3.0, 4.0], [4.0, 5.0, 6.0]]
>>> m._y
[3, 4]
- generate_mask(cx=None, cy=None)
generate a data mask, based on constraints for x and/or y
cx: mystic.constraint function where x’ = cx(x) and x is a parameter array
cy: mystic.constraint function where [y’] = cy([y]) and y is a cost array
returns a masking function for (x,y) data or monitors, where list[bool] = mask(x,y), with True where the constraints are satisfied
Examples
>>> from mystic.monitors import Monitor
>>> from mystic.constraints import impose_bounds, integers
>>> mon = Monitor()
>>> mon([0.0,0.5,1.0],2)
>>> mon([2.0,3.0,4.0],3)
>>> mon([4.0,5.0,6.0],4)
>>> mon([5.0,5.5,6.5],6)
>>>
>>> @impose_bounds((0,5))
... def inputs(x):
...   return x
...
>>> generate_mask(inputs)(mon)
[True, True, False, False]
>>> generate_mask(y=inputs)(mon)
[True, True, True, False]
>>>
>>> @integers()
... def identity(x):
...   return x
...
>>> generate_mask(identity)(mon)
[False, True, True, False]
>>> generate_mask(y=identity)(mon)
[True, True, True, True]
- identity(x)
identity filter, F, where F(x) yields x
- null_check(params, evalpts, *args)
null validity check
forward_model module
This module contains classes that aid in constructing cost functions. Cost functions can easily be created by hand; however, mystic also provides an automated method that allows the dynamic wrapping of forward models into cost function objects.
Usage
The basic usage pattern for a cost factory is to generate a cost function from a set of data points and a corresponding set of evaluation points. The cost factory requires a “model factory”, which is just a generator of model function instances from a list of coefficients. The following example uses numpy.poly1d, which provides a factory for generating polynomials. An expanded version of the following can be found in mystic.examples.example12.
>>> # get a model factory
>>> import numpy as np
>>> FunctionFactory = np.poly1d
>>>
>>> # generate some evaluation points
>>> xpts = 0.1 * np.arange(-50.,51.)
>>>
>>> # we don't have real data, so generate fake data from target and model
>>> target = [2.,-5.,3.]
>>> ydata = FunctionFactory(target)(xpts)
>>> noise = np.random.normal(0,1,size=len(ydata))
>>> ydata = ydata + noise
>>>
>>> # get a cost factory
>>> from mystic.forward_model import CostFactory
>>> C = CostFactory()
>>>
>>> # generate a cost function for the model factory
>>> metric = lambda x: np.sum(x*x)
>>> C.addModel(FunctionFactory, inputs=len(target))
>>> cost = C.getCostFunction(evalpts=xpts, observations=ydata,
... sigma=1.0, metric=metric)
>>>
>>> # pass the cost function to the optimizer
>>> from mystic.solvers import fmin_powell
>>> initial_guess = [1.,-2.,1.]
>>> solution = fmin_powell(cost, initial_guess)
>>> print(solution)
[ 2.00495233 -5.0126248 2.72873734]
In general, a user will be required to write their own model factory. See the examples contained in mystic.models for more information.
The CostFactory can be used to couple models together into a single cost function. For an example, see mystic.examples.forward_model.
- class CostFactory
Bases:
object
A cost function generator.
CostFactory builds a list of forward model factories, and maintains a list of associated model names and number of inputs. Can be used to combine several models into a single cost function.
Takes no initial inputs.
- __repr__()
Return repr(self).
- addModel(model, inputs, name=None, outputFilter=<function identity>, inputChecker=<function null_check>)
Adds a forward model factory to the cost factory.
- Inputs:
model – a callable function factory object
inputs – number of input arguments to model
name – a string representing the model name
Example
>>> import numpy as np
>>> C = CostFactory()
>>> C.addModel(np.poly, inputs=3)
- getCostFunction(evalpts, observations, sigma=None, metric=<function CostFactory.<lambda>>)
Get a cost function that allows simultaneous evaluation of all forward models for the same set of evaluation points and observation points.
- Inputs:
evalpts – a list of evaluation points
observations – a list of data points
sigma – a scaling factor applied to the raw cost
metric – the cost metric object
The cost metric should be a function of one parameter (possibly an array) that returns a scalar. The default is L2. When called, the “misfit” will be passed in.
NOTE: Input parameters WILL go through filters registered as inputCheckers.
Example
>>> import numpy as np
>>> C = CostFactory()
>>> C.addModel(np.poly, inputs=3)
>>> x = np.array([-2., -1., 0., 1., 2.])
>>> y = np.array([-4., -2., 0., 2., 4.])
>>> F = C.getCostFunction(x, y, metric=lambda x: np.sum(x))
>>> F([1,0,0])
0.0
>>> F([2,0,0])
10.0
>>> F = C.getCostFunction(x, y)
>>> F([2,0,0])
34.0
- getCostFunctionSlow(evalpts, observations)
Get a cost function that allows simultaneous evaluation of all forward models for the same set of evaluation points and observation points.
- Inputs:
evalpts – a list of evaluation points
observations – a list of data points
Notes
The cost metric is hard-wired to be the sum of the real part of |x|^2, where x is the VectorCostFunction for a given set of parameters. Input parameters do NOT go through filters registered as inputCheckers.
Examples
>>> import numpy as np
>>> C = CostFactory()
>>> C.addModel(np.poly, inputs=3)
>>> x = np.array([-2., -1., 0., 1., 2.])
>>> y = np.array([-4., -2., 0., 2., 4.])
>>> F = C.getCostFunctionSlow(x, y)
>>> F([1,0,0])
0.0
>>> F([2,0,0])
100.0
- getForwardEvaluator(evalpts)
Get a model factory that allows simultaneous evaluation of all forward models for the same set of evaluation points.
- Inputs:
evalpts – a list of evaluation points
Example
>>> import numpy as np
>>> C = CostFactory()
>>> C.addModel(np.poly, inputs=3)
>>> F = C.getForwardEvaluator([1,2,3,4,5])
>>> F([1,0,0])
[array([ 1, 4, 9, 16, 25])]
>>> F([0,1,0])
[array([1, 2, 3, 4, 5])]
- getParameterList()
Get a ‘pretty’ listing of the input parameters and corresponding models.
- getRandomParams()
- getVectorCostFunction(evalpts, observations)
Get a vector cost function that allows simultaneous evaluation of all forward models for the same set of evaluation points and observation points.
- Inputs:
evalpts – a list of evaluation points
observations – a list of data points
The vector cost metric is hard-wired to be the sum of the difference of getForwardEvaluator(evalpts) and the observations.
NOTE: Input parameters do NOT go through filters registered as inputCheckers.
Example
>>> import numpy as np
>>> C = CostFactory()
>>> C.addModel(np.poly, inputs=3)
>>> x = np.array([-2., -1., 0., 1., 2.])
>>> y = np.array([-4., -2., 0., 2., 4.])
>>> F = C.getVectorCostFunction(x, y)
>>> F([1,0,0])
0.0
>>> F([2,0,0])
10.0
helputil module
Tools for prettifying help
Some of the following code is taken from Ka-Ping Yee’s pydoc module
- commandfy(text)
Format a command string
- commandstring(text, BoldQ)
Bolds all lines in text for which the predicate BoldQ returns True.
- paginate(text, BoldQ=<function <lambda>>)
break printed content into pages
linesearch module
local copy of scipy.optimize.linesearch
- line_search(f, myfprime, xk, pk, gfk, old_fval, old_old_fval, args=(), c1=0.0001, c2=0.9, amax=50)
mask module
- _extend_mask(condition, mask)
extend the mask in the termination condition with the given mask
- _replace_mask(condition, mask)
replace the mask in the termination condition with the given mask
- _update_masks(condition, mask, kind='', new=False)
update the termination condition with the given mask
- get_mask(condition)
get mask from termination condition
- update_mask(condition, collapse, new=False)
update the termination condition with the given collapse (dict)
- update_position_masks(condition, mask, new=False)
update all position masks in the given termination condition
- update_weight_masks(condition, mask, new=False)
update all weight masks in the given termination condition
math module
math: mathematical functions and tools for use in mystic
Functions
Mystic provides a set of mathematical functions that support various advanced optimization features such as uncertainty analysis and parameter sensitivity.
Tools
Mystic also provides a set of mathematical tools that support advanced features such as parameter space partitioning and Monte Carlo estimation. These mathematical tools are provided:
polyeval -- fast evaluation of an n-dimensional polynomial
poly1d -- generate a 1d polynomial instance
gridpts -- generate a set of regularly spaced points
fillpts -- generate a set of space-filling points
samplepts -- generate a set of randomly sampled points
tolerance -- absolute difference plus relative difference
almostEqual -- test if equal within some absolute or relative tolerance
Distribution -- generate a sampling distribution instance
- class Distribution(generator=None, *args, **kwds)
Bases:
object
Sampling distribution for mystic optimizers
generate a sampling distribution with interface dist(size=None)
- input::
generator: a ‘distribution’ method from scipy.stats or numpy.random
rng: a mystic.random_state object [default: random_state(‘numpy.random’)]
args: positional arguments for the distribution object
kwds: keyword arguments for the distribution object
- note::
this method only accepts numpy.random methods with the keyword ‘size’, and only accepts random_state objects built with module=’numpy.random’
- note::
generator may be a method object or a string of ‘module.object’; similarly, rng may be a random_state object or a string of ‘module’
- note::
Distributions d1,d2 may be combined by adding data (i.e. d1(n) + d2(n)), or by adding probabilities as Distribution(d1,d2); the former uses the addition operator and produces a new unnormalized Distribution, while the latter produces a new Distribution which randomly chooses from the Distributions provided
- note::
a normalization factor can be incorporated through the multiplication or division operator, and is stored in the Distribution as ‘norm’
- __add__(dist)
- __call__(size=None)
generate a sample of given size (tuple) from the distribution
- __floordiv__(denom)
- __mul__(norm)
- __repr__()
Return repr(self).
- __rmul__(norm)
- __truediv__(denom)
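For example, a sketch of building and sampling a Distribution (samples vary by run):
>>> from mystic.math import Distribution
>>> d = Distribution('numpy.random.normal', 0, 1)  # mean 0, sigma 1
>>> x = d(size=3)  # draw three samples from the distribution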
- almostEqual(x, y, tol=1e-18, rel=1e-07)
Returns True if two arrays are element-wise equal within a tolerance.
The tolerance values are positive, typically very small numbers. The relative difference (rel * abs(y)) and the absolute difference tol are added together to compare against the absolute difference between x and y.
- Parameters:
x (array_like) – first input array
y (array_like) – second input array
tol (float, default=1e-18) – the absolute tolerance
rel (float, default=1e-07) – the relative tolerance
- Returns:
Returns True if the two arrays are equal within the given tolerance; False otherwise. If either array contains nan, then False is returned.
- Return type:
bool
Notes
If the following equation is element-wise True, then almostEqual returns True.
absolute(x - y) <= (tol + rel * absolute(y))
Examples
>>> almostEqual([1e10,1.2345678], [1e10,1.2345677])
True
>>> almostEqual([1e10,1.234], [1e10,1.235])
False
- fillpts(lb, ub, npts, data=None, rtol=None, dist=None)
takes lower and upper bounds (e.g. lb = [0,3], ub = [2,4]), finds npts points that are at least rtol away from legacy data, and produces a list of sample points s = [[1,3],[1,4],[2,3],[2,4]]
- Inputs:
lb – a list of the lower bounds
ub – a list of the upper bounds
npts – number of sample points
data – a list of legacy sample points
rtol – target radial distance from each point
dist – a mystic.math.Distribution instance (or list of Distributions)
Notes: if rtol is None, use max rtol; if rtol < 0, use quick-n-dirty method
- gridpts(q, dist=None)
takes a list of lists of arbitrary length q = [[1,2],[3,4]], and produces a list of gridpoints g = [[1,3],[1,4],[2,3],[2,4]]
- Inputs:
q – a list of lists of integers denoting grid points
dist – a mystic.math.Distribution instance (or list of Distributions)
Notes
if a mystic.math.Distribution is provided, use it to inject randomness
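For example, mirroring the description above:
>>> from mystic.math import gridpts
>>> gridpts([[1,2],[3,4]])  # yields [[1,3],[1,4],[2,3],[2,4]]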
- poly1d(coeff)
generate a 1-D polynomial instance from a list of coefficients
- polyeval(coeffs, x)
evaluate the polynomial defined by coeffs at evaluation points, x
thus,
[a3, a2, a1, a0]
yieldsa3 x^3 + a2 x^2 + a1 x^1 + a0
- Parameters:
- Returns:
array of evaluations of the polynomial
Examples
>>> x = numpy.array([1, 2, 3, 4, 5])
>>> polyeval([1, 0, 0], x)
array([ 1, 4, 9, 16, 25])
>>> polyeval([0, 1, -1], x)
array([0, 1, 2, 3, 4])
- samplepts(lb, ub, npts, dist=None)
takes lower and upper bounds (e.g. lb = [0,3], ub = [2,4]), and produces a list of sample points s = [[1,3],[1,4],[2,3],[2,4]]
- Inputs:
lb – a list of the lower bounds
ub – a list of the upper bounds
npts – number of sample points
dist – a mystic.math.Distribution instance (or list of Distributions)
- tolerance(x, tol=1e-15, rel=1e-15)
relative plus absolute difference
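A minimal sketch (assuming the result is tol plus rel scaled by the magnitude of x):
>>> from mystic.math import tolerance
>>> tolerance(100.0, tol=1e-3, rel=1e-6)  # 1e-3 + 100.0*1e-6 = 0.0011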
metropolis module
Implements a simple version of the Metropolis-Hastings algorithm
- metropolis_hastings(proposal, target, x)
proposal(x) -> next point. The proposal must be symmetric; otherwise the PDF of the proposal density is needed (not just a way to draw from the proposal).
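For example, a sketch of one sampling chain, assuming metropolis_hastings returns the next point in the chain:
>>> import numpy as np
>>> from mystic.metropolis import metropolis_hastings
>>> proposal = lambda x: x + np.random.normal(0., 0.5)  # symmetric proposal
>>> target = lambda x: np.exp(-x*x/2.)  # unnormalized standard normal
>>> chain = [0.]
>>> for i in range(1000):
...     chain.append(metropolis_hastings(proposal, target, chain[-1]))
...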
models module
models: sample models and functions prepared for use in mystic
Functions
Mystic provides a set of standard fitting functions that derive from the function API found in mystic.models.abstract_model. These standard functions are provided:
sphere -- De Jong's spherical function
rosen -- Sum of Rosenbrock's function
step -- De Jong's step function
quartic -- De Jong's quartic function
shekel -- Shekel's function
corana -- Corana's function
fosc3d -- Trott's fOsc3D function
nmin51 -- Champion's NMinimize test function
griewangk -- Griewangk's function
zimmermann -- Zimmermann's function
peaks -- NAG's peaks function
venkat91 -- Venkataraman's sinc function
schwefel -- Schwefel's function
ellipsoid -- Pohlheim's rotated hyper-ellipsoid function
rastrigin -- Rastrigin's function
powers -- Pohlheim's sum of different powers function
ackley -- Ackley's path function
michal -- Michalewicz's function
branins -- Branins's rcos function
easom -- Easom's function
goldstein -- Goldstein-Price's function
paviani -- Paviani's function
wavy1 -- a simple sine-based multi-minima function
wavy2 -- another simple sine-based multi-minima function
Models
Mystic also provides a set of example models that derive from the model API found in mystic.models.abstract_model. These standard models are provided:
poly -- 1d model representation for polynomials
circle -- 2d array representation of a circle
lorentzian -- Lorentzian peak model
br8 -- Bevington & Robinson's model of dual exponential decay
mogi -- Mogi's model of surface displacements from a point spherical
source in an elastic half space
Additionally, circle has been extended to provide three additional models, each with different packing densities:
dense_circle, sparse_circle, and minimal_circle
Further, poly provides additional models for 2nd, 4th, 6th, 8th, and 16th order Chebyshev polynomials:
chebyshev2, chebyshev4, chebyshev6, chebyshev8, chebyshev16
Also, rosen has been modified to provide models for the 0th and 1st derivative of the Rosenbrock function:
rosen0der, and rosen1der
- class AbstractFunction(ndim=None)
Bases:
object
Base class for mystic functions
The ‘function’ method must be overwritten, thus allowing calls to the class instance to mimic calls to the function object.
- For example, if function is overwritten with the Rosenbrock function:
>>> rosen = Rosenbrock(ndim=3)
>>> rosen([1,1,1])
0.
Provides a base class for mystic functions.
Takes optional input ‘ndim’ (number of dimensions).
- __call__(*args, **kwds)
Call self as a function.
- function(coeffs)
takes a list of coefficients x, returns f(x)
- minimizers = None
- class AbstractModel(name='dummy', metric=<function AbstractModel.<lambda>>, sigma=1.0)
Bases:
object
Base class for mystic models
The ‘evaluate’ and ‘ForwardFactory’ methods must be overwritten, thus providing a standard interface for generating a forward model factory and evaluating a forward model. Additionally, two common ways to generate a cost function are built into the model. For “standard models”, the cost function generator will work with no modifications.
See mystic.models.poly for a few basic examples.
Provides a base class for mystic models.
- Inputs::
name – a name string for the model
metric – the cost metric object [default => lambda x: numpy.sum(x*x)]
sigma – a scaling factor applied to the raw cost
- CostFactory(target, pts)
generates a cost function instance from list of coefficients and evaluation points
- CostFactory2(pts, datapts, nparams)
generates a cost function instance from datapoints and evaluation points
- ForwardFactory(coeffs)
generates a forward model instance from a list of coefficients
- evaluate(coeffs, x)
takes list of coefficients & evaluation points, returns f(x)
- ackley(x)
evaluates Ackley’s function for a list of coeffs
f(x) = f_0(x) + f_1(x)
Where: f_0(x) = -20 * exp(-0.2 * sqrt(1/N * sum_(i=0)^(N-1) x_(i)^(2))) and: f_1(x) = -exp(1/N * sum_(i=0)^(N-1) cos(2 * pi * x_(i))) + 20 + exp(1)
- Inspect with mystic_model_plotter using::
mystic.models.ackley -b “-10:10:.1, -10:10:.1” -d
The minimum is f(x)=0.0 at x_i=0.0 for all i
- branins(x)
evaluates Branins’s function for a list of coeffs
f(x) = f_0(x) + f_1(x)
Where: f_0(x) = a * (x_1 - b * x_(0)^(2) + c * x_0 - d)^2 and f_1(x) = e * (1 - f) * cos(x_0) + e and a=1, b=5.1/(4*pi^2), c=5/pi, d=6, e=10, f=1/(8*pi)
- Inspect with mystic_model_plotter using::
mystic.models.branins -b “-10:20:.1, -5:25:.1” -d -x 1
The minimum is f(x)=0.397887 at x=((2 +/- (2*i)+1)*pi, 2.275 + 10*i*(i+1)/2) for all i
- corana(x)
evaluates a 4-D Corana’s parabola function for a list of coeffs
f(x) = sum_(i=0)^(3) f_0(x)
Where for abs(x_i - z_i) < 0.05: f_0(x) = 0.15*(z_i - 0.05*sign(z_i))^(2) * d_i and otherwise: f_0(x) = d_i * x_(i)^(2), with z_i = floor(abs(x_i/0.2)+0.49999)*sign(x_i)*0.2 and d_i = 1,1000,10,100.
For len(x) == 1, x = x_0,0,0,0; for len(x) == 2, x = x_0,0,x_1,0; for len(x) == 3, x = x_0,0,x_1,x_2; for len(x) >= 4, x = x_0,x_1,x_2,x_3.
- Inspect with mystic_model_plotter using::
mystic.models.corana -b “-1:1:.01, -1:1:.01” -d -x 1
The minimum is f(x)=0 for abs(x_i) < 0.05 for all i.
- easom(x)
evaluates Easom’s function for a list of coeffs
f(x) = -cos(x_0) * cos(x_1) * exp(-((x_0-pi)^2+(x_1-pi)^2))
- Inspect with mystic_model_plotter using::
mystic.models.easom -b “-5:10:.1, -5:10:.1” -d
The minimum is f(x)=-1.0 at x=(pi,pi)
- ellipsoid(x)
evaluates the rotated hyper-ellipsoid function for a list of coeffs
f(x) = sum_(i=0)^(N-1) (sum_(j=0)^(i) x_j)^2
- Inspect with mystic_model_plotter using::
mystic.models.ellipsoid -b “-5:5:.1, -5:5:.1” -d
The minimum is f(x)=0.0 at x_i=0.0 for all i
- fosc3d(x)
evaluates the fOsc3D function for a list of coeffs
f(x) = f_0(x) + p(x)
Where: f_0(x) = -4 * exp(-x_(0)^2 - x_(1)^2) + sin(6*x_(0)) * sin(5*x_(1)) with for x_1 < 0: p(x) = 100.*x_(1)^2 and otherwise: p(x) = 0.
- Inspect with mystic_model_plotter using::
mystic.models.fosc3d -b “-5:5:.1, 0:5:.1” -d
The minimum is f(x)=-4.501069742528923 at x=(-0.215018, 0.240356)
- goldstein(x)
evaluates Goldstein-Price’s function for a list of coeffs
f(x) = (1 + (x_0 + x_1 + 1)^2 * f_0(x)) * (30 + (2*x_0 - 3*x_1)^2 * f_1(x))
Where: f_0(x) = 19 - 14*x_0 + 3*x_(0)^2 - 14*x_1 + 6*x_(0)*x_(1) + 3*x_(1)^2 and f_1(x) = 18 - 32*x_0 + 12*x_(0)^2 + 48*x_1 - 36*x_(0)*x_(1) + 27*x_(1)^2
- Inspect with mystic_model_plotter using::
mystic.models.goldstein -b “-5:5:.1, -5:5:.1” -d -x 1
The minimum is f(x)=3.0 at x=(0,-1)
- griewangk(x)
evaluates an N-dimensional Griewangk’s function for a list of coeffs
f(x) = f_0(x) - f_1(x) + 1
Where: f_0(x) = sum_(i=0)^(N-1) x_(i)^(2) / 4000. and: f_1(x) = prod_(i=0)^(N-1) cos( x_i / (i+1)^(1/2) )
- Inspect with mystic_model_plotter using::
mystic.models.griewangk -b “-10:10:.1, -10:10:.1” -d -x 5
The minimum is f(x)=0.0 for x_i=0.0
- michal(x)
evaluates Michalewicz’s function for a list of coeffs
f(x) = -sum_(i=0)^(N-1) sin(x_i) * (sin((i+1) * (x_i)^(2) / pi))^(20)
- Inspect with mystic_model_plotter using::
mystic.models.michal -b “0:3.14:.1, 0:3.14:.1, 1.28500168, 1.92305311, 1.72047194” -d
For x=(2.20289811, 1.57078059, 1.28500168, 1.92305311, 1.72047194, …)[:N] and c=(-0.801303, -1.0, -0.959092, -0.896699, -1.030564, …)[:N], the minimum is f(x)=sum(c) for all x_i=(0,pi)
- nmin51(x)
evaluates the NMinimize51 function for a list of coeffs
f(x) = f_0(x) + f_1(x)
Where: f_0(x) = exp(sin(50*x_0)) + sin(60*exp(x_1)) + sin(70*sin(x_0)) and f_1(x) = sin(sin(80*x_1)) - sin(10*(x_0 + x_1)) + (x_(0)^2 + x_(1)^2)/4
- Inspect with mystic_model_plotter using::
mystic.models.nmin51 -b “-5:5:.1, 0:5:.1” -d
The minimum is f(x)=-3.306869 at x=(-0.02440313,0.21061247)
- paviani(x)
evaluates Paviani’s function for a list of coeffs
f(x) = f_0(x) - f_1(x)
Where: f_0(x) = sum_(i=0)^(N-1) (ln(x_i - 2)^2 + ln(10 - x_i)^2) and f_1(x) = prod_(i=0)^(N-1) x_(i)^(.2)
- Inspect with mystic_model_plotter using::
mystic.models.paviani -b “2:10:.1, 2:10:.1” -d
For N=1, the minimum is f(x)=2.133838 at x_i=8.501586, for N=3, the minimum is f(x)=7.386004 at x_i=8.589578, for N=5, the minimum is f(x)=9.730525 at x_i=8.740743, for N=8, the minimum is f(x)=-3.411859 at x_i=9.086900, for N=10, the minimum is f(x)=-45.778470 at x_i=9.350241.
- peaks(x)
evaluates a 2-dimensional peaks function for a list of coeffs
f(x) = f_0(x) - f_1(x) - f_2(x)
Where: f_0(x) = 3 * (1 - x_0)^2 * exp(-x_0^2 - (x_1 + 1)^2) and f_1(x) = 10 * (.2 * x_0 - x_0^3 - x_1^5) * exp(-x_0^2 - x_1^2) and f_2(x) = exp(-(x_0 + 1)^2 - x_1^2) / 3
- Inspect with mystic_model_plotter using::
mystic.models.peaks -b “-5:5:.1, -5:5:.1” -d
The minimum is f(x)=-6.551133332835841 at x=(0.22827892, -1.62553496)
- powers(x)
evaluates the sum of different powers function for a list of coeffs
f(x) = sum_(i=0)^(N-1) abs(x_(i))^(i+2)
- Inspect with mystic_model_plotter using::
mystic.models.powers -b “-5:5:.1, -5:5:.1” -d
The minimum is f(x)=0.0 at x_i=0.0 for all i
- quartic(x)
evaluates an N-dimensional quartic function for a list of coeffs
f(x) = sum_(i=0)^(N-1) (x_(i)^4 * (i+1) + k_i)
Where k_i is a random variable with uniform distribution bounded by [0,1).
- Inspect with mystic_model_plotter using::
mystic.models.quartic -b “-3:3:.1, -3:3:.1” -d -x 1
The minimum is f(x)=N*E[k] for x_i=0.0, where E[k] is the expectation of k, and thus E[k]=0.5 for a uniform distribution bounded by [0,1).
- rastrigin(x)
evaluates Rastrigin’s function for a list of coeffs
f(x) = 10 * N + sum_(i=0)^(N-1) (x_(i)^2 - 10 * cos(2 * pi * x_(i)))
- Inspect with mystic_model_plotter using::
mystic.models.rastrigin -b “-5:5:.1, -5:5:.1” -d
The minimum is f(x)=0.0 at x_i=0.0 for all i
- rosen(x)
evaluates an N-dimensional Rosenbrock saddle for a list of coeffs
f(x) = sum_(i=0)^(N-2) 100*(x_(i+1) - x_(i)^(2))^(2) + (1 - x_(i))^(2)
- Inspect with mystic_model_plotter using::
mystic.models.rosen -b “-3:3:.1, -1:5:.1, 1” -d -x 1
The minimum is f(x)=0.0 at x_i=1.0 for all i
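For example, evaluating at the known minimum:
>>> from mystic.models import rosen
>>> rosen([1., 1., 1.])
0.0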
- rosen0der(x)
evaluates an N-dimensional Rosenbrock saddle for a list of coeffs
f(x) = sum_(i=0)^(N-2) 100*(x_(i+1) - x_(i)^(2))^(2) + (1 - x_(i))^(2)
- Inspect with mystic_model_plotter using::
mystic.models.rosen -b “-3:3:.1, -1:5:.1, 1” -d -x 1
The minimum is f(x)=0.0 at x_i=1.0 for all i
- rosen1der(x)
evaluates an N-dimensional Rosenbrock derivative for a list of coeffs
The minimum is f’(x)=[0.0]*n at x=[1.0]*n, where len(x) >= 2.
- schwefel(x)
evaluates Schwefel’s function for a list of coeffs
f(x) = sum_(i=0)^(N-1) -x_i * sin(sqrt(abs(x_i)))
Where abs(x_i) <= 500.
- Inspect with mystic_model_plotter using::
mystic.models.schwefel -b “-500:500:10, -500:500:10” -d
The minimum is f(x)=-(N+1)*418.98288727243374 at x_i=420.9687465 for all i
- shekel(x)
evaluates a 2-D Shekel’s Foxholes function for a list of coeffs
f(x) = 1 / (0.002 + f_0(x))
Where: f_0(x) = sum_(i=0)^(24) 1 / (i + sum_(j=0)^(1) (x_j - a_ij)^(6)), with a_ij = (-32,-16,0,16,32): for j=0 and i=(0,1,2,3,4), a_i0 = a_k0 with k = i mod 5; also for j=1 and i=(0,5,10,15,20), a_i1 = a_k1 with k = i+k' and k' = (1,2,3,4).
- Inspect with mystic_model_plotter using::
mystic.models.shekel -b “-50:50:1, -50:50:1” -d -x 1
The minimum is f(x)=0 for x=(-32,-32)
- sphere(x)
evaluates an N-dimensional spherical function for a list of coeffs
f(x) = sum_(i=0)^(N-1) x_(i)^2
- Inspect with mystic_model_plotter using::
mystic.models.sphere -b “-5:5:.1, -5:5:.1” -d
The minimum is f(x)=0.0 at x_i=0.0 for all i
- step(x)
evaluates an N-dimensional step function for a list of coeffs
f(x) = f_0(x) + p_i(x), with i=0,1
Where for abs(x_i) <= 5.12: f_0(x) = 30 + sum_(i=0)^(N-1) floor x_i and for x_i > 5.12: p_0(x) = 30 * (1 + (x_i - 5.12)) and for x_i < 5.12: p_1(x) = 30 * (1 + (5.12 - x_i)) Otherwise, f_0(x) = 0 and p_i(x)=0 for i=0,1.
- Inspect with mystic_model_plotter using::
mystic.models.step -b “-10:10:.2, -10:10:.2” -d -x 1
The minimum is f(x)=(30 - 6*N) for all x_i=[-5.12,-5)
- venkat91(x)
evaluates Venkataraman’s sinc function for a list of coeffs
f(x) = -20 * sin(r(x))/r(x)
Where: r(x) = sqrt((x_0 - 4)^2 + (x_1 - 4)^2 + 0.1)
- Inspect with mystic_model_plotter using::
mystic.models.venkat91 -b “-10:10:.1, -10:10:.1” -d
The minimum is f(x)=-19.668329370585823 at x=(4.0, 4.0)
- wavy1(x)
evaluates the wavy1 function for a list of coeffs
f(x) = abs(x + 3*sin(x + pi) + pi)
- Inspect with mystic_model_plotter using::
mystic.models.wavy1 -b “-20:20:.5, -20:20:.5” -d -r numpy.add
The minimum is f(x)=0.0 at x_i=-pi for all i
- wavy2(x)
evaluates the wavy2 function for a list of coeffs
f(x) = 4*sin(x)+sin(4*x)+sin(8*x)+sin(16*x)+sin(32*x)+sin(64*x)
- Inspect with mystic_model_plotter using::
mystic.models.wavy2 -b “-10:10:.2, -10:10:.2” -d -r numpy.add
The function has degenerate global minima of f(x)=-6.987594 at x_i = 4.489843526 + 2*k*pi for all i, and k is an integer
- zimmermann(x)
evaluates a Zimmermann function for a list of coeffs
f(x) = max(f_0(x), p_i(x)), with i = 0,1,2,3
Where: f_0(x) = 9 - x_0 - x_1 with for x_0 < 0: p_0(x) = -100 * x_0 and for x_1 < 0: p_1(x) = -100 * x_1 and for c_2(x) > 16 and c_3(x) > 14: p_i(x) = 100 * c_i(x), with i = 2,3 c_2(x) = (x_0 - 3)^2 + (x_1 - 2)^2 c_3(x) = x_0 * x_1 Otherwise, p_i(x)=0 for i=0,1,2,3 and c_i(x)=0 for i=2,3.
- Inspect with mystic_model_plotter using::
mystic.models.zimmermann -b “-5:10:.1, -5:10:.1” -d -x 1
The minimum is f(x)=0.0 at x=(7.0,2.0)
monitors module
monitors: callable class instances that record data
Monitors
Monitors provide the ability to monitor progress as the optimization is underway. Monitors also can be used to extract and prepare information for mystic’s analysis viewers. Each of mystic’s monitors are customizable, and provide the user with a different type of output. The following monitors are available:
Monitor -- the basic monitor; only writes to internal state
LoggingMonitor -- a logging monitor; also writes to a logfile
VerboseMonitor -- a verbose monitor; also writes to stdout/stderr
VerboseLoggingMonitor -- a verbose logging monitor; best of both worlds
CustomMonitor -- a customizable 'n-variable' version of Monitor
Null -- a null object, which reliably does nothing
Usage
Typically monitors are either bound to a model function by a modelFactory, or bound to a cost function by a Solver.
Examples
>>> # get and configure monitors
>>> from mystic.monitors import Monitor, VerboseMonitor
>>> evalmon = Monitor()
>>> stepmon = VerboseMonitor(5)
>>>
>>> # instantiate and configure the solver
>>> from mystic.models import rosen
>>> from mystic.solvers import NelderMeadSimplexSolver
>>> from mystic.termination import CandidateRelativeTolerance as CRT
>>> x0 = [0.8, 1.2, 0.7]
>>> solver = NelderMeadSimplexSolver(len(x0))
>>> solver.SetInitialPoints(x0)
>>>
>>> # associate the monitor with a solver, then solve
>>> solver.SetEvaluationMonitor(evalmon)
>>> solver.SetGenerationMonitor(stepmon)
>>> solver.Solve(rosen, CRT())
>>>
>>> # access the 'iteration' history
>>> stepmon.x # parameters after each iteration
>>> stepmon.y # cost after each iteration
>>>
>>> # access the 'evaluation' history
>>> evalmon.x # parameters after each evaluation
>>> evalmon.y # cost after each evaluation
- CustomMonitor(*args, **kwds)
generate a custom Monitor
- Parameters:
args (tuple(str)) – tuple of the required Monitor inputs (e.g. 'x')
kwds (dict(str)) – dict of {'input':'doc'} labels for the inputs (e.g. x='Params')
- Returns:
a customized monitor instance
Examples
>>> sow = CustomMonitor('x','y',x="Params",y="Costs",e="Error",d="Deriv")
>>> sow(1,1)
>>> sow(2,4,e=0)
>>> sow.x
[1,2]
>>> sow.y
[1,4]
>>> sow.e
[0]
>>> sow.d
[]
- class LoggingMonitor(interval=1, filename='log.txt', new=False, all=True, info=None, **kwds)
Bases:
Monitor
A basic Monitor that writes to a file at specified intervals.
Logs output ‘y’ and input parameters ‘x’ to a file every ‘interval’.
- __call__(x, y, id=None, best=0, k=False)
Call self as a function.
- __reduce__()
Helper for pickle.
- __setstate__(state)
- info(message)
- class Monitor(**kwds)
Bases:
object
Instances of objects that can be passed as monitors. Typically, a Monitor logs a list of parameters and the corresponding costs, retrievable by accessing the Monitor’s member variables.
- example usage…
>>> sow = Monitor()
>>> sow([1,2],3)
>>> sow([4,5],6)
>>> sow.x
[[1, 2], [4, 5]]
>>> sow.y
[3, 6]
- __add__(monitor)
add the contents of self and the given monitor
- __call__(x, y, id=None, **kwds)
Call self as a function.
- __getitem__(y)
x.__getitem__(y) <==> x[y]
- __len__()
- __setitem__(i, y)
x.__setitem__(i, y) <==> x[i]=y
- __step()
- _get_y(monitor)
avoid double-conversion by combining k’s
- _ik(y, k=False, type=<class 'list'>)
- _k(y, type=<class 'list'>)
- property _pos
Positions
- property _step
- property _wts
Weights
- property ax
Params
- property ay
Costs
- extend(monitor)
append the contents of the given monitor
- get_ax()
- get_ay()
- get_id()
- get_info()
- get_ipos()
- get_iwts()
- get_ix()
- get_iy()
- get_pos()
- get_wts()
- get_x()
- get_y()
- property id
Id
- info(message)
- property ix
Params
- property iy
Costs
- min()
get the minimum monitor entry
- property pos
Positions
- prepend(monitor)
prepend the contents of the given monitor
- property wts
Weights
- property x
Params
- property y
Costs
- class Null(*args, **kwargs)
Bases:
object
A Null object
Null objects always and reliably “do nothing.”
- __bool__()
- __call__(*args, **kwargs)
Call self as a function.
- __delattr__(name)
Implement delattr(self, name).
- __getattr__(name)
- __getitem__(y)
- __getnewargs__()
- __len__()
- static __new__(cls, *args, **kwargs)
- __nonzero__()
- __repr__()
Return repr(self).
- __setattr__(name, value)
Implement setattr(self, name, value).
- __setitem__(i, y)
- _id = ()
- _npts = None
- _x = ()
- _y = ()
- info(message)
- k = None
- label = None
- min()
- x = ()
- y = ()
- class VerboseLoggingMonitor(interval=1, yinterval=10, xinterval=inf, filename='log.txt', new=False, all=True, info=None, **kwds)
Bases:
LoggingMonitor
A Monitor that writes to a file and the screen at specified intervals.
Logs output ‘y’ and input parameters ‘x’ to a file every ‘interval’, also print every ‘yinterval’.
- __call__(x, y, id=None, best=0, k=False)
Call self as a function.
- __reduce__()
Helper for pickle.
- __setstate__(state)
- info(message)
- class VerboseMonitor(interval=10, xinterval=inf, all=True, **kwds)
Bases:
Monitor
A verbose version of the basic Monitor.
Prints output ‘y’ every ‘interval’, and optionally prints input parameters ‘x’ every ‘xinterval’.
- __call__(x, y, id=None, best=0, k=False)
Call self as a function.
- info(message)
- _load(path, monitor=None, verbose=False)
load npts, params, and cost into monitor from file at given path
- _measures(monitor, last=None, weights=False)
return positions or weights from the last N entries in a monitor
this function requires a monitor that is monitoring a product_measure
- _positions(monitor, last=None)
return positions from the last N entries in a monitor
this function requires a monitor that is monitoring a product_measure
- _solutions(monitor, last=None)
return the params from the last N entries in a monitor
- _weights(monitor, last=None)
return weights from the last N entries in a monitor
this function requires a monitor that is monitoring a product_measure
munge module
- __orig_write_converge_file(mon, log_file='paramlog.py')
- __orig_write_support_file(mon, log_file='paramlog.py')
- _process_ids(ids, n=None)
convert ids to list of tuples of (iterations, ids)
ids is a list of ints, an int, or None; n is the target length (generally used when ids is a single value)
- _reduce_ids(ids)
convert ids from list of tuples of (iterations, ids) to a list of ids
- converge_to_support(steps, energy)
- converge_to_support_converter(file_in, file_out)
- isNull(mon)
- logfile_reader(filename, iter=False)
read a log file (e.g. written by a LoggingMonitor) in three-column format
iter is a bool, where if True, iter is also returned; filename is a log file path, with format:
__iter__ __energy__ __params__
iter is a tuple of (iteration, id), with id an int or None; energy is a float or tuple[float], and is the output of the cost function; params is a list[float], and is the input to the cost function
If iter=True, returns a tuple of (iter, params, energy), as defined above. If iter=False, returns a tuple of (params, energy).
- old_to_new_support_converter(file_in, file_out)
- raw_to_converge(steps, energy)
- raw_to_converge_converter(file_in, file_out)
- raw_to_support(steps, energy)
- raw_to_support_converter(file_in, file_out)
- read_converge_file(file_in, iter=False)
- read_history(source, iter=False)
read parameter history and cost history from the given source
source is a monitor, logfile, support file, solver restart file, dataset, etc; iter is a bool, where if True, iter is also returned
- Returns (iter, params, cost) if iter is True, else returns (params, cost), where
iter is a list of tuples of ints (iteration, id), where id may be None
params is input to cost; a list of lists of floats
cost is output of cost; a list of floats, or a list of tuples of floats
- read_import(file, *targets)
import the targets; targets are name strings
- read_monitor(mon, id=False)
read trajectories from a monitor instance
mon is a monitor instance; id is a bool, where if True, id is also returned
Returns tuple of (mon.x, mon.y) or (mon.x, mon.y, mon.id)
- read_old_support_file(file_in)
- read_raw_file(file_in, iter=False)
read parameter and solution trajectory log file in ‘raw’ format
file_in is a str log file path, with format:
# header
id = …
params = …
cost = …
id is a tuple of (iteration, id), with id an int or None; params is a list[float], and is the input to the cost function; cost is a float or tuple[float], and is the output of the cost function
if iter is True, return (id, params, cost); otherwise return (params, cost)
NOTE: params are stored how they are stored in monitor.x
- read_support_file(file_in, iter=False)
read parameter and solution trajectory log file in ‘support’ format
file_in is a str log file path, with format:
# header
id = …
params = …
cost = …
id is a tuple of (iteration, id), with id an int or None; params is a list[float], and is the input to the cost function; cost is a float or tuple[float], and is the output of the cost function
if iter is True, return (id, params, cost); otherwise return (params, cost)
NOTE: params are the transpose of how they are stored in monitor.x
- read_trajectories(source, iter=False)
read trajectories from a monitor instance or three-column format log file
iter is a bool, where if True, iter is also returned; source is a monitor instance or a log file path, with format:
__iter__ __energy__ __params__
iter is a tuple of (iteration, id), with id an int or None; energy is a float or tuple[float], and is the output of the cost function; params is a list[float], and is the input to the cost function
If iter=True, returns tuple of (iter, params, energy), as defined above. If iter=False, returns tuple of (params, energy).
- sequence(x)
True if x is a list, tuple, or a ndarray
- write_converge_file(mon, log_file='paramlog.py', **kwds)
- write_monitor(steps, energy, id=None, k=None)
write trajectories to a monitor instance
steps is a list[float], and is the input to the cost function; energy is a float or tuple[float], and is the output of the cost function; id is a list[int], and is the id of the monitored object; k is a float multiplier on energy (e.g. k=-1 inverts the cost function)
Returns a mystic.monitors.Monitor instance
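For example, a minimal sketch:
>>> from mystic.munge import write_monitor
>>> mon = write_monitor([[1., 2.], [3., 4.]], [5., 6.])
>>> mon.x  # [[1.0, 2.0], [3.0, 4.0]]
>>> mon.y  # [5.0, 6.0]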
- write_raw_file(mon, log_file='paramlog.py', **kwds)
write parameter and solution trajectory to a log file in ‘raw’ format
mon is a monitor; log_file is a str log file path, with format:
# header
id = …
params = …
cost = …
id is a tuple of (iteration, id), with id an int or None; params is a list[float], and is the input to the cost function; cost is a float or tuple[float], and is the output of the cost function
if header is provided, then the given string is written as the file header; all other kwds are written as file entries
- write_support_file(mon, log_file='paramlog.py', **kwds)
write parameter and solution trajectory to a log file in ‘support’ format
mon is a monitor; log_file is a str log file path, with format:
# header
id = …
params = …
cost = …
id is a tuple of (iteration, id), with id an int or None; params is a list[float], and is the input to the cost function; cost is a float or tuple[float], and is the output of the cost function
if header is provided, then the given string is written as the file header; all other kwds are written as file entries
NOTE: params are the transpose of how they are stored in monitor.x
penalty module
penalty methods: methods used to convert a function into a penalty function
Suppose a given condition f(x) is satisfied when f(x) == 0.0 for equality constraints, and f(x) <= 0.0 for inequality constraints. This condition f(x) can be used as the basis for a mystic.penalty function.
Examples
>>> from numpy import mean
>>> def penalty_mean(x, target):
...     return mean(x) - target
...
>>> @quadratic_equality(condition=penalty_mean, kwds={'target':5.0})
... def penalty(x):
... return 0.0
...
>>> penalty([1,2,3,4,5])
400.0
>>> penalty([3,4,5,6,7])
7.8886090522101181e-29
References
“Applied Optimization with MATLAB Programming”, by Venkataraman, Wiley, 2nd edition, 2009.
http://www.srl.gatech.edu/education/ME6103/Penalty-Barrier.ppt
“An Augmented Lagrange Multiplier Based Method for Mixed Integer Discrete Continuous Optimization and Its Applications to Mechanical Design”, by Kannan and Kramer, 1994.
- barrier_inequality(condition=<function <lambda>>, args=None, kwds=None, k=100, h=5)
apply a infinite barrier if the given inequality constraint is violated, and a logarithmic penalty if the inequality constraint is satisfied
penalty is p(x) = inf if constraint is violated, otherwise penalty is p(x) = -1/pk*log(-f(x)), with pk = 2k*pow(h,n) and n=0 where f.iter() can be used to increment n = n+1
the condition f(x) is satisfied when f(x) <= 0.0
- lagrange_equality(condition=<function <lambda>>, args=None, kwds=None, k=20, h=5)
apply a quadratic penalty if the given equality constraint is violated
penalty is p(x) = pk*f(x)**2 + lam*f(x), with pk = k*pow(h,n) and n=0, and lagrange multiplier lam = 2k*f(x), where f.iter() can be used to increment n = n+1
the condition f(x) is satisfied when f(x) == 0.0
- lagrange_inequality(condition=<function <lambda>>, args=None, kwds=None, k=20, h=5)
apply a quadratic penalty if the given inequality constraint is violated
penalty is p(x) = pk*mpf**2 + beta*mpf, with pk = k*pow(h,n) and n=0, where mpf = max(-beta/2k, f(x)), lagrange multiplier beta = 2k*mpf, and f.iter() can be used to increment n = n+1
the condition f(x) is satisfied when f(x) <= 0.0
- linear_equality(condition=<function <lambda>>, args=None, kwds=None, k=100, h=5)
apply a linear penalty if the given equality constraint is violated
penalty is p(x) = pk*abs(f(x)), with pk = k*pow(h,n) and n=0 where f.iter() can be used to increment n = n+1
the condition f(x) is satisfied when f(x) == 0.0
- linear_inequality(condition=<function <lambda>>, args=None, kwds=None, k=100, h=5)
apply a linear penalty if the given inequality constraint is violated
penalty is p(x) = pk*abs(f(x)), with pk = 2k*pow(h,n) and n=0 where f.iter() can be used to increment n = n+1
the condition f(x) is satisfied when f(x) <= 0.0
- quadratic_equality(condition=<function <lambda>>, args=None, kwds=None, k=100, h=5)
apply a quadratic penalty if the given equality constraint is violated
penalty is p(x) = pk*f(x)**2, with pk = k*pow(h,n) and n=0 where f.iter() can be used to increment n = n+1
the condition f(x) is satisfied when f(x) == 0.0
- quadratic_inequality(condition=<function <lambda>>, args=None, kwds=None, k=100, h=5)
apply a quadratic penalty if the given inequality constraint is violated
penalty is p(x) = pk*f(x)**2, with pk = 2k*pow(h,n) and n=0 where f.iter() can be used to increment n = n+1
the condition f(x) is satisfied when f(x) <= 0.0
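For example, a sketch penalizing violation of x[0] + x[1] <= 10 (here pk = 2k = 200, following the formula above):
>>> from mystic.penalty import quadratic_inequality
>>> def over_budget(x):
...     return x[0] + x[1] - 10  # f(x) <= 0 when the constraint holds
...
>>> @quadratic_inequality(condition=over_budget)
... def penalty(x):
...     return 0.0
...
>>> penalty([3, 4])  # satisfied, so no penalty is applied
0.0
>>> penalty([8, 4])  # violated by 2, so p(x) = 200*2**2
800.0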
- uniform_equality(condition=<function <lambda>>, args=None, kwds=None, k=inf, h=5)
apply a uniform penalty if the given equality constraint is violated
penalty is p(x) = pk, with pk = k*pow(h,n) and n=0 where f.iter() can be used to increment n = n+1
the condition f(x) is satisfied when f(x) == 0.0
- uniform_inequality(condition=<function <lambda>>, args=None, kwds=None, k=inf, h=5)
apply a uniform penalty if the given inequality constraint is violated
penalty is p(x) = pk, with pk = k*pow(h,n) and n=0 where f.iter() can be used to increment n = n+1
the condition f(x) is satisfied when f(x) <= 0.0
pools module
This module contains map and pipe interfaces to standard (i.e. serial) python.
- Pipe methods provided:
pipe - blocking communication pipe [returns: value]
- Map methods provided:
map - blocking and ordered worker pool [returns: list]
imap - non-blocking and ordered worker pool [returns: iterator]
Usage
A typical call to a pathos python map will roughly follow this example:
>>> # instantiate and configure the worker pool
>>> from pathos.serial import SerialPool
>>> pool = SerialPool()
>>>
>>> # do a blocking map on the chosen function
>>> print(pool.map(pow, [1,2,3,4], [5,6,7,8]))
>>>
>>> # do a non-blocking map, then extract the results from the iterator
>>> results = pool.imap(pow, [1,2,3,4], [5,6,7,8])
>>> print("...")
>>> print(list(results))
>>>
>>> # do one item at a time, using a pipe
>>> print(pool.pipe(pow, 1, 5))
>>> print(pool.pipe(pow, 2, 6))
Notes
This worker pool leverages the built-in python maps, and thus does not have limitations due to serialization of the function f or the sequences in args. The maps in this worker pool have full functionality whether run from a script or in the python interpreter, and work reliably for both imported and interactively-defined functions.
- class SerialPool(*args, **kwds)
Bases:
AbstractWorkerPool
Mapper that leverages standard (i.e. serial) python maps.
- Important class members:
nodes - number (and potentially description) of workers
ncpus - number of worker processors
servers - list of worker servers
scheduler - the associated scheduler
workdir - associated $WORKDIR for scratch calculations/files
- Other class members:
scatter - True, if uses ‘scatter-gather’ (instead of ‘worker-pool’)
source - False, if minimal use of TemporaryFiles is desired
timeout - number of seconds to wait for return value from scheduler
- __get_nodes()
get the number of nodes in the pool
- __set_nodes(nodes)
set the number of nodes in the pool
- __state__ = {}
- _exiting = False
- _is_alive(negate=False, run=True)
- clear()
hard restart
- close()
close the pool to any new jobs
- imap(f, *args, **kwds)
run a batch of jobs with a non-blocking and ordered map
Returns a list iterator of results of applying the function f to the items of the argument sequence(s). If more than one sequence is given, the function is called with an argument list consisting of the corresponding item of each sequence. Some maps accept the chunksize keyword, which causes the sequence to be split into tasks of approximately the given size.
- join()
cleanup the closed worker processes
- map(f, *args, **kwds)
run a batch of jobs with a blocking and ordered map
Returns a list of results of applying the function f to the items of the argument sequence(s). If more than one sequence is given, the function is called with an argument list consisting of the corresponding item of each sequence. Some maps accept the chunksize keyword, which causes the sequence to be split into tasks of approximately the given size.
- property nodes
get the number of nodes in the pool
- pipe(f, *args, **kwds)
submit a job and block until results are available
Returns result of calling the function f on a selected worker. This function will block until results are available.
- restart(force=False)
restart a closed pool
- terminate()
a more abrupt close
- __get_nodes__(self)
get the number of nodes in the pool
- __set_nodes__(self, nodes)
set the number of nodes in the pool
python_map module
Defaults for mapper and launcher. These should be available as a minimal (dependency-free) pure-python install from pathos:
serial_launcher -- syntax for standard python execution
python_map -- wrapper around the standard python map
worker_pool -- the worker_pool map strategy
- python_map(func, *arglist, **kwds)
maps function func across arguments arglist.
Provides the standard python map interface as a function; however, it returns a list instead of a map iterator, and accepts (and ignores) kwds.
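For illustration, the documented behavior reduces to roughly the following sketch (an illustration only, not the pathos source):
>>> # wrap the built-in map: return a list, and ignore any keywords
>>> def python_map(func, *arglist, **kwds):
...     return list(map(func, *arglist))
...
>>> python_map(pow, [1,2,3,4], [5,6,7,8])
[1, 64, 2187, 65536]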
- serial_launcher(kdict={})
prepare launch for standard execution syntax: (python) (program) (progargs)
Notes
run non-python shell commands by setting python to a null string: kdict = {'python':'', ...}
- worker_pool()
use the ‘worker pool’ strategy; hence one job is allocated to each worker, and the next new work item is provided when a node completes its work
samplers module
samplers: optimizer-guided directed sampling
- class BuckshotSampler(bounds, model, npts=None, **kwds)
Bases:
AbstractSampler
optimizer-directed sampling starting at N randomly sampled points
- Parameters:
bounds (list[tuples]) – (lower,upper) bound for each model input
model (function) – y = model(x), where x is an input parameter vector
npts (int, default=None) – number of points to sample the model
**kwds (dict, default={}) – keywords for the underlying ensemble of solvers; (evalmon, stepmon, maxiter, maxfun, dist, saveiter, state, termination, constraints, penalty, reducer, solver, id, map, tightrange, cliprange) are available for use. See mystic.ensemble for more details.
- _init_solver()
initialize the ensemble solver
- class LatticeSampler(bounds, model, npts=None, **kwds)
Bases:
AbstractSampler
optimizer-directed sampling starting at the centers of N grid points
- Parameters:
bounds (list[tuples]) – (lower,upper) bound for each model input
model (function) – y = model(x), where x is an input parameter vector
npts (int, default=None) – number of points to sample the model
**kwds (dict, default={}) – keywords for the underlying ensemble of solvers; (evalmon, stepmon, maxiter, maxfun, dist, saveiter, state, termination, constraints, penalty, reducer, solver, id, map, tightrange, cliprange) are available for use. See mystic.ensemble for more details.
- _init_solver()
initialize the ensemble solver
- class SparsitySampler(bounds, model, npts=None, **kwds)
Bases:
AbstractSampler
optimizer-directed sampling starting at N points sampled in sparse regions
- Parameters:
bounds (list[tuples]) – (lower,upper) bound for each model input
model (function) – y = model(x), where x is an input parameter vector
npts (int, default=None) – number of points to sample the model
**kwds (dict, default={}) – keywords for the underlying ensemble of solvers; (evalmon, stepmon, maxiter, maxfun, dist, saveiter, state, termination, constraints, penalty, reducer, solver, id, map, tightrange, cliprange) are available for use. See mystic.ensemble for more details.
- _init_solver()
initialize the ensemble solver
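A usage sketch (assuming the samplers are importable from mystic.samplers, and that the inherited AbstractSampler interface provides a sample method):
>>> from mystic.models import rosen
>>> from mystic.samplers import LatticeSampler
>>> bounds = [(0., 2.)] * 3
>>> sampler = LatticeSampler(bounds, rosen, npts=8)
>>> sampler.sample()  # draw optimizer-directed samples of the model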
scemtools module
Implements the “Shuffled Complex Evolution Metropolis” algorithm of Vrugt et al. [1]
Reference:
[1] Jasper A. Vrugt, Hoshin V. Gupta, Willem Bouten, and Soroosh Sorooshian A Shuffled Complex Evolution Metropolis algorithm for optimization and uncertainty assessment of hydrologic model parameters, WATER RESOURCES RESEARCH, VOL. 39, NO. 8, 1201, doi:10.1029/2002WR001642, 2003 http://www.agu.org/pubs/crossref/2003/2002WR001642.shtml
[2] Vrugt JA, Ó Nualláin B, Robinson BA, Bouten W, Dekker SC, Sloot PM, Application of parallel computing to stochastic parameter estimation in environmental models, Computers & Geosciences, Vol. 32, No. 8. (October 2006), pp. 1139-1155. http://www.science.uva.nl/research/scs/papers/archive/Vrugt2006b.pdf
- multinormal_pdf(mean, var)
var must be symmetric positive definite
- myinsert(a, x)
- remix(Cs, As)
Mixing and dealing the complexes. The types of Cs and As are very important….
- scem(Ck, ak, Sk, Sak, target, cn)
This is the SCEM algorithm starting from line [35] of the reference [1].
[inout] Ck is the kth ‘complex’ with m points. This should be an m by n array, with n being the dimensionality of the density function; i.e., the data are arranged in rows.
Ck is assumed to be sorted according to the target density.
[inout] ak, the density of the points of Ck.
[inout] Sk, the entire chain. (should be a list)
[inout] Sak, the cost of the entire chain (should be a list)
Sak would be more convenient to use if it is a numpy array, but we need to append to it frequently.
[in] target: target density function
[in] cn: jumprate (see Paragraph 37 of [1])
The invariants: ak is always aligned with Ck, and holds the cost of Ck.
Similarly, Sak is always aligned with Sk in the same way.
On return, the sort order in Ck/ak is destroyed; but see sort_complex2.
- sequential_deal(inarray, n)
inarray: should be a set of N objects (the objects can be vectors themselves, but inarray should be indexable like a list; it is coerced into a numpy array because the last operation requires that it also be indexable by a ‘list’)
it should have a length divisible by n, otherwise the reshape will fail (this is a feature!)
sequential_deal(range(20), 5) will return a 5-element list, each element being a 4-element array of indices (see below)
>>> for l in sequential_deal(range(20), 5):
...     print(l)
...
[ 0  5 10 15]
[ 1  6 11 16]
[ 2  7 12 17]
[ 3  8 13 18]
[ 4  9 14 19]
- sort_ab_with_b(a, b, ord=-1)
default is descending…
- sort_and_deal(cards, target, nplayers)
- sort_complex(c, a)
- sort_complex0(c, a)
- sort_complex2(c, a)
c and a are partially sorted (either the last one is bad, or the first one)
pos = 0 (first one out of order)
pos = -1 (last one out of order)
- update_complex(Ck, ak, c, a, pos)
ak is sorted (descending)
Ck[pos] and ak[pos] will be removed, and then c and a spliced in at the proper place
pos is 0, or -1
scipy_optimize module
Solvers
This module contains a collection of optimization routines adapted from scipy.optimize. The minimal scipy interface has been preserved, and functionality from the mystic solver API has been added with reasonable defaults.
- Minimal function interface to optimization routines:
- fmin – Nelder-Mead Simplex algorithm
(uses only function calls)
- fmin_powell – Powell’s (modified) level set method
(uses only function calls)
- The corresponding solvers built on mystic’s AbstractSolver are:
NelderMeadSimplexSolver – Nelder-Mead Simplex algorithm
PowellDirectionalSolver – Powell’s (modified) level set method
- Mystic solver behavior activated in fmin:
EvaluationMonitor = Monitor()
StepMonitor = Monitor()
termination = CandidateRelativeTolerance(xtol,ftol)
- Mystic solver behavior activated in fmin_powell:
EvaluationMonitor = Monitor()
StepMonitor = Monitor()
termination = NormalizedChangeOverGeneration(ftol)
Usage
See mystic.examples.test_rosenbrock2 for an example of using NelderMeadSimplexSolver. See mystic.examples.test_rosenbrock3 for an example of using PowellDirectionalSolver.
All solvers included in this module provide the standard signal handling. For more information, see mystic.abstract_solver.
References
Nelder, J.A. and Mead, R. (1965), “A simplex method for function minimization”, The Computer Journal, 7, pp. 308-313.
Wright, M.H. (1996), “Direct Search Methods: Once Scorned, Now Respectable”, in Numerical Analysis 1995, Proceedings of the 1995 Dundee Biennial Conference in Numerical Analysis, D.F. Griffiths and G.A. Watson (Eds.), Addison Wesley Longman, Harlow, UK, pp. 191-208.
Gao, F. and Han, L. (2012), “Implementing the Nelder-Mead simplex algorithm with adaptive parameters”, Computational Optimization and Applications. 51:1, pp. 259-277.
Powell M.J.D. (1964) An efficient method for finding the minimum of a function of several variables without calculating derivatives, Computer Journal, 7 (2):155-162.
Press W., Teukolsky S.A., Vetterling W.T., and Flannery B.P.: Numerical Recipes (any edition), Cambridge University Press
- class NelderMeadSimplexSolver(dim)
Bases:
AbstractSolver
Nelder Mead Simplex optimization adapted from scipy.optimize.fmin.
- Takes one initial input:
dim – dimensionality of the problem
The size of the simplex is dim+1.
- Solve(cost=None, termination=None, ExtraArgs=None, **kwds)
Minimize a function using the downhill simplex algorithm.
Uses a Nelder-Mead simplex algorithm to find the minimum of a function of one or more variables.
- Parameters:
cost (func, default=None) – the function to be minimized: y = cost(x).
termination (termination, default=None) – termination conditions.
ExtraArgs (tuple, default=None) – extra arguments for cost.
sigint_callback (func, default=None) – callback function for signal handler.
callback (func, default=None) – function to call after each iteration. The interface is callback(xk), with xk the current parameter vector.
disp (bool, default=False) – if True, print convergence messages.
radius (float, default=0.05) – percentage change for initial simplex values.
adaptive (bool, default=False) – adapt algorithm parameters to the dimensionality of the initial parameter vector x.
- Returns:
None
- _SetEvaluationLimits(iterscale=200, evalscale=200)
set the evaluation limits
- Parameters:
iterscale (int, default=200) – scale factor for the maximum number of iterations.
evalscale (int, default=200) – scale factor for the maximum number of function evaluations.
Notes
iterscale and evalscale are used to set the maximum iteration and evaluation limits, respectively. The new limit is defined as limit = (nDim * nPop * scale) + count, where count is the number of existing iterations or evaluations, respectively.
- _Step(cost=None, ExtraArgs=None, **kwds)
perform a single optimization iteration
- Parameters:
cost (func, default=None) – objective, of form y = cost(x, *ExtraArgs), where x is a candidate solution vector
ExtraArgs (tuple, default=None) – tuple of positional arguments required to evaluate the objective
- Returns:
None
Notes
This method accepts additional kwds that are specific for the current solver, as detailed in the _process_inputs method.
- _decorate_objective(cost, ExtraArgs=None)
decorate the cost function with bounds, penalties, monitors, etc
- Parameters:
cost (func) – objective, of form y = cost(x, *ExtraArgs), where x is a candidate solution vector
ExtraArgs (tuple, default=None) – tuple of positional arguments required to evaluate the objective
- Returns:
decorated objective function
- _process_inputs(kwds)
process and activate input settings
- Parameters:
callback (func, default=None) – function to call after each iteration. The interface is callback(xk), with xk the current parameter vector.
disp (bool, default=False) – if True, print convergence messages.
EvaluationMonitor (monitor, default=None) – a monitor instance to capture each evaluation of cost.
StepMonitor (monitor, default=None) – a monitor instance to capture each iteration’s best results.
penalty (penalty, default=None) – function of the form: y' = penalty(xk), with y = cost(xk) + y' and xk the current parameter vector.
constraints (constraint, default=None) – function of the form: xk' = constraints(xk), where xk is the current parameter vector.
radius (float, default=0.05) – percentage change for initial simplex values.
adaptive (bool, default=False) – adapt algorithm parameters to the dimensionality of the initial parameter vector x.
Notes
callback and disp are ‘sticky’, in that once they are given, they remain set until they are explicitly changed. Conversely, the other inputs are not sticky, and are thus set for a one-time use.
- class PowellDirectionalSolver(dim)
Bases:
AbstractSolver
Powell Direction Search optimization, adapted from scipy.optimize.fmin_powell.
- Takes one initial input:
dim – dimensionality of the problem
- Finalize()
cleanup upon exiting the main optimization loop
- Solve(cost=None, termination=None, ExtraArgs=None, **kwds)
Minimize a function using modified Powell’s method.
Uses a modified Powell Directional Search algorithm to find the minimum of a function of one or more variables.
- Parameters:
cost (func, default=None) – the function to be minimized: y = cost(x).
termination (termination, default=None) – termination conditions.
ExtraArgs (tuple, default=None) – extra arguments for cost.
sigint_callback (func, default=None) – callback function for signal handler.
callback (func, default=None) – function to call after each iteration. The interface is callback(xk), with xk the current parameter vector.
direc (tuple, default=None) – the initial direction set.
xtol (float, default=1e-4) – line-search error tolerance.
imax (float, default=500) – line-search maximum iterations.
disp (bool, default=False) – if True, print convergence messages.
- Returns:
None
- _SetEvaluationLimits(iterscale=1000, evalscale=1000)
set the evaluation limits
- Parameters:
iterscale (int, default=1000) – scale factor for the maximum number of iterations.
evalscale (int, default=1000) – scale factor for the maximum number of function evaluations.
Notes
iterscale and evalscale are used to set the maximum iteration and evaluation limits, respectively. The new limit is defined as limit = (nDim * nPop * scale) + count, where count is the number of existing iterations or evaluations, respectively.
- _Step(cost=None, ExtraArgs=None, **kwds)
perform a single optimization iteration
- Parameters:
cost (func, default=None) – objective, of form y = cost(x, *ExtraArgs), where x is a candidate solution vector
ExtraArgs (tuple, default=None) – tuple of positional arguments required to evaluate the objective
- Returns:
None
Notes
This method accepts additional kwds that are specific for the current solver, as detailed in the _process_inputs method.
- __generations()
get the number of iterations
- _process_inputs(kwds)
process and activate input settings
- Parameters:
callback (func, default=None) – function to call after each iteration. The interface is callback(xk), with xk the current parameter vector.
disp (bool, default=False) – if True, print convergence messages.
EvaluationMonitor (monitor, default=None) – a monitor instance to capture each evaluation of cost.
StepMonitor (monitor, default=None) – a monitor instance to capture each iteration’s best results.
penalty (penalty, default=None) – function of the form: y' = penalty(xk), with y = cost(xk) + y' and xk the current parameter vector.
constraints (constraint, default=None) – function of the form: xk' = constraints(xk), where xk is the current parameter vector.
direc (tuple, default=None) – the initial direction set.
xtol (float, default=1e-4) – line-search error tolerance.
imax (float, default=500) – line-search maximum iterations.
Notes
callback and disp are ‘sticky’, in that once they are given, they remain set until they are explicitly changed. Conversely, the other inputs are not sticky, and are thus set for a one-time use.
- property generations
get the number of iterations
- fmin(cost, x0, args=(), bounds=None, xtol=0.0001, ftol=0.0001, maxiter=None, maxfun=None, full_output=0, disp=1, retall=0, callback=None, **kwds)
Minimize a function using the downhill simplex algorithm.
Uses a Nelder-Mead simplex algorithm to find the minimum of a function of one or more variables. This algorithm only uses function values, not derivatives or second derivatives. Mimics the scipy.optimize.fmin interface.
This algorithm has a long history of successful use in applications. It will usually be slower than an algorithm that uses first or second derivative information. In practice it can have poor performance in high-dimensional problems and is not robust to minimizing complicated functions. Additionally, there currently is no complete theory describing when the algorithm will successfully converge to the minimum, or how fast it will if it does. Both the ftol and xtol criteria must be met for convergence.
- Parameters:
cost (func) – the function or method to be minimized: y = cost(x).
x0 (ndarray) – the initial guess parameter vector x.
args (tuple, default=()) – extra arguments for cost.
bounds (list(tuple), default=None) – list of pairs of bounds (min,max), one for each parameter.
xtol (float, default=1e-4) – acceptable absolute error in xopt for convergence.
ftol (float, default=1e-4) – acceptable absolute error in cost(xopt) for convergence.
maxiter (int, default=None) – the maximum number of iterations to perform.
maxfun (int, default=None) – the maximum number of function evaluations.
full_output (bool, default=False) – True if fval and warnflag are desired.
disp (bool, default=True) – if True, print convergence messages.
retall (bool, default=False) – True if allvecs is desired.
callback (func, default=None) – function to call after each iteration. The interface is callback(xk), with xk the current parameter vector.
id (int, default=None) – the id of the solver used in logging.
handler (bool, default=False) – if True, enable handling interrupt signals.
itermon (monitor, default=None) – override the default GenerationMonitor.
evalmon (monitor, default=None) – override the default EvaluationMonitor.
constraints (func, default=None) – a function xk' = constraints(xk), where xk is the current parameter vector, and xk' is a parameter vector that satisfies the encoded constraints.
penalty (func, default=None) – a function y' = penalty(xk), where xk is the current parameter vector, and y' == 0 when the encoded constraints are satisfied (and y' > 0 otherwise).
tightrange (bool, default=None) – impose bounds and constraints concurrently.
cliprange (bool, default=None) – bounding constraints clip exterior values.
- Returns:
(xopt, {fopt, iter, funcalls, warnflag}, {allvecs})
Notes
xopt (ndarray): the minimizer of the cost function
fopt (float): value of cost function at minimum: fopt = cost(xopt)
iter (int): number of iterations
funcalls (int): number of function calls
- warnflag (int): warning flag
1 : Maximum number of function evaluations
2 : Maximum number of iterations
allvecs (list): a list of solutions at each iteration
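A minimal usage sketch of the interface above:
>>> from mystic.models import rosen
>>> from mystic.solvers import fmin
>>> xopt = fmin(rosen, [0.1, 0.1, 0.1], disp=0)
>>> len(xopt)
3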
- fmin_powell(cost, x0, args=(), bounds=None, xtol=0.0001, ftol=0.0001, maxiter=None, maxfun=None, full_output=0, disp=1, retall=0, callback=None, direc=None, **kwds)
Minimize a function using modified Powell’s method.
Uses a modified Powell Directional Search algorithm to find the minimum of a function of one or more variables. This method only uses function values, not derivatives. Mimics the scipy.optimize.fmin_powell interface.
Powell’s method is a conjugate direction method that has two loops. The outer loop simply iterates over the inner loop, while the inner loop minimizes over each current direction in the direction set. At the end of the inner loop, if certain conditions are met, the direction that gave the largest decrease is dropped and replaced with the difference between the current estimated x and the estimated x from the beginning of the inner loop. The conditions for replacing the direction of largest increase are that: (a) no further gain can be made along the direction of greatest increase in the iteration, and (b) the direction of greatest increase accounted for a sufficiently large fraction of the decrease in the function value from the current iteration of the inner loop.
- Parameters:
cost (func) – the function or method to be minimized: y = cost(x).
x0 (ndarray) – the initial guess parameter vector x.
args (tuple, default=()) – extra arguments for cost.
bounds (list(tuple), default=None) – list of pairs of bounds (min,max), one for each parameter.
xtol (float, default=1e-4) – acceptable relative error in xopt for convergence.
ftol (float, default=1e-4) – acceptable relative error in cost(xopt) for convergence.
gtol (int, default=2) – maximum iterations to run without improvement.
maxiter (int, default=None) – the maximum number of iterations to perform.
maxfun (int, default=None) – the maximum number of function evaluations.
full_output (bool, default=False) – True if fval and warnflag are desired.
disp (bool, default=True) – if True, print convergence messages.
retall (bool, default=False) – True if allvecs is desired.
callback (func, default=None) – function to call after each iteration. The interface is callback(xk), with xk the current parameter vector.
direc (tuple, default=None) – the initial direction set.
id (int, default=None) – the id of the solver used in logging.
handler (bool, default=False) – if True, enable handling interrupt signals.
itermon (monitor, default=None) – override the default GenerationMonitor.
evalmon (monitor, default=None) – override the default EvaluationMonitor.
constraints (func, default=None) – a function xk' = constraints(xk), where xk is the current parameter vector, and xk' is a parameter vector that satisfies the encoded constraints.
penalty (func, default=None) – a function y' = penalty(xk), where xk is the current parameter vector, and y' == 0 when the encoded constraints are satisfied (and y' > 0 otherwise).
tightrange (bool, default=None) – impose bounds and constraints concurrently.
cliprange (bool, default=None) – bounding constraints clip exterior values.
- Returns:
(xopt, {fopt, iter, funcalls, warnflag, direc}, {allvecs})
Notes
xopt (ndarray): the minimizer of the cost function
fopt (float): value of cost function at minimum: fopt = cost(xopt)
iter (int): number of iterations
funcalls (int): number of function calls
- warnflag (int): warning flag
1 : Maximum number of function evaluations
2 : Maximum number of iterations
direc (tuple): the current direction set
allvecs (list): a list of solutions at each iteration
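A minimal usage sketch, requesting the extended return values documented above:
>>> from mystic.models import rosen
>>> from mystic.solvers import fmin_powell
>>> xopt, fopt, iters, fcalls, warn, direc = fmin_powell(rosen, [0.1, 0.1, 0.1], full_output=1, disp=0)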
scripts module
functional interfaces for mystic’s visual analytics scripts
- collapse_plotter(filename, **kwds)
generate convergence plots of | cost - cost_min | from convergence logfile
Available from the command shell as:
mystic_collapse_plotter filename [options]
or as a function call:
mystic.collapse_plotter(filename, **options)
- Parameters:
filename (str) – name of the convergence logfile (e.g. paramlog.py).
- Returns:
None
Notes
The option dots takes a boolean, and will show data points in the plot.
The option linear takes a boolean, and will plot in a linear scale.
The option out takes a string of the filepath for the generated plot. If out = True, return the Figure object instead of generating a plot.
The option iter takes an integer of the largest iteration to plot.
The option legend takes a boolean, and will display the legend.
The option nid takes an integer of the nth simultaneous points to plot.
The option label takes a label string. For example, label = "y" will label the plot with a ‘y’, while label = " log-cost, $ log_{10}(\hat{P} - \hat{P}_{max})$" will label the y-axis with standard LaTeX math formatting. Note that the leading space is required, and that the text is aligned along the axis.
The option col takes a string of comma-separated integers indicating iteration numbers where parameter collapse has occurred. If a second set of integers is provided (delineated by a semicolon), the additional set of integers will be plotted with a different linestyle (to indicate a different type of collapse).
- log_converter(readpath, writepath=None, **kwds)
convert between cached archives, convergence logfiles, and support logfiles
Available from the command shell as:
mystic_log_converter readpath (writepath) [options]
or as a function call:
mystic.log_converter(readpath, writepath=None, **options)
- Parameters:
readpath (str) – name of the file to read (a cached archive or logfile).
writepath (str, default=None) – name of the converted file to write.
- Returns:
None
Notes
If writepath is None, write file with derived name to current directory.
The option format takes a string name of the file format at writepath. Available formats are (‘logfile’, ‘support’, or a klepto.archive type).
- log_reader(filename, **kwds)
generate parameter convergence plot from convergence logfile
Available from the command shell as:
mystic_log_reader filename [options]
or as a function call:
mystic.log_reader(filename, **options)
- Parameters:
filename (str) – name of the convergence logfile (e.g. log.txt).
- Returns:
None
Notes
The option out takes a string of the filepath for the generated plot. If out = True, return the Figure object instead of generating a plot.
The option dots takes a boolean, and will show data points in the plot.
The option line takes a boolean, and will connect the data with a line.
The option iter takes an integer of the largest iteration to plot.
The option legend takes a boolean, and will display the legend.
The option nid takes an integer of the nth simultaneous points to plot.
The option param takes an indicator string. The indicator string is built from comma-separated array slices. For example, param = ":" will plot all parameters. Alternatively, param = ":2, 3:" will plot all parameters except for the third parameter, while param = "0" will only plot the first parameter.
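A usage sketch (where 'log.txt' is a hypothetical logfile name):
>>> import mystic
>>> mystic.log_reader('log.txt', param=":2", dots=True)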
- model_plotter(model, logfile=None, **kwds)
generate surface contour plots for model, specified by full import path; and generate model trajectory from logfile (or solver restart file), if provided
Available from the command shell as:
mystic_model_plotter model (logfile) [options]
or as a function call:
mystic.model_plotter(model, logfile=None, **options)
- Parameters:
model (str) – full import path for the model (e.g. mystic.models.rosen).
logfile (str, default=None) – name of the convergence logfile or solver restart file.
- Returns:
None
Notes
The option out takes a string of the filepath for the generated plot. If out = True, return the Figure object instead of generating a plot.
The option bounds takes an indicator string, where bounds are given as comma-separated slices. For example, using bounds = "-1:10, 0:20" will set lower and upper bounds for x to be (-1,10) and y to be (0,20). The “step” can also be given, to control the number of lines plotted in the grid. Thus "-1:10:.1, 0:20" sets the bounds as above, but uses increments of .1 along x and the default step along y. For models > 2D, the bounds can be used to specify 2 dimensions plus fixed values for remaining dimensions. Thus, "-1:10, 0:20, 1.0" plots the 2D surface where the z-axis is fixed at z=1.0. When called from a script, slice objects can be used instead of a string, thus "-1:10:.1, 0:20, 1.0" becomes (slice(-1,10,.1), slice(20), 1.0).
The option label takes comma-separated strings. For example, label = "x,y," will place ‘x’ on the x-axis, ‘y’ on the y-axis, and nothing on the z-axis. LaTeX is also accepted. For example, label = "$ h $, $ {\alpha}$, $ v$" will label the axes with standard LaTeX math formatting. Note that the leading space is required, while a trailing space aligns the text with the axis instead of the plot frame.
The option nid takes an integer of the nth simultaneous points to plot.
The option iter takes an integer of the largest iteration to plot.
The option reduce can be given to reduce the output of a model to a scalar, thus converting model(params) to reduce(model(params)). A reducer is given by the import path (e.g. numpy.add).
The option scale will convert the plot to log-scale, and scale the cost by z=log(4*z*scale+1)+2. This is useful for visualizing small contour changes around the minimum.
If using log-scale produces negative numbers, the option shift can be used to shift the cost by z=z+shift. Both shift and scale are intended to help visualize contours.
The option depth takes a boolean, to plot contours in 3D.
The option dots takes a boolean, to show trajectory points in the plot.
The option join takes a boolean, to connect trajectory points.
The option verb takes a boolean, to print the model documentation.
The option kernel can be given to transform the input of a model from nD to 2D, where params' = model(params) with params' being 2D. A kernel is given by the import path (e.g. mymodule.kernel). Using kernel can be slow, as it may calculate the inverse transform at each point.
The option tol takes a float of max distance of dots from surface. For finer control, provide an array[float] the same length as params.
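A usage sketch, plotting the rosen model over assumed bounds (fixing the third input at 1.0):
>>> import mystic
>>> mystic.model_plotter('mystic.models.rosen', bounds="-1:5:.1, -1:5:.1, 1.0")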
search module
a global searcher
- class Searcher(npts=4, retry=1, tol=8, memtol=1, memsize=None, map=None, archive=None, cache=None, sprayer=None, seeker=None, traj=False, disp=False, repeat=0)
Bases:
object
searcher, which searches for all minima of a response surface
- Input:
npts - number of solvers in the ensemble
retry - max consecutive retries w/o a cache ‘miss’
tol - rounding precision for the minima comparator
memtol - rounding precision for memoization
memsize - maximum size of cache to hold in memory
map - map used for spawning solvers in the ensemble
archive - the sampled point archive(s)
cache - the trajectory cache(s)
sprayer - the mystic.ensemble instance
seeker - the mystic.solvers instance
traj - if True, save the parameter trajectories
disp - if True, be verbose
repeat - number of times to repeat the search
- Coordinates(unique=False, all=False)
return the sequence of stored model parameter input values
- Input:
unique: if True, only return unique values
all: if True, return all sampled inputs (not just trajectory inputs)
- Returns:
a list of parameter trajectories
- Minima(tol=None)
return a dict of (coordinates,values) of all discovered minima
- Input:
tol: tolerance within which to consider a point a minimum
- Returns:
a dict of (coordinates,values) of all discovered minima
- Reset(cache=None, inv=None)
clear the trajectory cache of sampled points
- Input:
cache - the trajectory cache(s)
inv - if True, reset the cache for the inverse of the objective
- Samples(all=False)
return array of (coordinates, cost) for all trajectories
- Search(model, bounds, stop=None, traj=None, disp=None, **kwds)
use an ensemble of optimizers to search for all minima
- Input:
model - function z=f(x) to be used as the objective of the Searcher
bounds - tuple of floats (min,max), bounds on the search region
stop - termination condition
traj - klepto.archive to store sampled points
disp - if True, be verbose
monitor - mystic.monitor instance to store parameter trajectories
evalmon - mystic.monitor instance to store parameter evaluations
penalty - mystic.penalty instance of the form y' = k*p(x)
constraints - mystic.constraints instance of the form x' = c(x)
tightrange - if True, apply bounds concurrent with other constraints
cliprange - if True, bounding constraints will clip exterior values
- Trajectories(all=False)
return tuple (iteration, coordinates, cost) of all trajectories
- UseTrajectories(traj=True)
save all sprayers, thus save all trajectories
- Values(unique=False, all=False)
return the sequence of stored response surface outputs
- Input:
unique: if True, only return unique values
all: if True, return all sampled values (not just trajectory values)
- Returns:
a list of stored response surface outputs
- Verbose(disp=True)
be verbose
- _configure(model, bounds, stop=None, **kwds)
configure ensemble solver from objective and other solver options
- _memoize(solver, tol=1, all=False, size=None)
apply caching to ensemble solver instance
- _print(solver, tol=8)
print bestSolution and bestEnergy for each sprayer
- _search(sid)
run the solver, store the trajectory, and cache to the archive
- _solve(id=None, disp=None)
run the solver (i.e. search for the minima)
- _summarize()
provide a summary of the search results
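A usage sketch (assuming Searcher is importable from mystic.search, with a termination condition taken from mystic.termination):
>>> from mystic.search import Searcher
>>> from mystic.models import rosen
>>> from mystic.termination import NormalizedChangeOverGeneration as NCOG
>>> searcher = Searcher(npts=4, retry=1)
>>> searcher.Search(rosen, [(0., 2.)]*3, stop=NCOG(1e-4))
>>> minima = searcher.Minima()  # dict of (coordinates,values) of discovered minima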
solvers module
solvers: minimal and expanded interfaces for optimization algorithms
Standard Interface
All of mystic’s optimizers derive from the solver API, which provides
each optimizer with a standard, but highly-customizable interface.
A description of the solver API is found in mystic.models.abstract_model, and in each derived optimizer. Mystic’s optimizers are:
** Global Optimizers **
DifferentialEvolutionSolver -- Differential Evolution algorithm
DifferentialEvolutionSolver2 -- Price & Storn's Differential Evolution
** Pseudo-Global Optimizers **
SparsitySolver -- N Solvers sampled where point density is low
BuckshotSolver -- Uniform Random Distribution of N Solvers
LatticeSolver -- Distribution of N Solvers on a Regular Grid
** Local-Search Optimizers **
NelderMeadSimplexSolver -- Nelder-Mead Simplex algorithm
PowellDirectionalSolver -- Powell's (modified) Level Set algorithm
Minimal Interface
Most of mystic’s optimizers can be called from a minimal (i.e. one-line) interface. The collection of arguments is often unique to the optimizer, and if the underlying solver derives from a third-party package, the original interface is reproduced. Minimal interfaces to these optimizers are provided:
** Global Optimizers **
diffev -- DifferentialEvolutionSolver
diffev2 -- DifferentialEvolutionSolver2
** Pseudo-Global Optimizers **
sparsity -- SparsitySolver
buckshot -- BuckshotSolver
lattice -- LatticeSolver
** Local-Search Optimizers **
fmin -- NelderMeadSimplexSolver
fmin_powell -- PowellDirectionalSolver
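For example, a one-line sketch using the minimal ensemble interface (the argument names follow the ensemble solver defaults, and should be treated as assumptions):
>>> from mystic.models import rosen
>>> from mystic.solvers import buckshot
>>> # npts solvers launched from uniform randomly sampled points within the bounds
>>> xopt = buckshot(rosen, 3, npts=8, bounds=[(0., 2.)]*3, disp=0)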
More Information
- For more information, please see the solver documentation found here:
mystic.differential_evolution [differential evolution solvers]
mystic.scipy_optimize [scipy local-search solvers]
mystic.ensemble [pseudo-global solvers]
- or the API documentation found here:
mystic.abstract_solver [the solver API definition]
mystic.abstract_map_solver [the parallel solver API]
mystic.abstract_ensemble_solver [the ensemble solver API]
- class BuckshotSolver(dim, npts=8)
Bases:
AbstractEnsembleSolver
parallel mapped optimization starting from N uniform randomly sampled points
- Takes two initial inputs:
dim – dimensionality of the problem
npts – number of parallel solver instances
All important class members are inherited from AbstractEnsembleSolver.
- _InitialPoints()
Generate a grid of starting points for the ensemble of optimizers
- class DifferentialEvolutionSolver(dim, NP=4)
Bases:
AbstractSolver
Differential Evolution optimization.
- Takes two initial inputs:
dim – dimensionality of the problem
NP – size of the trial solution population. [requires: NP >= 4]
All important class members are inherited from AbstractSolver.
- SetConstraints(constraints)
apply a constraints function to the optimization
- Parameters:
constraints (function) – function of the form: xk' = constraints(xk), where xk is the current parameter vector. Ideally, this function is constructed so the parameter vector it passes to the cost function will satisfy the desired (i.e. encoded) constraints.
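For illustration, a hypothetical constraints function of the documented form, which projects each parameter onto the nonnegative orthant:
>>> def constraints(xk):
...     return [max(xi, 0.0) for xi in xk]
...
>>> solver.SetConstraints(constraints)  # given a DifferentialEvolutionSolver instance 'solver'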
- Solve(cost=None, termination=None, ExtraArgs=None, **kwds)
Minimize a function using differential evolution.
Uses a differential evolution algorithm to find the minimum of a function of one or more variables.
- Parameters:
cost (function, default=None) – function to be minimized: y = cost(x).
termination (termination, default=None) – termination conditions.
ExtraArgs (tuple, default=None) – extra arguments for cost.
strategy (strategy, default=Best1Bin) – the mutation strategy for generating new trial solutions.
CrossProbability (float, default=0.9) – the probability of cross-parameter mutations.
ScalingFactor (float, default=0.8) – multiplier for mutations on the trial solution.
sigint_callback (function, default=None) – signal handler callback function.
callback (function, default=None) – function called after each iteration. The interface is callback(xk), with xk the current parameter vector.
disp (bool, default=False) – if True, print convergence messages.
- Returns:
None
- UpdateGenealogyRecords(id, newchild)
create an in-memory log of the genealogy of the population
- _Step(cost=None, ExtraArgs=None, **kwds)
perform a single optimization iteration
- Parameters:
cost (func, default=None) – objective, of form y = cost(x, *ExtraArgs), where x is a candidate solution vector
ExtraArgs (tuple, default=None) – tuple of positional arguments required to evaluate the objective
- Returns:
None
Notes
This method accepts additional kwds that are specific for the current solver, as detailed in the _process_inputs method.
- _decorate_objective(cost, ExtraArgs=None)
decorate the cost function with bounds, penalties, monitors, etc
- Parameters:
cost (func) – objective, of form y = cost(x, *ExtraArgs), where x is a candidate solution vector
ExtraArgs (tuple, default=None) – tuple of positional arguments required to evaluate the objective
- Returns:
decorated objective function
- _process_inputs(kwds)
process and activate input settings
- Parameters:
callback (function, default=None) – function called after each iteration. The interface is callback(xk), with xk the current parameter vector.
disp (bool, default=False) – if True, print convergence messages.
EvaluationMonitor (monitor, default=None) – a monitor instance to capture each evaluation of cost.
StepMonitor (monitor, default=None) – a monitor instance to capture each iteration’s best results.
penalty (penalty, default=None) – function of the form: y' = penalty(xk), with y = cost(xk) + y' and xk the current parameter vector.
constraints (constraint, default=None) – function of the form: xk' = constraints(xk), where xk is the current parameter vector.
strategy (strategy, default=Best1Bin) – the mutation strategy for generating new trial solutions.
CrossProbability (float, default=0.9) – the probability of cross-parameter mutations.
ScalingFactor (float, default=0.8) – multiplier for mutations on the trial solution.
Notes
callback and disp are ‘sticky’, in that once they are given, they remain set until they are explicitly changed. Conversely, the other inputs are not sticky, and are thus set for a one-time use.
- class DifferentialEvolutionSolver2(dim, NP=4)
Bases:
AbstractMapSolver
Differential Evolution optimization, using Storn and Price’s algorithm.
- Alternate implementation:
utilizes a map-reduce interface, extensible to parallel computing
both a current and a next generation are kept, while the current generation is invariant during the main DE logic
- Parameters:
dim – dimensionality of the problem
NP – size of the trial solution population. [requires: NP >= 4]
- SetConstraints(constraints)
apply a constraints function to the optimization
- Parameters:
constraints (function) – function of the form: xk' = constraints(xk), where xk is the current parameter vector. Ideally, this function is constructed so the parameter vector it passes to the cost function will satisfy the desired (i.e. encoded) constraints.
- Solve(cost=None, termination=None, ExtraArgs=None, **kwds)
Minimize a function using differential evolution.
Uses a differential evolution algorithm to find the minimum of a function of one or more variables. This implementation holds the current generation invariant until the end of each iteration.
- Parameters:
cost (function, default=None) – function to be minimized:
y = cost(x)
.termination (termination, default=None) – termination conditions.
ExtraArgs (tuple, default=None) – extra arguments for cost.
strategy (strategy, default=Best1Bin) – the mutation strategy for generating new trial solutions.
CrossProbability (float, default=0.9) – the probability of cross-parameter mutations.
ScalingFactor (float, default=0.8) – multiplier for mutations on the trial solution.
sigint_callback (function, default=None) – signal handler callback function.
callback (function, default=None) – function called after each iteration. The interface is
callback(xk)
, withxk
the current parameter vector.disp (bool, default=False) – if True, print convergence messages.
- Returns:
None
- UpdateGenealogyRecords(id, newchild)
create an in-memory log of the genealogy of the population
- _Step(cost=None, ExtraArgs=None, **kwds)
perform a single optimization iteration
- Parameters:
cost (func, default=None) – objective, of form y = cost(x, *ExtraArgs), where x is a candidate solution vector
ExtraArgs (tuple, default=None) – tuple of positional arguments required to evaluate the objective
- Returns:
None
Notes
This method accepts additional kwds that are specific for the current solver, as detailed in the _process_inputs method.
- _decorate_objective(cost, ExtraArgs=None)
decorate the cost function with bounds, penalties, monitors, etc
- Parameters:
cost (func) – objective, of form y = cost(x, *ExtraArgs), where x is a candidate solution vector
ExtraArgs (tuple, default=None) – tuple of positional arguments required to evaluate the objective
- Returns:
decorated objective function
- _process_inputs(kwds)
process and activate input settings
- Parameters:
callback (function, default=None) – function called after each iteration. The interface is callback(xk), with xk the current parameter vector.
disp (bool, default=False) – if True, print convergence messages.
EvaluationMonitor (monitor, default=None) – a monitor instance to capture each evaluation of cost.
StepMonitor (monitor, default=None) – a monitor instance to capture each iteration’s best results.
penalty (penalty, default=None) – function of the form: y' = penalty(xk), with y = cost(xk) + y' and xk the current parameter vector.
constraints (constraint, default=None) – function of the form: xk' = constraints(xk), where xk is the current parameter vector.
strategy (strategy, default=Best1Bin) – the mutation strategy for generating new trial solutions.
CrossProbability (float, default=0.9) – the probability of cross-parameter mutations.
ScalingFactor (float, default=0.8) – multiplier for mutations on the trial solution.
Notes
callback and disp are ‘sticky’, in that once they are given, they remain set until they are explicitly changed. Conversely, the other inputs are not sticky, and are thus set for a one-time use.
- class LatticeSolver(dim, nbins=8)
Bases:
AbstractEnsembleSolver
parallel mapped optimization starting from the centers of N grid points
- Takes two initial inputs:
dim – dimensionality of the problem
nbins – tuple of number of bins in each dimension
All important class members are inherited from AbstractEnsembleSolver.
- _InitialPoints()
Generate a grid of starting points for the ensemble of optimizers
- LoadSolver(filename=None, **kwds)
load solver state from a restart file
- class NelderMeadSimplexSolver(dim)
Bases:
AbstractSolver
Nelder Mead Simplex optimization adapted from scipy.optimize.fmin.
- Takes one initial input:
dim – dimensionality of the problem
The size of the simplex is dim+1.
- Solve(cost=None, termination=None, ExtraArgs=None, **kwds)