mystic module documentation¶

abstract_ensemble_solver module¶

This module contains the base class for launching several mystic solver instances, utilizing a parallel map function to enable parallel computing. This module describes the ensemble solver interface. As with the AbstractSolver, the _Step method must be overwritten with the derived solver’s optimization algorithm. Similar to AbstractMapSolver, a call to map is required. In addition to the class interface, a simple function interface for a derived solver class is often provided. For an example, see the following.

The default map API settings are provided within mystic, while distributed and parallel computing maps can be obtained from the pathos package (http://dev.danse.us/trac/pathos).

Examples

A typical call to an ‘ensemble’ solver will roughly follow this example:

>>> # the function to be minimized and the initial values
>>> from mystic.models import rosen
>>> lb = [0.0, 0.0, 0.0]
>>> ub = [2.0, 2.0, 2.0]
>>>
>>> # get monitors and termination condition objects
>>> from mystic.monitors import Monitor
>>> stepmon = Monitor()
>>> from mystic.termination import CandidateRelativeTolerance as CRT
>>>
>>> # select the parallel launch configuration
>>> from pyina.launchers import Mpi as Pool
>>> NNODES = 4
>>> nbins = [4,4,4]
>>>
>>> # instantiate and configure the solver
>>> from mystic.solvers import LatticeSolver
>>> solver = LatticeSolver(len(nbins), nbins)
>>> solver.SetStrictRanges(lb, ub)
>>> solver.SetMapper(Pool(NNODES).map)
>>> solver.SetGenerationMonitor(stepmon)
>>> solver.SetTermination(CRT())
>>> solver.Solve(rosen)
>>>
>>> # obtain the solution
>>> solution = solver.Solution()


Handler¶

All solvers packaged with mystic include a signal handler that provides the following options:

sol: Print current best solution.
cont: Continue calculation.
call: Executes sigint_callback, if provided.
exit: Exits with current best solution.


Handlers are enabled with the enable_signal_handler method, and are configured through the solver’s Solve method. Handlers trigger when a signal interrupt (usually, Ctrl-C) is given while the solver is running.

Notes

The handler is currently disabled when the solver is run in parallel.
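The option dispatch can be sketched in plain Python. This is an illustrative stand-in for the handler described above, not mystic’s implementation; get_best and sigint_callback are hypothetical names for the solver’s best-solution accessor and the user callback.

```python
import signal

def dispatch(choice, get_best, sigint_callback=None):
    """Route one of the four handler options (illustrative, not mystic's code).

    get_best: zero-arg function returning the current best solution
    sigint_callback: optional user callback (the 'call' option)
    """
    if choice == 'sol':                  # print current best solution
        print(get_best())
        return 'printed'
    if choice == 'cont':                 # continue the calculation
        return 'continue'
    if choice == 'call':                 # execute sigint_callback, if provided
        if sigint_callback is not None:
            sigint_callback(get_best())
        return 'called'
    if choice == 'exit':                 # exit with current best solution
        raise SystemExit(get_best())
    return 'continue'                    # unrecognized input: keep running

def make_handler(get_best, sigint_callback=None):
    """Build a SIGINT handler that prompts for one of the options."""
    def handler(signum, frame):
        choice = input("enter 'sol', 'cont', 'call', or 'exit': ").strip()
        return dispatch(choice, get_best, sigint_callback)
    return handler

# enabling (analogous to enable_signal_handler):
# signal.signal(signal.SIGINT, make_handler(lambda: [1.0, 1.0, 1.0]))
```

Registering the handler replaces the default KeyboardInterrupt behavior while the solver runs, which is why the 'cont' option can resume the calculation.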

class AbstractEnsembleSolver(dim, **kwds)

AbstractEnsembleSolver base class for mystic optimizers that are called within a parallel map. This allows pseudo-global coverage of parameter space using non-global optimizers.

Takes one initial input:

dim      -- dimensionality of the problem.


Additional inputs:

npop     -- size of the trial solution population.      [default = 1]
nbins    -- tuple of number of bins in each dimension.  [default = [1]*dim]
npts     -- number of solver instances.                 [default = 1]
rtol     -- size of radial tolerance for sparsity.      [default = None]


Important class members:

nDim, nPop       = dim, npop
generations      - an iteration counter.
evaluations      - an evaluation counter.
bestEnergy       - current best energy.
bestSolution     - current best parameter set.           [size = dim]
popEnergy        - set of all trial energy solutions.    [size = npop]
population       - set of all trial parameter solutions. [size = dim*npop]
solution_history - history of bestSolution status.       [StepMonitor.x]
energy_history   - history of bestEnergy status.         [StepMonitor.y]
signal_handler   - catches the interrupt signal.         [***disabled***]

Collapse(disp=False)
if solver has terminated by collapse, apply the collapse

(unless both collapse and “stop” are simultaneously satisfied)

input::
• disp: if True, print details about the solver state at collapse

note::

updates the solver’s termination conditions and constraints

Collapsed(disp=False, info=False, all=None)

check if the solver meets the given collapse conditions

Input::
• disp = if True, print details about the solver state at collapse

• info = if True, return collapsed state (instead of boolean)

• all = if True, get results for all solvers; if False, only check ‘best’

SetDistribution(dist=None)

Set the distribution used for determining solver starting points

Inputs:
• dist: a mystic.math.Distribution instance

SetInitialPoints(x0, radius=0.05)

Set Initial Points with Guess (x0)

input::
• x0: must be a sequence of length self.nDim

• radius: generate random points within [-radius*x0, radius*x0] for i!=0 when a simplex-type initial guess is required

* this method must be overwritten *

SetMultinormalInitialPoints(mean, var=None)

Generate Initial Points from Multivariate Normal.

input::
• mean must be a sequence of length self.nDim

• var can be…

None -> it becomes the identity
scalar -> var becomes scalar * I
matrix -> the variance matrix. must be the right size!

* this method must be overwritten *
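The three accepted forms of var can be sketched with numpy (assumes numpy is available; multinormal_points is an illustrative helper, not the mystic implementation):

```python
import numpy as np

def multinormal_points(mean, var=None, npop=4):
    """Draw npop initial points from a multivariate normal (illustrative).

    var=None   -> identity covariance
    var=scalar -> scalar * I covariance
    var=matrix -> used directly as the covariance matrix (must be dim x dim)
    """
    mean = np.asarray(mean, dtype=float)
    dim = len(mean)
    if var is None:                      # None -> identity
        cov = np.eye(dim)
    elif np.ndim(var) == 0:              # scalar -> scalar * I
        cov = float(var) * np.eye(dim)
    else:                                # matrix -> must be the right size!
        cov = np.asarray(var, dtype=float)
        assert cov.shape == (dim, dim), "var must be the right size!"
    return np.random.multivariate_normal(mean, cov, size=npop)
```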

SetNestedSolver(solver)

set the nested solver

input::
• solver: a mystic solver instance (e.g. NelderMeadSimplexSolver(3) )

SetRandomInitialPoints(min=None, max=None)

Generate Random Initial Points within given Bounds

input::
• min, max: must be a sequence of length self.nDim

• each min[i] should be <= the corresponding max[i]

* this method must be overwritten *
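A minimal sketch of the documented behavior using only the stdlib (random_points is a hypothetical helper, not the mystic implementation):

```python
import random

def random_points(min, max, npop=4):
    """Sample npop points uniformly within the given bounds (illustrative).

    min, max: sequences of length nDim, with each min[i] <= max[i]
    """
    assert len(min) == len(max)
    assert all(lo <= hi for lo, hi in zip(min, max))
    # one uniform draw per dimension, per population member
    return [[random.uniform(lo, hi) for lo, hi in zip(min, max)]
            for _ in range(npop)]
```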

SetSampledInitialPoints(dist=None)

Generate Random Initial Points from Distribution (dist)

input::
• dist: a mystic.math.Distribution instance

* this method must be overwritten *

Terminated(disp=False, info=False, termination=None, all=None)

check if the solver meets the given termination conditions

Input::
• disp = if True, print termination statistics and/or warnings

• info = if True, return termination message (instead of boolean)

• termination = termination conditions to check against

• all = if True, get results for all solvers; if False, only check ‘best’

Notes::

If no termination conditions are given, the solver’s stored termination conditions will be used.

_InitialPoints()

Generate a grid of starting points for the ensemble of optimizers

* this method must be overwritten *
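For a lattice-style ensemble such as the LatticeSolver shown earlier, the grid is conceptually one starting point at the center of each of nbins[i] bins per dimension. A sketch of that idea (lattice_points is a hypothetical helper; the actual _InitialPoints is solver-specific):

```python
from itertools import product

def lattice_points(lb, ub, nbins):
    """Return the centers of a lattice of bins spanning [lb, ub].

    One starting point per grid cell, so len(result) is the product of nbins.
    """
    axes = []
    for lo, hi, n in zip(lb, ub, nbins):
        width = (hi - lo) / n
        # bin centers along this dimension: lo + width/2, lo + 3*width/2, ...
        axes.append([lo + width * (i + 0.5) for i in range(n)])
    return [list(pt) for pt in product(*axes)]
```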

_Solve(cost, ExtraArgs, **settings)

Run the optimizer to termination, using the given settings.

Parameters
• cost (func) – the function to be minimized: y = cost(x).

• ExtraArgs (tuple) – tuple of extra arguments for cost.

• settings (dict) – optimizer settings (produced by _process_inputs)

Returns

None

Note

Optimizer settings for ensemble solvers include the step keyword, which enables the Step method within the ensemble. By default, ensemble solvers run to completion (i.e. Solve) for efficiency. Using Step enables the ensemble to communicate state between members of the ensemble at each iteration.

_Step(cost=None, ExtraArgs=None, **kwds)

perform a single optimization iteration

input::
• cost is the objective function, of the form y = cost(x, *ExtraArgs), where x is a candidate solution, and ExtraArgs is the tuple of positional arguments required to evaluate the objective.

note::

ExtraArgs needs to be a tuple of extra arguments.

This method accepts additional args that are specific for the current solver, as detailed in the _process_inputs method.

__all_bestEnergy()

get bestEnergy from all solvers

__all_bestSolution()

get bestSolution from all solvers

__all_evals()

count of all function calls

__all_iters()

count of all iterations

__get_solver_instance(reset=False)

ensure the solver is a solver instance

input::
• reset: if reset is True, reset the monitor instances

__init__(dim, **kwds)

Takes one initial input:

dim      -- dimensionality of the problem.


Additional inputs:

npop     -- size of the trial solution population.      [default = 1]
nbins    -- tuple of number of bins in each dimension.  [default = [1]*dim]
npts     -- number of solver instances.                 [default = 1]
rtol     -- size of radial tolerance for sparsity.      [default = None]


Important class members:

nDim, nPop       = dim, npop
generations      - an iteration counter.
evaluations      - an evaluation counter.
bestEnergy       - current best energy.
bestSolution     - current best parameter set.           [size = dim]
popEnergy        - set of all trial energy solutions.    [size = npop]
population       - set of all trial parameter solutions. [size = dim*npop]
solution_history - history of bestSolution status.       [StepMonitor.x]
energy_history   - history of bestEnergy status.         [StepMonitor.y]
signal_handler   - catches the interrupt signal.         [***disabled***]

__init_allSolvers(reset=False)

populate NestedSolver state to allSolvers

input::
• reset: if reset is True, reset the monitor instances

__module__ = 'mystic.abstract_ensemble_solver'
__total_evals()

total number of function calls

__total_iters()

total number of iterations

__update_allSolvers(results)

replace allSolvers with solvers found in results

__update_bestSolver()

update _bestSolver from _allSolvers

__update_state()

update solver state from _bestSolver

property _all_bestEnergy

get bestEnergy from all solvers

property _all_bestSolution

get bestSolution from all solvers

property _all_evals

count of all function calls

property _all_iters

count of all iterations

_is_best()

get the id of the bestSolver

_is_new()

determine if solver has been run or not

_process_inputs(kwds)

process and activate input settings

Parameters
• callback (func, default=None) – function to call after each iteration. The interface is callback(xk), with xk the current parameter vector.

• disp (bool, default=False) – if True, print convergence messages.

• EvaluationMonitor – a monitor instance to capture each evaluation of cost.

• StepMonitor – a monitor instance to capture each iteration’s best results.

• penalty – a function of the form: y’ = penalty(xk), with y = cost(xk) + y’, where xk is the current parameter vector.

• constraints – a function of the form: xk’ = constraints(xk), where xk is the current parameter vector.

• step (bool, default=False) – if True, enable Step within the ensemble.

Note

Most of the additional args are ‘sticky’, in that once they are given, they remain set until they are explicitly changed. The args that are not sticky are set for one-time use only.
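The penalty and constraints forms documented above compose with cost roughly as follows. This is a sketch of the documented call forms only (wrap_cost is a hypothetical helper; where exactly mystic applies the penalty relative to the constraints may differ):

```python
def wrap_cost(cost, constraints=None, penalty=None):
    """Wrap cost so constraints and penalty are applied per the documented forms."""
    def wrapped(xk):
        if constraints is not None:
            xk = constraints(xk)         # xk' = constraints(xk)
        y = cost(xk)
        if penalty is not None:
            y += penalty(xk)             # y = cost(xk) + y'
        return y
    return wrapped
```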

property _total_evals

total number of function calls

property _total_iters

total number of iterations

abstract_launcher module¶

This module contains the base classes for pathos pool and pipe objects, and describes the map and pipe interfaces. A pipe is defined as a connection between two ‘nodes’, where a node is something that does work. A pipe may be a one-way or two-way connection. A map is defined as a one-to-many connection between nodes. In both map and pipe connections, results from the connected nodes can be returned to the calling node. There are several variants of pipe and map, such as whether the connection is blocking, or ordered, or asynchronous. For pipes, derived methods must overwrite the ‘pipe’ method, while maps must overwrite the ‘map’ method. Pipes and maps are available from worker pool objects, where the work is done by any of the workers in the pool. For more specific point-to-point connections (such as a pipe between two specific compute nodes), use the pipe object directly.

Usage¶

A typical call to a pathos map will roughly follow this example:

>>> # instantiate and configure the worker pool
>>> from pathos.pools import ProcessPool
>>> pool = ProcessPool(nodes=4)
>>>
>>> # do a blocking map on the chosen function
>>> results = pool.map(pow, [1,2,3,4], [5,6,7,8])
>>>
>>> # do a non-blocking map, then extract the results from the iterator
>>> results = pool.imap(pow, [1,2,3,4], [5,6,7,8])
>>> print("...")
>>> results = list(results)
>>>
>>> # do an asynchronous map, then get the results
>>> results = pool.amap(pow, [1,2,3,4], [5,6,7,8])
>>> while not results.ready():
...     time.sleep(5); print(".", end=' ')
...
>>> results = results.get()


Notes

Each of the pathos worker pools rely on a different transport protocol (e.g. threads, multiprocessing, etc), where the use of each pool comes with a few caveats. See the usage documentation and examples for each worker pool for more information.

class AbstractPipeConnection(*args, **kwds)

Bases: object

AbstractPipeConnection base class for pathos pipes.

Required input:

???

???

Important class members:

???

Other class members:

???

__dict__ = mappingproxy({'__module__': 'mystic.abstract_launcher', '__doc__': '\nAbstractPipeConnection base class for pathos pipes.\n    ', '__init__': <function AbstractPipeConnection.__init__>, '__repr__': <function AbstractPipeConnection.__repr__>, '__dict__': <attribute '__dict__' of 'AbstractPipeConnection' objects>, '__weakref__': <attribute '__weakref__' of 'AbstractPipeConnection' objects>, '__annotations__': {}})
__init__(*args, **kwds)
Required input:

???

???

Important class members:

???

Other class members:

???

__module__ = 'mystic.abstract_launcher'
__repr__()

Return repr(self).

__weakref__

list of weak references to the object (if defined)

class AbstractWorkerPool(*args, **kwds)

Bases: object

AbstractWorkerPool base class for pathos pools.

Important class members:

nodes     - number (and potentially description) of workers
ncpus     - number of worker processors
servers   - list of worker servers
scheduler - the associated scheduler
workdir   - associated $WORKDIR for scratch calculations/files

Other class members:

scatter - True, if uses ‘scatter-gather’ (instead of ‘worker-pool’)
source  - False, if minimal use of TemporaryFiles is desired
timeout - number of seconds to wait for return value from scheduler

__dict__ = mappingproxy({'__module__': 'mystic.abstract_launcher', '__doc__': '\nAbstractWorkerPool base class for pathos pools.\n ', '_AbstractWorkerPool__nodes': 1, '__init__': <function AbstractWorkerPool.__init__>, '__enter__': <function AbstractWorkerPool.__enter__>, '__exit__': <function AbstractWorkerPool.__exit__>, '_AbstractWorkerPool__init': <function AbstractWorkerPool.__init>, '_AbstractWorkerPool__map': <function AbstractWorkerPool.__map>, '_AbstractWorkerPool__imap': <function AbstractWorkerPool.__imap>, '_AbstractWorkerPool__pipe': <function AbstractWorkerPool.__pipe>, '_serve': <function AbstractWorkerPool._serve>, 'clear': <function AbstractWorkerPool.clear>, 'map': <function AbstractWorkerPool.map>, 'imap': <function AbstractWorkerPool.imap>, 'uimap': <function AbstractWorkerPool.uimap>, 'amap': <function AbstractWorkerPool.amap>, 'pipe': <function AbstractWorkerPool.pipe>, 'apipe': <function AbstractWorkerPool.apipe>, '__repr__': <function AbstractWorkerPool.__repr__>, '_AbstractWorkerPool__get_nodes': <function AbstractWorkerPool.__get_nodes>, '_AbstractWorkerPool__set_nodes': <function AbstractWorkerPool.__set_nodes>, '__dict__': <attribute '__dict__' of 'AbstractWorkerPool' objects>, '__weakref__': <attribute '__weakref__' of 'AbstractWorkerPool' objects>, '__annotations__': {}})

__enter__()

__exit__(*args)

__get_nodes()

get the number of nodes in the pool

__imap(f, *args, **kwds)

default filter for imap inputs

__init(*args, **kwds)

default filter for __init__ inputs

__init__(*args, **kwds)

Important class members:

nodes     - number (and potentially description) of workers
ncpus     - number of worker processors
servers   - list of worker servers
scheduler - the associated scheduler
workdir   - associated $WORKDIR for scratch calculations/files

Other class members:

scatter - True, if uses ‘scatter-gather’ (instead of ‘worker-pool’)
source  - False, if minimal use of TemporaryFiles is desired
timeout - number of seconds to wait for return value from scheduler

__map(f, *args, **kwds)

default filter for map inputs

__module__ = 'mystic.abstract_launcher'
__nodes = 1
__pipe(f, *args, **kwds)

default filter for pipe inputs

__repr__()

Return repr(self).

__set_nodes(nodes)

set the number of nodes in the pool

__weakref__

list of weak references to the object (if defined)

_serve(*args, **kwds)

Create a new server if one isn’t already initialized

amap(f, *args, **kwds)

run a batch of jobs with an asynchronous map

Returns a results object which contains the results of applying the function f to the items of the argument sequence(s). If more than one sequence is given, the function is called with an argument list consisting of the corresponding item of each sequence. To retrieve the results, call the get() method on the returned results object. The call to get() is blocking, until all results are retrieved. Use the ready() method on the result object to check if all results are ready.

apipe(f, *args, **kwds)

submit a job asynchronously to a queue

Returns a results object which contains the result of calling the function f on a selected worker. To retrieve the results, call the get() method on the returned results object. The call to get() is blocking, until the result is available. Use the ready() method on the results object to check if the result is ready.

clear()

Remove server with matching state

imap(f, *args, **kwds)

run a batch of jobs with a non-blocking and ordered map

Returns a list iterator of results of applying the function f to the items of the argument sequence(s). If more than one sequence is given, the function is called with an argument list consisting of the corresponding item of each sequence.

map(f, *args, **kwds)

run a batch of jobs with a blocking and ordered map

Returns a list of results of applying the function f to the items of the argument sequence(s). If more than one sequence is given, the function is called with an argument list consisting of the corresponding item of each sequence.

pipe(f, *args, **kwds)

submit a job and block until results are available

Returns result of calling the function f on a selected worker. This function will block until results are available.

uimap(f, *args, **kwds)

run a batch of jobs with a non-blocking and unordered map

Returns a list iterator of results of applying the function f to the items of the argument sequence(s). If more than one sequence is given, the function is called with an argument list consisting of the corresponding item of each sequence. The order of the resulting sequence is not guaranteed.
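The four map variants have close stdlib analogues in multiprocessing, shown here with a thread-backed pool to keep the sketch self-contained (pathos pools behave similarly but are not required for this example; the comments map each call back to the pool methods documented above):

```python
import time
from multiprocessing.dummy import Pool as ThreadPool  # thread-backed stdlib pool

pool = ThreadPool(4)
xs = [1, 2, 3, 4]
square = lambda x: x * x

# blocking, ordered map (cf. pool.map)
blocking = pool.map(square, xs)

# non-blocking, ordered map: returns an iterator (cf. pool.imap)
ordered = list(pool.imap(square, xs))

# non-blocking, unordered map: results arrive as they finish (cf. pool.uimap)
unordered = sorted(pool.imap_unordered(square, xs))

# asynchronous map: a results object with ready()/get() (cf. pool.amap)
result = pool.map_async(square, xs)
while not result.ready():
    time.sleep(0.01)
asynchronous = result.get()

pool.close(); pool.join()
```

Note that pathos maps additionally accept multiple argument sequences (as in pool.map(pow, [1,2,3,4], [5,6,7,8])), which the stdlib pool only supports via starmap.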

abstract_map_solver module¶

This module contains the base class for mystic solvers that utilize a parallel map function to enable parallel computing. This module describes the map solver interface. As with the AbstractSolver, the _Step method must be overwritten with the derived solver’s optimization algorithm. Additionally, for the AbstractMapSolver, a call to map is required. In addition to the class interface, a simple function interface for a derived solver class is often provided. For an example, see the following.

The default map API settings are provided within mystic, while distributed and parallel computing maps can be obtained from the pathos package (http://dev.danse.us/trac/pathos).

Examples

A typical call to a ‘map’ solver will roughly follow this example:

>>> # the function to be minimized and the initial values
>>> from mystic.models import rosen
>>> lb = [0.0, 0.0, 0.0]
>>> ub = [2.0, 2.0, 2.0]
>>>
>>> # get monitors and termination condition objects
>>> from mystic.monitors import Monitor
>>> stepmon = Monitor()
>>> from mystic.termination import CandidateRelativeTolerance as CRT
>>>
>>> # select the parallel launch configuration
>>> from pyina.launchers import Mpi as Pool
>>> NNODES = 4
>>> npts = 20
>>>
>>> # instantiate and configure the solver
>>> from mystic.solvers import BuckshotSolver
>>> solver = BuckshotSolver(len(lb), npts)
>>> solver.SetMapper(Pool(NNODES).map)
>>> solver.SetGenerationMonitor(stepmon)
>>> solver.SetTermination(CRT())
>>> solver.Solve(rosen)
>>>
>>> # obtain the solution
>>> solution = solver.Solution()


Handler¶

All solvers packaged with mystic include a signal handler that provides the following options:

sol: Print current best solution.
cont: Continue calculation.
call: Executes sigint_callback, if provided.
exit: Exits with current best solution.


Handlers are enabled with the enable_signal_handler method, and are configured through the solver’s Solve method. Handlers trigger when a signal interrupt (usually, Ctrl-C) is given while the solver is running.

Notes

The handler is currently disabled when the solver is run in parallel.

class AbstractMapSolver(dim, **kwds)

AbstractMapSolver base class for mystic optimizers that utilize parallel map.

Takes one initial input:

dim      -- dimensionality of the problem.


Additional inputs:

npop     -- size of the trial solution population.       [default = 1]


Important class members:

nDim, nPop       = dim, npop
generations      - an iteration counter.
evaluations      - an evaluation counter.
bestEnergy       - current best energy.
bestSolution     - current best parameter set.           [size = dim]
popEnergy        - set of all trial energy solutions.    [size = npop]
population       - set of all trial parameter solutions. [size = dim*npop]
solution_history - history of bestSolution status.       [StepMonitor.x]
energy_history   - history of bestEnergy status.         [StepMonitor.y]
signal_handler   - catches the interrupt signal.         [***disabled***]

SelectScheduler(scheduler, queue, timelimit=None)

Select the scheduler and queue (and, optionally, a timelimit).

Takes a scheduler function and a string queue name used to submit the optimization job. Additionally takes a string time limit for the scheduled job.

Example: scheduler, queue=’normal’, timelimit=’00:02’

Inputs:

scheduler – scheduler function (see pyina.launchers) [DEFAULT: None]

queue – queue name string (see pyina.launchers) [DEFAULT: None]

timelimit – time string HH:MM:SS format [DEFAULT: ‘00:05:00’]

SelectServers(servers, ncpus=None)

Select the compute server.

Accepts a tuple of (‘hostname:port’,), listing each available computing server.

If ncpus=None, then ‘autodetect’; or if ncpus=0, then ‘no local’. If servers=(‘*’,), then ‘autodetect’; or if servers=(), then ‘no remote’.

Inputs:

servers – tuple of compute servers [DEFAULT: autodetect]

ncpus – number of local processors [DEFAULT: autodetect]
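The autodetect rules above can be restated as a small lookup. server_mode is a hypothetical helper summarizing the documented behavior, not part of the API:

```python
def server_mode(servers, ncpus=None):
    """Interpret SelectServers-style inputs (illustrative only).

    Returns (remote_mode, local_mode) per the documented rules:
    servers=('*',) -> autodetect remote; servers=() -> no remote;
    ncpus=None -> autodetect local; ncpus=0 -> no local.
    """
    remote = ('autodetect' if servers == ('*',) else
              'no remote' if servers == () else 'explicit')
    local = ('autodetect' if ncpus is None else
             'no local' if ncpus == 0 else ncpus)
    return remote, local
```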

SetLauncher(launcher, nnodes=None)

Set launcher and (optionally) number of nodes.

Uses a launcher to provide the solver with the syntax to configure and launch optimization jobs on the selected resource.

Inputs:

launcher – launcher function (see pyina.launchers) [DEFAULT: serial_launcher]

nnodes – number of parallel compute nodes [DEFAULT: 1]

SetMapper(map, strategy=None)

Set the map function and the mapping strategy.

Sets a mapping function to perform the map-reduce algorithm. Uses a mapping strategy to provide the algorithm for distributing the work list of optimization jobs across available resources.

Inputs:

map – the mapping function [DEFAULT: python_map]

strategy – map strategy (see pyina.mappers) [DEFAULT: worker_pool]

__init__(dim, **kwds)

Takes one initial input:

dim      -- dimensionality of the problem.


Additional inputs:

npop     -- size of the trial solution population.       [default = 1]


Important class members:

nDim, nPop       = dim, npop
generations      - an iteration counter.
evaluations      - an evaluation counter.
bestEnergy       - current best energy.
bestSolution     - current best parameter set.           [size = dim]
popEnergy        - set of all trial energy solutions.    [size = npop]
population       - set of all trial parameter solutions. [size = dim*npop]
solution_history - history of bestSolution status.       [StepMonitor.x]
energy_history   - history of bestEnergy status.         [StepMonitor.y]
signal_handler   - catches the interrupt signal.         [***disabled***]

__module__ = 'mystic.abstract_map_solver'

abstract_sampler module¶

This module contains the base class for mystic samplers, and describes the mystic sampler interface.

class AbstractSampler(bounds, model, npts=None, **kwds)

Bases: object

AbstractSampler base class for optimizer-directed samplers

Inputs:

bounds – list[tuples]: (lower,upper) bound for each model input

model – function: y = model(x), where x is an input parameter vector

npts – int: number of points to sample the model

Note

additional keywords (evalmon, stepmon, maxiter, maxfun, dist, tight, saveiter, state, termination, constraints, penalty, reducer, solver) are available for use. See mystic.ensemble for more details.

__dict__ = mappingproxy({'__module__': 'mystic.abstract_sampler', '__doc__': '\nAbstractSampler base class for optimizer-directed samplers\n    ', '__init__': <function AbstractSampler.__init__>, '_init_solver': <function AbstractSampler._init_solver>, '_reset_sampler': <function AbstractSampler._reset_sampler>, '_reset_solved': <function AbstractSampler._reset_solved>, '_sample': <function AbstractSampler._sample>, 'sample': <function AbstractSampler.sample>, 'sample_until': <function AbstractSampler.sample_until>, 'evals': <function AbstractSampler.evals>, 'iters': <function AbstractSampler.iters>, 'terminated': <function AbstractSampler.terminated>, '__dict__': <attribute '__dict__' of 'AbstractSampler' objects>, '__weakref__': <attribute '__weakref__' of 'AbstractSampler' objects>, '__annotations__': {}})
__init__(bounds, model, npts=None, **kwds)
Inputs:

bounds – list[tuples]: (lower,upper) bound for each model input

model – function: y = model(x), where x is an input parameter vector

npts – int: number of points to sample the model

Note

additional keywords (evalmon, stepmon, maxiter, maxfun, dist, tight, saveiter, state, termination, constraints, penalty, reducer, solver) are available for use. See mystic.ensemble for more details.

__module__ = 'mystic.abstract_sampler'
__weakref__

list of weak references to the object (if defined)

_init_solver()

initialize the ensemble solver

_reset_sampler()

reinitialize the ensemble solver/sampler

_reset_solved()

reinitialize all terminated solvers in the ensemble

_sample(reset=False)

collect a sample for each member in the ensemble

Inputs:

reset: if True, reset all before sampling; if None, reset terminated

evals(all=False)

get the number of function evaluations

Input:

all – bool: if True, get evals from the entire ensemble

iters(all=False)

get the number of solver iterations

Input:

all – bool: if True, get iters from the entire ensemble

sample(if_terminated=None, reset_all=True)

sample npts using vectorized solvers

Input:

if_terminated: the amount of termination [all,True,any,False,None]

reset_all: action to take when if_terminated is met [True,False,None]

Note

When if_terminated is all, reset if all solvers are terminated. When if_terminated is any, reset if any solvers are terminated. When if_terminated is True, reset if the best solver is terminated. When if_terminated is False, reset if no solvers are terminated. When if_terminated is None, reset regardless of termination [default].

Note

If reset_all is True, reset all solvers if if_terminated is satisfied. If reset_all is False, similarly reset only the terminated solvers. If reset_all is None, never reset.
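The two notes above reduce to a simple rule. should_reset is a hypothetical helper restating the documented semantics, not mystic’s code:

```python
def should_reset(if_terminated, n_terminated, n_solvers, best_terminated):
    """Decide whether sample() should reset, per the documented semantics."""
    if if_terminated is None:            # reset regardless of termination
        return True
    if if_terminated is True:            # reset if the best solver terminated
        return best_terminated
    if if_terminated is False:           # reset if no solvers terminated
        return n_terminated == 0
    if if_terminated == 'all':           # reset if all solvers terminated
        return n_terminated == n_solvers
    if if_terminated == 'any':           # reset if any solver terminated
        return n_terminated >= 1
    raise ValueError(if_terminated)
```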

sample_until(iters=None, evals=None, terminated=None, **kwds)

sample until one of the three given conditions is met: (iters() >= iters) or (evals() >= evals) or (# terminated > terminated)

Input:

iters – int: maximum number of iterations [default = inf]

evals – int: maximum number of evaluations [default = inf]

terminated – int: maximum number of stopped solvers [default = inf]

if_terminated: the amount of termination [all,True,any,False,None]

reset_all: action to take when if_terminated is met [True,False,None]

Note

The default sampler configuration is to always reset (reset_all=True). If termination != None, the default is never reset (reset_all=None). A limit has to be provided for either iters, evals, or termination.

Note

terminated - {all: all, True: best, any: 1, False: 0, None: inf, N: N}

if_terminated - {all: all, True: best, any: 1, False: 0, None: always}

reset_all - {True: reset all, False: reset solved, None: never reset}
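The stop condition of sample_until is a disjunction of the three limits. A sketch (should_stop is a hypothetical helper, with unset limits defaulting to inf as documented):

```python
import math

def should_stop(n_iters, n_evals, n_terminated,
                iters=None, evals=None, terminated=None):
    """True once any provided limit is met; unset limits default to inf."""
    iters = math.inf if iters is None else iters
    evals = math.inf if evals is None else evals
    terminated = math.inf if terminated is None else terminated
    # (iters() >= iters) or (evals() >= evals) or (# terminated > terminated)
    return (n_iters >= iters) or (n_evals >= evals) or (n_terminated > terminated)
```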

terminated(*args, **kwds)
Input:

all, disp, info

Note

all - {True: show all, False: best terminated, None: all terminated}

disp - {True: show details, False: show less, ‘verbose’: show more}

info - {True: return info string, False: return bool}

__all__ = ['AbstractSampler']

# utility functions
_reset_sampler: reset all
_reset_solved: reset all solved
_sample(reset): reset all/stop/none, then step

# use cases to manage evals/iters
_restart_all_without_step: _reset_sampler
_restart_solved_without_step: _reset_solved
_restart_all_then_step: _sample(reset=True)
_restart_solved_then_step: _sample(reset=None)
_step_without_restart: _sample(reset=False)

# options
_restart_if_solved(solved): (* = ‘all’ if all else ‘solved’)
• all: if all stop, _restart_*_then_step
• True: if best stop, _restart_*_then_step
• any: if any stop, _restart_*_then_step
• False: if none stop, _restart_*_then_step
• None: _step_without_restart

abstract_solver module¶

This module contains the base class for mystic solvers, and describes the mystic solver interface. The _Step method must be overwritten with the derived solver’s optimization algorithm. In addition to the class interface, a simple function interface for a derived solver class is often provided. For an example, see mystic.scipy_optimize, and the following.

Examples

A typical call to a solver will roughly follow this example:

>>> # the function to be minimized and the initial values
>>> from mystic.models import rosen
>>> x0 = [0.8, 1.2, 0.7]
>>>
>>> # get monitors and termination condition objects
>>> from mystic.monitors import Monitor
>>> stepmon = Monitor()
>>> evalmon = Monitor()
>>> from mystic.termination import CandidateRelativeTolerance as CRT
>>>
>>> # instantiate and configure the solver
>>> from mystic.solvers import NelderMeadSimplexSolver
>>> solver = NelderMeadSimplexSolver(len(x0))
>>> solver.SetInitialPoints(x0)
>>> solver.SetEvaluationMonitor(evalmon)
>>> solver.SetGenerationMonitor(stepmon)
>>> solver.enable_signal_handler()
>>> solver.SetTermination(CRT())
>>> solver.Solve(rosen)
>>>
>>> # obtain the solution
>>> solution = solver.Solution()


An equivalent, but less flexible, call using the function interface is:

>>> # the function to be minimized and the initial values
>>> from mystic.models import rosen
>>> x0 = [0.8, 1.2, 0.7]
>>>
>>> # configure the solver and obtain the solution
>>> from mystic.solvers import fmin
>>> solution = fmin(rosen,x0)


Handler¶

All solvers packaged with mystic include a signal handler that provides the following options:

sol: Print current best solution.
cont: Continue calculation.
call: Executes sigint_callback, if provided.
exit: Exits with current best solution.


Handlers are enabled with the enable_signal_handler method, and are configured through the solver’s Solve method. Handlers trigger when a signal interrupt (usually, Ctrl-C) is given while the solver is running.

class AbstractSolver(dim, **kwds)

Bases: object

AbstractSolver base class for mystic optimizers.

Takes one initial input:

dim      -- dimensionality of the problem.


Additional inputs:

npop     -- size of the trial solution population.       [default = 1]


Important class members:

nDim, nPop       = dim, npop
generations      - an iteration counter.
evaluations      - an evaluation counter.
bestEnergy       - current best energy.
bestSolution     - current best parameter set.           [size = dim]
popEnergy        - set of all trial energy solutions.    [size = npop]
population       - set of all trial parameter solutions. [size = dim*npop]
solution_history - history of bestSolution status.       [StepMonitor.x]
energy_history   - history of bestEnergy status.         [StepMonitor.y]
signal_handler   - catches the interrupt signal.

Collapse(disp=False)
if solver has terminated by collapse, apply the collapse

(unless both collapse and “stop” are simultaneously satisfied)

input::
• disp: if True, print details about the solver state at collapse

note::

updates the solver’s termination conditions and constraints

Collapsed(disp=False, info=False)

check if the solver meets the given collapse conditions

Input::
• disp: if True, print details about the solver state at collapse

• info: if True, return collapsed state (instead of boolean)

Finalize()

cleanup upon exiting the main optimization loop

SaveSolver(filename=None, **kwds)

save solver state to a restart file

input::
• filename: string of full filepath for the restart file

note::

any additional keyword arguments are passed to dill.dump

SetConstraints(constraints)

apply a constraints function to the optimization

input::
• a constraints function of the form: xk’ = constraints(xk), where xk is the current parameter vector. Ideally, this function is constructed so the parameter vector it passes to the cost function will satisfy the desired (i.e. encoded) constraints.
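As an illustration, a constraints function of this form might rescale the candidate vector so it satisfies an encoded equality constraint (this is a hypothetical encoding for demonstration, not part of mystic):

```python
# hypothetical constraints function of the form xk' = constraints(xk);
# encodes the constraint sum(xk') == 1 by rescaling the candidate vector
def constraints(xk):
    total = sum(xk)
    return [x / total for x in xk]

constraints([1.0, 3.0])  # -> [0.25, 0.75]
```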

SetEvaluationLimits(generations=None, evaluations=None, new=False, **kwds)

set limits for generations and/or evaluations

input::
• generations: maximum number of solver iterations (i.e. steps)

• evaluations: maximum number of function evaluations

• new: bool, if True, the given limits apply only to new evaluations and iterations; otherwise, the limits refer to total evaluations and iterations.

SetEvaluationMonitor(monitor, new=False)

select a callable to monitor (x, f(x)) after each cost function evaluation

input::
• a monitor instance or monitor type used to track (x, f(x)). Any data collected in an existing evaluation monitor will be prepended, unless new is True.

SetGenerationMonitor(monitor, new=False)

select a callable to monitor (x, f(x)) after each solver iteration

input::
• a monitor instance or monitor type used to track (x, f(x)). Any data collected in an existing generation monitor will be prepended, unless new is True.

SetInitialPoints(x0, radius=0.05)

Set Initial Points with Guess (x0)

input::
• x0: must be a sequence of length self.nDim

• radius: generate random points within [-radius*x0, radius*x0] for i!=0 when a simplex-type initial guess is required

SetMultinormalInitialPoints(mean, var=None)

Generate Initial Points from Multivariate Normal.

input::
• mean must be a sequence of length self.nDim

• var can be…

None: -> it becomes the identity
scalar: -> var becomes scalar * I
matrix: -> the variance matrix. must be the right size!
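The three accepted forms of var can be sketched with a hand-rolled helper (an illustration of the semantics above, not mystic's implementation):

```python
# illustrative covariance builder mirroring the three var forms described above
def covariance(dim, var=None):
    identity = [[1.0 if i == j else 0.0 for j in range(dim)] for i in range(dim)]
    if var is None:                    # None -> identity
        return identity
    if isinstance(var, (int, float)):  # scalar -> scalar * I
        return [[var * v for v in row] for row in identity]
    # matrix -> used as-is, but must be dim x dim
    assert len(var) == dim and all(len(row) == dim for row in var)
    return var

covariance(2)        # [[1.0, 0.0], [0.0, 1.0]]
covariance(2, 0.5)   # [[0.5, 0.0], [0.0, 0.5]]
```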

SetObjective(cost, ExtraArgs=None)

set the cost function for the optimization

input::
• cost is the objective function, of the form y = cost(x, *ExtraArgs), where x is a candidate solution, and ExtraArgs is the tuple of positional arguments required to evaluate the objective.

note::

this method decorates the objective with bounds, penalties, monitors, etc

SetPenalty(penalty)

apply a penalty function to the optimization

input::
• a penalty function of the form: y’ = penalty(xk), with y = cost(xk) + y’, where xk is the current parameter vector. Ideally, this function is constructed so a penalty is applied when the desired (i.e. encoded) constraints are violated. Equality constraints should be considered satisfied when the penalty condition evaluates to zero, while inequality constraints are satisfied when the penalty condition evaluates to a non-positive number.
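For example, the inequality constraint x0 + x1 <= 1 could be encoded as a quadratic penalty (a hypothetical penalty for demonstration, not part of mystic):

```python
# hypothetical penalty of the form y' = penalty(xk), with y = cost(xk) + y';
# zero when the inequality x0 + x1 <= 1 is satisfied, positive when violated
def penalty(xk):
    violation = xk[0] + xk[1] - 1.0
    return 100.0 * max(0.0, violation) ** 2

penalty([0.2, 0.3])  # 0.0: constraint satisfied, cost is unchanged
penalty([1.0, 1.0])  # 100.0: cost is inflated by 100 * (1.0)**2
```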

SetRandomInitialPoints(min=None, max=None)

Generate Random Initial Points within given Bounds

input::
• min, max: must be a sequence of length self.nDim

• each min[i] should be <= the corresponding max[i]

SetReducer(reducer, arraylike=False)

apply a reducer function to the cost function

input::
• a reducer function of the form: y’ = reducer(yk), where yk is a results vector and y’ is a single value. Ideally, this method is applied to a cost function with a multi-value return, to reduce the output to a single value. If arraylike, the reducer provided should take a single array as input and produce a scalar; otherwise, the reducer provided should meet the requirements of the python’s builtin ‘reduce’ method (e.g. lambda x,y: x+y), taking two scalars and producing a scalar.
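The two reducer styles can be sketched as follows (hypothetical reducers, shown applied by hand rather than through a solver):

```python
from functools import reduce

# arraylike=False style: takes two scalars, returns a scalar,
# matching the requirements of python's builtin 'reduce'
pairwise = lambda y1, y2: y1 + y2

# arraylike=True style: takes a single array, returns a scalar
arraywise = lambda yk: max(yk)

yk = [3.0, 1.0, 2.0]         # a multi-value return from a cost function
reduce(pairwise, yk)         # 6.0
arraywise(yk)                # 3.0
```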

SetSampledInitialPoints(dist=None)

Generate Random Initial Points from Distribution (dist)

input::
• dist: a mystic.math.Distribution instance

SetSaveFrequency(generations=None, filename=None)

set frequency for saving solver restart file

input::
• generations = number of solver iterations before next save of state

• filename = name of file in which to save solver state

note::

SetSaveFrequency(None) will disable saving solver restart file

SetStrictRanges(min=None, max=None, **kwds)

ensure solution is within bounds

input::
• min, max: must be a sequence of length self.nDim

• each min[i] should be <= the corresponding max[i]

• tight (bool): if True, apply bounds concurrent with other constraints

• clip (bool): if True, bounding constraints will clip exterior values

note::

SetStrictRanges(None) will remove strict range constraints

notes::

By default, the bounds are coupled to the other constraints with a coupler (e.g. mystic.coupler.outer), and not applied concurrently (i.e. with mystic.constraints.and_). Using a coupler favors speed over robustness, and relies on the user to formulate the constraints so they do not conflict with imposing the bounds.

note::

The keyword clip controls the clipping behavior for the bounding constraints. The default is to rely on _clipGuessWithinRangeBoundary when ensuring the bounds are respected, and to not take action when the other constraints are being imposed. However when tight=True, the default is that the bounds constraints clip at the bounds. By default, bounds constraints are applied with a symbolic solver, as the symbolic solver is generally faster than mystic.constraints.impose_bounds. All of the above default behaviors are active when clip=None.

note::

If clip=False, impose_bounds will be used to map the candidate solution inside the bounds, while clip=True will use impose_bounds to clip the candidate solution at the bounds. Note that clip=True is not the same as the default (clip=None, which uses a symbolic solver). If clip is specified while tight is not, then tight will be set to True.
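The distinction between clipping at the bounds and mapping inside the bounds can be sketched generically (an illustration of the two behaviors only, not mystic's impose_bounds implementation):

```python
import random

def clip_at_bounds(x, lo, hi):
    # clip=True behavior: exterior values are pinned to the nearest bound
    return [min(max(v, l), h) for v, l, h in zip(x, lo, hi)]

def map_into_bounds(x, lo, hi, rng=random.Random(0)):
    # clip=False behavior: exterior values are replaced by a point inside
    # the bounds; interior values are left untouched
    return [v if l <= v <= h else rng.uniform(l, h)
            for v, l, h in zip(x, lo, hi)]

clip_at_bounds([-1.0, 5.0, 1.0], [0.0]*3, [2.0]*3)  # [0.0, 2.0, 1.0]
```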

SetTermination(termination)

set the termination conditions

input::
• termination = termination conditions to check against

Solution()

return the best solution

Solve(cost=None, termination=None, ExtraArgs=None, **kwds)

Minimize a ‘cost’ function with given termination conditions.

Uses an optimization algorithm to find the minimum of a function of one or more variables.

Parameters
• cost (func, default=None) – the function to be minimized: y = cost(x).

• termination (termination, default=None) – termination conditions.

• ExtraArgs (tuple, default=None) – extra arguments for cost.

• sigint_callback (func, default=None) – callback function for signal handler.

• callback (func, default=None) – function to call after each iteration. The interface is callback(xk), with xk the current parameter vector.

• disp (bool, default=False) – if True, print convergence messages.

Returns

None

Step(cost=None, termination=None, ExtraArgs=None, **kwds)

Take a single optimization step using the given ‘cost’ function.

Uses an optimization algorithm to take one ‘step’ toward the minimum of a function of one or more variables.

Parameters
• cost (func, default=None) – the function to be minimized: y = cost(x).

• termination (termination, default=None) – termination conditions.

• ExtraArgs (tuple, default=None) – extra arguments for cost.

• callback (func, default=None) – function to call after each iteration. The interface is callback(xk), with xk the current parameter vector.

• disp (bool, default=False) – if True, print convergence messages.

Returns

None

Notes

To run the solver until termination, call Solve(). Alternately, use Terminated() as the stop condition in a while loop over Step.

If the algorithm does not meet the given termination conditions after the call to Step, the solver may be left in an “out-of-sync” state. When abandoning a non-terminated solver, one should call Finalize() to make sure the solver is fully returned to a “synchronized” state.

This method accepts additional args that are specific for the current solver, as detailed in the _process_inputs method.
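The Solve-equivalent loop described in the notes can be sketched with a toy stand-in (the real mystic solvers expose the same Step/Terminated/Finalize shape; the trivial gradient minimizer below is not mystic code):

```python
class ToySolver:
    """Stand-in exposing the Step/Terminated/Finalize interface described above."""
    def __init__(self, x0):
        self.x = x0
        self.generations = 0
    def Step(self, cost, rate=0.25):
        # one fixed-rate gradient step on cost(x) = x**2 (gradient = 2x)
        self.x -= rate * 2 * self.x
        self.generations += 1
    def Terminated(self):
        return abs(self.x) < 1e-8 or self.generations >= 100
    def Finalize(self):
        self.finalized = True

solver = ToySolver(10.0)
while not solver.Terminated():   # run Step until termination, as Solve() would
    solver.Step(lambda x: x**2)
solver.Finalize()                # return the solver to a synchronized state
```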

Terminated(disp=False, info=False, termination=None, **kwds)

check if the solver meets the given termination conditions

Input::
• disp = if True, print termination statistics and/or warnings

• info = if True, return termination message (instead of boolean)

• termination = termination conditions to check against

Notes::

If no termination conditions are given, the solver’s stored termination conditions will be used.

_SetEvaluationLimits(iterscale=None, evalscale=None)

set the evaluation limits

input::
• iterscale and evalscale are integers used to set the maximum iteration and evaluation limits, respectively. The new limit is defined as limit = (nDim * nPop * scale) + count, where count is the number of existing iterations or evaluations, respectively. The default for iterscale is 10, while the default for evalscale is 1000.
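The limit formula above, written out as an illustrative helper (not mystic's internal code):

```python
# limit = (nDim * nPop * scale) + count, as described above
def new_limit(nDim, nPop, scale, count):
    return nDim * nPop * scale + count

new_limit(3, 1, 10, 0)      # 30: iteration limit for a fresh 3-D, nPop=1 solver
new_limit(3, 1, 1000, 250)  # 3250: evaluation limit with 250 prior evaluations
```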

_Solve(cost, ExtraArgs, **settings)

Run the optimizer to termination, using the given settings.

Parameters
• cost (func) – the function to be minimized: y = cost(x).

• ExtraArgs (tuple) – tuple of extra arguments for cost.

• settings (dict) – optimizer settings (produced by _process_inputs)

Returns

None

_Step(cost=None, ExtraArgs=None, **kwds)

perform a single optimization iteration

input::
• cost is the objective function, of the form y = cost(x, *ExtraArgs), where x is a candidate solution, and ExtraArgs is the tuple of positional arguments required to evaluate the objective.

* this method must be overwritten *

__bestEnergy()

__bestSolution()

__collapse_constraints(state, collapses)

get updated constraints for the given state and collapses

__collapse_termination(collapses)

get (initial state, resulting termination) for the given collapses

__copy__()

return a shallow copy of the solver

__deepcopy__(memo)

return a deep copy of the solver

__energy_history()

get the energy_history (default: energy_history = _stepmon._y)

__evaluations()

get the number of function calls

__generations()

get the number of iterations

__get_collapses(disp=False)

get dict of {collapse termination info: collapse}

input::
• disp: if True, print details about the solver state at collapse

__init__(dim, **kwds)

Takes one initial input:

dim      -- dimensionality of the problem.


npop     -- size of the trial solution population.       [default = 1]


Important class members:

nDim, nPop       = dim, npop
generations      - an iteration counter.
evaluations      - an evaluation counter.
bestEnergy       - current best energy.
bestSolution     - current best parameter set.           [size = dim]
popEnergy        - set of all trial energy solutions.    [size = npop]
population       - set of all trial parameter solutions. [size = dim*npop]
solution_history - history of bestSolution status.       [StepMonitor.x]
energy_history   - history of bestEnergy status.         [StepMonitor.y]
signal_handler   - catches the interrupt signal.


__load_state(solver, **kwds)

load solver.__dict__ into self.__dict__; override with kwds

input::
• solver is a solver instance, while kwds are a dict of solver state

__module__ = 'mystic.abstract_solver'
__save_state(force=False)

save the solver state, if chosen save frequency is met

input::
• if force is True, save the solver state regardless of save frequency

__set_bestEnergy(energy)

set the bestEnergy (energy=None will sync with popEnergy[0])

__set_bestSolution(params)

set the bestSolution (params=None will sync with population[0])

__set_energy_history(energy)

set the energy_history (energy=None will sync with _stepmon._y)

__set_solution_history(params)

set the solution_history (params=None will sync with _stepmon.x)

__solution_history()

get the solution_history (default: solution_history = _stepmon.x)

__weakref__

list of weak references to the object (if defined)

_bootstrap_objective(cost=None, ExtraArgs=None)

HACK to enable not explicitly calling _decorate_objective

input::
• cost is the objective function, of the form y = cost(x, *ExtraArgs), where x is a candidate solution, and ExtraArgs is the tuple of positional arguments required to evaluate the objective.

_boundsconstraints(**kwds)

if _useStrictRange, build a constraint from (_strictMin, _strictMax)

• symbolic: bool, if True, use symbolic constraints [default: None]

• clip: bool, if True, clip exterior values to the bounds [default: None]

NOTE: By default, the bounds and constraints are imposed sequentially with a coupler. Using a coupler chooses speed over robustness, and relies on the user to formulate the constraints so that they do not conflict with imposing the bounds. Hence, if no keywords are provided, the bounds and constraints are applied sequentially.

NOTE: If any of the keyword arguments are used, then the bounds and constraints are imposed concurrently. This is slower but more robust than applying the bounds and constraints sequentially (the default). When the bounds and constraints are applied concurrently, the defaults for the keywords (symbolic and clip) are set to True, unless otherwise specified.

NOTE: If symbolic=True, use symbolic constraints to impose the bounds; otherwise use mystic.constraints.impose_bounds. Using clip=False will set symbolic=False unless symbolic is specified otherwise.

_clipGuessWithinRangeBoundary(x0, at=True)

ensure that initial guess is set within bounds

input::
• x0: must be a sequence of length self.nDim

• at: bool, if True, then clip at the bounds

_decorate_objective(cost, ExtraArgs=None)

decorate the cost function with bounds, penalties, monitors, etc

input::
• cost is the objective function, of the form y = cost(x, *ExtraArgs), where x is a candidate solution, and ExtraArgs is the tuple of positional arguments required to evaluate the objective.

_is_new()

determine if solver has been run or not

_process_inputs(kwds)

process and activate input settings

Parameters
• callback (func, default=None) – function to call after each iteration. The interface is callback(xk), with xk the current parameter vector.

• disp (bool, default=False) – if True, print convergence messages.

• EvaluationMonitor: a monitor instance to capture each evaluation of cost.

• StepMonitor: a monitor instance to capture each iteration’s best results.

• penalty: a function of the form: y’ = penalty(xk), with y = cost(xk) + y’, where xk is the current parameter vector.

• constraints: a function of the form: xk’ = constraints(xk), where xk is the current parameter vector.

Note

The additional args are ‘sticky’, in that once they are given, they remain set until they are explicitly changed. Args that are not sticky are instead set for one-time use.

_update_objective()

decorate the cost function with bounds, penalties, monitors, etc

property bestEnergy

get the bestEnergy (default: bestEnergy = popEnergy[0])

property bestSolution

get the bestSolution (default: bestSolution = population[0])

disable_signal_handler()

disable workflow interrupt handler while solver is running

enable_signal_handler()

enable workflow interrupt handler while solver is running

property energy_history

get the energy_history (default: energy_history = _stepmon._y)

property evaluations

get the number of function calls

property generations

get the number of iterations

property solution_history

get the solution_history (default: solution_history = _stepmon.x)

bounds module¶

cartesian bounds and measure bounds instances

class Bounds(*bounds, **kwds)

Bases: object

create a bounds instance

bounds is a tuple of (lower, upper) bound

• xlb: lower bound

• xub: upper bound

• n: repeat

For example:
>>> b = Bounds(7, 8, n=2)
>>> b.xlb, b.xub
(7, 8)
>>> b()
[(7, 8), (7, 8)]
>>> b.lower
[7, 7]
>>> b.upper
[8, 8]
>>>
>>> c = Bounds((0,1),(3,4), n=2)
>>> c()
[(0, 3), (0, 3), (1, 4), (1, 4)]
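The expansion the doctest demonstrates can be sketched as follows (a hand-rolled illustration of the repeat semantics, not the Bounds implementation):

```python
def expand(xlb, xub, n=1):
    # scalar bounds are treated as a single (lb, ub) pair;
    # each pair is repeated n times, with pairs kept in order
    if not isinstance(xlb, tuple):
        xlb, xub = (xlb,), (xub,)
    return [(lb, ub) for lb, ub in zip(xlb, xub) for _ in range(n)]

expand(7, 8, n=2)            # [(7, 8), (7, 8)]
expand((0, 1), (3, 4), n=2)  # [(0, 3), (0, 3), (1, 4), (1, 4)]
```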

__call__()

get list of tuples of (lower, upper) bounds

__init__(*bounds, **kwds)

create a bounds instance

bounds is a tuple of (lower, upper) bound

• xlb: lower bound

• xub: upper bound

• n: repeat

For example:
>>> b = Bounds(7, 8, n=2)
>>> b.xlb, b.xub
(7, 8)
>>> b()
[(7, 8), (7, 8)]
>>> b.lower
[7, 7]
>>> b.upper
[8, 8]
>>>
>>> c = Bounds((0,1),(3,4), n=2)
>>> c()
[(0, 3), (0, 3), (1, 4), (1, 4)]

__lower()

get list of lower bounds

__module__ = 'mystic.bounds'
__repr__()

Return repr(self).

__set_lower(lb)
__set_upper(ub)
__upper()

get list of upper bounds

__weakref__

list of weak references to the object (if defined)

property lower

get list of lower bounds

property upper

get list of upper bounds

class MeasureBounds(*bounds, **kwds)

create a measure bounds instance

bounds is a tuple of (lower, upper) bound

• wlb: weight lower bound

• wub: weight upper bound

• xlb: lower bound

• xub: upper bound

• n: repeat

For example:
>>> b = MeasureBounds(7, 8, n=2)
>>> b.wlb, b.wub
(0, 1)
>>> b.xlb, b.xub
(7, 8)
>>> b()
[(0, 1), (0, 1), (7, 8), (7, 8)]
>>> b.lower
[0, 0, 7, 7]
>>> b.upper
[1, 1, 8, 8]
>>>
>>> c = MeasureBounds((0,1),(4,5), n=1, wlb=(0,1), wub=(2,3))
>>> c.lower
[0, 0, 1, 1]
>>> c.upper
[2, 4, 3, 5]
>>> c()
[(0, 2), (0, 4), (1, 3), (1, 5)]
>>>
>>> c = MeasureBounds((0,1),(4,5), n=2)
>>> c()
[(0, 1), (0, 1), (0, 4), (0, 4), (0, 1), (0, 1), (1, 5), (1, 5)]

__init__(*bounds, **kwds)

create a measure bounds instance

bounds is a tuple of (lower, upper) bound

• wlb: weight lower bound

• wub: weight upper bound

• xlb: lower bound

• xub: upper bound

• n: repeat

For example:
>>> b = MeasureBounds(7, 8, n=2)
>>> b.wlb, b.wub
(0, 1)
>>> b.xlb, b.xub
(7, 8)
>>> b()
[(0, 1), (0, 1), (7, 8), (7, 8)]
>>> b.lower
[0, 0, 7, 7]
>>> b.upper
[1, 1, 8, 8]
>>>
>>> c = MeasureBounds((0,1),(4,5), n=1, wlb=(0,1), wub=(2,3))
>>> c.lower
[0, 0, 1, 1]
>>> c.upper
[2, 4, 3, 5]
>>> c()
[(0, 2), (0, 4), (1, 3), (1, 5)]
>>>
>>> c = MeasureBounds((0,1),(4,5), n=2)
>>> c()
[(0, 1), (0, 1), (0, 4), (0, 4), (0, 1), (0, 1), (1, 5), (1, 5)]

__lower()
__module__ = 'mystic.bounds'
__repr__()

Return repr(self).

__set_lower(lb)
__set_upper(ub)
__upper()
property lower

get list of lower bounds

property upper

get list of upper bounds

cache module¶

decorators for caching function outputs, with function inputs as the keys, and interactors for reading and writing to databases of functions and data.

several klepto.cache strategies are included here for backward compatibility; please also see the klepto package for available archives (i.e. dir_archive, file_archive, hdf_archive, sql_archive), cache strategies, filters, encodings, serialization, and other configuration options

cached(**kwds)

build a caching archive for an objective function

Input:

• type: the type of klepto.cache [default: inf_cache]

• archive: the archive (str name for a new archive, or archive instance)

• maxsize: maximum cache size [default: None]

• keymap: cache key encoder [default: klepto.keymap.keymap()]

• ignore: function argument names to ignore [default: ‘**’]

• tol: int tolerance for rounding [default: None]

• deep: bool rounding depth [default: False]

• purge: bool for purging cache to archive [default: False]

• multivalued: bool if multivalued return of objective [default: False]

Returns

cached objective function

Notes

inverse (y = -objective(x)) is at objective.__inverse__

cache of objective is at objective.__cache__

inverse and objective cache to the same archive

class inf_cache(maxsize=None, cache=None, keymap=None, ignore=None, tol=None, deep=False, purge=False)

Bases: object

infinitely-growing (INF) cache decorator.

This decorator memoizes a function’s return value each time it is called. If called later with the same arguments, the cached value is returned, and not re-evaluated. This cache will grow without bound. To avoid memory issues, it is suggested to frequently dump and clear the cache. This decorator takes an integer tolerance ‘tol’, equal to the number of decimal places to which it will round off floats, and a bool ‘deep’ for whether the rounding on inputs will be ‘shallow’ or ‘deep’. Note that rounding is not applied to the calculation of new results, but rather as a simple form of cache interpolation. For example, with tol=0 and a cached value for f(3.0), f(3.1) will lookup f(3.0) in the cache while f(3.6) will store a new value; however if tol=1, both f(3.1) and f(3.6) will store new values.

maxsize = maximum cache size [fixed at maxsize=None]
cache = storage hashmap (default is {})
keymap = cache key encoder (default is keymaps.hashmap(flat=True))
ignore = function argument names and indices to ‘ignore’ (default is None)
tol = integer tolerance for rounding (default is None)
deep = boolean for rounding depth (default is False, i.e. ‘shallow’)
purge = boolean for purge cache to archive at maxsize (fixed at False)

If keymap is given, it will replace the hashing algorithm for generating cache keys. Several hashing algorithms are available in ‘keymaps’. The default keymap requires arguments to the cached function to be hashable.

If the keymap retains type information, then arguments of different types will be cached separately. For example, f(3.0) and f(3) will be treated as distinct calls with distinct results. Cache typing has a memory penalty, and may also be ignored by some ‘keymaps’.

If ignore is given, the keymap will ignore the arguments with the names and/or positional indices provided. For example, if ignore=(0,), then the key generated for f(1,2) will be identical to that of f(3,2) or f(4,2). If ignore=(‘y’,), then the key generated for f(x=3,y=4) will be identical to that of f(x=3,y=0) or f(x=3,y=10). If ignore=(‘*’,’**’), all varargs and varkwds will be ‘ignored’. Ignored arguments never trigger a recalculation (they only trigger cache lookups), and thus are ‘ignored’. When caching class methods, it may be useful to ignore=(‘self’,).

View cache statistics (hit, miss, load, maxsize, size) with f.info(). Clear the cache and statistics with f.clear(). Replace the cache archive with f.archive(obj). Load from the archive with f.load(), and dump from the cache to the archive with f.dump().
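The tol rounding semantics can be illustrated with a hand-rolled memoizer (a sketch of the ‘shallow’ rounding behavior described above, not klepto's implementation):

```python
def rounded_cache(tol):
    # memoize on the rounded argument, so nearby floats share a cache entry;
    # new results are still computed with the unrounded argument
    def decorator(f):
        store = {}
        def wrapper(x):
            key = round(x, tol)
            if key not in store:
                store[key] = f(x)
            return store[key]
        wrapper.cache = store
        return wrapper
    return decorator

calls = []

@rounded_cache(tol=0)
def f(x):
    calls.append(x)
    return 2 * x

f(3.0)   # computes and stores under key 3.0
f(3.1)   # rounds to 3.0: cache hit, returns the stored f(3.0) result
f(3.6)   # rounds to 4.0: cache miss, computes and stores a new value
```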

__call__(user_function)

Call self as a function.

__get__(obj, objtype)

support instance methods

__init__(maxsize=None, cache=None, keymap=None, ignore=None, tol=None, deep=False, purge=False)
__module__ = 'klepto._cache'
__reduce__()

Helper for pickle.

__weakref__

list of weak references to the object (if defined)

class lfu_cache(*args, **kwds)

Bases: object

least-frequently-used (LFU) cache decorator.

This decorator memoizes a function’s return value each time it is called. If called later with the same arguments, the cached value is returned, and not re-evaluated. To avoid memory issues, a maximum cache size is imposed. For caches with an archive, the full cache dumps to archive upon reaching maxsize. For caches without an archive, the LFU algorithm manages the cache. Caches with an archive will use the latter behavior when ‘purge’ is False. This decorator takes an integer tolerance ‘tol’, equal to the number of decimal places to which it will round off floats, and a bool ‘deep’ for whether the rounding on inputs will be ‘shallow’ or ‘deep’. Note that rounding is not applied to the calculation of new results, but rather as a simple form of cache interpolation. For example, with tol=0 and a cached value for f(3.0), f(3.1) will lookup f(3.0) in the cache while f(3.6) will store a new value; however if tol=1, both f(3.1) and f(3.6) will store new values.

maxsize = maximum cache size
cache = storage hashmap (default is {})
keymap = cache key encoder (default is keymaps.hashmap(flat=True))
ignore = function argument names and indices to ‘ignore’ (default is None)
tol = integer tolerance for rounding (default is None)
deep = boolean for rounding depth (default is False, i.e. ‘shallow’)
purge = boolean for purge cache to archive at maxsize (default is False)

If maxsize is None, this cache will grow without bound.

If keymap is given, it will replace the hashing algorithm for generating cache keys. Several hashing algorithms are available in ‘keymaps’. The default keymap requires arguments to the cached function to be hashable.

If the keymap retains type information, then arguments of different types will be cached separately. For example, f(3.0) and f(3) will be treated as distinct calls with distinct results. Cache typing has a memory penalty, and may also be ignored by some ‘keymaps’.

If ignore is given, the keymap will ignore the arguments with the names and/or positional indices provided. For example, if ignore=(0,), then the key generated for f(1,2) will be identical to that of f(3,2) or f(4,2). If ignore=(‘y’,), then the key generated for f(x=3,y=4) will be identical to that of f(x=3,y=0) or f(x=3,y=10). If ignore=(‘*’,’**’), all varargs and varkwds will be ‘ignored’. Ignored arguments never trigger a recalculation (they only trigger cache lookups), and thus are ‘ignored’. When caching class methods, it may be useful to ignore=(‘self’,).

View cache statistics (hit, miss, load, maxsize, size) with f.info(). Clear the cache and statistics with f.clear(). Replace the cache archive with f.archive(obj). Load from the archive with f.load(), and dump from the cache to the archive with f.dump().
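The rounding behavior described above (cache interpolation via tol) can be illustrated with a minimal sketch. This is not klepto itself; tol_cache and its internals are hypothetical names:

```python
import functools

def tol_cache(tol=None):
    """Memoize, rounding float arguments to `tol` decimal places when
    building the cache key -- a simple form of cache interpolation."""
    def decorator(f):
        cache = {}
        @functools.wraps(f)
        def wrapper(*args):
            # rounding applies only to key generation; on a miss, the
            # original (unrounded) arguments are used for the call
            key = tuple(round(a, tol) if isinstance(a, float) else a
                        for a in args)
            if key not in cache:
                cache[key] = f(*args)
            return cache[key]
        wrapper.cache = cache
        return wrapper
    return decorator

calls = []

@tol_cache(tol=0)
def f(x):
    calls.append(x)
    return x * x

f(3.0)          # stored under key (3.0,)
r = f(3.1)      # rounds to 3.0: cache hit, f is not re-evaluated
f(3.6)          # rounds to 4.0: miss, a new value is stored
```

Note that the hit for f(3.1) returns the value computed for f(3.0), matching the lookup behavior described in the docstring.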

__call__(user_function)

Call self as a function.

__get__(obj, objtype)

support instance methods

__init__(maxsize=100, cache=None, keymap=None, ignore=None, tol=None, deep=False, purge=False)
__module__ = 'klepto._cache'
static __new__(cls, *args, **kwds)
__reduce__()

Helper for pickle.

__weakref__

list of weak references to the object (if defined)

class lru_cache(*args, **kwds)

Bases: object

least-recently-used (LRU) cache decorator.

This decorator memoizes a function’s return value each time it is called. If called later with the same arguments, the cached value is returned, and not re-evaluated. To avoid memory issues, a maximum cache size is imposed. For caches with an archive, the full cache dumps to archive upon reaching maxsize. For caches without an archive, the LRU algorithm manages the cache. Caches with an archive will use the latter behavior when ‘purge’ is False. This decorator takes an integer tolerance ‘tol’, equal to the number of decimal places to which it will round off floats, and a bool ‘deep’ for whether the rounding on inputs will be ‘shallow’ or ‘deep’. Note that rounding is not applied to the calculation of new results, but rather as a simple form of cache interpolation. For example, with tol=0 and a cached value for f(3.0), f(3.1) will lookup f(3.0) in the cache while f(3.6) will store a new value; however if tol=1, both f(3.1) and f(3.6) will store new values.

maxsize = maximum cache size
cache = storage hashmap (default is {})
keymap = cache key encoder (default is keymaps.hashmap(flat=True))
ignore = function argument names and indices to ‘ignore’ (default is None)
tol = integer tolerance for rounding (default is None)
deep = boolean for rounding depth (default is False, i.e. ‘shallow’)
purge = boolean for purging the cache to the archive at maxsize (default is False)

If maxsize is None, this cache will grow without bound.

If keymap is given, it will replace the hashing algorithm for generating cache keys. Several hashing algorithms are available in ‘keymaps’. The default keymap requires arguments to the cached function to be hashable.

If the keymap retains type information, then arguments of different types will be cached separately. For example, f(3.0) and f(3) will be treated as distinct calls with distinct results. Cache typing has a memory penalty, and may also be ignored by some ‘keymaps’.

If ignore is given, the keymap will ignore the arguments with the names and/or positional indices provided. For example, if ignore=(0,), then the key generated for f(1,2) will be identical to that of f(3,2) or f(4,2). If ignore=(‘y’,), then the key generated for f(x=3,y=4) will be identical to that of f(x=3,y=0) or f(x=3,y=10). If ignore=(‘*’,’**’), all varargs and varkwds will be ‘ignored’. Ignored arguments never trigger a recalculation (they only trigger cache lookups), and thus are ‘ignored’. When caching class methods, it may be useful to use ignore=(‘self’,).

View cache statistics (hit, miss, load, maxsize, size) with f.info(). Clear the cache and statistics with f.clear(). Replace the cache archive with f.archive(obj). Load from the archive with f.load(), and dump from the cache to the archive with f.dump().
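When there is no archive, the eviction policy is what distinguishes the cache variants. LRU eviction can be sketched in a few lines with collections.OrderedDict; this is illustrative only, and differs from klepto’s implementation:

```python
from collections import OrderedDict

class LRU:
    """Evict the least-recently-used entry once maxsize is reached."""
    def __init__(self, maxsize=2):
        self.maxsize = maxsize
        self.data = OrderedDict()   # insertion/access order = recency
    def __setitem__(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)       # refresh recency
        elif len(self.data) >= self.maxsize:
            self.data.popitem(last=False)    # drop the oldest entry
        self.data[key] = value
    def __getitem__(self, key):
        self.data.move_to_end(key)           # a hit refreshes recency
        return self.data[key]

cache = LRU(maxsize=2)
cache['a'] = 1
cache['b'] = 2
_ = cache['a']        # 'a' is now the most recently used
cache['c'] = 3        # evicts 'b', the least recently used
```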

__call__(user_function)

Call self as a function.

__get__(obj, objtype)

support instance methods

__init__(maxsize=100, cache=None, keymap=None, ignore=None, tol=None, deep=False, purge=False)
__module__ = 'klepto._cache'
static __new__(cls, *args, **kwds)
__reduce__()

Helper for pickle.

__weakref__

list of weak references to the object (if defined)

class mru_cache(*args, **kwds)

Bases: object

most-recently-used (MRU) cache decorator.

This decorator memoizes a function’s return value each time it is called. If called later with the same arguments, the cached value is returned, and not re-evaluated. To avoid memory issues, a maximum cache size is imposed. For caches with an archive, the full cache dumps to archive upon reaching maxsize. For caches without an archive, the MRU algorithm manages the cache. Caches with an archive will use the latter behavior when ‘purge’ is False. This decorator takes an integer tolerance ‘tol’, equal to the number of decimal places to which it will round off floats, and a bool ‘deep’ for whether the rounding on inputs will be ‘shallow’ or ‘deep’. Note that rounding is not applied to the calculation of new results, but rather as a simple form of cache interpolation. For example, with tol=0 and a cached value for f(3.0), f(3.1) will lookup f(3.0) in the cache while f(3.6) will store a new value; however if tol=1, both f(3.1) and f(3.6) will store new values.

maxsize = maximum cache size
cache = storage hashmap (default is {})
keymap = cache key encoder (default is keymaps.hashmap(flat=True))
ignore = function argument names and indices to ‘ignore’ (default is None)
tol = integer tolerance for rounding (default is None)
deep = boolean for rounding depth (default is False, i.e. ‘shallow’)
purge = boolean for purging the cache to the archive at maxsize (default is False)

If maxsize is None, this cache will grow without bound.

If keymap is given, it will replace the hashing algorithm for generating cache keys. Several hashing algorithms are available in ‘keymaps’. The default keymap requires arguments to the cached function to be hashable.

If the keymap retains type information, then arguments of different types will be cached separately. For example, f(3.0) and f(3) will be treated as distinct calls with distinct results. Cache typing has a memory penalty, and may also be ignored by some ‘keymaps’.

If ignore is given, the keymap will ignore the arguments with the names and/or positional indices provided. For example, if ignore=(0,), then the key generated for f(1,2) will be identical to that of f(3,2) or f(4,2). If ignore=(‘y’,), then the key generated for f(x=3,y=4) will be identical to that of f(x=3,y=0) or f(x=3,y=10). If ignore=(‘*’,’**’), all varargs and varkwds will be ‘ignored’. Ignored arguments never trigger a recalculation (they only trigger cache lookups), and thus are ‘ignored’. When caching class methods, it may be useful to use ignore=(‘self’,).

View cache statistics (hit, miss, load, maxsize, size) with f.info(). Clear the cache and statistics with f.clear(). Replace the cache archive with f.archive(obj). Load from the archive with f.load(), and dump from the cache to the archive with f.dump().
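MRU differs from LRU only in which entry is discarded: at maxsize, the entry touched most recently is evicted. A minimal sketch, again illustrative rather than klepto’s implementation:

```python
from collections import OrderedDict

class MRU:
    """Evict the most-recently-used entry once maxsize is reached."""
    def __init__(self, maxsize=2):
        self.maxsize = maxsize
        self.data = OrderedDict()
    def __setitem__(self, key, value):
        if key not in self.data and len(self.data) >= self.maxsize:
            self.data.popitem(last=True)     # drop the *newest* entry
        self.data[key] = value
        self.data.move_to_end(key)           # mark as most recent
    def __getitem__(self, key):
        self.data.move_to_end(key)           # a hit refreshes recency
        return self.data[key]

cache = MRU(maxsize=2)
cache['a'] = 1
cache['b'] = 2
cache['c'] = 3        # evicts 'b', the most recently used entry
```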

__call__(user_function)

Call self as a function.

__get__(obj, objtype)

support instance methods

__init__(maxsize=100, cache=None, keymap=None, ignore=None, tol=None, deep=False, purge=False)
__module__ = 'klepto._cache'
static __new__(cls, *args, **kwds)
__reduce__()

Helper for pickle.

__weakref__

list of weak references to the object (if defined)

class no_cache(maxsize=0, cache=None, keymap=None, ignore=None, tol=None, deep=False, purge=True)

Bases: object

empty (NO) cache decorator.

Unlike other cache decorators, this decorator does not cache. It is a dummy that collects statistics and conforms to the caching interface. This decorator takes an integer tolerance ‘tol’, equal to the number of decimal places to which it will round off floats, and a bool ‘deep’ for whether the rounding on inputs will be ‘shallow’ or ‘deep’. Note that rounding is not applied to the calculation of new results, but rather as a simple form of cache interpolation. For example, with tol=0 and a cached value for f(3.0), f(3.1) will lookup f(3.0) in the cache while f(3.6) will store a new value; however if tol=1, both f(3.1) and f(3.6) will store new values.

maxsize = maximum cache size [fixed at maxsize=0]
cache = storage hashmap (default is {})
keymap = cache key encoder (default is keymaps.hashmap(flat=True))
ignore = function argument names and indices to ‘ignore’ (default is None)
tol = integer tolerance for rounding (default is None)
deep = boolean for rounding depth (default is False, i.e. ‘shallow’)
purge = boolean for purging the cache to the archive at maxsize (fixed at True)

If keymap is given, it will replace the hashing algorithm for generating cache keys. Several hashing algorithms are available in ‘keymaps’. The default keymap requires arguments to the cached function to be hashable.

If the keymap retains type information, then arguments of different types will be cached separately. For example, f(3.0) and f(3) will be treated as distinct calls with distinct results. Cache typing has a memory penalty, and may also be ignored by some ‘keymaps’. Here, the keymap is only used to look up keys in an associated archive.

If ignore is given, the keymap will ignore the arguments with the names and/or positional indices provided. For example, if ignore=(0,), then the key generated for f(1,2) will be identical to that of f(3,2) or f(4,2). If ignore=(‘y’,), then the key generated for f(x=3,y=4) will be identical to that of f(x=3,y=0) or f(x=3,y=10). If ignore=(‘*’,’**’), all varargs and varkwds will be ‘ignored’. Ignored arguments never trigger a recalculation (they only trigger cache lookups), and thus are ‘ignored’. When caching class methods, it may be useful to use ignore=(‘self’,).

View cache statistics (hit, miss, load, maxsize, size) with f.info(). Clear the cache and statistics with f.clear(). Replace the cache archive with f.archive(obj). Load from the archive with f.load(), and dump from the cache to the archive with f.dump().
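A dummy cache like this is just a pass-through that keeps the statistics interface alive. The following minimal sketch (hypothetical names, not klepto’s no_cache) shows the idea: every call is a miss, but the decorated function still reports stats:

```python
import functools

def no_cache(f):
    """Never cache: every call is a miss, but statistics are still
    collected, so the decorated function conforms to the caching
    interface."""
    stats = {'hit': 0, 'miss': 0, 'size': 0, 'maxsize': 0}
    @functools.wraps(f)
    def wrapper(*args, **kwds):
        stats['miss'] += 1            # nothing is ever stored
        return f(*args, **kwds)
    wrapper.info = lambda: dict(stats)
    return wrapper

@no_cache
def f(x):
    return x * x

f(2); f(2); f(3)
assert f.info() == {'hit': 0, 'miss': 3, 'size': 0, 'maxsize': 0}
```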

__call__(user_function)

Call self as a function.

__get__(obj, objtype)

support instance methods

__init__(maxsize=0, cache=None, keymap=None, ignore=None, tol=None, deep=False, purge=True)
__module__ = 'klepto._cache'
__reduce__()

Helper for pickle.

__weakref__

list of weak references to the object (if defined)

class rr_cache(*args, **kwds)

Bases: object

random-replacement (RR) cache decorator.

This decorator memoizes a function’s return value each time it is called. If called later with the same arguments, the cached value is returned, and not re-evaluated. To avoid memory issues, a maximum cache size is imposed. For caches with an archive, the full cache dumps to archive upon reaching maxsize. For caches without an archive, the RR algorithm manages the cache. Caches with an archive will use the latter behavior when ‘purge’ is False. This decorator takes an integer tolerance ‘tol’, equal to the number of decimal places to which it will round off floats, and a bool ‘deep’ for whether the rounding on inputs will be ‘shallow’ or ‘deep’. Note that rounding is not applied to the calculation of new results, but rather as a simple form of cache interpolation. For example, with tol=0 and a cached value for f(3.0), f(3.1) will lookup f(3.0) in the cache while f(3.6) will store a new value; however if tol=1, both f(3.1) and f(3.6) will store new values.

maxsize = maximum cache size
cache = storage hashmap (default is {})
keymap = cache key encoder (default is keymaps.hashmap(flat=True))
ignore = function argument names and indices to ‘ignore’ (default is None)
tol = integer tolerance for rounding (default is None)
deep = boolean for rounding depth (default is False, i.e. ‘shallow’)
purge = boolean for purging the cache to the archive at maxsize (default is False)

If maxsize is None, this cache will grow without bound.

If keymap is given, it will replace the hashing algorithm for generating cache keys. Several hashing algorithms are available in ‘keymaps’. The default keymap requires arguments to the cached function to be hashable.

If the keymap retains type information, then arguments of different types will be cached separately. For example, f(3.0) and f(3) will be treated as distinct calls with distinct results. Cache typing has a memory penalty, and may also be ignored by some ‘keymaps’.

If ignore is given, the keymap will ignore the arguments with the names and/or positional indices provided. For example, if ignore=(0,), then the key generated for f(1,2) will be identical to that of f(3,2) or f(4,2). If ignore=(‘y’,), then the key generated for f(x=3,y=4) will be identical to that of f(x=3,y=0) or f(x=3,y=10). If ignore=(‘*’,’**’), all varargs and varkwds will be ‘ignored’. Ignored arguments never trigger a recalculation (they only trigger cache lookups), and thus are ‘ignored’. When caching class methods, it may be useful to use ignore=(‘self’,).

View cache statistics (hit, miss, load, maxsize, size) with f.info(). Clear the cache and statistics with f.clear(). Replace the cache archive with f.archive(obj). Load from the archive with f.load(), and dump from the cache to the archive with f.dump().

http://en.wikipedia.org/wiki/Cache_algorithms#Random_Replacement
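Random replacement simply discards an arbitrary entry when the cache is full. A minimal sketch of the eviction step (illustrative, not klepto’s implementation; a fixed seed is used for reproducibility):

```python
import random

class RR:
    """Evict a randomly chosen entry once maxsize is reached."""
    def __init__(self, maxsize=2, seed=0):
        self.maxsize = maxsize
        self.data = {}
        self.rng = random.Random(seed)
    def __setitem__(self, key, value):
        if key not in self.data and len(self.data) >= self.maxsize:
            victim = self.rng.choice(list(self.data))  # random victim
            del self.data[victim]
        self.data[key] = value

cache = RR(maxsize=2)
cache['a'] = 1
cache['b'] = 2
cache['c'] = 3        # one of 'a' or 'b' was evicted at random
```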

__call__(user_function)

Call self as a function.

__get__(obj, objtype)

support instance methods

__init__(maxsize=100, cache=None, keymap=None, ignore=None, tol=None, deep=False, purge=False)
__module__ = 'klepto._cache'
static __new__(cls, *args, **kwds)
__reduce__()

Helper for pickle.

__weakref__

list of weak references to the object (if defined)

collapse module¶

generate a selector for a mask of indices

generate a selector for a mask of tuples (pairs)

generate a filter for a position mask (dict, set, or where)

separate a mask into a list of ints and a list of tuples (pairs). The mask should be composed of indices and pairs of indices

generate a filter for a weight mask (dict, set, or where)

return a set of pairs of indices where the parameters exhibit a dimensional collapse. Dimensional collapse is defined by: max(pairwise(parameters)) <= tolerance over N generations when offset=False, or ptp(pairwise(parameters)) <= tolerance over N generations when offset=True.

collapse will be ignored at any pairs of indices specified in the mask. If single indices are provided, ignore all pairs with the given indices.

return a set of indices where the parameters exhibit a dimensional collapse at the specified target. Dimensional collapse is defined by: change(param[i]) <= tolerance over N generations, where change(param[i]) = max(param[i]) - min(param[i]) if target is None, or change(param[i]) = abs(param[i] - target) otherwise.

target can be None, a single value, or a list of values of param length

collapse will be ignored at any indices specified in the mask
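The collapse test described above amounts to a range (or distance-to-target) check on each parameter's trace over the last N generations. A rough sketch for a scalar target (hypothetical helper, not this module's implementation):

```python
def collapse_at(params, tolerance=1e-4, target=None, mask=()):
    """Return the set of indices i where parameter i has collapsed:
    max(param[i]) - min(param[i]) <= tolerance when target is None,
    or abs(param[i] - target) <= tolerance at every generation.

    `params[i]` is the trace of parameter i over the last N generations.
    """
    collapsed = set()
    for i, trace in enumerate(params):
        if i in mask:                       # ignore masked indices
            continue
        if target is None:
            if max(trace) - min(trace) <= tolerance:
                collapsed.add(i)
        elif all(abs(v - target) <= tolerance for v in trace):
            collapsed.add(i)
    return collapsed

# parameter 0 has stopped moving; parameter 1 is still changing
traces = [[1.0, 1.0, 1.0], [0.2, 0.5, 0.9]]
assert collapse_at(traces, tolerance=1e-4) == {0}
```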

return a dict of {index:bounds} where the parameters exhibit a collapse in bounds for regions of parameter space with a comparably high cost value. Bounds are provided by an interval (min,max), or a list of intervals. Bounds collapse will occur when: cost(param) - min(cost) >= limit, for all N samples within an interval.

if clip is True, then clip beyond the space sampled by stepmon

if mask is provided, the intersection of bounds and mask is returned. mask is a dict of {index:bounds}, formatted same as the return value.

return a dict of {measure: pairs_of_indices} where the product_measure exhibits a dimensional collapse in position. Dimensional collapse in position is defined by:

collapse will be ignored at (measure,pairs) as specified in the mask. Format of mask will determine the return value for this function. Default mask format is dict of {measure: pairs_of_indices}, with alternate formatting available as a set of tuples of (measure,pair).

return a dict of {measure:indices} where the product_measure exhibits a dimensional collapse in weight. Dimensional collapse in weight is defined by: max(weight[i]) <= tolerance over N generations.

collapse will be ignored at (measure,indices) as specified in the mask. Format of mask will determine the return value for this function. Default mask format is dict of {measure: indices}, with alternate formatting available as a set of tuples of (measure,index).

collapsed(message)

extract collapse result from collapse message

generate a selector for a mask of pairs and/or indices

constraints module¶

Tools for building and applying constraints and penalties.

and_(*constraints, **settings)

combine several constraints into a single constraint

Inputs:

constraints – constraint functions

maxiter – maximum number of iterations to attempt to solve [default: 100]
onexit – function x’ = f(x) to call on success [default: None]
onfail – function x’ = f(x) to call on failure [default: None]

Note

If a repeating cycle is detected, some of the inputs may be randomized.
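The fixed-point behavior of combining constraints can be sketched as follows (hypothetical helper names; mystic's and_ additionally handles the settings above and randomizes inputs on detected cycles):

```python
def and_sketch(*constraints, maxiter=100):
    """Combine constraints: apply each in turn until one full pass
    leaves x unchanged (i.e. x is a fixed point of all constraints)."""
    def combined(x):
        for _ in range(maxiter):
            x_prev = list(x)
            for c in constraints:
                x = c(x)
            if list(x) == x_prev:  # all constraints simultaneously satisfied
                return x
        return x  # give up after maxiter passes
    return combined

# two toy constraints: clip to [0, 10], and force x[0] == x[1]
clip = lambda x: [min(max(i, 0), 10) for i in x]
tie = lambda x: [x[1], x[1]] + list(x[2:])

both = and_sketch(clip, tie)
print(both([-5, 12, 3]))  # [10, 10, 3]
```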

as_constraint(penalty, *args, **kwds)

Convert a penalty function to a constraints solver.

Inputs:

penalty – a penalty function

lower_bounds – list of lower bounds on solution values
upper_bounds – list of upper bounds on solution values
nvars – number of parameter values
solver – the mystic solver to use in the optimization
termination – the mystic termination to use in the optimization

NOTE: The default solver is ‘diffev’, with npop=min(40, ndim*5). The default termination is ChangeOverGeneration(), and the default guess is randomly selected points between the upper and lower bounds.

as_penalty(constraint, ptype=None, *args, **kwds)

Convert a constraints solver to a penalty function.

Inputs:

constraint – a constraints solver
ptype – penalty function type [default: quadratic_equality]
args – arguments for the constraints solver [default: ()]
kwds – keyword arguments for the constraints solver [default: {}]
k – penalty multiplier
h – iterative multiplier

bounded(seq, bounds, index=None, clip=True, nearest=True)

bound a sequence by bounds = [min,max]

For example: >>> sequence = [0.123, 1.244, -4.755, 10.731, 6.207] >>> >>> bounded(sequence, (0,5)) array([0.123, 1.244, 0. , 5. , 5. ]) >>> >>> bounded(sequence, (0,5), index=(0,2,4)) array([ 0.123, 1.244, 0. , 10.731, 5. ]) >>> >>> bounded(sequence, (0,5), clip=False) array([0.123 , 1.244 , 3.46621839, 1.44469038, 4.88937466]) >>> >>> bounds = [(0,5),(7,10)] >>> my.constraints.bounded(sequence, bounds) array([ 0.123, 1.244, 0. , 10. , 7. ]) >>> my.constraints.bounded(sequence, bounds, nearest=False) array([ 0.123, 1.244, 7. , 10. , 5. ]) >>> my.constraints.bounded(sequence, bounds, nearest=False, clip=False) array([0.123 , 1.244 , 0.37617154, 8.79013111, 7.40864242]) >>> my.constraints.bounded(sequence, bounds, clip=False) array([0.123 , 1.244 , 2.38186577, 7.41374049, 9.14662911]) >>>

boundsconstrain(min, max, **kwds)

build a constraint from the bounds (min,max)

Input:

min: list of floats specifying the lower bounds
max: list of floats specifying the upper bounds
symbolic: bool, if True, use symbolic constraints [default: True]
clip: bool, if True, clip exterior values to the bounds [default: True]

Notes

Prepares a constraint function that imposes the bounds. The intent is so that the bounds and other constraints can be imposed concurrently (e.g. using mystic.constraints.and_). This is slower but more robust than applying the bounds sequentially with the other constraints (the default for a solver).

For entries where there is no upper bound, use either inf or None. Similarly for entries where there is no lower bound. When no upper bound exists for any of the entries, max should be an iterable with all entries of inf or None. Again, similarly for lower bounds.

If symbolic=True, use symbolic constraints to impose the bounds; otherwise use mystic.constraints.impose_bounds. Using clip=False requires symbolic=False.
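A minimal plain-Python sketch of the clip=True behavior described above, assuming None or inf marks a missing bound (hypothetical helper; the symbolic mode in mystic does considerably more):

```python
import math

def bounds_sketch(mins, maxs):
    """Build a clipping constraint from lower/upper bound lists.
    None or +/-inf entries mean 'no bound' for that parameter."""
    def constraint(x):
        out = []
        for xi, lo, hi in zip(x, mins, maxs):
            if lo is not None and not math.isinf(lo):
                xi = max(xi, lo)  # clip from below
            if hi is not None and not math.isinf(hi):
                xi = min(xi, hi)  # clip from above
            out.append(xi)
        return out
    return constraint

# first parameter bounded to [0, 5]; second parameter unbounded
c = bounds_sketch([0.0, None], [5.0, float('inf')])
print(c([-2.0, 99.0]))  # [0.0, 99.0]
```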

discrete(samples, index=None)

impose a discrete set of input values for the selected function

The function’s input will be mapped to the given discrete set

>>> @discrete([1.0, 2.0])
... def identity(x):
...     return x

>>> identity([0.123, 1.789, 4.000])
[1.0, 2.0, 2.0]

>>> @discrete([1,3,5,7], index=(0,3))
... def squared(x):
...     return [i**2 for i in x]

>>> squared([0,2,4,6,8,10])
[1, 4, 16, 25, 64, 100]

has_unique(x)

check for uniqueness of the members of x
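A minimal sketch of the uniqueness check (hypothetical helper name):

```python
def has_unique_sketch(x):
    # uniqueness holds when no value in x repeats
    return len(set(x)) == len(list(x))

print(has_unique_sketch([1, 2, 3]))  # True
print(has_unique_sketch([1, 2, 2]))  # False
```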

generate a function, where some input tracks another input

mask should be a set of tuples of positional index and tracked index, where the tuple should contain two different integers. The mask will be applied to the input, before the decorated function is called.

The offset is applied to the second member of the tuple, and can accumulate.

For example,
>>> @impose_as([(0,1),(3,1),(4,5),(5,6),(5,7)])
... def same(x):
...   return x
...
>>> same([9,8,7,6,5,4,3,2,1])
[9, 9, 7, 9, 5, 5, 5, 5, 1]
>>> same([0,1,0,1])
[0, 0, 0, 0]
>>> same([-1,-2,-3,-4,-5,-6,-7])
[-1, -1, -3, -1, -5, -5, -5]
>>>
>>> @impose_as([(0,1),(3,1),(4,5),(5,6),(5,7)], 10)
... def doit(x):
...   return x
...
>>> doit([9,8,7,6,5,4,3,2,1])
[9, 19, 7, 9, 5, 15, 25, 25, 1]
>>> doit([0,1,0,1])
[0, 10, 0, 0]
>>> doit([-1,-2,-3,-4,-5,-6])
[-1, 9, -3, -1, -5, 5]
>>> doit([-1,-2,-3,-4,-5,-6,-7])
[-1, 9, -3, -1, -5, 5, 15]

impose_at(index, target=0.0)

generate a function, where some input is set to the target

index should be a set of indices to be fixed at the target. The target can either be a single value (e.g. float), or a list of values.

For example,
>>> @impose_at([1,3,4,5,7], -99)
... def same(x):
...   return x
...
>>> same([1,1,1,1,1,1,1])
[1, -99, 1, -99, -99, -99, 1]
>>> same([1,1,1,1])
[1, -99, 1, -99]
>>> same([1,1])
[1, -99]
>>>
>>> @impose_at([1,3,4,5,7], [0,2,4,6])
... def doit(x):
...   return x
...
>>> doit([1,1,1,1,1,1,1])
[1, 0, 1, 2, 4, 6, 1]
>>> doit([1,1,1,1])
[1, 0, 1, 2]
>>> doit([1,1])
[1, 0]

impose_bounds(bounds, index=None, clip=True, nearest=True)

generate a function where bounds=[min,max] on a sequence are imposed

For example:
>>> sequence = [0.123, 1.244, -4.755, 10.731, 6.207]
>>>
>>> @impose_bounds((0,5))
... def simple(x):
...   return x
...
>>> simple(sequence)
[0.123, 1.244, 0.0, 5.0, 5.0]
>>>
>>> @impose_bounds((0,5), index=(0,2,4))
... def double(x):
...   return [i*2 for i in x]
...
>>> double(sequence)
[0.246, 2.488, 0.0, 21.462, 10.0]
>>>
>>> @impose_bounds((0,5), index=(0,2,4), clip=False)
... def square(x):
...   return [i*i for i in x]
...
>>> square(sequence)
[0.015129, 1.547536, 14.675791119810688, 115.154361, 1.399551896073788]
>>>
>>> @impose_bounds([(0,5),(7,10)])
... def simple(x):
...   return x
...
>>> simple(sequence)
[0.123, 1.244, 0.0, 10.0, 7.0]
>>>
>>> @impose_bounds([(0,5),(7,10)], nearest=False)
... def simple(x):
...   return x
...
>>> simple(sequence)
[0.123, 1.244, 0.0, 5.0, 5.0]
>>> simple(sequence)
[0.123, 1.244, 7.0, 10.0, 5.0]
>>>
>>> @impose_bounds({0:(0,5), 2:(0,5), 4:[(0,5),(7,10)]})
... def simple(x):
...   return x
...
>>> simple(sequence)
[0.123, 1.244, 0.0, 10.731, 7.0]
>>>
>>> @impose_bounds({0:(0,5), 2:(0,5), 4:[(0,5),(7,10)]}, index=(0,2))
... def simple(x):
...   return x
...
>>> simple(sequence)
[0.123, 1.244, 0.0, 10.731, 6.207]

impose_measure(npts, tracking={}, noweight={})

generate a function, that constrains measure positions and weights

npts is a tuple of the product_measure dimensionality

tracking is a dict of collapses, or a tuple of dicts of collapses. a tracking collapse is a dict of {measure: {pairs of indices}}, where the pairs of indices are where the positions will be constrained to have the same value, and the weight from the second index in the pair will be removed and added to the weight of the first index

noweight is a dict of collapses, or a tuple of dicts of collapses. a noweight collapse is a dict of {measure: {indices}}, where the indices are where the measure will be constrained to have zero weight

For example,
>>> pos = {0: {(0,1)}, 1:{(0,2)}}
>>> wts = {0: {1}, 1: {1, 2}}
>>> npts = (3,3)
>>>
>>> @impose_measure(npts, pos)
... def same(x):
...   return x
...
>>> same([.5, 0., .5, 2., 4., 6., .25, .5, .25, 6., 4., 2.])
[0.5, 0.0, 0.5, 2.0, 2.0, 6.0, 0.5, 0.5, 0.0, 5.0, 3.0, 5.0]
>>> same([1./3, 1./3, 1./3, 1., 2., 3., 1./3, 1./3, 1./3, 1., 2., 3.])
[0.6666666666666666, 0.0, 0.3333333333333333, 1.3333333333333335, 1.3333333333333335, 3.3333333333333335, 0.6666666666666666, 0.3333333333333333, 0.0, 1.6666666666666667, 2.666666666666667, 1.6666666666666667]
>>>
>>> @impose_measure(npts, {}, wts)
... def doit(x):
...   return x
...
>>> doit([.5, 0., .5, 2., 4., 6., .25, .5, .25, 6., 4., 2.])
[0.5, 0.0, 0.5, 2.0, 4.0, 6.0, 1.0, 0.0, 0.0, 4.0, 2.0, 0.0]
>>> doit([1./3, 1./3, 1./3, 1., 2., 3., 1./3, 1./3, 1./3, 1., 2., 3.])
[0.5, 0.0, 0.5, 1.0, 2.0, 3.0, 1.0, 0.0, 0.0, 2.0, 3.0, 4.0]
>>>
>>> @impose_measure(npts, pos, wts)
... def both(x):
...   return x
...
>>> both([1./3, 1./3, 1./3, 1., 2., 3., 1./3, 1./3, 1./3, 1., 2., 3.])
[0.66666666666666663, 0.0, 0.33333333333333331, 1.3333333333333335, 1.3333333333333335, 3.3333333333333335, 1.0, 0.0, 0.0, 2.0, 3.0, 2.0]
>>>

impose_position(npts, tracking)

generate a function, that constrains measure positions

npts is a tuple of the product_measure dimensionality

tracking is a dict of collapses, or a tuple of dicts of collapses. a tracking collapse is a dict of {measure: {pairs of indices}}, where the pairs of indices are where the positions will be constrained to have the same value, and the weight from the second index in the pair will be removed and added to the weight of the first index

For example,
>>> pos = {0: {(0,1)}, 1:{(0,2)}}
>>> npts = (3,3)
>>>
>>> @impose_position(npts, pos)
... def same(x):
...   return x
...
>>> same([.5, 0., .5, 2., 4., 6., .25, .5, .25, 6., 4., 2.])
[0.5, 0.0, 0.5, 2.0, 2.0, 6.0, 0.5, 0.5, 0.0, 5.0, 3.0, 5.0]
>>> same([1./3, 1./3, 1./3, 1., 2., 3., 1./3, 1./3, 1./3, 1., 2., 3.])
[0.6666666666666666, 0.0, 0.3333333333333333, 1.3333333333333335, 1.3333333333333335, 3.3333333333333335, 0.6666666666666666, 0.3333333333333333, 0.0, 1.6666666666666667, 2.666666666666667, 1.6666666666666667]
>>>

impose_unique(seq=None)

ensure all values are unique and found in the given set

For example:
>>> @impose_unique(range(11))
... def doit(x):
...   return x
...
>>> doit([1,2,3,1,2,4])
[1, 2, 3, 9, 8, 4]
>>> doit([1,2,3,1,2,10])
[1, 2, 3, 8, 5, 10]
>>> try:
...   doit([1,2,3,1,2,13])
... except ValueError:
...   print("Bad Input")
...
Bad Input

impose_weight(npts, noweight)

generate a function, that constrains measure weights

npts is a tuple of the product_measure dimensionality

noweight is a dict of collapses, or a tuple of dicts of collapses. a noweight collapse is a dict of {measure: {indices}}, where the indices are where the measure will be constrained to have zero weight

For example,
>>> wts = {0: {1}, 1: {1, 2}}
>>> npts = (3,3)
>>>
>>> @impose_weight(npts, wts)
... def doit(x):
...   return x
...
>>> doit([.5, 0., .5, 2., 4., 6., .25, .5, .25, 6., 4., 2.])
[0.5, 0.0, 0.5, 2.0, 4.0, 6.0, 1.0, 0.0, 0.0, 4.0, 2.0, 0.0]
>>> doit([1./3, 1./3, 1./3, 1., 2., 3., 1./3, 1./3, 1./3, 1., 2., 3.])
[0.5, 0.0, 0.5, 1.0, 2.0, 3.0, 1.0, 0.0, 0.0, 2.0, 3.0, 4.0]
>>>

integers(ints=True, index=None)

impose the set of integers (by rounding) for the given function

The function’s input will be mapped to the ints, where:
• if ints is True, return results as ints; otherwise, use floats

• if index tuple provided, only round at the given indices

>>> @integers()
... def identity(x):
...     return x

>>> identity([0.123, 1.789, 4.000])
[0, 2, 4]

>>> @integers(ints=float, index=(0,3,4))
... def squared(x):
...     return [i**2 for i in x]

>>> squared([0.12, 0.12, 4.01, 4.01, 8, 8])
[0.0, 0.0144, 16.080099999999998, 16.0, 64.0, 64.0]

issolution(constraints, guess, tol=0.001)

Returns whether the guess is a solution to the constraints

Input:

constraints – a constraints solver function or a penalty function
guess – list of parameter values proposed to solve the constraints
tol – residual error magnitude for which constraints are considered solved

For example:
>>> @normalized()
... def constraint(x):
...   return x
...
>>> constraint([.5,.5])
[0.5, 0.5]
>>> issolution(constraint, [.5,.5])
True
>>>
>>> from mystic.penalty import quadratic_inequality
>>> @quadratic_inequality(lambda x: x[0] - x[1] + 10)
... def penalty(x):
...   return 0.0
...
>>> penalty([-10,.5])
0.0
>>> issolution(penalty, [-10,.5])
True

near_integers(x)

the sum of all deviations from int values
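A minimal sketch of this measure, assuming "deviation from int values" means distance to the nearest integer (hypothetical helper name):

```python
def near_integers_sketch(x):
    # sum each value's distance to its nearest integer;
    # the result is 0.0 exactly when every entry is integer-valued
    return sum(abs(xi - round(xi)) for xi in x)

print(near_integers_sketch([1.0, 2.0, 3.0]))  # 0.0
print(near_integers_sketch([1.0, 2.25]))      # 0.25
```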

normalized(mass=1.0)

bind a normalization constraint to a given constraints function.

Inputs:

mass – the target sum of normalized weights

A constraints function takes an iterable x as input, returning a modified x. This function is an “outer” coupling of “normalize” onto another constraints function c(x), such that: x’ = normalize(c(x), mass).

For example:
>>> @normalized()
... def constraint(x):
...   return x
...
>>> constraint([1,2,3])
[0.16666666666666666, 0.33333333333333331, 0.5]

not_(constraint, **settings)

invert the region where the given constraints are valid, then solve

Inputs:

constraint – constraint function

maxiter – maximum number of iterations to attempt to solve [default: 100]
onexit – function x’ = f(x) to call on success [default: None]
onfail – function x’ = f(x) to call on failure [default: None]

Note

If a repeating cycle is detected, some of the inputs may be randomized.

or_(*constraints, **settings)

create a constraint that is satisfied if any constraints are satisfied

Inputs:

constraints – constraint functions

maxiter – maximum number of iterations to attempt to solve [default: 100]
onexit – function x’ = f(x) to call on success [default: None]
onfail – function x’ = f(x) to call on failure [default: None]

Note

If a repeating cycle is detected, some of the inputs may be randomized.

solve(constraints, guess=None, nvars=None, solver=None, lower_bounds=None, upper_bounds=None, termination=None)

Use optimization to find a solution to a set of constraints.

Inputs:

constraints – a constraints solver function or a penalty function

guess – list of parameter values proposed to solve the constraints
lower_bounds – list of lower bounds on solution values
upper_bounds – list of upper bounds on solution values
nvars – number of parameter values
solver – the mystic solver to use in the optimization
termination – the mystic termination to use in the optimization

NOTE: The resulting constraints will likely be more expensive to evaluate and less accurate than writing the constraints solver from scratch.

NOTE: The ensemble solvers are available, using the default NestedSolver, where the keyword ‘guess’ can be used to set the number of solvers.

NOTE: The default solver is ‘diffev’, with npop=min(40, ndim*5). The default termination is ChangeOverGeneration(), and the default guess is randomly selected points between the upper and lower bounds.

unique(seq, full=None)

replace the duplicate values with unique values in ‘full’

If full is a type (int or float), then unique values of the given type are selected from range(min(seq),max(seq)). If full is a dict of {'min':min, 'max':max}, then unique floats are selected from range(min(seq),max(seq)). If full is a sequence (list or set), then unique values are selected from the given sequence.

For example:
>>> unique([1,2,3,1,2,4], range(11))
[1, 2, 3, 9, 8, 4]
>>> unique([1,2,3,1,2,9], range(11))
[1, 2, 3, 8, 5, 9]
>>> try:
...   unique([1,2,3,1,2,13], range(11))
... except ValueError:
...   pass
...
>>>
>>> unique([1,2,3,1,2,4], {'min':0, 'max':11})
[1, 2, 3, 4.175187820357143, 2.5407265707465716, 4]
>>> unique([1,2,3,1,2,4], {'min':0, 'max':11, 'type':int})
[1, 2, 3, 6, 8, 4]
>>> unique([1,2,3,1,2,4], float)
[1, 2, 3, 1.012375036824941, 3.9821250727509905, 4]
>>> unique([1,2,3,1,2,10], int)
[1, 2, 3, 9, 6, 10]
>>> try:
...   unique([1,2,3,1,2,4], int)
... except ValueError:
...   pass
...

vectorize(constraint, axis=1)

vectorize a constraint for 2D input, x’ = k(x) where x is 2D

Input:

constraint – a mystic constraint, c, where x’ = c(x), x is a list
axis – axis to apply constraints to, must be 0 or 1 (default is 1)

Output:

transform – a transform function, k, where x’ = k(x), x is 2D array

Notes

Produces a constraints function that is of the form required by sklearn.preprocessing.FunctionTransformer(func=transform). Input to the transform is a 2D numpy array of shape (samples, features).

For example:
>>> from mystic.constraints import (impose_bounds, integers,
...                                 with_mean, and_)
>>> cons = and_(impose_bounds([(0,5),(7,10)])(lambda x:x),
...             integers()(lambda x:x), with_mean(6.0)(lambda x:x))
>>> import numpy as np
>>> data = np.random.randn(6,4)
>>> c = vectorize(cons, axis=0)
>>> c(data)
array([[ 3,  9,  3, 10],
       [ 7,  8,  7,  4],
       [ 9,  3,  7,  7],
       [ 7,  3,  8,  7],
       [ 3,  5,  4,  4],
       [ 7,  8,  7,  4]])
>>> _.mean(axis=0)
array([6., 6., 6., 6.])
>>> c = vectorize(cons, axis=1)
>>> c(data)
array([[ 3,  3,  9,  9],
       [ 8,  3,  4,  9],
       [ 5, 10,  4,  5],
       [ 7,  8,  7,  2],
       [ 2,  4,  8, 10],
       [ 7, 10,  5,  2]])
>>> _.mean(axis=1)
array([6., 6., 6., 6., 6., 6.])
>>> from sklearn.preprocessing import FunctionTransformer
>>> k = FunctionTransformer(func=c)
>>> k.fit(data).transform(data).mean(axis=1)
array([6., 6., 6., 6., 6., 6.])

with_constraint(ctype, *args, **kwds)

convert a set transformation to a constraints solver of the chosen type

transformation f(x) is a mapping between x and x’, where x’ = f(x). ctype is a mystic.coupler type [inner, outer, inner_proxy, outer_proxy].

For example:
>>> @with_constraint(inner, kwds={'target':5.0})
... def constraint(x, target):
...   return impose_mean(target, x)
...
>>> x = constraint([1,2,3,4,5])
>>> print(x)
[3.0, 4.0, 5.0, 6.0, 7.0]
>>> mean(x)
5.0

with_mean(target)

bind a mean constraint to a given constraints function.

Inputs:

target – the target mean

A constraints function takes an iterable x as input, returning a modified x. This function is an “outer” coupling of “impose_mean” onto another constraints function c(x), such that: x’ = impose_mean(target, c(x)).

For example:
>>> @with_mean(5.0)
... def constraint(x):
...   x[-1] = x[0]
...   return x
...
>>> x = constraint([1,2,3,4])
>>> print(x)
[4.25, 5.25, 6.25, 4.25]
>>> mean(x)
5.0

with_penalty(ptype, *args, **kwds)

convert a condition to a penalty function of the chosen type

condition f(x) is satisfied when f(x) == 0.0 for equality constraints and f(x) <= 0.0 for inequality constraints. ptype is a mystic.penalty type.

For example:
>>> @with_penalty(quadratic_equality, kwds={'target':5.0})
... def penalty(x, target):
...   return mean(x) - target
...
>>> penalty([1,2,3,4,5])
400.0
>>> penalty([3,4,5,6,7])
7.8886090522101181e-29

with_spread(target)

bind a range constraint to a given constraints function.

Inputs:

target – the target range

A constraints function takes an iterable x as input, returning a modified x. This function is an “outer” coupling of “impose_spread” onto another constraints function c(x), such that: x’ = impose_spread(target, c(x)).

For example:
>>> @with_spread(10.0)
... def constraint(x):
...   return [i**2 for i in x]
...
>>> x = constraint([1,2,3,4])
>>> print(x)
[3.1666666666666665, 5.1666666666666661, 8.5, 13.166666666666666]
>>> spread(x)
10.0

with_std(target)

bind a standard deviation constraint to a given constraints function.

Inputs:

target – the target standard deviation

A constraints function takes an iterable x as input, returning a modified x. This function is an “outer” coupling of “impose_std” onto another constraints function c(x), such that: x’ = impose_std(target, c(x)).

For example:
>>> @with_std(1.0)
... def constraint(x):
...   x[-1] = x[0]
...   return x
...
>>> x = constraint([1,2,3])
>>> print(x)
[0.6262265521467858, 2.747546895706428, 0.6262265521467858]
>>> std(x)
0.99999999999999956

with_variance(target)

bind a variance constraint to a given constraints function.

Inputs:

target – the target variance

A constraints function takes an iterable x as input, returning a modified x. This function is an “outer” coupling of “impose_variance” onto another constraints function c(x), such that: x’ = impose_variance(target, c(x)).

For example:
>>> @with_variance(1.0)
... def constraint(x):
...   x[-1] = x[0]
...   return x
...
>>> x = constraint([1,2,3])
>>> print(x)
[0.6262265521467858, 2.747546895706428, 0.6262265521467858]
>>> variance(x)
0.99999999999999956

coupler module¶

Function Couplers

These methods can be used to couple two functions together, and represent some common patterns found in applying constraints and penalty methods.

For example, the “outer” method called on y = f(x), with outer=c(x), will convert y = f(x) to y’ = c(f(x)). Similarly, the “inner” method called on y = f(x), with inner=c(x), will convert y = f(x) to y’ = f(c(x)).
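The two coupling patterns can be sketched as decorators (hypothetical helper names; the real couplers also accept args and kwds):

```python
def outer_sketch(c):
    """outer coupler: converts y = f(x) to y' = c(f(x))"""
    def decorator(f):
        return lambda x: c(f(x))
    return decorator

def inner_sketch(c):
    """inner coupler: converts y = f(x) to y' = f(c(x))"""
    def decorator(f):
        return lambda x: f(c(x))
    return decorator

double = lambda x: 2 * x

@outer_sketch(double)
def f_out(x):
    return x + 1   # outer: 2*(x+1)

@inner_sketch(double)
def f_in(x):
    return x + 1   # inner: (2*x)+1

print(f_out(3), f_in(3))  # 8 7
```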

penalize a function with another function: y = f(x) to y’ = f(x) + p(x)

This is useful, for example, in penalizing a cost function where the constraints are violated; thus, satisfying the constraints will be preferred at every cost function evaluation.

For example:
>>> def squared(x):
...   return x**2
...
>>> # equivalent to: (x+1) + (x**2)
>>> @additive(squared)
... def constrain(x):
...   return x+1
...
>>> from numpy import array
>>> x = array([1,2,3,4,5])
>>> constrain(x)
array([ 3,  7, 13, 21, 31])

penalize a function with another function: y = f(x) to y’ = f(x) + p(x)

This is useful, for example, in penalizing a cost function where the constraints are violated; thus, satisfying the constraints will be preferred at every cost function evaluation.

This function does not preserve decorated function signature, but passes args and kwds to the penalty function.

and_(*penalties, **settings)

combine several penalties into a single penalty function by summation

Inputs:

penalties – penalty functions

ptype – penalty function type [default: linear_equality]
args – arguments for the penalty function [default: ()]
kwds – keyword arguments for the penalty function [default: {}]
k – penalty multiplier [default: 1]
h – iterative multiplier [default: 5]

NOTE: The defaults provide a linear combination of the individual penalties without any scaling. A different ptype (from ‘mystic.penalty’) will apply a nonlinear scaling to the combined penalty, while a different k will apply a linear scaling.

NOTE: This function is also useful for combining constraints solvers into a single constraints solver; however, it cannot do so directly. Constraints solvers must first be converted to penalty functions (i.e. with ‘as_penalty’), then combined, and then converted back to a constraints solver (i.e. with ‘as_constraint’). The resulting constraints will likely be more expensive to evaluate and less accurate than writing the constraints solver from scratch.

inner(inner=<function <lambda>>, args=None, kwds=None)

nest a function within another function: convert y = f(x) to y’ = f(c(x))

This is a useful function for nesting one constraint in another constraint. A constraints function takes an iterable x as input, returning a modified x. The “inner” coupler is utilized by mystic.solvers to bind constraints to a cost function; thus the constraints are imposed every cost function evaluation.

For example:
>>> def squared(x):
...   return x**2
...
>>> # equivalent to: ((x**2)+1)
>>> @inner(squared)
... def constrain(x):
...   return x+1
...
>>> from numpy import array
>>> x = array([1,2,3,4,5])
>>> constrain(x)
array([ 2,  5, 10, 17, 26])

inner_proxy(inner=<function <lambda>>, args=None, kwds=None)

nest a function within another function: convert y = f(x) to y’ = f(c(x))

This is a useful function for nesting one constraint in another constraint. A constraints function takes an iterable x as input, returning a modified x.

This function applies the “inner” coupler pattern. However, it does not preserve decorated function signature – it passes args and kwds to the inner function instead of the decorated function.

not_(penalty, **settings)

invert a penalty, so that the region where the given penalty is valid is penalized

Inputs:

penalty – a penalty function

ptype – penalty function type [default: linear_equality]
args – arguments for the penalty function [default: ()]
kwds – keyword arguments for the penalty function [default: {}]
k – penalty multiplier [default: 1]
h – iterative multiplier [default: 5]

or_(*penalties, **settings)

create a single penalty that selects the minimum of several penalties

Inputs:

penalties – penalty functions

ptype – penalty function type [default: linear_equality]
args – arguments for the penalty function [default: ()]
kwds – keyword arguments for the penalty function [default: {}]
k – penalty multiplier [default: 1]
h – iterative multiplier [default: 5]

NOTE: The defaults provide a linear combination of the individual penalties without any scaling. A different ptype (from ‘mystic.penalty’) will apply a nonlinear scaling to the combined penalty, while a different k will apply a linear scaling.

NOTE: This function is also useful for combining constraints solvers into a single constraints solver; however, it cannot do so directly. Constraints solvers must first be converted to penalty functions (i.e. with ‘as_penalty’), then combined, and then converted back to a constraints solver (i.e. with ‘as_constraint’). The resulting constraints will likely be more expensive to evaluate and less accurate than writing the constraints solver from scratch.

outer(outer=<function <lambda>>, args=None, kwds=None)

wrap a function around another function: convert y = f(x) to y’ = c(f(x))

This is a useful function for nesting one constraint in another constraint. A constraints function takes an iterable x as input, returning a modified x.

For example:
>>> def squared(x):
...   return x**2
...
>>> # equivalent to: ((x+1)**2)
>>> @outer(squared)
... def constrain(x):
...   return x+1
...
>>> from numpy import array
>>> x = array([1,2,3,4,5])
>>> constrain(x)
array([ 4,  9, 16, 25, 36])

outer_proxy(outer=<function <lambda>>, args=None, kwds=None)

wrap a function around another function: convert y = f(x) to y’ = c(f(x))

This is a useful function for nesting one constraint in another constraint. A constraints function takes an iterable x as input, returning a modified x.

This function applies the “outer” coupler pattern. However, it does not preserve decorated function signature – it passes args and kwds to the outer function instead of the decorated function.

differential_evolution module¶

Solvers¶

This module contains a collection of optimization routines based on Storn and Price’s differential evolution algorithm. The core solver algorithm was adapted from Phillips’s DETest.py. An alternate solver is provided that follows the logic in Price, Storn, and Lampinen – in that both a current generation and a trial generation are maintained, and all vectors for creating difference vectors and mutations draw from the current generation, which remains invariant until the end of the iteration.

A minimal interface that mimics a scipy.optimize interface has also been implemented, and functionality from the mystic solver API has been added with reasonable defaults.

Minimal function interface to optimization routines::

diffev – Differential Evolution (DE) solver
diffev2 – Price & Storn’s Differential Evolution solver

The corresponding solvers built on mystic’s AbstractSolver are::

DifferentialEvolutionSolver – a DE solver
DifferentialEvolutionSolver2 – Storn & Price’s DE solver

Mystic solver behavior activated in diffev and diffev2::
• EvaluationMonitor = Monitor()

• StepMonitor = Monitor()

• strategy = Best1Bin

• termination = ChangeOverGeneration(ftol, gtol), if gtol is provided; otherwise termination = VTRChangeOverGeneration(ftol)

Storn & Price’s DE Solver has also been implemented to use the “map” interface. Mystic enables the user to override the standard python map function with their own ‘map’ function, or one of the map functions provided by the pathos package (see http://dev.danse.us/trac/pathos) for distributed and high-performance computing.

Usage¶

Practical advice for how to configure the Differential Evolution Solver for your own objective function can be found on R. Storn’s web page (http://www.icsi.berkeley.edu/~storn/code.html), and is reproduced here:

First try the following classical settings for the solver configuration:
Choose a crossover strategy (e.g. Rand1Bin), set the number of parents
NP to 10 times the number of parameters, select ScalingFactor=0.8, and
CrossProbability=0.9.

It has been found recently that selecting ScalingFactor from the interval
[0.5, 1.0] randomly for each generation or for each difference vector,
a technique called dither, improves convergence behaviour significantly,
especially for noisy objective functions.

It has also been found that setting CrossProbability to a low value,
e.g. CrossProbability=0.2 helps optimizing separable functions since
it fosters the search along the coordinate axes. On the contrary,
this choice is not effective if parameter dependence is encountered,
something which is frequently occurring in real-world optimization
problems rather than artificial test functions. So for parameter
dependence the choice of CrossProbability=0.9 is more appropriate.

Another interesting empirical finding is that raising NP above, say, 40
does not substantially improve the convergence, independent of the
number of parameters. It is worthwhile to experiment with these suggestions.

Make sure that you initialize your parameter vectors by exploiting
their full numerical range, i.e. if a parameter is allowed to exhibit
values in the range [-100, 100] it's a good idea to pick the initial
values from this range instead of unnecessarily restricting diversity.

Keep in mind that different problems often require different settings
for NP, ScalingFactor and CrossProbability (see Ref 1, 2). If you
experience misconvergence, you typically can increase the value for NP,
but often you only have to adjust ScalingFactor to be a little lower or
higher than 0.8. If you increase NP and simultaneously lower ScalingFactor
a little, convergence is more likely to occur but generally takes longer,
i.e. DE is getting more robust (a convergence speed/robustness tradeoff).

If you still get misconvergence you might want to instead try a different
crossover strategy. The most commonly used are Rand1Bin, Rand1Exp,
Best1Bin, and Best1Exp. The crossover strategy is not so important a
choice, although K. Price claims that binomial (Bin) is never worse than
exponential (Exp).

In case of continued misconvergence, check the choice of objective function.
There might be a better one to describe your problem. Any knowledge that
you have about the problem should be worked into the objective function.
A good objective function can make all the difference.
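The dither technique recommended above can be sketched as a toy Best1-style mutation step, with ScalingFactor drawn fresh each generation (a hypothetical illustration, not mystic's implementation):

```python
import random

def mutate_with_dither(population, best, rng=random):
    """One generation of Best1-style mutation using dither:
    ScalingFactor F is drawn from [0.5, 1.0] once per generation."""
    F = rng.uniform(0.5, 1.0)  # dither: fresh scaling factor each generation
    trials = []
    for _ in population:
        # pick two distinct population members to form a difference vector
        r1, r2 = rng.sample(range(len(population)), 2)
        trials.append([b + F * (population[r1][j] - population[r2][j])
                       for j, b in enumerate(best)])
    return trials

pop = [[0.0, 0.0], [1.0, 1.0], [2.0, 0.5], [0.5, 2.0]]
print(mutate_with_dither(pop, best=[1.0, 1.0]))
```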


See mystic.examples.test_rosenbrock for an example of using DifferentialEvolutionSolver. DifferentialEvolutionSolver2 has the identical interface and usage.

All solvers included in this module provide the standard signal handling. For more information, see mystic.mystic.abstract_solver.

References

1. Storn, R. and Price, K. Differential Evolution - A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces. Journal of Global Optimization 11: 341-359, 1997.

2. Price, K., Storn, R., and Lampinen, J. - Differential Evolution, A Practical Approach to Global Optimization. Springer, 1st Edition, 2005.

class DifferentialEvolutionSolver(dim, NP=4)

Differential Evolution optimization.

Takes two initial inputs:

dim – dimensionality of the problem
NP – size of the trial solution population. [requires: NP >= 4]

All important class members are inherited from AbstractSolver.

SetConstraints(constraints)

apply a constraints function to the optimization

input::
• a constraints function of the form: xk’ = constraints(xk), where xk is the current parameter vector. Ideally, this function is constructed so the parameter vector it passes to the cost function will satisfy the desired (i.e. encoded) constraints.
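A constraints function of that form might look like the following sketch (the clipping bounds and the sum constraint here are hypothetical, chosen only for illustration):

```python
def constraints(xk):
    """Hypothetical constraints: clip each parameter into [0, 10], then
    rescale so the components sum to 10.  Returns xk' from xk."""
    xk = [min(max(v, 0.0), 10.0) for v in xk]
    total = sum(xk)
    if total > 0:
        # rescaling keeps each value in [0, 10] since all are nonnegative
        xk = [10.0 * v / total for v in xk]
    return xk
```

A function like this would be passed via solver.SetConstraints(constraints), so the parameter vectors handed to the cost function satisfy the encoded constraints.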

Solve(cost=None, termination=None, ExtraArgs=None, **kwds)

Minimize a function using differential evolution.

Uses a differential evolution algorithm to find the minimum of a function of one or more variables.

Parameters
• cost (func, default=None) – the function to be minimized: y = cost(x).

• termination (termination, default=None) – termination conditions.

• ExtraArgs (tuple, default=None) – extra arguments for cost.

• strategy (strategy, default=Best1Bin) – the mutation strategy for generating new trial solutions.

• CrossProbability (float, default=0.9) – the probability of cross-parameter mutations.

• ScalingFactor (float, default=0.8) – multiplier for mutations on the trial solution.

• sigint_callback (func, default=None) – callback function for signal handler.

• callback (func, default=None) – function to call after each iteration. The interface is callback(xk), with xk the current parameter vector.

• disp (bool, default=False) – if True, print convergence messages.

Returns

None

UpdateGenealogyRecords(id, newchild)

create an in-memory log of the genealogy of the population

Input::
• id: (int) the index of the candidate in the population matrix

• newchild: (list[float]) a new trialSolution

_Step(cost=None, ExtraArgs=None, **kwds)

perform a single optimization iteration

input::
• cost is the objective function, of the form y = cost(x, *ExtraArgs), where x is a candidate solution, and ExtraArgs is the tuple of positional arguments required to evaluate the objective.

note::

ExtraArgs needs to be a tuple of extra arguments.

This method accepts additional args that are specific for the current solver, as detailed in the _process_inputs method.

__init__(dim, NP=4)
Takes two initial inputs:

dim – dimensionality of the problem
NP – size of the trial solution population. [requires: NP >= 4]

All important class members are inherited from AbstractSolver.

__module__ = 'mystic.differential_evolution'
_decorate_objective(cost, ExtraArgs=None)

decorate the cost function with bounds, penalties, monitors, etc

input::
• cost is the objective function, of the form y = cost(x, *ExtraArgs), where x is a candidate solution, and ExtraArgs is the tuple of positional arguments required to evaluate the objective.

_process_inputs(kwds)

process and activate input settings

Parameters
• callback (func, default=None) – function to call after each iteration. The interface is callback(xk), with xk the current parameter vector.

• disp (bool, default=False) – if True, print convergence messages.

• EvaluationMonitor – a monitor instance to capture each evaluation of cost.

• StepMonitor – a monitor instance to capture each iteration’s best results.

• penalty – a function of the form: y’ = penalty(xk), with y = cost(xk) + y’, where xk is the current parameter vector.

• constraints – a function of the form: xk’ = constraints(xk), where xk is the current parameter vector.

• strategy (strategy, default=Best1Bin) – the mutation strategy for generating new trial solutions.

• CrossProbability (float, default=0.9) – the probability of cross-parameter mutations.

• ScalingFactor (float, default=0.8) – multiplier for mutations on the trial solution.

Note

The additional args are ‘sticky’, in that once they are given, they remain set until they are explicitly changed. Conversely, callback and disp are not sticky, and are thus set for a one-time use.

class DifferentialEvolutionSolver2(dim, NP=4)

Differential Evolution optimization, using Storn and Price’s algorithm.

Alternate implementation:
• utilizes a map-reduce interface, extensible to parallel computing

• both a current and a next generation are kept, while the current generation is invariant during the main DE logic

Takes two initial inputs:

dim – dimensionality of the problem
NP – size of the trial solution population. [requires: NP >= 4]

All important class members are inherited from AbstractSolver.
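The map-reduce structure described above, where the current generation stays invariant while all trials are built, can be sketched in a few lines. The helper mutate(pop, k) below stands in for a mutation strategy and is hypothetical; this is an illustration of the pattern, not mystic's implementation:

```python
def de2_step(pop, fit, cost, mutate, dmap=map):
    """One generation, DifferentialEvolutionSolver2 style: build every
    trial against the frozen current population (a pure map over
    candidate indices, so any parallel map works), then reduce by
    greedy selection into the next generation."""
    trials = list(dmap(lambda k: mutate(pop, k), range(len(pop))))
    scores = list(dmap(cost, trials))
    nextpop, nextfit = [], []
    for k in range(len(pop)):
        if scores[k] <= fit[k]:          # trial wins: enters next generation
            nextpop.append(trials[k]); nextfit.append(scores[k])
        else:                            # parent survives unchanged
            nextpop.append(pop[k]); nextfit.append(fit[k])
    return nextpop, nextfit
```

Because trial construction never reads from the partially updated population, dmap can be swapped for a distributed map without changing the result.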

SetConstraints(constraints)

apply a constraints function to the optimization

input::
• a constraints function of the form: xk’ = constraints(xk), where xk is the current parameter vector. Ideally, this function is constructed so the parameter vector it passes to the cost function will satisfy the desired (i.e. encoded) constraints.

Solve(cost=None, termination=None, ExtraArgs=None, **kwds)

Minimize a function using differential evolution.

Uses a differential evolution algorithm to find the minimum of a function of one or more variables. This implementation holds the current generation invariant until the end of each iteration.

Parameters
• cost (func, default=None) – the function to be minimized: y = cost(x).

• termination (termination, default=None) – termination conditions.

• ExtraArgs (tuple, default=None) – extra arguments for cost.

• strategy (strategy, default=Best1Bin) – the mutation strategy for generating new trial solutions.

• CrossProbability (float, default=0.9) – the probability of cross-parameter mutations.

• ScalingFactor (float, default=0.8) – multiplier for mutations on the trial solution.

• sigint_callback (func, default=None) – callback function for signal handler.

• callback (func, default=None) – function to call after each iteration. The interface is callback(xk), with xk the current parameter vector.

• disp (bool, default=False) – if True, print convergence messages.

Returns

None

UpdateGenealogyRecords(id, newchild)

create an in-memory log of the genealogy of the population

Input::
• id: (int) the index of the candidate in the population matrix

• newchild: (list[float]) a new trialSolution

_Step(cost=None, ExtraArgs=None, **kwds)

perform a single optimization iteration

input::
• cost is the objective function, of the form y = cost(x, *ExtraArgs), where x is a candidate solution, and ExtraArgs is the tuple of positional arguments required to evaluate the objective.

note::

ExtraArgs needs to be a tuple of extra arguments.

This method accepts additional args that are specific for the current solver, as detailed in the _process_inputs method.

__init__(dim, NP=4)
Takes two initial inputs:

dim – dimensionality of the problem
NP – size of the trial solution population. [requires: NP >= 4]

All important class members are inherited from AbstractSolver.

__module__ = 'mystic.differential_evolution'
_decorate_objective(cost, ExtraArgs=None)

decorate the cost function with bounds, penalties, monitors, etc

input::
• cost is the objective function, of the form y = cost(x, *ExtraArgs), where x is a candidate solution, and ExtraArgs is the tuple of positional arguments required to evaluate the objective.

_process_inputs(kwds)

process and activate input settings

Parameters
• callback (func, default=None) – function to call after each iteration. The interface is callback(xk), with xk the current parameter vector.

• disp (bool, default=False) – if True, print convergence messages.

• EvaluationMonitor – a monitor instance to capture each evaluation of cost.

• StepMonitor – a monitor instance to capture each iteration’s best results.

• penalty – a function of the form: y’ = penalty(xk), with y = cost(xk) + y’, where xk is the current parameter vector.

• constraints – a function of the form: xk’ = constraints(xk), where xk is the current parameter vector.

• strategy (strategy, default=Best1Bin) – the mutation strategy for generating new trial solutions.

• CrossProbability (float, default=0.9) – the probability of cross-parameter mutations.

• ScalingFactor (float, default=0.8) – multiplier for mutations on the trial solution.

Note

The additional args are ‘sticky’, in that once they are given, they remain set until they are explicitly changed. Conversely, callback and disp are not sticky, and are thus set for a one-time use.

diffev(cost, x0, npop=4, args=(), bounds=None, ftol=0.005, gtol=None, maxiter=None, maxfun=None, cross=0.9, scale=0.8, full_output=0, disp=1, retall=0, callback=None, **kwds)

Minimize a function using differential evolution.

Uses a differential evolution algorithm to find the minimum of a function of one or more variables. Mimics a scipy.optimize style interface.

Parameters
• cost (func) – the function or method to be minimized: y = cost(x).

• x0 (ndarray) – the initial guess parameter vector x, if the desired start is a single point; otherwise, a list of (min,max) bounds that define a region from which random initial points are drawn.

• npop (int, default=4) – size of the trial solution population.

• args (tuple, default=()) – extra arguments for cost.

• bounds (list(tuple), default=None) – list of pairs of bounds (min,max), one for each parameter.

• ftol (float, default=5e-3) – acceptable relative error in cost(xopt) for convergence.

• gtol (float, default=None) – maximum iterations to run without improvement.

• maxiter (int, default=None) – the maximum number of iterations to perform.

• maxfun (int, default=None) – the maximum number of function evaluations.

• cross (float, default=0.9) – the probability of cross-parameter mutations.

• scale (float, default=0.8) – multiplier for mutations on the trial solution.

• full_output (bool, default=False) – True if fval and warnflag are desired.

• disp (bool, default=True) – if True, print convergence messages.

• retall (bool, default=False) – True if allvecs is desired.

• callback (func, default=None) – function to call after each iteration. The interface is callback(xk), with xk the current parameter vector.

• handler (bool, default=False) – if True, enable handling interrupt signals.

• strategy (strategy, default=None) – override the default mutation strategy.

• itermon (monitor, default=None) – override the default GenerationMonitor.

• evalmon (monitor, default=None) – override the default EvaluationMonitor.

• constraints (func, default=None) – a function xk' = constraints(xk), where xk is the current parameter vector, and xk’ is a parameter vector that satisfies the encoded constraints.

• penalty (func, default=None) – a function y' = penalty(xk), where xk is the current parameter vector, and y' == 0 when the encoded constraints are satisfied (and y' > 0 otherwise).

• tight (bool, default=False) – enforce bounds and constraints concurrently.

• map (func, default=None) – a (parallel) map function y = map(f, x).

Returns

(xopt, {fopt, iter, funcalls, warnflag}, {allvecs})

Notes

• xopt (ndarray): the minimizer of the cost function

• fopt (float): value of cost function at minimum: fopt = cost(xopt)

• iter (int): number of iterations

• funcalls (int): number of function calls

• warnflag (int): warning flag:
• 1 : Maximum number of function evaluations

• 2 : Maximum number of iterations

• allvecs (list): a list of solutions at each iteration
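The penalty composition described above, where y' = penalty(xk) is added to the cost and vanishes when the encoded constraints hold, can be sketched directly. The constraint below (x[0] + x[1] >= 1) and the quadratic penalty weight are hypothetical, and the simple additive composition follows the form stated in the parameter descriptions:

```python
def penalized(cost, penalty):
    """Compose a cost with a penalty: y = cost(xk) + y', where
    y' = penalty(xk) is zero when the constraints are satisfied."""
    return lambda x: cost(x) + penalty(x)

# hypothetical constraint x[0] + x[1] >= 1, encoded as a quadratic penalty
penalty = lambda x: 100.0 * max(0.0, 1.0 - (x[0] + x[1])) ** 2
cost = lambda x: x[0] ** 2 + x[1] ** 2
f = penalized(cost, penalty)
```

A feasible point pays no penalty, while an infeasible one is pushed back toward the constraint boundary by the added term.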

diffev2(cost, x0, npop=4, args=(), bounds=None, ftol=0.005, gtol=None, maxiter=None, maxfun=None, cross=0.9, scale=0.8, full_output=0, disp=1, retall=0, callback=None, **kwds)

Minimize a function using Storn & Price’s differential evolution.

Uses Storn & Prices’s differential evolution algorithm to find the minimum of a function of one or more variables. Mimics a scipy.optimize style interface.

Parameters
• cost (func) – the function or method to be minimized: y = cost(x).

• x0 (ndarray) – the initial guess parameter vector x, if the desired start is a single point; otherwise, a list of (min,max) bounds that define a region from which random initial points are drawn.

• npop (int, default=4) – size of the trial solution population.

• args (tuple, default=()) – extra arguments for cost.

• bounds (list(tuple), default=None) – list of pairs of bounds (min,max), one for each parameter.

• ftol (float, default=5e-3) – acceptable relative error in cost(xopt) for convergence.

• gtol (float, default=None) – maximum iterations to run without improvement.

• maxiter (int, default=None) – the maximum number of iterations to perform.

• maxfun (int, default=None) – the maximum number of function evaluations.

• cross (float, default=0.9) – the probability of cross-parameter mutations.

• scale (float, default=0.8) – multiplier for mutations on the trial solution.

• full_output (bool, default=False) – True if fval and warnflag are desired.

• disp (bool, default=True) – if True, print convergence messages.

• retall (bool, default=False) – True if allvecs is desired.

• callback (func, default=None) – function to call after each iteration. The interface is callback(xk), with xk the current parameter vector.

• handler (bool, default=False) – if True, enable handling interrupt signals.

• strategy (strategy, default=None) – override the default mutation strategy.

• itermon (monitor, default=None) – override the default GenerationMonitor.

• evalmon (monitor, default=None) – override the default EvaluationMonitor.

• constraints (func, default=None) – a function xk' = constraints(xk), where xk is the current parameter vector, and xk’ is a parameter vector that satisfies the encoded constraints.

• penalty (func, default=None) – a function y' = penalty(xk), where xk is the current parameter vector, and y' == 0 when the encoded constraints are satisfied (and y' > 0 otherwise).

• tight (bool, default=False) – enforce bounds and constraints concurrently.

• map (func, default=None) – a (parallel) map function y = map(f, x).

Returns

(xopt, {fopt, iter, funcalls, warnflag}, {allvecs})

Notes

• xopt (ndarray): the minimizer of the cost function

• fopt (float): value of cost function at minimum: fopt = cost(xopt)

• iter (int): number of iterations

• funcalls (int): number of function calls

• warnflag (int): warning flag:
• 1 : Maximum number of function evaluations

• 2 : Maximum number of iterations

• allvecs (list): a list of solutions at each iteration

ensemble module¶

Solvers¶

This module contains a collection of optimization routines that use “map” to distribute several optimizer instances over parameter space. Each solver accepts an imported solver object as the “nested” solver, which becomes the target of the map function.

The set of solvers built on mystic’s AbstractEnsembleSolver are::

LatticeSolver – start from center of N grid points
BuckshotSolver – start from N random points in parameter space
SparsitySolver – start from N points sampled in sparse regions of space

Usage¶

See mystic.examples.buckshot_example06 for an example of using BuckshotSolver. See mystic.examples.lattice_example06 for an example of using LatticeSolver.

All solvers included in this module provide the standard signal handling. For more information, see mystic.mystic.abstract_solver.
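The ensemble pattern itself (launch one local solver per start point via a possibly parallel map, then keep the best result) can be sketched independently of mystic. Both helpers below are illustrative stand-ins: hill_climb plays the role of the nested solver, and pmap the role of the configured map:

```python
import random

def ensemble_minimize(cost, starts, local_solve, pmap=map):
    """Ensemble sketch: map a local solver over start points, reduce to
    the best (x, f) pair found.  Not mystic's implementation."""
    results = list(pmap(lambda x0: local_solve(cost, x0), starts))
    return min(results, key=lambda r: r[1])

def hill_climb(cost, x0, step=0.5, iters=500, seed=0):
    """Toy nested solver: accept random perturbations that lower cost."""
    rng = random.Random(seed)
    x, f = list(x0), cost(x0)
    for _ in range(iters):
        cand = [v + rng.uniform(-step, step) for v in x]
        fc = cost(cand)
        if fc < f:
            x, f = cand, fc
    return x, f
```

Swapping pmap for a parallel map (e.g. from pathos) distributes the nested solvers without changing the logic, which is exactly the extension point the ensemble solvers expose.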

class BuckshotSolver(dim, npts=8)

parallel mapped optimization starting from N uniform randomly sampled points

Takes two initial inputs:

dim – dimensionality of the problem
npts – number of parallel solver instances

All important class members are inherited from AbstractEnsembleSolver.

_InitialPoints()

Generate a grid of starting points for the ensemble of optimizers

__init__(dim, npts=8)
Takes two initial inputs:

dim – dimensionality of the problem
npts – number of parallel solver instances

All important class members are inherited from AbstractEnsembleSolver.

__module__ = 'mystic.ensemble'
class LatticeSolver(dim, nbins=8)

parallel mapped optimization starting from the centers of N grid points

Takes two initial inputs:

dim – dimensionality of the problem
nbins – tuple of number of bins in each dimension

All important class members are inherited from AbstractEnsembleSolver.

_InitialPoints()

Generate a grid of starting points for the ensemble of optimizers

__init__(dim, nbins=8)
Takes two initial inputs:

dim – dimensionality of the problem
nbins – tuple of number of bins in each dimension

All important class members are inherited from AbstractEnsembleSolver.

__module__ = 'mystic.ensemble'
class SparsitySolver(dim, npts=8, rtol=None)

parallel mapped optimization starting from N points sampled from sparse regions

Takes three initial inputs:

dim – dimensionality of the problem
npts – number of parallel solver instances
rtol – size of radial tolerance for sparsity

All important class members are inherited from AbstractEnsembleSolver.

_InitialPoints()

Generate a grid of starting points for the ensemble of optimizers

__init__(dim, npts=8, rtol=None)
Takes three initial inputs:

dim – dimensionality of the problem
npts – number of parallel solver instances
rtol – size of radial tolerance for sparsity

All important class members are inherited from AbstractEnsembleSolver.

__module__ = 'mystic.ensemble'
buckshot(cost, ndim, npts=8, args=(), bounds=None, ftol=0.0001, maxiter=None, maxfun=None, full_output=0, disp=1, retall=0, callback=None, **kwds)

Minimize a function using the buckshot ensemble solver.

Uses a buckshot ensemble algorithm to find the minimum of a function of one or more variables. Mimics the scipy.optimize.fmin interface. Starts npts solver instances at random points in parameter space.

Parameters
• cost (func) – the function or method to be minimized: y = cost(x).

• ndim (int) – dimensionality of the problem.

• npts (int, default=8) – number of solver instances.

• args (tuple, default=()) – extra arguments for cost.

• bounds (list(tuple), default=None) – list of pairs of bounds (min,max), one for each parameter.

• ftol (float, default=1e-4) – acceptable relative error in cost(xopt) for convergence.

• gtol (float, default=10) – maximum iterations to run without improvement.

• maxiter (int, default=None) – the maximum number of iterations to perform.

• maxfun (int, default=None) – the maximum number of function evaluations.

• full_output (bool, default=False) – True if fval and warnflag are desired.

• disp (bool, default=True) – if True, print convergence messages.

• retall (bool, default=False) – True if allvecs is desired.

• callback (func, default=None) – function to call after each iteration. The interface is callback(xk), with xk the current parameter vector.

• solver (solver, default=None) – override the default nested Solver instance.

• handler (bool, default=False) – if True, enable handling interrupt signals.

• itermon (monitor, default=None) – override the default GenerationMonitor.

• evalmon (monitor, default=None) – override the default EvaluationMonitor.

• constraints (func, default=None) – a function xk' = constraints(xk), where xk is the current parameter vector, and xk’ is a parameter vector that satisfies the encoded constraints.

• penalty (func, default=None) – a function y' = penalty(xk), where xk is the current parameter vector, and y' == 0 when the encoded constraints are satisfied (and y' > 0 otherwise).

• tight (bool, default=False) – enforce bounds and constraints concurrently.

• map (func, default=None) – a (parallel) map function y = map(f, x).

• dist (mystic.math.Distribution, default=None) – generate randomness in ensemble starting position using the given distribution.

• step (bool, default=False) – if True, enable Step within the ensemble.

Returns

(xopt, {fopt, iter, funcalls, warnflag, allfuncalls}, {allvecs})

Notes

• xopt (ndarray): the minimizer of the cost function

• fopt (float): value of cost function at minimum: fopt = cost(xopt)

• iter (int): number of iterations

• funcalls (int): number of function calls

• warnflag (int): warning flag:
• 1 : Maximum number of function evaluations

• 2 : Maximum number of iterations

• allfuncalls (int): total function calls (for all solver instances)

• allvecs (list): a list of solutions at each iteration

lattice(cost, ndim, nbins=8, args=(), bounds=None, ftol=0.0001, maxiter=None, maxfun=None, full_output=0, disp=1, retall=0, callback=None, **kwds)

Minimize a function using the lattice ensemble solver.

Uses a lattice ensemble algorithm to find the minimum of a function of one or more variables. Mimics the scipy.optimize.fmin interface. Starts N solver instances at regular intervals in parameter space, determined by nbins (N = numpy.prod(nbins); len(nbins) == ndim).

Parameters
• cost (func) – the function or method to be minimized: y = cost(x).

• ndim (int) – dimensionality of the problem.

• nbins (tuple(int), default=8) – total bins, or # of bins in each dimension.

• args (tuple, default=()) – extra arguments for cost.

• bounds (list(tuple), default=None) – list of pairs of bounds (min,max), one for each parameter.

• ftol (float, default=1e-4) – acceptable relative error in cost(xopt) for convergence.

• gtol (float, default=10) – maximum iterations to run without improvement.

• maxiter (int, default=None) – the maximum number of iterations to perform.

• maxfun (int, default=None) – the maximum number of function evaluations.

• full_output (bool, default=False) – True if fval and warnflag are desired.

• disp (bool, default=True) – if True, print convergence messages.

• retall (bool, default=False) – True if allvecs is desired.

• callback (func, default=None) – function to call after each iteration. The interface is callback(xk), with xk the current parameter vector.

• solver (solver, default=None) – override the default nested Solver instance.

• handler (bool, default=False) – if True, enable handling interrupt signals.

• itermon (monitor, default=None) – override the default GenerationMonitor.

• evalmon (monitor, default=None) – override the default EvaluationMonitor.

• constraints (func, default=None) – a function xk' = constraints(xk), where xk is the current parameter vector, and xk’ is a parameter vector that satisfies the encoded constraints.

• penalty (func, default=None) – a function y' = penalty(xk), where xk is the current parameter vector, and y' == 0 when the encoded constraints are satisfied (and y' > 0 otherwise).

• tight (bool, default=False) – enforce bounds and constraints concurrently.

• map (func, default=None) – a (parallel) map function y = map(f, x).

• dist (mystic.math.Distribution, default=None) – generate randomness in ensemble starting position using the given distribution.

• step (bool, default=False) – if True, enable Step within the ensemble.

Returns

(xopt, {fopt, iter, funcalls, warnflag, allfuncalls}, {allvecs})

Notes

• xopt (ndarray): the minimizer of the cost function

• fopt (float): value of cost function at minimum: fopt = cost(xopt)

• iter (int): number of iterations

• funcalls (int): number of function calls

• warnflag (int): warning flag:
• 1 : Maximum number of function evaluations

• 2 : Maximum number of iterations

• allfuncalls (int): total function calls (for all solver instances)

• allvecs (list): a list of solutions at each iteration
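The lattice layout described above, with N = numpy.prod(nbins) start points at the centers of a regular grid, can be sketched with the standard library alone. This illustrates the geometry, not mystic's implementation:

```python
from itertools import product

def lattice_starts(lb, ub, nbins):
    """Start points at the centers of a regular grid: one bin count per
    dimension, prod(nbins) points in all (a sketch of the layout the
    LatticeSolver describes)."""
    centers = []
    for lo, hi, n in zip(lb, ub, nbins):
        w = (hi - lo) / n                       # bin width in this dimension
        centers.append([lo + w * (i + 0.5) for i in range(n)])
    # cartesian product of per-dimension centers gives the grid of starts
    return [list(p) for p in product(*centers)]
```

For nbins=[4,4,4] over a 3-d box this yields the 64 start points that the head example's LatticeSolver would launch.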

sparsity(cost, ndim, npts=8, args=(), bounds=None, ftol=0.0001, maxiter=None, maxfun=None, full_output=0, disp=1, retall=0, callback=None, **kwds)

Minimize a function using the sparsity ensemble solver.

Uses a sparsity ensemble algorithm to find the minimum of a function of one or more variables. Mimics the scipy.optimize.fmin interface. Starts npts solver instances at points in parameter space where existing points are sparse.

Parameters
• cost (func) – the function or method to be minimized: y = cost(x).

• ndim (int) – dimensionality of the problem.

• npts (int, default=8) – number of solver instances.

• args (tuple, default=()) – extra arguments for cost.

• bounds (list(tuple), default=None) – list of pairs of bounds (min,max), one for each parameter.

• ftol (float, default=1e-4) – acceptable relative error in cost(xopt) for convergence.

• gtol (float, default=10) – maximum iterations to run without improvement.

• rtol (float, default=None) – minimum acceptable distance from other points.

• maxiter (int, default=None) – the maximum number of iterations to perform.

• maxfun (int, default=None) – the maximum number of function evaluations.

• full_output (bool, default=False) – True if fval and warnflag are desired.

• disp (bool, default=True) – if True, print convergence messages.

• retall (bool, default=False) – True if allvecs is desired.

• callback (func, default=None) – function to call after each iteration. The interface is callback(xk), with xk the current parameter vector.

• solver (solver, default=None) – override the default nested Solver instance.

• handler (bool, default=False) – if True, enable handling interrupt signals.

• itermon (monitor, default=None) – override the default GenerationMonitor.

• evalmon (monitor, default=None) – override the default EvaluationMonitor.

• constraints (func, default=None) – a function xk' = constraints(xk), where xk is the current parameter vector, and xk’ is a parameter vector that satisfies the encoded constraints.

• penalty (func, default=None) – a function y' = penalty(xk), where xk is the current parameter vector, and y' == 0 when the encoded constraints are satisfied (and y' > 0 otherwise).

• tight (bool, default=False) – enforce bounds and constraints concurrently.

• map (func, default=None) – a (parallel) map function y = map(f, x).

• dist (mystic.math.Distribution, default=None) – generate randomness in ensemble starting position using the given distribution.

• step (bool, default=False) – if True, enable Step within the ensemble.

Returns

(xopt, {fopt, iter, funcalls, warnflag, allfuncalls}, {allvecs})

Notes

• xopt (ndarray): the minimizer of the cost function

• fopt (float): value of cost function at minimum: fopt = cost(xopt)

• iter (int): number of iterations

• funcalls (int): number of function calls

• warnflag (int): warning flag:
• 1 : Maximum number of function evaluations

• 2 : Maximum number of iterations

• allfuncalls (int): total function calls (for all solver instances)

• allvecs (list): a list of solutions at each iteration
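One simple way to realize "start points where existing points are sparse" is a farthest-point heuristic: draw random candidates and greedily keep the one whose minimum distance to all previously seen points is largest. This is a sketch of the idea, not mystic's actual sampling algorithm:

```python
import random

def sparse_starts(existing, lb, ub, npts, ncand=100, seed=0):
    """Greedy farthest-point heuristic for picking start points in
    sparse regions (an illustration, not mystic's implementation)."""
    rng = random.Random(seed)
    seen = [list(p) for p in existing]
    dist2 = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))
    picked = []
    for _ in range(npts):
        cands = [[rng.uniform(l, u) for l, u in zip(lb, ub)]
                 for _ in range(ncand)]
        # keep the candidate farthest from everything seen so far
        best = max(cands, key=lambda c: min((dist2(c, p) for p in seen),
                                            default=float("inf")))
        picked.append(best)
        seen.append(best)
    return picked
```

An rtol-style tolerance, as in the SparsitySolver, would correspond to rejecting candidates whose minimum distance falls below a radial threshold instead of always taking the maximum.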

filters module¶

Input/output ‘filters’

component(n, multiplier=1.0)

component filter, F, where F(x) yields x[n]
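A plausible implementation consistent with this description (a sketch, not necessarily mystic's exact code) is a closure that selects and scales one component:

```python
def component(n, multiplier=1.0):
    """Component filter: returns F where F(x) yields multiplier * x[n]."""
    def F(x):
        return multiplier * x[n]
    return F
```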

generate_filter(mask)

generate a data filter from a data masking function

returns a function that filters (x,y) data or a monitor, based on the given mask: x’,y’ = filter(x,y), where filter removes values where the mask is False

For example:

>>> mon = Monitor()
>>> mon([0.0,0.5,1.0],2)
>>> mon([2.0,3.0,4.0],3)
>>> mon([4.0,5.0,6.0],4)
>>> mon([5.0,5.5,6.5],6)
>>>
>>> @impose_bounds((0,5))
... def inputs(x):
...     return x
...
>>> m = generate_filter(generate_mask(inputs))(mon)
>>> m._x
[[0.0, 0.5, 1.0], [2.0, 3.0, 4.0]]
>>> m._y
[2, 3]
>>>
>>> @integers()
... def identity(x):
...     return x
...
>>> m = generate_filter(generate_mask(identity))(mon)
>>> m._x
[[2.0, 3.0, 4.0], [4.0, 5.0, 6.0]]
>>> m._y
[3, 4]

generate_mask(cx=None, cy=None)

generate a data mask, based on constraints for x and/or y

cx: mystic.constraint function where x’ = cx(x) and x is parameter array
cy: mystic.constraint function where [y’] = cy([y]) and y is cost array

returns a masking function for (x,y) data or monitors, where list[bool] = mask(x,y), with True where constraints are satisfied

For example:

>>> mon = Monitor()
>>> mon([0.0,0.5,1.0],2)
>>> mon([2.0,3.0,4.0],3)
>>> mon([4.0,5.0,6.0],4)
>>> mon([5.0,5.5,6.5],6)
>>>
>>> @impose_bounds((0,5))
... def inputs(x):
...     return x
...
>>> generate_mask(inputs)(mon)
[True, True, False, False]
>>> generate_mask(y=inputs)(mon)
[True, True, True, False]
>>>
>>> @integers()
... def identity(x):
...     return x
...
>>> generate_mask(identity)(mon)
[False, True, True, False]
>>> generate_mask(y=identity)(mon)
[True, True, True, True]

identity(x)

identity filter, F, where F(x) yields x

null_check(params, evalpts, *args)

null validity check

forward_model module¶

This module contains classes that aid in constructing cost functions. Cost functions can easily be created by hand; however, mystic also provides an automated method that allows the dynamic wrapping of forward models into cost function objects.

Usage¶

The basic usage pattern for a cost factory is to generate a cost function from a set of data points and a corresponding set of evaluation points. The cost factory requires a “model factory”, which is just a generator of model function instances from a list of coefficients. The following example uses numpy.poly1d, which provides a factory for generating polynomials. An expanded version of the following can be found in mystic.examples.example12.

>>> # get a model factory
>>> import numpy as np
>>> FunctionFactory = np.poly1d
>>>
>>> # generate some evaluation points
>>> xpts = 0.1 * np.arange(-50.,51.)
>>>
>>> # we don't have real data, so generate fake data from target and model
>>> target = [2.,-5.,3.]
>>> ydata = FunctionFactory(target)(xpts)
>>> noise = np.random.normal(0,1,size=len(ydata))
>>> ydata = ydata + noise
>>>
>>> # get a cost factory
>>> from mystic.forward_model import CostFactory
>>> C = CostFactory()
>>>
>>> # generate a cost function for the model factory
>>> metric = lambda x: np.sum(x*x)
>>> cost = C.getCostFunction(evalpts=xpts, observations=ydata,
...                                        sigma=1.0, metric=metric)
>>>
>>> # pass the cost function to the optimizer
>>> initial_guess = [1.,-2.,1.]
>>> solution = fmin_powell(cost, initial_guess)
>>> print(solution)
[ 2.00495233 -5.0126248   2.72873734]


In general, a user will be required to write their own model factory. See the examples contained in mystic.models for more information.

The CostFactory can be used to couple models together into a single cost function. For an example, see mystic.examples.forward_model.

class CostFactory

Bases: object

A cost function generator.

CostFactory builds a list of forward model factories, and maintains a list of associated model names and number of inputs. Can be used to combine several models into a single cost function.

Takes no initial inputs.

__dict__ = mappingproxy({'__module__': 'mystic.forward_model', '__doc__': '\nA cost function generator.\n    ', '__init__': <function CostFactory.__init__>, 'addModel': <function CostFactory.addModel>, 'getForwardEvaluator': <function CostFactory.getForwardEvaluator>, 'getVectorCostFunction': <function CostFactory.getVectorCostFunction>, 'getCostFunction': <function CostFactory.getCostFunction>, 'getCostFunctionSlow': <function CostFactory.getCostFunctionSlow>, 'getParameterList': <function CostFactory.getParameterList>, 'getRandomParams': <function CostFactory.getRandomParams>, '__repr__': <function CostFactory.__repr__>, '__dict__': <attribute '__dict__' of 'CostFactory' objects>, '__weakref__': <attribute '__weakref__' of 'CostFactory' objects>, '__annotations__': {}})
__init__()

CostFactory builds a list of forward model factories, and maintains a list of associated model names and number of inputs. Can be used to combine several models into a single cost function.

Takes no initial inputs.

__module__ = 'mystic.forward_model'
__repr__()

Return repr(self).

__weakref__

list of weak references to the object (if defined)

addModel(model, inputs, name=None, outputFilter=<function identity>, inputChecker=<function null_check>)

Adds a forward model factory to the cost factory.

Inputs:

model – a callable function factory object
inputs – number of input arguments to model
name – a string representing the model name

Example

>>> import numpy as np
>>> C = CostFactory()
>>> C.addModel(np.poly1d, inputs=3)

getCostFunction(evalpts, observations, sigma=None, metric=<function CostFactory.<lambda>>)

Get a cost function that allows simultaneous evaluation of all forward models for the same set of evaluation points and observation points.

Inputs:

evalpts – a list of evaluation points
observations – a list of data points
sigma – a scaling factor applied to the raw cost
metric – the cost metric object

The cost metric should be a function of one parameter (possibly an array) that returns a scalar. The default is L2. When called, the “misfit” will be passed in.

NOTE: Input parameters WILL go through filters registered as inputCheckers.

Example

>>> import numpy as np
>>> C = CostFactory()
>>> x = np.array([-2., -1., 0., 1., 2.])
>>> y = np.array([-4., -2., 0., 2., 4.])
>>> F = C.getCostFunction(x, y, metric=lambda x: np.sum(x))
>>> F([1,0,0])
0.0
>>> F([2,0,0])
10.0
>>> F = C.getCostFunction(x, y)
>>> F([2,0,0])
34.0
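The pattern above can be sketched in plain Python, independent of mystic: a cost function closes over fixed evaluation points and observations, and a metric (sum of squares by default, matching the L2 default noted above) reduces the misfit to a scalar. The polynomial forward model and all names below are hypothetical stand-ins, not mystic's internals.

```python
# A standalone sketch: cost(params) = metric(forward(params, evalpts) - observations)

def make_cost(forward, evalpts, observations,
              metric=lambda r: sum(v * v for v in r)):  # default: sum of squares (L2)
    def cost(params):
        misfit = [f - o for f, o in zip(forward(params, evalpts), observations)]
        return metric(misfit)
    return cost

def poly_forward(params, pts):
    # evaluate a quadratic with coefficients (a, b, c) at each point
    a, b, c = params
    return [a * x * x + b * x + c for x in pts]

x = [-2.0, -1.0, 0.0, 1.0, 2.0]
y = [4.0, 1.0, 0.0, 1.0, 4.0]      # exact data for params = (1, 0, 0)
F = make_cost(poly_forward, x, y)
```

Here F([1, 0, 0]) recovers the generating parameters, so the cost is zero, while any other parameter set yields a positive sum of squared residuals.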

getCostFunctionSlow(evalpts, observations)

Get a cost function that allows simultaneous evaluation of all forward models for the same set of evaluation points and observation points.

Parameters
• evalpts (list(float)) – a list of evaluation points (i.e. input).

• observations (list(float)) – a list of data points (i.e. output).

Notes

The cost metric is hard-wired to be the sum of the real part of |x|^2, where x is the VectorCostFunction for a given set of parameters.

Input parameters do NOT go through filters registered as inputCheckers.

Examples

>>> import numpy as np
>>> C = CostFactory()
>>> x = np.array([-2., -1., 0., 1., 2.])
>>> y = np.array([-4., -2., 0., 2., 4.])
>>> F = C.getCostFunctionSlow(x, y)
>>> F([1,0,0])
0.0
>>> F([2,0,0])
100.0

getForwardEvaluator(evalpts)

Get a model factory that allows simultaneous evaluation of all forward models for the same set of evaluation points.

Inputs:

evalpts – a list of evaluation points

Example

>>> import numpy as np
>>> C = CostFactory()
>>> F = C.getForwardEvaluator([1,2,3,4,5])
>>> F([1,0,0])
[array([ 1,  4,  9, 16, 25])]
>>> F([0,1,0])
[array([1, 2, 3, 4, 5])]

getParameterList()

Get a ‘pretty’ listing of the input parameters and corresponding models.

getRandomParams()
getVectorCostFunction(evalpts, observations)

Get a vector cost function that allows simultaneous evaluation of all forward models for the same set of evaluation points and observation points.

Inputs:

evalpts – a list of evaluation points
observations – a list of data points

The vector cost metric is hard-wired to be the sum of the difference of getForwardEvaluator(evalpts) and the observations.

NOTE: Input parameters do NOT go through filters registered as inputCheckers.

Example

>>> import numpy as np
>>> C = CostFactory()
>>> x = np.array([-2., -1., 0., 1., 2.])
>>> y = np.array([-4., -2., 0., 2., 4.])
>>> F = C.getVectorCostFunction(x, y)
>>> F([1,0,0])
0.0
>>> F([2,0,0])
10.0


helputil module¶

Tools for prettifying help

Some of the following code is taken from Ka-Ping Yee’s pydoc module

commandfy(text)

Format a command string

commandstring(text, BoldQ)

Bolds all lines in text for which the predicate BoldQ returns true.

paginate(text, BoldQ=<function <lambda>>)

break printed content into pages

linesearch module¶

local copy of scipy.optimize.linesearch

update the termination condition with the given mask

update the termination condition with the given collapse (dict)

update all position masks in the given termination condition

update all weight masks in the given termination condition

math module¶

math: mathematical functions and tools for use in mystic

Functions¶

Mystic provides a set of mathematical functions that support various advanced optimization features such as uncertainty analysis and parameter sensitivity.

Tools¶

Mystic also provides a set of mathematical tools that support advanced features such as parameter space partitioning and Monte Carlo estimation. These mathematical tools are provided:

polyeval     -- fast evaluation of an n-dimensional polynomial
poly1d       -- generate a 1d polynomial instance
gridpts      -- generate a set of regularly spaced points
fillpts      -- generate a set of space-filling points
samplepts    -- generate a set of randomly sampled points
tolerance    -- absolute difference plus relative difference
almostEqual  -- test if equal within some absolute or relative tolerance
Distribution -- generate a sampling distribution instance

class Distribution(generator=None, *args, **kwds)

Bases: object

Sampling distribution for mystic optimizers

generate a sampling distribution with interface dist(size=None)

input::
• generator: a ‘distribution’ method from scipy.stats or numpy.random

• rng: a mystic.random_state object [default: random_state(‘numpy.random’)]

• args: positional arguments for the distribution object

• kwds: keyword arguments for the distribution object

note::

this method only accepts numpy.random methods with the keyword ‘size’, and only accepts random_state objects built with module=’numpy.random’

__call__(size=None)

generate a sample of given size (tuple) from the distribution

__dict__ = mappingproxy({'__module__': 'mystic.math', '__doc__': '\nSampling distribution for mystic optimizers\n    ', '__init__': <function Distribution.__init__>, '__call__': <function Distribution.__call__>, '__dict__': <attribute '__dict__' of 'Distribution' objects>, '__weakref__': <attribute '__weakref__' of 'Distribution' objects>, '__annotations__': {}})
__init__(generator=None, *args, **kwds)

generate a sampling distribution with interface dist(size=None)

input::
• generator: a ‘distribution’ method from scipy.stats or numpy.random

• rng: a mystic.random_state object [default: random_state(‘numpy.random’)]

• args: positional arguments for the distribution object

• kwds: keyword arguments for the distribution object

note::

this method only accepts numpy.random methods with the keyword ‘size’, and only accepts random_state objects built with module=’numpy.random’

__module__ = 'mystic.math'
__weakref__

list of weak references to the object (if defined)

almostEqual(x, y, tol=1e-18, rel=1e-07)

Returns True if two arrays are element-wise equal within a tolerance.

The tolerance values are positive, typically very small numbers. The relative difference (rtol * abs(b)) and the absolute difference atol are added together to compare against the absolute difference between a and b.

Parameters
• a, b (array_like) – Input arrays to compare.

• rtol (float) – The relative tolerance parameter (see Notes).

• atol (float) – The absolute tolerance parameter (see Notes).

Returns

y – Returns True if the two arrays are equal within the given tolerance; False otherwise. If either array contains NaN, then False is returned.

Return type

bool

Notes

If the following equation is element-wise True, then almostEqual returns True.

absolute(a - b) <= (atol + rtol * absolute(b))

Examples

>>> almostEqual([1e10,1.2345678], [1e10,1.2345677])
True
>>> almostEqual([1e10,1.234], [1e10,1.235])
False
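The element-wise rule from the Notes can be written out directly; this is a sketch of the comparison (numpy.allclose applies the same test), not mystic's implementation:

```python
def almost_equal(a, b, atol=1e-18, rtol=1e-07):
    # element-wise: absolute(a - b) <= (atol + rtol * absolute(b))
    return all(abs(x - y) <= (atol + rtol * abs(y)) for x, y in zip(a, b))
```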

fillpts(lb, ub, npts, data=None, rtol=None, dist=None)

takes lower and upper bounds (e.g. lb = [0,3], ub = [2,4]), finds npts points that are at least rtol away from legacy data, and produces a list of sample points s = [[1,3],[1,4],[2,3],[2,4]]

Inputs:

lb – a list of the lower bounds
ub – a list of the upper bounds
npts – number of sample points
data – a list of legacy sample points
rtol – target radial distance from each point
dist – a mystic.math.Distribution instance (or list of Distributions)

Notes: if rtol is None, use max rtol; if rtol < 0, use quick-n-dirty method

gridpts(q, dist=None)

takes a list of lists of arbitrary length q = [[1,2],[3,4]] and produces a list of gridpoints g = [[1,3],[1,4],[2,3],[2,4]]

Inputs:

q – a list of lists of integers denoting grid points
dist – a mystic.math.Distribution instance (or list of Distributions)

Notes

if a mystic.math.Distribution is provided, use it to inject randomness
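The grid expansion described above is a Cartesian product, which the standard library provides; a minimal sketch without the optional Distribution argument:

```python
from itertools import product

def grid_points(q):
    # q = [[1, 2], [3, 4]] expands to [[1, 3], [1, 4], [2, 3], [2, 4]]
    return [list(p) for p in product(*q)]
```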

poly1d(coeff)

generates a 1-D polynomial instance from a list of coefficients using numpy.poly1d(coeffs)

polyeval(coeffs, x)

takes list of coefficients & evaluation points, returns f(x) thus, [a3, a2, a1, a0] yields a3 x^3 + a2 x^2 + a1 x^1 + a0
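A coefficient list ordered from highest to lowest power, as above, evaluates naturally by Horner's rule; a minimal sketch:

```python
def poly_eval(coeffs, x):
    # Horner's rule: [a3, a2, a1, a0] evaluates a3*x**3 + a2*x**2 + a1*x + a0
    result = 0
    for c in coeffs:
        result = result * x + c
    return result
```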

samplepts(lb, ub, npts, dist=None)

takes lower and upper bounds (e.g. lb = [0,3], ub = [2,4]) and produces a list of sample points s = [[1,3],[1,4],[2,3],[2,4]]

Inputs:

lb – a list of the lower bounds
ub – a list of the upper bounds
npts – number of sample points
dist – a mystic.math.Distribution instance (or list of Distributions)
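Without the optional dist argument, samplepts amounts to uniform sampling within the per-dimension bounds; a hedged sketch of that behavior (not mystic's implementation):

```python
import random

def sample_points(lb, ub, npts):
    # draw npts points uniformly within the per-dimension bounds
    return [[random.uniform(l, u) for l, u in zip(lb, ub)]
            for _ in range(npts)]
```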

tolerance(x, tol=1e-15, rel=1e-15)

relative plus absolute difference
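Read as "relative plus absolute difference", the quantity is presumably tol + rel*abs(x); a sketch under that assumption:

```python
def tolerance(x, tol=1e-15, rel=1e-15):
    # absolute term plus a term scaled by the magnitude of x (assumed form)
    return tol + rel * abs(x)
```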

metropolis module¶

Implements a simple version of the Metropolis-Hastings algorithm

metropolis_hastings(proposal, target, x)

proposal(x) -> next point. The proposal must be symmetric; otherwise the PDF of the proposal density would be needed (not just a way to draw samples from it).
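A minimal sketch of one Metropolis step under a symmetric proposal, where the proposal density cancels out of the acceptance ratio (the names below are illustrative, not mystic's internals):

```python
import math
import random

def metropolis_step(proposal, target, x):
    # one Metropolis step: propose, then accept with probability
    # min(1, target(x_new) / target(x)); symmetry of the proposal is
    # what lets the proposal density cancel out of this ratio.
    x_new = proposal(x)
    if random.random() < min(1.0, target(x_new) / target(x)):
        return x_new
    return x

# usage sketch: random walk targeting an unnormalized standard normal
target = lambda x: math.exp(-0.5 * x * x)
proposal = lambda x: x + random.uniform(-1.0, 1.0)   # symmetric
x = 0.0
for _ in range(100):
    x = metropolis_step(proposal, target, x)
```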

models module¶

models: sample models and functions prepared for use in mystic

Functions¶

Mystic provides a set of standard fitting functions that derive from the function API found in mystic.models.abstract_model. These standard functions are provided:

sphere     -- De Jong's spherical function
rosen      -- Sum of Rosenbrock's function
step       -- De Jong's step function
quartic    -- De Jong's quartic function
shekel     -- Shekel's function
corana     -- Corana's function
fosc3d     -- Trott's fOsc3D function
nmin51     -- Champion's NMinimize test function
griewangk  -- Griewangk's function
zimmermann -- Zimmermann's function
peaks      -- NAG's peaks function
venkat91   -- Venkataraman's sinc function
schwefel   -- Schwefel's function
ellipsoid  -- Pohlheim's rotated hyper-ellipsoid function
rastrigin  -- Rastrigin's function
powers     -- Pohlheim's sum of different powers function
ackley     -- Ackley's path function
michal     -- Michalewicz's function
branins    -- Branins's rcos function
easom      -- Easom's function
goldstein  -- Goldstein-Price's function
paviani    -- Paviani's function
wavy1      -- a simple sine-based multi-minima function
wavy2      -- another simple sine-based multi-minima function


Models¶

Mystic also provides a set of example models that derive from the model API found in mystic.models.abstract_model. These standard models are provided:

poly       -- 1d model representation for polynomials
circle     -- 2d array representation of a circle
lorentzian -- Lorentzian peak model
br8        -- Bevington & Robinson's model of dual exponential decay
mogi       -- Mogi's model of surface displacements from a point spherical
source in an elastic half space


Additionally, circle has been extended to provide three additional models, each with different packing densities:

dense_circle, sparse_circle, and minimal_circle


Further, poly provides additional models for 2nd, 4th, 6th, 8th, and 16th order Chebyshev polynomials:

chebyshev2, chebyshev4, chebyshev6, chebyshev8, chebyshev16


Also, rosen has been modified to provide models for the 0th and 1st derivative of the Rosenbrock function:

rosen0der, and rosen1der

class AbstractFunction(ndim=None)

Bases: object

Base class for mystic functions

The ‘function’ method must be overwritten, thus allowing calls to the class instance to mimic calls to the function object.

For example, if function is overwritten with the Rosenbrock function:
>>> rosen = Rosenbrock(ndim=3)
>>> rosen([1,1,1])
0.


Provides a base class for mystic functions.

Takes optional input ‘ndim’ (number of dimensions).

__annotations__ = {}
__call__(*args, **kwds)

Call self as a function.

__dict__ = mappingproxy({'__module__': 'mystic.models.abstract_model', '__doc__': "\nBase class for mystic functions\n\nThe 'function' method must be overwritten, thus allowing\ncalls to the class instance to mimic calls to the function object.\n\nFor example, if function is overwritten with the Rosenbrock function:\n    >>> rosen = Rosenbrock(ndim=3)\n    >>> rosen([1,1,1])\n    0.\n   ", '__init__': <function AbstractFunction.__init__>, '__call__': <function AbstractFunction.__call__>, 'function': <function AbstractFunction.function>, 'minimizers': None, '__dict__': <attribute '__dict__' of 'AbstractFunction' objects>, '__weakref__': <attribute '__weakref__' of 'AbstractFunction' objects>, '__annotations__': {}})
__init__(ndim=None)

Provides a base class for mystic functions.

Takes optional input ‘ndim’ (number of dimensions).

__module__ = 'mystic.models.abstract_model'
__weakref__

list of weak references to the object (if defined)

function(coeffs)

takes a list of coefficients x, returns f(x)

minimizers = None
class AbstractModel(name='dummy', metric=<function AbstractModel.<lambda>>, sigma=1.0)

Bases: object

Base class for mystic models

The ‘evaluate’ and ‘ForwardFactory’ methods must be overwritten, thus providing a standard interface for generating a forward model factory and evaluating a forward model. Additionally, two common ways to generate a cost function are built into the model. For “standard models”, the cost function generator will work with no modifications.

See mystic.models.poly for a few basic examples.

Provides a base class for mystic models.

Inputs::

name – a name string for the model
metric – the cost metric object [default => lambda x: numpy.sum(x*x)]
sigma – a scaling factor applied to the raw cost

CostFactory(target, pts)

generates a cost function instance from list of coefficients and evaluation points

CostFactory2(pts, datapts, nparams)

generates a cost function instance from datapoints and evaluation points

ForwardFactory(coeffs)

generates a forward model instance from a list of coefficients

__dict__ = mappingproxy({'__module__': 'mystic.models.abstract_model', '__doc__': '\nBase class for mystic models\n\nThe \'evaluate\' and \'ForwardFactory\' methods must be overwritten, thus providing\na standard interface for generating a forward model factory and evaluating a\nforward model.  Additionally, two common ways to generate a cost function are\nbuilt into the model.  For "standard models", the cost function generator will\nwork with no modifications.\n\nSee mystic.models.poly for a few basic examples.\n    ', '__init__': <function AbstractModel.__init__>, 'evaluate': <function AbstractModel.evaluate>, 'ForwardFactory': <function AbstractModel.ForwardFactory>, 'CostFactory': <function AbstractModel.CostFactory>, 'CostFactory2': <function AbstractModel.CostFactory2>, '__dict__': <attribute '__dict__' of 'AbstractModel' objects>, '__weakref__': <attribute '__weakref__' of 'AbstractModel' objects>, '__annotations__': {}})
__init__(name='dummy', metric=<function AbstractModel.<lambda>>, sigma=1.0)

Provides a base class for mystic models.

Inputs::

name – a name string for the model
metric – the cost metric object [default => lambda x: numpy.sum(x*x)]
sigma – a scaling factor applied to the raw cost

__module__ = 'mystic.models.abstract_model'
__weakref__

list of weak references to the object (if defined)

evaluate(coeffs, x)

takes list of coefficients & evaluation points, returns f(x)

ackley(x)

evaluates Ackley’s function for a list of coeffs

f(x) = f_0(x) + f_1(x)

Where: f_0(x) = -20 * exp(-0.2 * sqrt(1/N * sum_(i=0)^(N-1) x_(i)^(2))) and: f_1(x) = -exp(1/N * sum_(i=0)^(N-1) cos(2 * pi * x_(i))) + 20 + exp(1)

Inspect with mystic_model_plotter using::

mystic.models.ackley -b “-10:10:.1, -10:10:.1” -d

The minimum is f(x)=0.0 at x_i=0.0 for all i
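The two terms f_0 and f_1 above translate directly into code; a pure-Python sketch, useful for checking the stated minimum:

```python
import math

def ackley(x):
    # f_0(x) = -20 * exp(-0.2 * sqrt(mean of x_i^2))
    # f_1(x) = -exp(mean of cos(2*pi*x_i)) + 20 + e
    n = len(x)
    f0 = -20.0 * math.exp(-0.2 * math.sqrt(sum(xi * xi for xi in x) / n))
    f1 = -math.exp(sum(math.cos(2 * math.pi * xi) for xi in x) / n) + 20.0 + math.e
    return f0 + f1
```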

branins(x)

evaluates Branins’s function for a list of coeffs

f(x) = f_0(x) + f_1(x)

Where: f_0(x) = a * (x_1 - b * x_(0)^(2) + c * x_0 - d)^2 and f_1(x) = e * (1 - f) * cos(x_0) + e and a=1, b=5.1/(4*pi^2), c=5/pi, d=6, e=10, f=1/(8*pi)

Inspect with mystic_model_plotter using::

mystic.models.branins -b “-10:20:.1, -5:25:.1” -d -x 1

The minimum is f(x)=0.397887 at x=((2 +/- (2*i)+1)*pi, 2.275 + 10*i*(i+1)/2) for all i

corana(x)

evaluates a 4-D Corana’s parabola function for a list of coeffs

f(x) = sum_(i=0)^(3) f_0(x)

Where for abs(x_i - z_i) < 0.05: f_0(x) = 0.15*(z_i - 0.05*sign(z_i))^(2) * d_i and otherwise: f_0(x) = d_i * x_(i)^(2), with z_i = floor(abs(x_i/0.2)+0.49999)*sign(x_i)*0.2 and d_i = 1,1000,10,100.

For len(x) == 1, x = x_0,0,0,0; for len(x) == 2, x = x_0,0,x_1,0; for len(x) == 3, x = x_0,0,x_1,x_2; for len(x) >= 4, x = x_0,x_1,x_2,x_3.

Inspect with mystic_model_plotter using::

mystic.models.corana -b “-1:1:.01, -1:1:.01” -d -x 1

The minimum is f(x)=0 for abs(x_i) < 0.05 for all i.

easom(x)

evaluates Easom’s function for a list of coeffs

f(x) = -cos(x_0) * cos(x_1) * exp(-((x_0-pi)^2+(x_1-pi)^2))

Inspect with mystic_model_plotter using::

mystic.models.easom -b “-5:10:.1, -5:10:.1” -d

The minimum is f(x)=-1.0 at x=(pi,pi)

ellipsoid(x)

evaluates the rotated hyper-ellipsoid function for a list of coeffs

f(x) = sum_(i=0)^(N-1) (sum_(j=0)^(i) x_j)^2

Inspect with mystic_model_plotter using::

mystic.models.ellipsoid -b “-5:5:.1, -5:5:.1” -d

The minimum is f(x)=0.0 at x_i=0.0 for all i

fosc3d(x)

evaluates the fOsc3D function for a list of coeffs

f(x) = f_0(x) + p(x)

Where: f_0(x) = -4 * exp(-x_(0)^2 - x_(1)^2) + sin(6*x_(0)) * sin(5*x_(1)) with for x_1 < 0: p(x) = 100.*x_(1)^2 and otherwise: p(x) = 0.

Inspect with mystic_model_plotter using::

mystic.models.fosc3d -b “-5:5:.1, 0:5:.1” -d

The minimum is f(x)=-4.501069742528923 at x=(-0.215018, 0.240356)

goldstein(x)

evaluates Goldstein-Price’s function for a list of coeffs

f(x) = (1 + (x_0 + x_1 + 1)^2 * f_0(x)) * (30 + (2*x_0 - 3*x_1)^2 * f_1(x))

Where: f_0(x) = 19 - 14*x_0 + 3*x_(0)^2 - 14*x_1 + 6*x_(0)*x_(1) + 3*x_(1)^2 and f_1(x) = 18 - 32*x_0 + 12*x_(0)^2 + 48*x_1 - 36*x_(0)*x_(1) + 27*x_(1)^2

Inspect with mystic_model_plotter using::

mystic.models.goldstein -b “-5:5:.1, -5:5:.1” -d -x 1

The minimum is f(x)=3.0 at x=(0,-1)

griewangk(x)

evaluates an N-dimensional Griewangk’s function for a list of coeffs

f(x) = f_0(x) - f_1(x) + 1

Where: f_0(x) = sum_(i=0)^(N-1) x_(i)^(2) / 4000. and: f_1(x) = prod_(i=0)^(N-1) cos( x_i / (i+1)^(1/2) )

Inspect with mystic_model_plotter using::

mystic.models.griewangk -b “-10:10:.1, -10:10:.1” -d -x 5

The minimum is f(x)=0.0 for x_i=0.0

michal(x)

evaluates Michalewicz’s function for a list of coeffs

f(x) = -sum_(i=0)^(N-1) sin(x_i) * (sin((i+1) * (x_i)^(2) / pi))^(20)

Inspect with mystic_model_plotter using::

mystic.models.michal -b “0:3.14:.1, 0:3.14:.1, 1.28500168, 1.92305311, 1.72047194” -d

For x=(2.20289811, 1.57078059, 1.28500168, 1.92305311, 1.72047194, …)[:N] and c=(-0.801303, -1.0, -0.959092, -0.896699, -1.030564, …)[:N], the minimum is f(x)=sum(c) for all x_i=(0,pi)

nmin51(x)

evaluates the NMinimize51 function for a list of coeffs

f(x) = f_0(x) + f_1(x)

Where: f_0(x) = exp(sin(50*x_0)) + sin(60*exp(x_1)) + sin(70*sin(x_0)) and f_1(x) = sin(sin(80*x_1)) - sin(10*(x_0 + x_1)) + (x_(0)^2 + x_(1)^2)/4

Inspect with mystic_model_plotter using::

mystic.models.nmin51 -b “-5:5:.1, 0:5:.1” -d

The minimum is f(x)=-3.306869 at x=(-0.02440313,0.21061247)

paviani(x)

evaluates Paviani’s function for a list of coeffs

f(x) = f_0(x) - f_1(x)

Where: f_0(x) = sum_(i=0)^(N-1) (ln(x_i - 2)^2 + ln(10 - x_i)^2) and f_1(x) = prod_(i=0)^(N-1) x_(i)^(.2)

Inspect with mystic_model_plotter using::

mystic.models.paviani -b “2:10:.1, 2:10:.1” -d

For N=1, the minimum is f(x)=2.133838 at x_i=8.501586, for N=3, the minimum is f(x)=7.386004 at x_i=8.589578, for N=5, the minimum is f(x)=9.730525 at x_i=8.740743, for N=8, the minimum is f(x)=-3.411859 at x_i=9.086900, for N=10, the minimum is f(x)=-45.778470 at x_i=9.350241.

peaks(x)

evaluates a 2-dimensional peaks function for a list of coeffs

f(x) = f_0(x) - f_1(x) - f_2(x)

Where: f_0(x) = 3 * (1 - x_0)^2 * exp(-x_0^2 - (x_1 + 1)^2) and f_1(x) = 10 * (.2 * x_0 - x_0^3 - x_1^5) * exp(-x_0^2 - x_1^2) and f_2(x) = exp(-(x_0 + 1)^2 - x_1^2) / 3

Inspect with mystic_model_plotter using::

mystic.models.peaks -b “-5:5:.1, -5:5:.1” -d

The minimum is f(x)=-6.551133332835841 at x=(0.22827892, -1.62553496)

powers(x)

evaluates the sum of different powers function for a list of coeffs

f(x) = sum_(i=0)^(N-1) abs(x_(i))^(i+2)

Inspect with mystic_model_plotter using::

mystic.models.powers -b “-5:5:.1, -5:5:.1” -d

The minimum is f(x)=0.0 at x_i=0.0 for all i

quartic(x)

evaluates an N-dimensional quartic function for a list of coeffs

f(x) = sum_(i=0)^(N-1) (x_(i)^4 * (i+1) + k_i)

Where k_i is a random variable with uniform distribution bounded by [0,1).

Inspect with mystic_model_plotter using::

mystic.models.quartic -b “-3:3:.1, -3:3:.1” -d -x 1

The minimum is f(x)=N*E[k] for x_i=0.0, where E[k] is the expectation of k, and thus E[k]=0.5 for a uniform distribution bounded by [0,1).

rastrigin(x)

evaluates Rastrigin’s function for a list of coeffs

f(x) = 10 * N + sum_(i=0)^(N-1) (x_(i)^2 - 10 * cos(2 * pi * x_(i)))

Inspect with mystic_model_plotter using::

mystic.models.rastrigin -b “-5:5:.1, -5:5:.1” -d

The minimum is f(x)=0.0 at x_i=0.0 for all i
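The formula above is a short one-liner in code; a pure-Python sketch for checking the stated minimum:

```python
import math

def rastrigin(x):
    # 10 * N + sum of (x_i^2 - 10 * cos(2 * pi * x_i))
    return 10.0 * len(x) + sum(xi * xi - 10.0 * math.cos(2 * math.pi * xi)
                               for xi in x)
```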

rosen(x)

evaluates an N-dimensional Rosenbrock saddle for a list of coeffs

f(x) = sum_(i=0)^(N-2) 100*(x_(i+1) - x_(i)^(2))^(2) + (1 - x_(i))^(2)

Inspect with mystic_model_plotter using::

mystic.models.rosen -b “-3:3:.1, -1:5:.1, 1” -d -x 1

The minimum is f(x)=0.0 at x_i=1.0 for all i
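The sum above translates directly; a pure-Python sketch for checking the stated minimum:

```python
def rosen(x):
    # sum_(i=0)^(N-2) of 100*(x_(i+1) - x_i^2)^2 + (1 - x_i)^2
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (1.0 - x[i]) ** 2
               for i in range(len(x) - 1))
```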

rosen0der(x)

evaluates an N-dimensional Rosenbrock saddle for a list of coeffs

f(x) = sum_(i=0)^(N-2) 100*(x_(i+1) - x_(i)^(2))^(2) + (1 - x_(i))^(2)

Inspect with mystic_model_plotter using::

mystic.models.rosen -b “-3:3:.1, -1:5:.1, 1” -d -x 1

The minimum is f(x)=0.0 at x_i=1.0 for all i

rosen1der(x)

evaluates an N-dimensional Rosenbrock derivative for a list of coeffs

The minimum is f’(x)=[0.0]*n at x=[1.0]*n, where len(x) >= 2.

schwefel(x)

evaluates Schwefel’s function for a list of coeffs

f(x) = sum_(i=0)^(N-1) -x_i * sin(sqrt(abs(x_i)))

Where abs(x_i) <= 500.

Inspect with mystic_model_plotter using::

mystic.models.schwefel -b “-500:500:10, -500:500:10” -d

The minimum is f(x)=-(N+1)*418.98288727243374 at x_i=420.9687465 for all i

shekel(x)

evaluates a 2-D Shekel’s Foxholes function for a list of coeffs

f(x) = 1 / (0.002 + f_0(x))

Where: f_0(x) = sum_(i=0)^(24) 1 / (i + sum_(j=0)^(1) (x_j - a_ij)^(6)) with a_ij=(-32,-16,0,16,32). for j=0 and i=(0,1,2,3,4), a_i0=a_k0 with k=i mod 5 also j=1 and i=(0,5,10,15,20), a_i1=a_k1 with k=i+k’ and k’=(1,2,3,4).

Inspect with mystic_model_plotter using::

mystic.models.shekel -b “-50:50:1, -50:50:1” -d -x 1

The minimum is f(x)=0 for x=(-32,-32)

sphere(x)

evaluates an N-dimensional spherical function for a list of coeffs

f(x) = sum_(i=0)^(N-1) x_(i)^2

Inspect with mystic_model_plotter using::

mystic.models.sphere -b “-5:5:.1, -5:5:.1” -d

The minimum is f(x)=0.0 at x_i=0.0 for all i

step(x)

evaluates an N-dimensional step function for a list of coeffs

f(x) = f_0(x) + p_i(x), with i=0,1

Where for abs(x_i) <= 5.12: f_0(x) = 30 + sum_(i=0)^(N-1) floor x_i and for x_i > 5.12: p_0(x) = 30 * (1 + (x_i - 5.12)) and for x_i < 5.12: p_1(x) = 30 * (1 + (5.12 - x_i)) Otherwise, f_0(x) = 0 and p_i(x)=0 for i=0,1.

Inspect with mystic_model_plotter using::

mystic.models.step -b “-10:10:.2, -10:10:.2” -d -x 1

The minimum is f(x)=(30 - 6*N) for all x_i=[-5.12,-5)

venkat91(x)

evaluates Venkataraman’s sinc function for a list of coeffs

f(x) = -20 * sin(r(x))/r(x)

Where: r(x) = sqrt((x_0 - 4)^2 + (x_1 - 4)^2 + 0.1)

Inspect with mystic_model_plotter using::

mystic.models.venkat91 -b “-10:10:.1, -10:10:.1” -d

The minimum is f(x)=-19.668329370585823 at x=(4.0, 4.0)

wavy1(x)

evaluates the wavy1 function for a list of coeffs

f(x) = abs(x + 3*sin(x + pi) + pi)

Inspect with mystic_model_plotter using::

mystic.models.wavy1 -b “-20:20:.5, -20:20:.5” -d -r numpy.add

The minimum is f(x)=0.0 at x_i=-pi for all i

wavy2(x)

evaluates the wavy2 function for a list of coeffs

f(x) = 4*sin(x)+sin(4*x)+sin(8*x)+sin(16*x)+sin(32*x)+sin(64*x)

Inspect with mystic_model_plotter using::

mystic.models.wavy2 -b “-10:10:.2, -10:10:.2” -d -r numpy.add

The function has degenerate global minima of f(x)=-6.987594 at x_i = 4.489843526 + 2*k*pi for all i, and k is an integer

zimmermann(x)

evaluates a Zimmermann function for a list of coeffs

f(x) = max(f_0(x), p_i(x)), with i = 0,1,2,3

Where: f_0(x) = 9 - x_0 - x_1 with for x_0 < 0: p_0(x) = -100 * x_0 and for x_1 < 0: p_1(x) = -100 * x_1 and for c_2(x) > 16 and c_3(x) > 14: p_i(x) = 100 * c_i(x), with i = 2,3 c_2(x) = (x_0 - 3)^2 + (x_1 - 2)^2 c_3(x) = x_0 * x_1 Otherwise, p_i(x)=0 for i=0,1,2,3 and c_i(x)=0 for i=2,3.

Inspect with mystic_model_plotter using::

mystic.models.zimmermann -b “-5:10:.1, -5:10:.1” -d -x 1

The minimum is f(x)=0.0 at x=(7.0,2.0)

monitors module¶

monitors: callable class instances that record data

Monitors¶

Monitors provide the ability to monitor progress as the optimization is underway. Monitors also can be used to extract and prepare information for mystic’s analysis viewers. Each of mystic’s monitors are customizable, and provide the user with a different type of output. The following monitors are available:

Monitor        -- the basic monitor; only writes to internal state
LoggingMonitor -- a logging monitor; also writes to a logfile
VerboseMonitor -- a verbose monitor; also writes to stdout/stderr
VerboseLoggingMonitor -- a verbose logging monitor; best of both worlds
CustomMonitor  -- a customizable 'n-variable' version of Monitor
Null           -- a null object, which reliably does nothing


Usage¶

Typically monitors are either bound to a model function by a modelFactory, or bound to a cost function by a Solver.

Examples

>>> # get and configure monitors
>>> from mystic.monitors import Monitor, VerboseMonitor
>>> evalmon = Monitor()
>>> stepmon = VerboseMonitor(5)
>>>
>>> # instantiate and configure the solver
>>> from mystic.termination import CandidateRelativeTolerance as CRT
>>> solver.SetInitialPoints(x0)
>>>
>>> # associate the monitor with a solver, then solve
>>> solver.SetEvaluationMonitor(evalmon)
>>> solver.SetGenerationMonitor(stepmon)
>>> solver.Solve(rosen, CRT())
>>>
>>> # access the 'iteration' history
>>> stepmon.x     # parameters after each iteration
>>> stepmon.y     # cost after each iteration
>>>
>>> # access the 'evaluation' history
>>> evalmon.x     # parameters after each evaluation
>>> evalmon.y     # cost after each evaluation

CustomMonitor(*args, **kwds)

generate a custom Monitor

Parameters
• args (tuple(str)) – tuple of the required Monitor inputs (e.g. x).

• kwds (dict(str)) – dict of {"input":"doc"} (e.g. x='Params').

Returns

a customized monitor instance

Examples

>>> sow = CustomMonitor('x','y',x="Params",y="Costs",e="Error",d="Deriv")
>>> sow(1,1)
>>> sow(2,4,e=0)
>>> sow.x
[1,2]
>>> sow.y
[1,4]
>>> sow.e
[0]
>>> sow.d
[]

class LoggingMonitor(interval=1, filename='log.txt', new=False, all=True, info=None, **kwds)

A basic Monitor that writes to a file at specified intervals.

Logs output ‘y’ and input parameters ‘x’ to a file every ‘interval’.

__call__(x, y, id=None, best=0, k=False)

Call self as a function.

__init__(interval=1, filename='log.txt', new=False, all=True, info=None, **kwds)
__module__ = 'mystic.monitors'
__reduce__()

Helper for pickle.

__setstate__(state)
info(message)
class Monitor(**kwds)

Bases: object

Instances of objects that can be passed as monitors. Typically, a Monitor logs a list of parameters and the corresponding costs, retrievable by accessing the Monitor’s member variables.

example usage…
>>> sow = Monitor()
>>> sow([1,2],3)
>>> sow([4,5],6)
>>> sow.x
[[1, 2], [4, 5]]
>>> sow.y
[3, 6]
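The record-keeping behavior shown above reduces to a small callable class; this stand-in illustrates the pattern (it is not mystic's Monitor, which also tracks ids, weights, and positions):

```python
class MiniMonitor:
    # records each (params, cost) pair passed in, in call order
    def __init__(self):
        self.x, self.y = [], []
    def __call__(self, params, cost):
        self.x.append(params)
        self.y.append(cost)

sow = MiniMonitor()
sow([1, 2], 3)
sow([4, 5], 6)
```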


add the contents of self and the given monitor

__call__(x, y, id=None, **kwds)

Call self as a function.

__dict__ = mappingproxy({'__module__': 'mystic.monitors', '__doc__': "\nInstances of objects that can be passed as monitors.\nTypically, a Monitor logs a list of parameters and the\ncorresponding costs, retrievable by accessing the Monitor's\nmember variables.\n\nexample usage...\n    >>> sow = Monitor()\n    >>> sow([1,2],3)\n    >>> sow([4,5],6)\n    >>> sow.x\n    [[1, 2], [4, 5]]\n    >>> sow.y\n    [3, 6]\n\n    ", '__init__': <function Monitor.__init__>, '__len__': <function Monitor.__len__>, 'info': <function Monitor.info>, '__call__': <function Monitor.__call__>, '__add__': <function Monitor.__add__>, '__getitem__': <function Monitor.__getitem__>, '__setitem__': <function Monitor.__setitem__>, 'extend': <function Monitor.extend>, 'prepend': <function Monitor.prepend>, 'get_x': <function Monitor.get_x>, 'get_id': <function Monitor.get_id>, 'get_info': <function Monitor.get_info>, '_Monitor__step': <function Monitor.__step>, 'min': <function Monitor.min>, 'get_iwts': <function Monitor.get_iwts>, 'get_ipos': <function Monitor.get_ipos>, 'get_wts': <function Monitor.get_wts>, 'get_pos': <function Monitor.get_pos>, 'get_y': <function Monitor.get_y>, '_get_y': <function Monitor._get_y>, 'get_ix': <function Monitor.get_ix>, 'get_ax': <function Monitor.get_ax>, 'get_iy': <function Monitor.get_iy>, 'get_ay': <function Monitor.get_ay>, '_k': <function Monitor._k>, '_ik': <function Monitor._ik>, '_step': <property object>, 'x': <property object>, 'ix': <property object>, 'ax': <property object>, 'y': <property object>, 'iy': <property object>, 'ay': <property object>, 'id': <property object>, 'wts': <property object>, 'pos': <property object>, '_wts': <property object>, '_pos': <property object>, '__dict__': <attribute '__dict__' of 'Monitor' objects>, '__weakref__': <attribute '__weakref__' of 'Monitor' objects>, '__annotations__': {}})
__getitem__(y)

x.__getitem__(y) <==> x[y]

__init__(**kwds)
__len__()
__module__ = 'mystic.monitors'
__setitem__(i, y)

x.__setitem__(i, y) <==> x[i]=y

__step()
__weakref__

list of weak references to the object (if defined)

_get_y(monitor)

avoid double-conversion by combining k’s

_ik(y, k=False, type=<class 'list'>)
_k(y, type=<class 'list'>)
property _pos

Positions

property _step
property _wts

Weights

property ax

Params

property ay

Costs

extend(monitor)

append the contents of the given monitor

get_ax()
get_ay()
get_id()
get_info()
get_ipos()
get_iwts()
get_ix()
get_iy()
get_pos()
get_wts()
get_x()
get_y()
property id

Id

info(message)
property ix

Params

property iy

Costs

min()

get the minimum monitor entry

property pos

Positions

prepend(monitor)

prepend the contents of the given monitor

property wts

Weights

property x

Params

property y

Costs

class Null(*args, **kwargs)

Bases: object

A Null object

Null objects always and reliably “do nothing.”
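The Null-object pattern can be sketched in a few lines; this is an illustrative stand-in, not mystic's implementation:

```python
class NullSketch:
    # absorbs any call or attribute access and reliably does nothing,
    # so it can be passed anywhere a monitor (or similar) is expected
    def __call__(self, *args, **kwargs):
        return self
    def __getattr__(self, name):
        return self
    def __bool__(self):
        return False
    def __len__(self):
        return 0

null = NullSketch()
null.info("ignored")        # attribute access then a call: both no-ops
```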

__bool__()
__call__(*args, **kwargs)

Call self as a function.

__delattr__(name)

Implement delattr(self, name).

__dict__ = mappingproxy({'__module__': 'mystic.monitors', '__doc__': 'A Null object\n\nNull objects always and reliably "do nothing." ', '__new__': <staticmethod object>, '__init__': <function Null.__init__>, '__call__': <function Null.__call__>, '__repr__': <function Null.__repr__>, '__bool__': <function Null.__bool__>, '__nonzero__': <function Null.__nonzero__>, '__getattr__': <function Null.__getattr__>, '__setattr__': <function Null.__setattr__>, '__delattr__': <function Null.__delattr__>, '__len__': <function Null.__len__>, '__getnewargs__': <function Null.__getnewargs__>, '__getitem__': <function Null.__getitem__>, '__setitem__': <function Null.__setitem__>, 'min': <function Null.min>, '__dict__': <attribute '__dict__' of 'Null' objects>, '__weakref__': <attribute '__weakref__' of 'Null' objects>, '_inst': Null(), 'info': Null(), 'k': None, 'x': (), '_x': (), 'y': (), '_y': (), '_id': (), '_npts': None, 'label': None, '__annotations__': {}})
__getattr__(name)
__getitem__(y)
__getnewargs__()
__init__(*args, **kwargs)
__len__()
__module__ = 'mystic.monitors'
static __new__(cls, *args, **kwargs)
__nonzero__()
__repr__()

Return repr(self).

__setattr__(name, value)

Implement setattr(self, name, value).

__setitem__(i, y)
__weakref__

list of weak references to the object (if defined)

_id = ()
_npts = None
_x = ()
_y = ()
k = None
label = None
min()
x = ()
y = ()
class VerboseLoggingMonitor(interval=1, yinterval=10, xinterval=inf, filename='log.txt', new=False, all=True, info=None, **kwds)

A Monitor that writes to a file and the screen at specified intervals.

Logs output ‘y’ and input parameters ‘x’ to a file every ‘interval’, and also prints every ‘yinterval’.

__call__(x, y, id=None, best=0, k=False)

Call self as a function.

__init__(interval=1, yinterval=10, xinterval=inf, filename='log.txt', new=False, all=True, info=None, **kwds)
__module__ = 'mystic.monitors'
__reduce__()

Helper for pickle.

__setstate__(state)
info(message)
class VerboseMonitor(interval=10, xinterval=inf, all=True, **kwds)

A verbose version of the basic Monitor.

Prints output ‘y’ every ‘interval’, and optionally prints input parameters ‘x’ every ‘xinterval’.

__call__(x, y, id=None, best=0, k=False)

Call self as a function.

__init__(interval=10, xinterval=inf, all=True, **kwds)
__module__ = 'mystic.monitors'
info(message)
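The monitor classes above share one recording protocol: each call records a parameter vector x and its cost y (signature shown above), with verbose variants also printing at a chosen interval. A simplified pure-python sketch of that behavior (an assumption-laden illustration, not mystic's actual class; the name VerboseMonitorSketch is hypothetical):

```python
# Minimal sketch of the monitor protocol: record (x, y) pairs on each
# call, and print the cost every 'interval' recorded steps.
class VerboseMonitorSketch:
    def __init__(self, interval=10):
        self.interval = interval
        self.x, self.y = [], []    # recorded parameter and cost history
    def __call__(self, x, y, id=None, best=0, k=False):
        self.x.append(list(x))
        self.y.append(y)
        if self.interval and len(self.y) % self.interval == 0:
            print("step %d: y = %s" % (len(self.y), y))
```

A solver would then be handed the instance (e.g. via SetGenerationMonitor) and call it once per generation; afterwards `mon.x` and `mon.y` hold the full trajectory.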

load npts, params, and cost into monitor from file at given path

_measures(monitor, last=None, weights=False)

return positions or weights from the last N entries in a monitor

this function requires a monitor that is monitoring a product_measure

_positions(monitor, last=None)

return positions from the last N entries in a monitor

this function requires a monitor that is monitoring a product_measure

_solutions(monitor, last=None)

return the params from the last N entries in a monitor

_weights(monitor, last=None)

return weights from the last N entries in a monitor

this function requires a monitor that is monitoring a product_measure

munge module¶

__orig_write_converge_file(mon, log_file='paramlog.py')
__orig_write_support_file(mon, log_file='paramlog.py')
converge_to_support(steps, energy)
converge_to_support_converter(file_in, file_out)
isNull(mon)
old_to_new_support_converter(file_in, file_out)
raw_to_converge(steps, energy)
raw_to_converge_converter(file_in, file_out)
raw_to_support(steps, energy)
raw_to_support_converter(file_in, file_out)

read parameter history and cost history from the given source

source is a monitor, logfile, support file, solver restart file, dataset, etc

import the targets; targets are name strings

read trajectories from a convergence logfile or a monitor

source can either be a monitor instance or a logfile path

sequence(x)

True if x is a list, tuple, or an ndarray

write_converge_file(mon, log_file='paramlog.py', **kwds)
write_monitor(steps, energy, id=[], k=None)
write_raw_file(mon, log_file='paramlog.py', **kwds)
write_support_file(mon, log_file='paramlog.py', **kwds)

penalty module¶

penalty methods: methods used to convert a function into a penalty function

Suppose a given condition f(x) is satisfied when f(x) == 0.0 for equality constraints, and f(x) <= 0.0 for inequality constraints. This condition f(x) can be used as the basis for a mystic.penalty function.

Examples

>>> def penalty_mean(x, target):
...   return mean(x) - target
...
>>> @quadratic_equality(condition=penalty_mean, kwds={'target':5.0})
... def penalty(x):
...   return 0.0
...
>>> penalty([1,2,3,4,5])
400.0
>>> penalty([3,4,5,6,7])
7.8886090522101181e-29


References

1. http://en.wikipedia.org/wiki/Penalty_method

2. “Applied Optimization with MATLAB Programming”, by Venkataraman, Wiley, 2nd edition, 2009.

3. http://www.srl.gatech.edu/education/ME6103/Penalty-Barrier.ppt

4. “An Augmented Lagrange Multiplier Based Method for Mixed Integer Discrete Continuous Optimization and Its Applications to Mechanical Design”, by Kannan and Kramer, 1994.

barrier_inequality(condition=<function <lambda>>, args=None, kwds=None, k=100, h=5)

apply an infinite barrier if the given inequality constraint is violated, and a logarithmic penalty if the inequality constraint is satisfied

penalty is p(x) = inf if constraint is violated, otherwise penalty is p(x) = -1/pk*log(-f(x)), with pk = 2k*pow(h,n) and n=0 where f.iter() can be used to increment n = n+1

the condition f(x) is satisfied when f(x) <= 0.0
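A pure-python sketch of the barrier formula stated above (an illustration of the documented form, not mystic's implementation; n is held fixed at 0, and the name barrier_inequality_sketch is hypothetical):

```python
import math

# Documented form: p(x) = inf if f(x) > 0 (violated), otherwise
# p(x) = -1/pk * log(-f(x)), with pk = 2*k*pow(h, n) and n = 0.
def barrier_inequality_sketch(condition, k=100, h=5, n=0):
    pk = 2 * k * h**n
    def decorator(penalty):
        def wrapped(x):
            f = condition(x)
            if f > 0:                 # constraint violated: infinite barrier
                return float('inf')
            # logarithmic penalty inside the feasible region
            # (note: f(x) == 0 sits on the boundary and would make
            #  log(0) blow up; the sketch assumes strict interior points)
            return penalty(x) - math.log(-f) / pk
        return wrapped
    return decorator
```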

lagrange_equality(condition=<function <lambda>>, args=None, kwds=None, k=20, h=5)

apply a quadratic penalty if the given equality constraint is violated

penalty is p(x) = pk*f(x)**2 + lam*f(x), with pk = k*pow(h,n) and n=0, and with lagrange multiplier lam = 2k*f(x), where f.iter() can be used to increment n = n+1

the condition f(x) is satisfied when f(x) == 0.0

lagrange_inequality(condition=<function <lambda>>, args=None, kwds=None, k=20, h=5)

apply a quadratic penalty if the given inequality constraint is violated

penalty is p(x) = pk*mpf**2 + beta*mpf, with pk = k*pow(h,n) and n=0, where mpf = max(-beta/2k, f(x)) and the lagrange multiplier is beta = 2k*mpf, and f.iter() can be used to increment n = n+1

the condition f(x) is satisfied when f(x) <= 0.0

linear_equality(condition=<function <lambda>>, args=None, kwds=None, k=100, h=5)

apply a linear penalty if the given equality constraint is violated

penalty is p(x) = pk*abs(f(x)), with pk = k*pow(h,n) and n=0 where f.iter() can be used to increment n = n+1

the condition f(x) is satisfied when f(x) == 0.0

linear_inequality(condition=<function <lambda>>, args=None, kwds=None, k=100, h=5)

apply a linear penalty if the given inequality constraint is violated

penalty is p(x) = pk*abs(f(x)), with pk = 2k*pow(h,n) and n=0 where f.iter() can be used to increment n = n+1

the condition f(x) is satisfied when f(x) <= 0.0

quadratic_equality(condition=<function <lambda>>, args=None, kwds=None, k=100, h=5)

apply a quadratic penalty if the given equality constraint is violated

penalty is p(x) = pk*f(x)**2, with pk = k*pow(h,n) and n=0 where f.iter() can be used to increment n = n+1

the condition f(x) is satisfied when f(x) == 0.0

quadratic_inequality(condition=<function <lambda>>, args=None, kwds=None, k=100, h=5)

apply a quadratic penalty if the given inequality constraint is violated

penalty is p(x) = pk*f(x)**2, with pk = 2k*pow(h,n) and n=0 where f.iter() can be used to increment n = n+1

the condition f(x) is satisfied when f(x) <= 0.0

uniform_equality(condition=<function <lambda>>, args=None, kwds=None, k=inf, h=5)

apply a uniform penalty if the given equality constraint is violated

penalty is p(x) = pk, with pk = k*pow(h,n) and n=0 where f.iter() can be used to increment n = n+1

the condition f(x) is satisfied when f(x) == 0.0

uniform_inequality(condition=<function <lambda>>, args=None, kwds=None, k=inf, h=5)

apply a uniform penalty if the given inequality constraint is violated

penalty is p(x) = pk, with pk = k*pow(h,n) and n=0 where f.iter() can be used to increment n = n+1

the condition f(x) is satisfied when f(x) <= 0.0
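The inequality penalties above all follow the same pattern: no contribution while f(x) <= 0, and a scaled contribution once the constraint is violated. A pure-python sketch of the linear case, following the documented formula (not mystic's implementation; n is held fixed at 0, and the name linear_inequality_sketch is hypothetical):

```python
# Documented form: p(x) = pk*abs(f(x)) when f(x) > 0 (violated),
# with pk = 2*k*pow(h, n) and n = 0.
def linear_inequality_sketch(condition, k=100, h=5, n=0):
    pk = 2 * k * h**n
    def decorator(penalty):
        def wrapped(x):
            f = condition(x)
            # f(x) <= 0 means the constraint is satisfied: no added penalty
            return penalty(x) + (pk * abs(f) if f > 0 else 0.0)
        return wrapped
    return decorator

@linear_inequality_sketch(lambda x: sum(x) - 10)  # satisfied when sum(x) <= 10
def penalty(x):
    return 0.0
```

With the defaults, pk = 2*100*5**0 = 200, so a violation of size 5 (e.g. sum(x) == 15) contributes 1000 to the penalty.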

pools module¶

This module contains map and pipe interfaces to standard (i.e. serial) python.

Pipe methods provided:

pipe - blocking communication pipe [returns: value]

Map methods provided:

map - blocking and ordered worker pool [returns: list]
imap - non-blocking and ordered worker pool [returns: iterator]

Usage¶

A typical call to a pathos python map will roughly follow this example:

>>> # instantiate and configure the worker pool
>>> from pathos.serial import SerialPool
>>> pool = SerialPool()
>>>
>>> # do a blocking map on the chosen function
>>> print(pool.map(pow, [1,2,3,4], [5,6,7,8]))
>>>
>>> # do a non-blocking map, then extract the results from the iterator
>>> results = pool.imap(pow, [1,2,3,4], [5,6,7,8])
>>> print("...")
>>> print(list(results))
>>>
>>> # do one item at a time, using a pipe
>>> print(pool.pipe(pow, 1, 5))
>>> print(pool.pipe(pow, 2, 6))


Notes

This worker pool leverages the built-in python maps, and thus does not have limitations due to serialization of the function f or the sequences in args. The maps in this worker pool have full functionality whether run from a script or in the python interpreter, and work reliably for both imported and interactively-defined functions.
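Since the pool described above is just a thin wrapper over the built-in python maps, its three interfaces can be sketched in a few lines of pure python (an illustration of the documented semantics, not the actual mystic.pools.SerialPool; the name SerialPoolSketch is hypothetical):

```python
# Sketch of the pipe/map/imap semantics on top of the builtin map.
class SerialPoolSketch:
    def map(self, f, *args):
        # blocking and ordered: returns a list
        return list(map(f, *args))
    def imap(self, f, *args):
        # non-blocking and ordered: returns an iterator
        return map(f, *args)
    def pipe(self, f, *args, **kwds):
        # blocking pipe: returns a single value
        return f(*args, **kwds)

pool = SerialPoolSketch()
print(pool.map(pow, [1, 2, 3, 4], [5, 6, 7, 8]))  # [1, 64, 2187, 65536]
print(pool.pipe(pow, 2, 6))                        # 64
```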

class SerialPool(*args, **kwds)

Mapper that leverages standard (i.e. serial) python maps.

Important class members:

nodes - number (and potentially description) of workers
ncpus - number of worker processors
servers - list of worker servers
scheduler - the associated scheduler
workdir - associated WORKDIR for scratch calculations/files

Other class members:

scatter - True, if uses ‘scatter-gather’ (instead of ‘worker-pool’)
source - False, if minimal use of TemporaryFiles is desired
timeout - number of seconds to wait for return value from scheduler

__get_nodes()

get the number of nodes in the pool

__module__ = 'mystic.pools'

__set_nodes(nodes)

set the number of nodes in the pool

_exiting = False
_is_alive(negate=False, run=True)
clear()

hard restart

close()

close the pool to any new jobs

imap(f, *args, **kwds)

run a batch of jobs with a non-blocking and ordered map

Returns a list iterator of results of applying the function f to the items of the argument sequence(s). If more than one sequence is given, the function is called with an argument list consisting of the corresponding item of each sequence.

join()

cleanup the closed worker processes

map(f, *args, **kwds)

run a batch of jobs with a blocking and ordered map

Returns a list of results of applying the function f to the items of the argument sequence(s). If more than one sequence is given, the function is called with an argument list consisting of the corresponding item of each sequence.

property nodes

get the number of nodes in the pool

pipe(f, *args, **kwds)

submit a job and block until results are available

Returns the result of calling the function f on a selected worker. This function will block until results are available.

restart(force=False)

restart a closed pool

terminate()

a more abrupt close

__get_nodes__(self)

get the number of nodes in the pool

__set_nodes__(self, nodes)

set the number of nodes in the pool

python_map module¶

Defaults for mapper and launcher. These should be available as a minimal (dependency-free) pure-python install from pathos:

serial_launcher -- syntax for standard python execution
python_map -- wrapper around the standard python map
worker_pool -- the worker_pool map strategy

python_map(func, *arglist, **kwds)

maps the function func across the arguments arglist. Provides the standard python map function, but also accepts kwds in order to conform with the (deprecated) pyina.ez_map interface.

Notes

The following kwds used in ez_map are accepted, but disabled:

• nodes – the number of parallel nodes
• launcher – the launcher object
• scheduler – the scheduler object
• mapper – the mapper object
• timelimit – string representation of maximum run time (e.g. ‘00:02’)
• queue – string name of selected queue (e.g. ‘normal’)

serial_launcher(kdict={})

prepare launch for standard execution, syntax: (python) (program) (progargs)

Notes

run non-python shell commands by setting python to a null string: kdict = {'python':'', ...}

worker_pool()

use the ‘worker pool’ strategy; hence one job is allocated to each worker, and the next new work item is provided when a node completes its work

samplers module¶

samplers: optimizer-guided directed sampling

class BuckshotSampler(bounds, model, npts=None, **kwds)

optimizer-directed sampling starting at N randomly sampled points

Inputs:

bounds – list[tuples]: (lower,upper) bound for each model input
model – function: y = model(x), where x is an input parameter vector
npts – int: number of points to sample the model

Note

additional keywords (evalmon, stepmon, maxiter, maxfun, dist, tight, saveiter, state, termination, constraints, penalty, reducer, solver) are available for use. See mystic.ensemble for more details.
__module__ = 'mystic.samplers'

_init_solver()

initialize the ensemble solver

class LatticeSampler(bounds, model, npts=None, **kwds)

optimizer-directed sampling starting at the centers of N grid points

Inputs:

bounds – list[tuples]: (lower,upper) bound for each model input
model – function: y = model(x), where x is an input parameter vector
npts – int: number of points to sample the model

Note

additional keywords (evalmon, stepmon, maxiter, maxfun, dist, tight, saveiter, state, termination, constraints, penalty, reducer, solver) are available for use. See mystic.ensemble for more details.

__module__ = 'mystic.samplers'

_init_solver()

initialize the ensemble solver

class SparsitySampler(bounds, model, npts=None, **kwds)

optimizer-directed sampling starting at N points sampled in sparse regions

Inputs:

bounds – list[tuples]: (lower,upper) bound for each model input
model – function: y = model(x), where x is an input parameter vector
npts – int: number of points to sample the model

Note

additional keywords (evalmon, stepmon, maxiter, maxfun, dist, tight, saveiter, state, termination, constraints, penalty, reducer, solver) are available for use. See mystic.ensemble for more details.

__init__(bounds, model, npts=None, **kwds)

Inputs:

bounds – list[tuples]: (lower,upper) bound for each model input
model – function: y = model(x), where x is an input parameter vector
npts – int: number of points to sample the model

Note

additional keywords (evalmon, stepmon, maxiter, maxfun, dist, tight, saveiter, state, termination, constraints, penalty, reducer, solver) are available for use. See mystic.ensemble for more details.

__module__ = 'mystic.samplers'

_init_solver()

initialize the ensemble solver

scemtools module¶

Implements the “Shuffled Complex Evolution Metropolis” algorithm of Vrugt et al. [1].

References

1. Jasper A. Vrugt, Hoshin V. Gupta, Willem Bouten, and Soroosh Sorooshian, “A Shuffled Complex Evolution Metropolis algorithm for optimization and uncertainty assessment of hydrologic model parameters”, WATER RESOURCES RESEARCH, VOL. 39, NO. 8, 1201, doi:10.1029/2002WR001642, 2003. http://www.agu.org/pubs/crossref/2003/2002WR001642.shtml

2. Vrugt JA, Nuallain, Robinson BA, Bouten W, Dekker SC, Sloot PM, “Application of parallel computing to stochastic parameter estimation in environmental models”, Computers & Geosciences, Vol. 32, No. 8. (October 2006), pp. 1139-1155. http://www.science.uva.nl/research/scs/papers/archive/Vrugt2006b.pdf

multinormal_pdf(mean, var)

var must be symmetric positive definite

myinsert(a, x)

remix(Cs, As)

Mixing and dealing the complexes. The types of Cs and As are very important.

scem(Ck, ak, Sk, Sak, target, cn)

This is the SCEM algorithm, starting from line [35] of reference [1].

• [inout] Ck is the kth ‘complex’ with m points. This should be an m by n array, with n being the dimensionality of the density function; i.e., the data are arranged in rows. Ck is assumed to be sorted according to the target density.
• [inout] ak, the density of the points of Ck.
• [inout] Sk, the entire chain (should be a list).
• [inout] Sak, the cost of the entire chain (should be a list). Sak would be more convenient to use if it were a numpy array, but we need to append to it frequently.
• [in] target: the target density function.
• [in] cn: the jumprate (see paragraph 37 of [1]).
• Invariants: ak is always aligned with Ck, and holds the cost of Ck. Similarly, Sak is always aligned with Sk in the same way.
• On return, the sort order in Ck/ak is destroyed (but see sort_complex2).

sequential_deal(inarray, n)

• inarray: should be a set of N objects (the objects can be vectors themselves, but inarray should be index-able like a list). It is coerced into a numpy array, because the last operation requires that it also be indexable by a ‘list’.
• inarray should have a length divisible by n, otherwise the reshape will fail (this is a feature!)
• sequential_deal(range(20), 5) will return a 5 element list, each element being a 4-list of indices (see below):

>>> for l in sequential_deal(range(20),5):
...   print(l)
...
[ 0 5 10 15]
[ 1 6 11 16]
[ 2 7 12 17]
[ 3 8 13 18]
[ 4 9 14 19]

sort_ab_with_b(a, b, ord=-1)

default is descending

sort_and_deal(cards, target, nplayers)

sort_complex(c, a)

sort_complex0(c, a)

sort_complex2(c, a)

• c and a are partially sorted (either the last one is bad, or the first one)
• pos: 0 (first one out of order) or -1 (last one out of order)

update_complex(Ck, ak, c, a, pos)

• ak is sorted (descending)
• Ck[pos] and ak[pos] will be removed, and then c and a spliced in at the proper place
• pos is 0, or -1

scipy_optimize module¶

Solvers¶

This module contains a collection of optimization routines adapted from scipy.optimize. The minimal scipy interface has been preserved, and functionality from the mystic solver API has been added with reasonable defaults.

Minimal function interface to optimization routines:

fmin – Nelder-Mead Simplex algorithm (uses only function calls)
fmin_powell – Powell’s (modified) level set method (uses only function calls)

The corresponding solvers built on mystic’s AbstractSolver are:

NelderMeadSimplexSolver – Nelder-Mead Simplex algorithm
PowellDirectionalSolver – Powell’s (modified) level set method

Mystic solver behavior activated in fmin:

• EvaluationMonitor = Monitor()
• StepMonitor = Monitor()
• termination = CandidateRelativeTolerance(xtol,ftol)

Mystic solver behavior activated in fmin_powell:

• EvaluationMonitor = Monitor()
• StepMonitor = Monitor()
• termination = NormalizedChangeOverGeneration(ftol)

Usage¶

See mystic.examples.test_rosenbrock2 for an example of using NelderMeadSimplexSolver.
See mystic.examples.test_rosenbrock3 for an example of using PowellDirectionalSolver.

All solvers included in this module provide the standard signal handling. For more information, see mystic.mystic.abstract_solver.

References

1. Nelder, J.A. and Mead, R. (1965), “A simplex method for function minimization”, The Computer Journal, 7, pp. 308-313.

2. Wright, M.H. (1996), “Direct Search Methods: Once Scorned, Now Respectable”, in Numerical Analysis 1995, Proceedings of the 1995 Dundee Biennial Conference in Numerical Analysis, D.F. Griffiths and G.A. Watson (Eds.), Addison Wesley Longman, Harlow, UK, pp. 191-208.

3. Gao, F. and Han, L. (2012), “Implementing the Nelder-Mead simplex algorithm with adaptive parameters”, Computational Optimization and Applications, 51:1, pp. 259-277.

4. Powell, M.J.D. (1964), “An efficient method for finding the minimum of a function of several variables without calculating derivatives”, Computer Journal, 7 (2):155-162.

5. Press, W., Teukolsky, S.A., Vetterling, W.T., and Flannery, B.P., Numerical Recipes (any edition), Cambridge University Press.

class NelderMeadSimplexSolver(dim)

Nelder-Mead Simplex optimization, adapted from scipy.optimize.fmin.

Takes one initial input:

dim – dimensionality of the problem

The size of the simplex is dim+1.

Solve(cost=None, termination=None, ExtraArgs=None, **kwds)

Minimize a function using the downhill simplex algorithm.

Uses a Nelder-Mead simplex algorithm to find the minimum of a function of one or more variables.

Parameters

• cost (func, default=None) – the function to be minimized: y = cost(x).
• termination (termination, default=None) – termination conditions.
• ExtraArgs (tuple, default=None) – extra arguments for cost.
• sigint_callback (func, default=None) – callback function for signal handler.
• callback (func, default=None) – function to call after each iteration. The interface is callback(xk), with xk the current parameter vector.
• disp (bool, default=False) – if True, print convergence messages.
• radius (float, default=0.05) – percentage change for initial simplex values.
• adaptive (bool, default=False) – adapt algorithm parameters to the dimensionality of the initial parameter vector x.

Returns

None

_SetEvaluationLimits(iterscale=200, evalscale=200)

set the evaluation limits

input:

• iterscale and evalscale are integers used to set the maximum iteration and evaluation limits, respectively. The new limit is defined as limit = (nDim * nPop * scale) + count, where count is the number of existing iterations or evaluations, respectively. The defaults for iterscale and evalscale are both 200.

_Step(cost=None, ExtraArgs=None, **kwds)

perform a single optimization iteration

input:

• cost is the objective function, of the form y = cost(x, *ExtraArgs), where x is a candidate solution, and ExtraArgs is the tuple of positional arguments required to evaluate the objective.

note:

ExtraArgs needs to be a tuple of extra arguments. This method accepts additional args that are specific to the current solver, as detailed in the _process_inputs method.

__init__(dim)

Takes one initial input:

dim – dimensionality of the problem

The size of the simplex is dim+1.

__module__ = 'mystic.scipy_optimize'

_decorate_objective(cost, ExtraArgs=None)

decorate the cost function with bounds, penalties, monitors, etc

input:

• cost is the objective function, of the form y = cost(x, *ExtraArgs), where x is a candidate solution, and ExtraArgs is the tuple of positional arguments required to evaluate the objective.

_process_inputs(kwds)

process and activate input settings

Parameters

• callback (func, default=None) – function to call after each iteration. The interface is callback(xk), with xk the current parameter vector.
• disp (bool, default=False) – if True, print convergence messages.

Additional Args:

EvaluationMonitor: a monitor instance to capture each evaluation of cost.
StepMonitor: a monitor instance to capture each iteration’s best results.
penalty: a function of the form y' = penalty(xk), with y = cost(xk) + y', where xk is the current parameter vector.
constraints: a function of the form xk' = constraints(xk), where xk is the current parameter vector.
radius (float, default=0.05): percentage change for initial simplex values.
adaptive (bool, default=False): adapt algorithm parameters to the dimensionality of the initial parameter vector x.

Note

The additional args are ‘sticky’, in that once they are given, they remain set until they are explicitly changed. Conversely, the above parameters are not sticky, and are thus set for one-time use.

_setSimplexWithinRangeBoundary(radius=None)

ensure that the initial simplex is set within bounds

Input:

• radius: size of the initial simplex [default=0.05]

class PowellDirectionalSolver(dim)

Powell Direction Search optimization, adapted from scipy.optimize.fmin_powell.

Takes one initial input:

dim – dimensionality of the problem

Finalize()

cleanup upon exiting the main optimization loop

Solve(cost=None, termination=None, ExtraArgs=None, **kwds)

Minimize a function using modified Powell’s method.

Uses a modified Powell Directional Search algorithm to find the minimum of a function of one or more variables.

Parameters

• cost (func, default=None) – the function to be minimized: y = cost(x).
• termination (termination, default=None) – termination conditions.
• ExtraArgs (tuple, default=None) – extra arguments for cost.
• sigint_callback (func, default=None) – callback function for signal handler.
• callback (func, default=None) – function to call after each iteration. The interface is callback(xk), with xk the current parameter vector.
• direc (tuple, default=None) – the initial direction set.
• xtol (float, default=1e-4) – line-search error tolerance.
• imax (float, default=500) – line-search maximum iterations.
• disp (bool, default=False) – if True, print convergence messages.

Returns

None

_SetEvaluationLimits(iterscale=1000, evalscale=1000)

set the evaluation limits

input:

• iterscale and evalscale are integers used to set the maximum iteration and evaluation limits, respectively. The new limit is defined as limit = (nDim * nPop * scale) + count, where count is the number of existing iterations or evaluations, respectively. The defaults for iterscale and evalscale are both 1000.

_Step(cost=None, ExtraArgs=None, **kwds)

perform a single optimization iteration

input:

• cost is the objective function, of the form y = cost(x, *ExtraArgs), where x is a candidate solution, and ExtraArgs is the tuple of positional arguments required to evaluate the objective.

note:

ExtraArgs needs to be a tuple of extra arguments. This method accepts additional args that are specific to the current solver, as detailed in the _process_inputs method.

__generations()

get the number of iterations

__init__(dim)

Takes one initial input:

dim – dimensionality of the problem

__module__ = 'mystic.scipy_optimize'

_process_inputs(kwds)

process and activate input settings

Parameters

• callback (func, default=None) – function to call after each iteration. The interface is callback(xk), with xk the current parameter vector.
• disp (bool, default=False) – if True, print convergence messages.

Additional Args:

EvaluationMonitor: a monitor instance to capture each evaluation of cost.
StepMonitor: a monitor instance to capture each iteration’s best results.
penalty: a function of the form y' = penalty(xk), with y = cost(xk) + y', where xk is the current parameter vector.
constraints: a function of the form xk' = constraints(xk), where xk is the current parameter vector.
direc (tuple, default=None): the initial direction set.
xtol (float, default=1e-4): line-search error tolerance.
imax (float, default=500): line-search maximum iterations.
Note

The additional args are ‘sticky’, in that once they are given, they remain set until they are explicitly changed. Conversely, the above parameters are not sticky, and are thus set for one-time use.

property generations

get the number of iterations

fmin(cost, x0, args=(), bounds=None, xtol=0.0001, ftol=0.0001, maxiter=None, maxfun=None, full_output=0, disp=1, retall=0, callback=None, **kwds)

Minimize a function using the downhill simplex algorithm.

Uses a Nelder-Mead simplex algorithm to find the minimum of a function of one or more variables. This algorithm only uses function values, not derivatives or second derivatives. Mimics the scipy.optimize.fmin interface.

This algorithm has a long history of successful use in applications. It will usually be slower than an algorithm that uses first or second derivative information. In practice, it can have poor performance in high-dimensional problems, and is not robust to minimizing complicated functions. Additionally, there currently is no complete theory describing when the algorithm will successfully converge to the minimum, or how fast it will if it does. Both the ftol and xtol criteria must be met for convergence.

Parameters

• cost (func) – the function or method to be minimized: y = cost(x).
• x0 (ndarray) – the initial guess parameter vector x.
• args (tuple, default=()) – extra arguments for cost.
• bounds (list(tuple), default=None) – list of pairs of bounds (min,max), one for each parameter.
• xtol (float, default=1e-4) – acceptable absolute error in xopt for convergence.
• ftol (float, default=1e-4) – acceptable absolute error in cost(xopt) for convergence.
• maxiter (int, default=None) – the maximum number of iterations to perform.
• maxfun (int, default=None) – the maximum number of function evaluations.
• full_output (bool, default=False) – True if fval and warnflag are desired.
• disp (bool, default=True) – if True, print convergence messages.
• retall (bool, default=False) – True if allvecs is desired.
• callback (func, default=None) – function to call after each iteration. The interface is callback(xk), with xk the current parameter vector.
• handler (bool, default=False) – if True, enable handling interrupt signals.
• itermon (monitor, default=None) – override the default GenerationMonitor.
• evalmon (monitor, default=None) – override the default EvaluationMonitor.
• constraints (func, default=None) – a function xk' = constraints(xk), where xk is the current parameter vector, and xk' is a parameter vector that satisfies the encoded constraints.
• penalty (func, default=None) – a function y' = penalty(xk), where xk is the current parameter vector, and y' == 0 when the encoded constraints are satisfied (and y' > 0 otherwise).
• tight (bool, default=False) – enforce bounds and constraints concurrently.

Returns

(xopt, {fopt, iter, funcalls, warnflag}, {allvecs})

Notes

• xopt (ndarray): the minimizer of the cost function
• fopt (float): value of the cost function at the minimum: fopt = cost(xopt)
• iter (int): number of iterations
• funcalls (int): number of function calls
• warnflag (int): warning flag:
  • 1 : Maximum number of function evaluations
  • 2 : Maximum number of iterations
• allvecs (list): a list of solutions at each iteration

fmin_powell(cost, x0, args=(), bounds=None, xtol=0.0001, ftol=0.0001, maxiter=None, maxfun=None, full_output=0, disp=1, retall=0, callback=None, direc=None, **kwds)

Minimize a function using modified Powell’s method.

Uses a modified Powell Directional Search algorithm to find the minimum of a function of one or more variables. This method only uses function values, not derivatives. Mimics the scipy.optimize.fmin_powell interface.

Powell’s method is a conjugate direction method that has two loops. The outer loop simply iterates over the inner loop, while the inner loop minimizes over each current direction in the direction set. At the end of the inner loop, if certain conditions are met, the direction that gave the largest decrease is dropped and replaced with the difference between the current estimated x and the estimated x from the beginning of the inner loop. The conditions for replacing the direction of largest increase are that: (a) no further gain can be made along the direction of greatest increase in the iteration, and (b) the direction of greatest increase accounted for a sufficiently large fraction of the decrease in the function value from the current iteration of the inner loop.

Parameters

• cost (func) – the function or method to be minimized: y = cost(x).
• x0 (ndarray) – the initial guess parameter vector x.
• args (tuple, default=()) – extra arguments for cost.
• bounds (list(tuple), default=None) – list of pairs of bounds (min,max), one for each parameter.
• xtol (float, default=1e-4) – acceptable relative error in xopt for convergence.
• ftol (float, default=1e-4) – acceptable relative error in cost(xopt) for convergence.
• gtol (float, default=2) – maximum iterations to run without improvement.
• maxiter (int, default=None) – the maximum number of iterations to perform.
• maxfun (int, default=None) – the maximum number of function evaluations.
• full_output (bool, default=False) – True if fval and warnflag are desired.
• disp (bool, default=True) – if True, print convergence messages.
• retall (bool, default=False) – True if allvecs is desired.
• callback (func, default=None) – function to call after each iteration. The interface is callback(xk), with xk the current parameter vector.
• direc (tuple, default=None) – the initial direction set.
• handler (bool, default=False) – if True, enable handling interrupt signals.
• itermon (monitor, default=None) – override the default GenerationMonitor.
• evalmon (monitor, default=None) – override the default EvaluationMonitor.
• constraints (func, default=None) – a function xk' = constraints(xk), where xk is the current parameter vector, and xk' is a parameter vector that satisfies the encoded constraints.
• penalty (func, default=None) – a function y' = penalty(xk), where xk is the current parameter vector, and y' == 0 when the encoded constraints are satisfied (and y' > 0 otherwise).
• tight (bool, default=False) – enforce bounds and constraints concurrently.

Returns

(xopt, {fopt, iter, funcalls, warnflag, direc}, {allvecs})

Notes

• xopt (ndarray): the minimizer of the cost function
• fopt (float): value of the cost function at the minimum: fopt = cost(xopt)
• iter (int): number of iterations
• funcalls (int): number of function calls
• warnflag (int): warning flag:
  • 1 : Maximum number of function evaluations
  • 2 : Maximum number of iterations
• direc (tuple): the current direction set
• allvecs (list): a list of solutions at each iteration

scripts module¶

functional interfaces for mystic’s visual analytics scripts

collapse_plotter(filename, **kwds)

generate cost convergence rate plots from a file written with write_support_file

Available from the command shell as:

mystic_collapse_plotter filename [options]

or as a function call:

mystic.collapse_plotter(filename, **options)

Parameters

filename (str) – name of the convergence logfile (e.g. paramlog.py).

Returns

None

Notes

• The option dots takes a boolean, and will show data points in the plot.
• The option linear takes a boolean, and will plot in a linear scale.
• The option out takes a string of the filepath for the generated plot. If out = True, return the Figure object instead of generating a plot.
• The option iter takes an integer of the largest iteration to plot.
• The option label takes a label string. For example, label = "y" will label the plot with a ‘y’, while label = " log-cost, log_{10}(\hat{P} - \hat{P}_{max})" will label the y-axis with standard LaTeX math formatting.
Note that the leading space is required, and that the text is aligned along the axis.

• The option col takes a string of comma-separated integers indicating iteration numbers where parameter collapse has occurred. If a second set of integers is provided (delineated by a semicolon), the additional set of integers will be plotted with a different linestyle (to indicate a different type of collapse).

log_reader(filename, **kwds)

plot parameter convergence from file written with LoggingMonitor

Available from the command shell as:

mystic_log_reader filename [options]

or as a function call:

mystic.log_reader(filename, **options)

Parameters

filename (str) – name of the convergence logfile (e.g. log.txt).

Returns

None

Notes
• The option out takes a string of the filepath for the generated plot. If out = True, return the Figure object instead of generating a plot.

• The option dots takes a boolean, and will show data points in the plot.

• The option line takes a boolean, and will connect the data with a line.

• The option iter takes an integer of the largest iteration to plot.

• The option legend takes a boolean, and will display the legend.

• The option nid takes an integer of the nth simultaneous points to plot.

• The option param takes an indicator string. The indicator string is built from comma-separated array slices. For example, param = ":" will plot all parameters. Alternatively, param = ":2, 3:" will plot all parameters except for the third parameter, while param = "0" will only plot the first parameter.

model_plotter(model, logfile=None, **kwds)

generate surface contour plots for model, specified by full import path; and generate model trajectory from logfile (or solver restart file), if provided

Available from the command shell as:

mystic_model_plotter model (logfile) [options]

or as a function call:

mystic.model_plotter(model, logfile=None, **options)

Parameters
• model (str) – full import path for the model (e.g. mystic.models.rosen)

• logfile (str, default=None) – name of convergence logfile (e.g. log.txt)

Returns

None

Notes
• The option out takes a string of the filepath for the generated plot. If out = True, return the Figure object instead of generating a plot.

• The option bounds takes an indicator string, where bounds are given as comma-separated slices. For example, using bounds = "-1:10, 0:20" will set lower and upper bounds for x to be (-1,10) and y to be (0,20). The “step” can also be given, to control the number of lines plotted in the grid. Thus "-1:10:.1, 0:20" sets the bounds as above, but uses increments of .1 along x and the default step along y. For models > 2D, the bounds can be used to specify 2 dimensions plus fixed values for remaining dimensions. Thus, "-1:10, 0:20, 1.0" plots the 2D surface where the z-axis is fixed at z=1.0. When called from a script, slice objects can be used instead of a string, thus "-1:10:.1, 0:20, 1.0" becomes (slice(-1,10,.1), slice(20), 1.0).

• The option label takes comma-separated strings. For example, label = "x,y," will place ‘x’ on the x-axis, ‘y’ on the y-axis, and nothing on the z-axis. LaTeX is also accepted. For example, label = " h $,$ {\alpha}$,$ v$" will label the axes with standard LaTeX math formatting. Note that the leading space is required, while a trailing space aligns the text with the axis instead of the plot frame.

• The option nid takes an integer of the nth simultaneous points to plot.

• The option iter takes an integer of the largest iteration to plot.

• The option reduce can be given to reduce the output of a model to a scalar, thus converting model(params) to reduce(model(params)). A reducer is given by the import path (e.g. numpy.add).

• The option scale will convert the plot to log-scale, and scale the cost by z=log(4*z*scale+1)+2. This is useful for visualizing small contour changes around the minimum.

• If using log-scale produces negative numbers, the option shift can be used to shift the cost by z=z+shift. Both shift and scale are intended to help visualize contours.

• The option fill takes a boolean, to plot using filled contours.

• The option depth takes a boolean, to plot contours in 3D.

• The option dots takes a boolean, to show trajectory points in the plot.

• The option join takes a boolean, to connect trajectory points.

• The option verb takes a boolean, to print the model documentation.

• The option kernel can be given to transform the input of a model from nD to 2D, where params' = kernel(params) with params' being 2D. A kernel is given by the import path (e.g. mymodule.kernel). Using kernel can be slow, as it may calculate the inverse transform at each point.

• The option tol takes a float of max distance of dots from surface. For finer control, provide an array[float] the same length as params.

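The slice-string convention used by the bounds option above can be made concrete with a small sketch. This is an illustration only, not mystic's actual parser; parse_bounds is a hypothetical helper that converts a string such as "-1:10:.1, 0:20, 1.0" into slice objects and fixed values:

```python
# Hypothetical helper (not part of mystic): interpret a comma-separated
# bounds indicator string as slice objects and fixed per-dimension values.
def parse_bounds(spec):
    """convert a bounds string like "-1:10:.1, 0:20, 1.0" to slices and floats"""
    parsed = []
    for field in spec.split(','):
        field = field.strip()
        if ':' in field:
            # "lo:hi" or "lo:hi:step"; an empty part becomes None
            parts = [float(p) if p else None for p in field.split(':')]
            step = parts[2] if len(parts) > 2 else None
            parsed.append(slice(parts[0], parts[1], step))
        else:
            parsed.append(float(field))  # a fixed value for that dimension
    return tuple(parsed)

print(parse_bounds("-1:10:.1, 0:20, 1.0"))
# (slice(-1.0, 10.0, 0.1), slice(0.0, 20.0, None), 1.0)
```

The fixed float in the last position mirrors how model_plotter holds the remaining dimensions constant for models with more than 2 dimensions.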
search module¶

a global searcher

class Searcher(npts=4, retry=1, tol=8, memtol=1, memsize=None, map=None, archive=None, cache=None, sprayer=None, seeker=None, traj=False, disp=False, repeat=0)

Bases: object

searcher, which searches for all minima of a response surface

Input:

npts - number of solvers in the ensemble
retry - max consecutive retries w/o a cache ‘miss’
tol - rounding precision for the minima comparator
memtol - rounding precision for memoization
memsize - maximum size of cache to hold in memory
map - map used for spawning solvers in the ensemble
archive - the sampled point archive(s)
cache - the trajectory cache(s)
sprayer - the mystic.ensemble instance
seeker - the mystic.solvers instance
traj - if True, save the parameter trajectories
disp - if True, be verbose
repeat - number of times to repeat the search

Coordinates(unique=False, all=False)

return the sequence of stored model parameter input values

Input:

unique: if True, only return unique values
all: if True, return all sampled inputs (not just trajectory inputs)

Output:

a list of parameter trajectories

Minima(tol=None)

return a dict of (coordinates,values) of all discovered minima

Input:

tol: tolerance within which to consider a point a minimum

Output:

a dict of (coordinates,values) of all discovered minima

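The role of the rounding precision tol can be sketched in a few lines. This is an illustration of the idea only (group_minima is a hypothetical helper, not mystic's implementation): candidate minima whose coordinates agree to tol decimal places are treated as the same minimum.

```python
# Hypothetical helper (not mystic's code): deduplicate discovered minima
# by rounding coordinates to ``tol`` decimal places, as the minima
# comparator's rounding precision suggests.
def group_minima(candidates, tol=8):
    """reduce (coordinates, value) pairs to unique minima, keyed by rounded coordinates"""
    minima = {}
    for coords, value in candidates:
        key = tuple(round(x, tol) for x in coords)
        # keep the lowest value observed for each rounded coordinate key
        if key not in minima or value < minima[key]:
            minima[key] = value
    return minima

found = [((1.00000001, 2.0), 0.5), ((1.0, 2.0), 0.4), ((3.0, 4.0), 1.2)]
print(group_minima(found, tol=4))  # {(1.0, 2.0): 0.4, (3.0, 4.0): 1.2}
```

With a looser tol the first two candidates collapse to one minimum; with tol=8 they remain distinct.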
Reset(cache=None, inv=None)

clear the trajectory cache of sampled points

Input:

cache - the trajectory cache(s)
inv - if True, reset the cache for the inverse of the objective

Samples(all=False)

return array of (coordinates, cost) for all trajectories

Search(model, bounds, stop=None, traj=None, disp=None, **kwds)

use an ensemble of optimizers to search for all minima

Inputs:

model - function z=f(x) to be used as the objective of the Searcher
bounds - tuple of floats (min,max), bounds on the search region
stop - termination condition
traj - klepto.archive to store sampled points
disp - if True, be verbose
monitor - mystic.monitor instance to store parameter trajectories
evalmon - mystic.monitor instance to store parameter evaluations
penalty - mystic.penalty instance of the form y' = k*p(x)
constraints - mystic.constraints instance of the form x' = c(x)
tight - if True, enforce bounds and constraints concurrently

Trajectories(all=False)

return tuple (iteration, coordinates, cost) of all trajectories

UseTrajectories(traj=True)

save all sprayers, thus save all trajectories

Values(unique=False, all=False)

return the sequence of stored response surface outputs

Input:

unique: if True, only return unique values
all: if True, return all sampled values (not just trajectory values)

Output:

a list of stored response surface outputs

Verbose(disp=True)

be verbose

__init__(npts=4, retry=1, tol=8, memtol=1, memsize=None, map=None, archive=None, cache=None, sprayer=None, seeker=None, traj=False, disp=False, repeat=0)

searcher, which searches for all minima of a response surface

Input:

npts - number of solvers in the ensemble
retry - max consecutive retries w/o a cache ‘miss’
tol - rounding precision for the minima comparator
memtol - rounding precision for memoization
memsize - maximum size of cache to hold in memory
map - map used for spawning solvers in the ensemble
archive - the sampled point archive(s)
cache - the trajectory cache(s)
sprayer - the mystic.ensemble instance
seeker - the mystic.solvers instance
traj - if True, save the parameter trajectories
disp - if True, be verbose
repeat - number of times to repeat the search

__module__ = 'mystic.search'
__weakref__

list of weak references to the object (if defined)

_configure(model, bounds, stop=None, **kwds)

configure ensemble solver from objective and other solver options

_memoize(solver, tol=1, all=False, size=None)

apply caching to ensemble solver instance

_print(solver, tol=8)

print bestSolution and bestEnergy for each sprayer

_search(...)

run the solver, store the trajectory, and cache to the archive

_solve(id=None, disp=None)

run the solver (i.e. search for the minima)

_summarize()

provide a summary of the search results

solvers module¶

solvers: minimal and expanded interfaces for optimization algorithms

Standard Interface¶

All of mystic’s optimizers derive from the solver API, which provides each optimizer with a standard, but highly-customizable interface. A description of the solver API is found in mystic.abstract_solver, and in each derived optimizer. Mystic’s optimizers are:

** Global Optimizers **
DifferentialEvolutionSolver  -- Differential Evolution algorithm
DifferentialEvolutionSolver2 -- Price & Storn's Differential Evolution
** Pseudo-Global Optimizers **
SparsitySolver               -- N Solvers sampled where point density is low
BuckshotSolver               -- Uniform Random Distribution of N Solvers
LatticeSolver                -- Distribution of N Solvers on a Regular Grid
** Local-Search Optimizers **
PowellDirectionalSolver      -- Powell's (modified) Level Set algorithm


Minimal Interface¶

Most of mystic’s optimizers can be called from a minimal (i.e. one-line) interface. The collection of arguments is often unique to the optimizer, and if the underlying solver derives from a third-party package, the original interface is reproduced. Minimal interfaces to these optimizers are provided:

** Global Optimizers **
diffev      -- DifferentialEvolutionSolver
diffev2     -- DifferentialEvolutionSolver2
** Pseudo-Global Optimizers **
sparsity    -- SparsitySolver
buckshot    -- BuckshotSolver
lattice     -- LatticeSolver
** Local-Search Optimizers **
fmin_powell -- PowellDirectionalSolver


More detail on each optimizer is found in its source module:
• mystic.differential_evolution [differential evolution solvers]

• mystic.scipy_optimize [scipy local-search solvers]

• mystic.ensemble [pseudo-global solvers]

or the API documentation found here:
• mystic.abstract_solver [the solver API definition]

• mystic.abstract_map_solver [the parallel solver API]

• mystic.abstract_ensemble_solver [the ensemble solver API]

class BuckshotSolver(dim, npts=8)

parallel mapped optimization starting from N uniform randomly sampled points

Takes two initial inputs:

dim – dimensionality of the problem
npts – number of parallel solver instances

All important class members are inherited from AbstractEnsembleSolver.

_InitialPoints()

Generate a grid of starting points for the ensemble of optimizers

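The sampling idea behind the buckshot ensemble can be sketched in a few lines. This is an assumed illustration, not mystic's internal code: draw npts starting points uniformly at random within the bounds, one per solver instance.

```python
import random

# Hypothetical sketch (not mystic's _InitialPoints): sample npts starting
# points uniformly at random within [lb, ub], one per ensemble member.
def initial_points(lb, ub, npts=8, rng=random):
    """sample npts starting points uniformly within the bounds, per dimension"""
    return [[rng.uniform(l, u) for l, u in zip(lb, ub)] for _ in range(npts)]

pts = initial_points([0.0, 0.0], [2.0, 2.0], npts=8)
```

Each of the npts points then seeds an independent local solver, run in parallel via the configured map.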
__annotations__ = {}
__init__(dim, npts=8)
Takes two initial inputs:

dim – dimensionality of the problem
npts – number of parallel solver instances

All important class members are inherited from AbstractEnsembleSolver.

__module__ = 'mystic.ensemble'
class DifferentialEvolutionSolver(dim, NP=4)

Differential Evolution optimization.

Takes two initial inputs:

dim – dimensionality of the problem
NP – size of the trial solution population. [requires: NP >= 4]

All important class members are inherited from AbstractSolver.

SetConstraints(constraints)

apply a constraints function to the optimization

input::
• a constraints function of the form: xk’ = constraints(xk), where xk is the current parameter vector. Ideally, this function is constructed so the parameter vector it passes to the cost function will satisfy the desired (i.e. encoded) constraints.

Solve(cost=None, termination=None, ExtraArgs=None, **kwds)

Minimize a function using differential evolution.

Uses a differential evolution algorithm to find the minimum of a function of one or more variables.

Parameters
• cost (func, default=None) – the function to be minimized: y = cost(x).

• termination (termination, default=None) – termination conditions.

• ExtraArgs (tuple, default=None) – extra arguments for cost.

• strategy (strategy, default=Best1Bin) – the mutation strategy for generating new trial solutions.

• CrossProbability (float, default=0.9) – the probability of cross-parameter mutations.

• ScalingFactor (float, default=0.8) – multiplier for mutations on the trial solution.

• sigint_callback (func, default=None) – callback function for signal handler.

• callback (func, default=None) – function to call after each iteration. The interface is callback(xk), with xk the current parameter vector.

• disp (bool, default=False) – if True, print convergence messages.

Returns

None

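The strategy, CrossProbability, and ScalingFactor parameters above can be illustrated with a rough sketch of a Best1Bin-style step (an assumption-laden illustration, not mystic's implementation of its strategies): mutate around the current best solution using ScalingFactor, then apply binomial crossover against a target vector with probability CrossProbability.

```python
import random

# Hypothetical sketch of a Best1Bin-style trial generation (for illustration
# only): mutant = best + scale * (x_r1 - x_r2), crossed with the target.
def best1bin_trial(target, best, population, scale=0.8, cross=0.9, rng=random):
    """build one trial vector from a target, the best solution, and the population"""
    r1, r2 = rng.sample(range(len(population)), 2)
    jrand = rng.randrange(len(target))  # ensure at least one mutated component
    return [best[j] + scale * (population[r1][j] - population[r2][j])
            if (j == jrand or rng.random() < cross) else target[j]
            for j in range(len(target))]

# with no spread in the population, the trial collapses onto the best solution
pop = [[1.0, 2.0]] * 4
print(best1bin_trial([0.0, 0.0], [1.0, 2.0], pop, cross=1.0))  # [1.0, 2.0]
```

In the real solver, each trial replaces its target only if it improves the cost, which is what drives the population toward the minimum.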
UpdateGenealogyRecords(id, newchild)

create an in-memory log of the genealogy of the population

Input::
• id: (int) the index of the candidate in the population matrix

• newchild: (list[float]) a new trialSolution

_Step(cost=None, ExtraArgs=None, **kwds)

perform a single optimization iteration

input::
• cost is the objective function, of the form y = cost(x, *ExtraArgs), where x is a candidate solution, and ExtraArgs is the tuple of positional arguments required to evaluate the objective.

note::

ExtraArgs needs to be a tuple of extra arguments.

This method accepts additional args that are specific for the current solver, as detailed in the _process_inputs method.

__annotations__ = {}
__init__(dim, NP=4)
Takes two initial inputs:

dim – dimensionality of the problem
NP – size of the trial solution population. [requires: NP >= 4]

All important class members are inherited from AbstractSolver.

__module__ = 'mystic.differential_evolution'
_decorate_objective(cost, ExtraArgs=None)

decorate the cost function with bounds, penalties, monitors, etc

input::
• cost is the objective function, of the form y = cost(x, *ExtraArgs), where x is a candidate solution, and ExtraArgs is the tuple of positional arguments required to evaluate the objective.

_process_inputs(kwds)

process and activate input settings

Parameters
• callback (func, default=None) – function to call after each iteration. The interface is callback(xk), with xk the current parameter vector.

• disp (bool, default=False) – if True, print convergence messages.

EvaluationMonitor: a monitor instance to capture each evaluation of cost.

StepMonitor: a monitor instance to capture each iteration’s best results.

penalty: a function of the form y' = penalty(xk), with y = cost(xk) + y', where xk is the current parameter vector.

constraints: a function of the form xk' = constraints(xk), where xk is the current parameter vector.

strategy (strategy, default=Best1Bin): the mutation strategy for generating new trial solutions.

CrossProbability (float, default=0.9): the probability of cross-parameter mutations.

ScalingFactor (float, default=0.8): multiplier for mutations on the trial solution.

Note

The additional args are ‘sticky’, in that once they are given, they remain set until they are explicitly changed. By contrast, args passed through the one-line functional interfaces are not sticky, and are thus set for a one-time use.

class DifferentialEvolutionSolver2(dim, NP=4)

Differential Evolution optimization, using Storn and Price’s algorithm.

Alternate implementation:
• utilizes a map-reduce interface, extensible to parallel computing

• both a current and a next generation are kept, while the current generation is invariant during the main DE logic

Takes two initial inputs:

dim – dimensionality of the problem
NP – size of the trial solution population. [requires: NP >= 4]

All important class members are inherited from AbstractSolver.

SetConstraints(constraints)

apply a constraints function to the optimization

input::
• a constraints function of the form: xk’ = constraints(xk), where xk is the current parameter vector. Ideally, this function is constructed so the parameter vector it passes to the cost function will satisfy the desired (i.e. encoded) constraints.

Solve(cost=None, termination=None, ExtraArgs=None, **kwds)

Minimize a function using differential evolution.

Uses a differential evolution algorithm to find the minimum of a function of one or more variables. This implementation holds the current generation invariant until the end of each iteration.

Parameters
• cost (func, default=None) – the function to be minimized: y = cost(x).

• termination (termination, default=None) – termination conditions.

• ExtraArgs (tuple, default=None) – extra arguments for cost.

• strategy (strategy, default=Best1Bin) – the mutation strategy for generating new trial solutions.

• CrossProbability (float, default=0.9) – the probability of cross-parameter mutations.

• ScalingFactor (float, default=0.8) – multiplier for mutations on the trial solution.

• sigint_callback (func, default=None) – callback function for signal handler.

• callback (func, default=None) – function to call after each iteration. The interface is callback(xk), with xk the current parameter vector.

• disp (bool, default=False) – if True, print convergence messages.

Returns

None

UpdateGenealogyRecords(id, newchild)

create an in-memory log of the genealogy of the population

Input::
• id: (int) the index of the candidate in the population matrix

• newchild: (list[float]) a new trialSolution

_Step(cost=None, ExtraArgs=None, **kwds)

perform a single optimization iteration

input::
• cost is the objective function, of the form y = cost(x, *ExtraArgs), where x is a candidate solution, and ExtraArgs is the tuple of positional arguments required to evaluate the objective.

note::

ExtraArgs needs to be a tuple of extra arguments.

This method accepts additional args that are specific for the current solver, as detailed in the _process_inputs method.

__annotations__ = {}
__init__(dim, NP=4)
Takes two initial inputs:

dim – dimensionality of the problem
NP – size of the trial solution population. [requires: NP >= 4]

All important class members are inherited from AbstractSolver.

__module__ = 'mystic.differential_evolution'
_decorate_objective(cost, ExtraArgs=None)

decorate the cost function with bounds, penalties, monitors, etc

input::
• cost is the objective function, of the form y = cost(x, *ExtraArgs), where x is a candidate solution, and ExtraArgs is the tuple of positional arguments required to evaluate the objective.

_process_inputs(kwds)

process and activate input settings

Parameters
• callback (func, default=None) – function to call after each iteration. The interface is callback(xk), with xk the current parameter vector.

• disp (bool, default=False) – if True, print convergence messages.

EvaluationMonitor: a monitor instance to capture each evaluation of cost.

StepMonitor: a monitor instance to capture each iteration’s best results.

penalty: a function of the form y' = penalty(xk), with y = cost(xk) + y', where xk is the current parameter vector.

constraints: a function of the form xk' = constraints(xk), where xk is the current parameter vector.

strategy (strategy, default=Best1Bin): the mutation strategy for generating new trial solutions.

CrossProbability (float, default=0.9): the probability of cross-parameter mutations.

ScalingFactor (float, default=0.8): multiplier for mutations on the trial solution.

Note

The additional args are ‘sticky’, in that once they are given, they remain set until they are explicitly changed. By contrast, args passed through the one-line functional interfaces are not sticky, and are thus set for a one-time use.

class LatticeSolver(dim, nbins=8)

parallel mapped optimization starting from the centers of N grid points

Takes two initial inputs:

dim – dimensionality of the problem
nbins – tuple of number of bins in each dimension

All important class members are inherited from AbstractEnsembleSolver.

_InitialPoints()

Generate a grid of starting points for the ensemble of optimizers

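The lattice idea can be sketched directly (assumed behavior for illustration, not mystic's source): place one starting point at the center of each grid cell, with nbins[i] bins along dimension i.

```python
import itertools

# Hypothetical sketch of lattice starting points: the centers of a regular
# grid over the bounds, one point per grid cell.
def lattice_centers(lb, ub, nbins):
    """return the centers of a regular grid with nbins[i] bins per dimension"""
    axes = []
    for l, u, n in zip(lb, ub, nbins):
        width = (u - l) / n
        axes.append([l + width * (i + 0.5) for i in range(n)])
    return [list(p) for p in itertools.product(*axes)]

print(lattice_centers([0.0, 0.0], [2.0, 2.0], [2, 2]))
# [[0.5, 0.5], [0.5, 1.5], [1.5, 0.5], [1.5, 1.5]]
```

The total number of solver instances is the product of the bin counts, which is why the head example pairs nbins = [4,4,4] with a 3-dimensional problem.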
__annotations__ = {}
__init__(dim, nbins=8)
Takes two initial inputs:

dim – dimensionality of the problem
nbins – tuple of number of bins in each dimension

All important class members are inherited from AbstractEnsembleSolver.

__module__ = 'mystic.ensemble'

LoadSolver(filename=None, **kwds)

load solver state from a restart file

class NelderMeadSimplexSolver(dim)

Nelder-Mead simplex optimization, adapted from scipy.optimize.fmin.
Takes one initial input:

dim – dimensionality of the problem

The size of the simplex is dim+1.

Solve(cost=None, termination=None, ExtraArgs=None, **kwds)

Minimize a function using the downhill simplex algorithm.

Uses a Nelder-Mead simplex algorithm to find the minimum of a function of one or more variables.

Parameters
• cost (func, default=None) – the function to be minimized: y = cost(x).

• termination (termination, default=None) – termination conditions.

• ExtraArgs (tuple, default=None) – extra arguments for cost.

• sigint_callback (func, default=None) – callback function for signal handler.

• callback (func, default=None) – function to call after each iteration. The interface is callback(xk), with xk the current parameter vector.

• disp (bool, default=False) – if True, print convergence messages.

• radius (float, default=0.05) – percentage change for initial simplex values.

• adaptive (bool, default=False) – adapt algorithm parameters to the dimensionality of the initial parameter vector x.

Returns

None

_SetEvaluationLimits(iterscale=200, evalscale=200)

set the evaluation limits

input::
• iterscale and evalscale are integers used to set the maximum iteration and evaluation limits, respectively. The new limit is defined as limit = (nDim * nPop * scale) + count, where count is the number of existing iterations or evaluations, respectively. The default for iterscale is 200, and the default for evalscale is also 200.

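The limit rule stated above can be written out directly. This is a transcription of the formula for illustration (evaluation_limit is a hypothetical helper; here nPop is taken to be the solver's population size, e.g. 1 for a single-candidate solver):

```python
# The rule described above: limit = (nDim * nPop * scale) + count, where
# count is the number of existing iterations or evaluations.
def evaluation_limit(ndim, npop, scale=200, count=0):
    """compute an iteration or evaluation limit from the scale factor"""
    return ndim * npop * scale + count

# e.g. a 3-D problem, a single candidate, no prior iterations
print(evaluation_limit(3, 1, scale=200))  # 600
```

Because count is added in, calling _SetEvaluationLimits mid-run extends the budget relative to the work already done rather than resetting it.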
_Step(cost=None, ExtraArgs=None, **kwds)

perform a single optimization iteration

input::
• cost is the objective function, of the form y = cost(x, *ExtraArgs), where x is a candidate solution, and ExtraArgs is the tuple of positional arguments required to evaluate the objective.

note::

ExtraArgs needs to be a tuple of extra arguments.

This method accepts additional args that are specific for the current solver, as detailed in the _process_inputs method.

__annotations__ = {}
__init__(dim)
Takes one initial input:

dim – dimensionality of the problem

The size of the simplex is dim+1.

__module__ = 'mystic.scipy_optimize'
_decorate_objective(cost, ExtraArgs=None)

decorate the cost function with bounds, penalties, monitors, etc

input::
• cost is the objective function, of the form y = cost(x, *ExtraArgs), where x is a candidate solution, and ExtraArgs is the tuple of positional arguments required to evaluate the objective.

_process_inputs(kwds)

process and activate input settings

Parameters
• callback (func, default=None) – function to call after each iteration. The interface is callback(xk), with xk the current parameter vector.

• disp (bool, default=False) – if True, print convergence messages.

EvaluationMonitor: a monitor instance to capture each evaluation of cost.

StepMonitor: a monitor instance to capture each iteration’s best results.

penalty: a function of the form y' = penalty(xk), with y = cost(xk) + y', where xk is the current parameter vector.

constraints: a function of the form xk' = constraints(xk), where xk is the current parameter vector.

radius (float, default=0.05): percentage change for initial simplex values.

adaptive (bool, default=False): adapt algorithm parameters to the dimensionality of the initial parameter vector x.

Note

The additional args are ‘sticky’, in that once they are given, they remain set until they are explicitly changed. By contrast, args passed through the one-line functional interfaces are not sticky, and are thus set for a one-time use.

_setSimplexWithinRangeBoundary(radius=None)

ensure that initial simplex is set within bounds

Input::
• radius: size of the initial simplex [default=0.05]

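The role of radius can be sketched as follows (an assumed illustration, not mystic's exact code): perturb each coordinate of x0 by radius (a fractional change) to create dim extra vertices, then clip the vertices to any strict range bounds.

```python
# Hypothetical sketch: build a dim+1 vertex simplex around x0, with each
# extra vertex offset by ``radius`` (a fractional change) along one axis,
# clipped to optional lower/upper bounds.
def initial_simplex(x0, radius=0.05, lb=None, ub=None):
    """build dim+1 simplex vertices around x0, clipped to optional bounds"""
    simplex = [list(x0)]
    for i in range(len(x0)):
        vertex = list(x0)
        # a zero coordinate cannot be scaled, so offset it by radius instead
        vertex[i] = vertex[i] * (1 + radius) if vertex[i] else radius
        simplex.append(vertex)
    if lb is not None and ub is not None:
        simplex = [[min(max(x, l), u) for x, l, u in zip(v, lb, ub)]
                   for v in simplex]
    return simplex

print(initial_simplex([1.0, 2.0], radius=0.05))
```

The clipping step corresponds to keeping the initial simplex within the range boundary, as described above.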
class PowellDirectionalSolver(dim)

Powell Direction Search optimization, adapted from scipy.optimize.fmin_powell.

Takes one initial input:

dim – dimensionality of the problem

Finalize()

cleanup upon exiting the main optimization loop

Solve(cost=None, termination=None, ExtraArgs=None, **kwds)

Minimize a function using modified Powell’s method.

Uses a modified Powell Directional Search algorithm to find the minimum of a function of one or more variables.

Parameters
• cost (func, default=None) – the function to be minimized: y = cost(x).

• termination (termination, default=None) – termination conditions.

• ExtraArgs (tuple, default=None) – extra arguments for cost.

• sigint_callback (func, default=None) – callback function for signal handler.

• callback (func, default=None) – function to call after each iteration. The interface is callback(xk), with xk the current parameter vector.

• direc (tuple, default=None) – the initial direction set.

• xtol (float, default=1e-4) – line-search error tolerance.

• imax (float, default=500) – line-search maximum iterations.

• disp (bool, default=False) – if True, print convergence messages.

Returns

None

_SetEvaluationLimits(iterscale=1000, evalscale=1000)

set the evaluation limits

input::
• iterscale and evalscale are integers used to set the maximum iteration and evaluation limits, respectively. The new limit is defined as limit = (nDim * nPop * scale) + count, where count is the number of existing iterations or evaluations, respectively. The default for iterscale is 1000, and the default for evalscale is also 1000.

_Step(cost=None, ExtraArgs=None, **kwds)

perform a single optimization iteration

input::
• cost is the objective function, of the form y = cost(x, *ExtraArgs), where x is a candidate solution, and ExtraArgs is the tuple of positional arguments required to evaluate the objective.

note::

ExtraArgs needs to be a tuple of extra arguments.

This method accepts additional args that are specific for the current solver, as detailed in the _process_inputs method.

__annotations__ = {}
__generations()

get the number of iterations

__init__(dim)
Takes one initial input:

dim – dimensionality of the problem

__module__ = 'mystic.scipy_optimize'
_process_inputs(kwds)

process and activate input settings

Parameters
• callback (func, default=None) – function to call after each iteration. The interface is callback(xk), with xk the current parameter vector.

• disp (bool, default=False) – if True, print convergence messages.

EvaluationMonitor: a monitor instance to capture each evaluation of cost.

StepMonitor: a monitor instance to capture each iteration’s best results.

penalty: a function of the form y' = penalty(xk), with y = cost(xk) + y', where xk is the current parameter vector.

constraints: a function of the form xk' = constraints(xk), where xk is the current parameter vector.

direc (tuple, default=None): the initial direction set.

xtol (float, default=1e-4): line-search error tolerance.

imax (float, default=500): line-search maximum iterations.

Note

The additional args are ‘sticky’, in that once they are given, they remain set until they are explicitly changed. By contrast, args passed through the one-line functional interfaces are not sticky, and are thus set for a one-time use.

property generations

get the number of iterations

class SparsitySolver(dim, npts=8, rtol=None)

parallel mapped optimization starting from N points sampled from sparse regions

Takes three initial inputs:

dim – dimensionality of the problem
npts – number of parallel solver instances
rtol – size of radial tolerance for sparsity

All important class members are inherited from AbstractEnsembleSolver.

_InitialPoints()

Generate a grid of starting points for the ensemble of optimizers

__annotations__ = {}
__init__(dim, npts=8, rtol=None)
Takes three initial inputs:

dim – dimensionality of the problem
npts – number of parallel solver instances
rtol – size of radial tolerance for sparsity

All important class members are inherited from AbstractEnsembleSolver.

__module__ = 'mystic.ensemble'
buckshot(cost, ndim, npts=8, args=(), bounds=None, ftol=0.0001, maxiter=None, maxfun=None, full_output=0, disp=1, retall=0, callback=None, **kwds)

Minimize a function using the buckshot ensemble solver.

Uses a buckshot ensemble algorithm to find the minimum of a function of one or more variables. Mimics the scipy.optimize.fmin interface. Starts npts solver instances at random points in parameter space.

Parameters
• cost (func) – the function or method to be minimized: y = cost(x).

• ndim (int) – dimensionality of the problem.

• npts (int, default=8) – number of solver instances.

• args (tuple, default=()) – extra arguments for cost.

• bounds (list(tuple), default=None) – list of pairs of bounds (min,max), one for each parameter.

• ftol (float, default=1e-4) – acceptable relative error in cost(xopt) for convergence.

• gtol (float, default=10) – maximum iterations to run without improvement.

• maxiter (int, default=None) – the maximum number of iterations to perform.

• maxfun (int, default=None) – the maximum number of function evaluations.

• full_output (bool, default=False) – True if fval and warnflag are desired.

• disp (bool, default=True) – if True, print convergence messages.

• retall (bool, default=False) – True if allvecs is desired.

• callback (func, default=None) – function to call after each iteration. The interface is callback(xk), with xk the current parameter vector.

• solver (solver, default=None) – override the default nested Solver instance.

• handler (bool, default=False) – if True, enable handling interrupt signals.

• itermon (monitor, default=None) – override the default GenerationMonitor.

• evalmon (monitor, default=None) – override the default EvaluationMonitor.

• constraints (func, default=None) – a function xk' = constraints(xk), where xk is the current parameter vector, and xk’ is a parameter vector that satisfies the encoded constraints.

• penalty (func, default=None) – a function y = penalty(xk), where xk is the current parameter vector, and y' == 0 when the encoded constraints are satisfied (and y' > 0 otherwise).

• tight (bool, default=False) – enforce bounds and constraints concurrently.

• map (func, default=None) – a (parallel) map function y = map(f, x).

• dist (mystic.math.Distribution, default=None) – generate randomness in ensemble starting position using the given distribution.

• step (bool, default=False) – if True, enable Step within the ensemble.

Returns

(xopt, {fopt, iter, funcalls, warnflag, allfuncalls}, {allvecs})

Notes

• xopt (ndarray): the minimizer of the cost function

• fopt (float): value of cost function at minimum: fopt = cost(xopt)

• iter (int): number of iterations

• funcalls (int): number of function calls

• warnflag (int): warning flag:
• 1 : Maximum number of function evaluations

• 2 : Maximum number of iterations

• allfuncalls (int): total function calls (for all solver instances)

• allvecs (list): a list of solutions at each iteration

diffev(cost, x0, npop=4, args=(), bounds=None, ftol=0.005, gtol=None, maxiter=None, maxfun=None, cross=0.9, scale=0.8, full_output=0, disp=1, retall=0, callback=None, **kwds)

Minimize a function using differential evolution.

Uses a differential evolution algorithm to find the minimum of a function of one or more variables. Mimics a scipy.optimize style interface.

Parameters
• cost (func) – the function or method to be minimized: y = cost(x).

• x0 (ndarray) – the initial guess parameter vector x if the desired start is a single point; otherwise, a list of (min,max) bounds that define a region from which random initial points are drawn.

• npop (int, default=4) – size of the trial solution population.

• args (tuple, default=()) – extra arguments for cost.

• bounds (list(tuple), default=None) – list of pairs of bounds (min,max), one for each parameter.

• ftol (float, default=5e-3) – acceptable relative error in cost(xopt) for convergence.

• gtol (float, default=None) – maximum iterations to run without improvement.

• maxiter (int, default=None) – the maximum number of iterations to perform.

• maxfun (int, default=None) – the maximum number of function evaluations.

• cross (float, default=0.9) – the probability of cross-parameter mutations.

• scale (float, default=0.8) – multiplier for mutations on the trial solution.

• full_output (bool, default=False) – True if fval and warnflag are desired.

• disp (bool, default=True) – if True, print convergence messages.

• retall (bool, default=False) – True if allvecs is desired.

• callback (func, default=None) – function to call after each iteration. The interface is callback(xk), with xk the current parameter vector.

• handler (bool, default=False) – if True, enable handling interrupt signals.

• strategy (strategy, default=None) – override the default mutation strategy.

• itermon (monitor, default=None) – override the default GenerationMonitor.

• evalmon (monitor, default=None) – override the default EvaluationMonitor.

• constraints (func, default=None) – a function xk' = constraints(xk), where xk is the current parameter vector and xk' is a parameter vector that satisfies the encoded constraints.

• penalty (func, default=None) – a function y = penalty(xk), where xk is the current parameter vector and y == 0 when the encoded constraints are satisfied (and y > 0 otherwise).

• tight (bool, default=False) – enforce bounds and constraints concurrently.

• map (func, default=None) – a (parallel) map function y = map(f, x).

Returns

(xopt, {fopt, iter, funcalls, warnflag}, {allvecs})

Notes

• xopt (ndarray): the minimizer of the cost function

• fopt (float): value of cost function at minimum: fopt = cost(xopt)

• iter (int): number of iterations

• funcalls (int): number of function calls

• warnflag (int): warning flag:
• 1 : Maximum number of function evaluations

• 2 : Maximum number of iterations

• allvecs (list): a list of solutions at each iteration
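
The population update that diffev performs can be sketched in plain Python. The helper below, `de_sketch`, is a hypothetical toy (Rand1Bin-style mutation, binomial crossover governed by cross, mutation step governed by scale, greedy selection) written for illustration only; it is not mystic's implementation, and mystic's internals may differ.

```python
import random

def de_sketch(cost, bounds, npop=20, cross=0.9, scale=0.8, maxiter=200, seed=0):
    """Toy differential evolution (Rand1Bin-style mutation, binomial
    crossover, greedy selection). Illustrative only."""
    rng = random.Random(seed)
    ndim = len(bounds)
    # initial population drawn at random from the (min, max) bounds
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(npop)]
    fvals = [cost(x) for x in pop]
    for _ in range(maxiter):
        for i in range(npop):
            # three distinct candidates, all different from member i
            r1, r2, r3 = rng.sample([j for j in range(npop) if j != i], 3)
            jrand = rng.randrange(ndim)  # guarantee at least one mutation
            trial = []
            for j in range(ndim):
                if j == jrand or rng.random() < cross:
                    # scaled difference of two candidates added to a third
                    v = pop[r1][j] + scale * (pop[r2][j] - pop[r3][j])
                else:
                    v = pop[i][j]
                lo, hi = bounds[j]
                trial.append(min(max(v, lo), hi))  # keep within bounds
            ftrial = cost(trial)
            if ftrial <= fvals[i]:  # greedy selection
                pop[i], fvals[i] = trial, ftrial
    best = min(range(npop), key=fvals.__getitem__)
    return pop[best], fvals[best]
```

For example, minimizing a simple sphere function with the default npop, cross, and scale drives the best member toward the origin within a few hundred generations.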

diffev2(cost, x0, npop=4, args=(), bounds=None, ftol=0.005, gtol=None, maxiter=None, maxfun=None, cross=0.9, scale=0.8, full_output=0, disp=1, retall=0, callback=None, **kwds)

Minimize a function using Storn & Price’s differential evolution.

Uses Storn & Price’s differential evolution algorithm to find the minimum of a function of one or more variables. Mimics a scipy.optimize style interface.

Parameters
• cost (func) – the function or method to be minimized: y = cost(x).

• x0 (ndarray) – the initial guess parameter vector x if the desired start is a single point; otherwise, a list of (min,max) bounds that define a region from which random initial points are drawn.

• npop (int, default=4) – size of the trial solution population.

• args (tuple, default=()) – extra arguments for cost.

• bounds (list(tuple), default=None) – list of pairs of bounds (min,max), one for each parameter.

• ftol (float, default=5e-3) – acceptable relative error in cost(xopt) for convergence.

• gtol (float, default=None) – maximum iterations to run without improvement.

• maxiter (int, default=None) – the maximum number of iterations to perform.

• maxfun (int, default=None) – the maximum number of function evaluations.

• cross (float, default=0.9) – the probability of cross-parameter mutations.

• scale (float, default=0.8) – multiplier for mutations on the trial solution.

• full_output (bool, default=False) – True if fval and warnflag are desired.

• disp (bool, default=True) – if True, print convergence messages.

• retall (bool, default=False) – True if allvecs is desired.

• callback (func, default=None) – function to call after each iteration. The interface is callback(xk), with xk the current parameter vector.

• handler (bool, default=False) – if True, enable handling interrupt signals.

• strategy (strategy, default=None) – override the default mutation strategy.

• itermon (monitor, default=None) – override the default GenerationMonitor.

• evalmon (monitor, default=None) – override the default EvaluationMonitor.

• constraints (func, default=None) – a function xk' = constraints(xk), where xk is the current parameter vector and xk' is a parameter vector that satisfies the encoded constraints.

• penalty (func, default=None) – a function y = penalty(xk), where xk is the current parameter vector and y == 0 when the encoded constraints are satisfied (and y > 0 otherwise).

• tight (bool, default=False) – enforce bounds and constraints concurrently.

• map (func, default=None) – a (parallel) map function y = map(f, x).

Returns

(xopt, {fopt, iter, funcalls, warnflag}, {allvecs})

Notes

• xopt (ndarray): the minimizer of the cost function

• fopt (float): value of cost function at minimum: fopt = cost(xopt)

• iter (int): number of iterations

• funcalls (int): number of function calls

• warnflag (int): warning flag:
• 1 : Maximum number of function evaluations

• 2 : Maximum number of iterations

• allvecs (list): a list of solutions at each iteration

fmin(cost, x0, args=(), bounds=None, xtol=0.0001, ftol=0.0001, maxiter=None, maxfun=None, full_output=0, disp=1, retall=0, callback=None, **kwds)

Minimize a function using the downhill simplex algorithm.

Uses a Nelder-Mead simplex algorithm to find the minimum of a function of one or more variables. This algorithm only uses function values, not derivatives or second derivatives. Mimics the scipy.optimize.fmin interface.

This algorithm has a long history of successful use in applications, but it will usually be slower than an algorithm that uses first or second derivative information. In practice, it can have poor performance on high-dimensional problems and is not robust when minimizing complicated functions. Additionally, there is currently no complete theory describing when the algorithm will successfully converge to the minimum, or how fast it will converge if it does. Both the ftol and xtol criteria must be met for convergence.

Parameters
• cost (func) – the function or method to be minimized: y = cost(x).

• x0 (ndarray) – the initial guess parameter vector x.

• args (tuple, default=()) – extra arguments for cost.

• bounds (list(tuple), default=None) – list of pairs of bounds (min,max), one for each parameter.

• xtol (float, default=1e-4) – acceptable absolute error in xopt for convergence.

• ftol (float, default=1e-4) – acceptable absolute error in cost(xopt) for convergence.

• maxiter (int, default=None) – the maximum number of iterations to perform.

• maxfun (int, default=None) – the maximum number of function evaluations.

• full_output (bool, default=False) – True if fval and warnflag are desired.

• disp (bool, default=True) – if True, print convergence messages.

• retall (bool, default=False) – True if allvecs is desired.

• callback (func, default=None) – function to call after each iteration. The interface is callback(xk), with xk the current parameter vector.

• handler (bool, default=False) – if True, enable handling interrupt signals.

• itermon (monitor, default=None) – override the default GenerationMonitor.

• evalmon (monitor, default=None) – override the default EvaluationMonitor.

• constraints (func, default=None) – a function xk' = constraints(xk), where xk is the current parameter vector and xk' is a parameter vector that satisfies the encoded constraints.

• penalty (func, default=None) – a function y = penalty(xk), where xk is the current parameter vector and y == 0 when the encoded constraints are satisfied (and y > 0 otherwise).

• tight (bool, default=False) – enforce bounds and constraints concurrently.

Returns

(xopt, {fopt, iter, funcalls, warnflag}, {allvecs})

Notes

• xopt (ndarray): the minimizer of the cost function

• fopt (float): value of cost function at minimum: fopt = cost(xopt)

• iter (int): number of iterations

• funcalls (int): number of function calls

• warnflag (int): warning flag:
• 1 : Maximum number of function evaluations

• 2 : Maximum number of iterations

• allvecs (list): a list of solutions at each iteration
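
The downhill simplex moves described above can be sketched in plain Python. The helper below, `nelder_mead_sketch`, is a hypothetical simplified toy (reflection, expansion, inside contraction, shrink), not mystic's fmin; note how the loop requires both the xtol and ftol criteria to be met before declaring convergence, as the text above states.

```python
def nelder_mead_sketch(cost, x0, xtol=1e-4, ftol=1e-4, maxiter=500):
    """Toy downhill simplex: reflect, expand, (inside) contract, shrink.
    Illustrative only."""
    n = len(x0)
    # initial simplex: x0 plus a small step along each coordinate axis
    sim = [list(x0)]
    for i in range(n):
        y = list(x0)
        y[i] += 0.05 if y[i] == 0 else 0.05 * y[i]
        sim.append(y)
    f = [cost(x) for x in sim]
    for _ in range(maxiter):
        order = sorted(range(n + 1), key=f.__getitem__)
        sim = [sim[k] for k in order]
        f = [f[k] for k in order]
        # both the xtol and ftol criteria must be met for convergence
        xspread = max(abs(sim[k][j] - sim[0][j])
                      for k in range(1, n + 1) for j in range(n))
        if xspread <= xtol and f[-1] - f[0] <= ftol:
            break
        centroid = [sum(sim[k][j] for k in range(n)) / n for j in range(n)]
        worst = sim[-1]
        reflect = [2 * c - w for c, w in zip(centroid, worst)]
        fr = cost(reflect)
        if fr < f[0]:  # reflection is the new best: try expanding further
            expand = [3 * c - 2 * w for c, w in zip(centroid, worst)]
            fe = cost(expand)
            sim[-1], f[-1] = (expand, fe) if fe < fr else (reflect, fr)
        elif fr < f[-2]:  # better than second-worst: accept the reflection
            sim[-1], f[-1] = reflect, fr
        else:  # contract toward the centroid; if that fails, shrink
            contract = [0.5 * (c + w) for c, w in zip(centroid, worst)]
            fc = cost(contract)
            if fc < f[-1]:
                sim[-1], f[-1] = contract, fc
            else:
                sim = [sim[0]] + [[0.5 * (v + b) for v, b in zip(sim[k], sim[0])]
                                  for k in range(1, n + 1)]
                f = [f[0]] + [cost(x) for x in sim[1:]]
    best = min(range(n + 1), key=f.__getitem__)
    return sim[best], f[best]
```

Because only function values are used, the sketch works on any callable cost, derivative-free, just as the description above notes.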

fmin_powell(cost, x0, args=(), bounds=None, xtol=0.0001, ftol=0.0001, maxiter=None, maxfun=None, full_output=0, disp=1, retall=0, callback=None, direc=None, **kwds)

Minimize a function using modified Powell’s method.

Uses a modified Powell Directional Search algorithm to find the minimum of a function of one or more variables. This method only uses function values, not derivatives. Mimics the scipy.optimize.fmin_powell interface.

Powell’s method is a conjugate direction method that has two loops. The outer loop simply iterates over the inner loop, while the inner loop minimizes over each current direction in the direction set. At the end of the inner loop, if certain conditions are met, the direction that gave the largest decrease is dropped and replaced with the difference between the current estimated x and the estimated x from the beginning of the inner loop. The conditions for replacing this direction are that: (a) no further gain can be made along the direction of greatest decrease in the iteration, and (b) the direction of greatest decrease accounted for a sufficiently large fraction of the decrease in the function value from the current iteration of the inner loop.

Parameters
• cost (func) – the function or method to be minimized: y = cost(x).

• x0 (ndarray) – the initial guess parameter vector x.

• args (tuple, default=()) – extra arguments for cost.

• bounds (list(tuple), default=None) – list of pairs of bounds (min,max), one for each parameter.

• xtol (float, default=1e-4) – acceptable relative error in xopt for convergence.

• ftol (float, default=1e-4) – acceptable relative error in cost(xopt) for convergence.

• gtol (float, default=2) – maximum iterations to run without improvement.

• maxiter (int, default=None) – the maximum number of iterations to perform.

• maxfun (int, default=None) – the maximum number of function evaluations.

• full_output (bool, default=False) – True if fval and warnflag are desired.

• disp (bool, default=True) – if True, print convergence messages.

• retall (bool, default=False) – True if allvecs is desired.

• callback (func, default=None) – function to call after each iteration. The interface is callback(xk), with xk the current parameter vector.

• direc (tuple, default=None) – the initial direction set.

• handler (bool, default=False) – if True, enable handling interrupt signals.

• itermon (monitor, default=None) – override the default GenerationMonitor.

• evalmon (monitor, default=None) – override the default EvaluationMonitor.

• constraints (func, default=None) – a function xk' = constraints(xk), where xk is the current parameter vector and xk' is a parameter vector that satisfies the encoded constraints.

• penalty (func, default=None) – a function y = penalty(xk), where xk is the current parameter vector and y == 0 when the encoded constraints are satisfied (and y > 0 otherwise).

• tight (bool, default=False) – enforce bounds and constraints concurrently.

Returns

(xopt, {fopt, iter, funcalls, warnflag, direc}, {allvecs})

Notes

• xopt (ndarray): the minimizer of the cost function

• fopt (float): value of cost function at minimum: fopt = cost(xopt)

• iter (int): number of iterations

• funcalls (int): number of function calls

• warnflag (int): warning flag:
• 1 : Maximum number of function evaluations

• 2 : Maximum number of iterations

• direc (tuple): the current direction set

• allvecs (list): a list of solutions at each iteration
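
The two-loop structure described above can be sketched in plain Python. The helpers below, `line_min` and `powell_sketch`, are hypothetical toys for illustration, not mystic's fmin_powell: the direction-replacement conditions are simplified so the direction of largest decrease is always replaced by the net inner-loop step.

```python
def line_min(cost, x, d, lo=-10.0, hi=10.0, tol=1e-6):
    """Golden-section search for the step t minimizing cost(x + t*d)."""
    phi = (5 ** 0.5 - 1) / 2
    f = lambda t: cost([xi + t * di for xi, di in zip(x, d)])
    a, b = lo, hi
    c, e = b - phi * (b - a), a + phi * (b - a)
    fc, fe = f(c), f(e)
    while b - a > tol:
        if fc < fe:
            b, e, fe = e, c, fc
            c = b - phi * (b - a)
            fc = f(c)
        else:
            a, c, fc = c, e, fe
            e = a + phi * (b - a)
            fe = f(e)
    t = (a + b) / 2
    return [xi + t * di for xi, di in zip(x, d)]

def powell_sketch(cost, x0, maxiter=50, ftol=1e-8):
    """Toy Powell direction-set method: the inner loop line-minimizes along
    each direction; the direction of largest decrease is then replaced by the
    overall inner-loop step. Illustrative only."""
    n = len(x0)
    dirs = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    x, fx = list(x0), cost(x0)
    for _ in range(maxiter):  # outer loop
        x_start, f_start = list(x), fx
        big_k, big_drop = 0, 0.0
        for k in range(n):  # inner loop over the direction set
            f_before = fx
            x = line_min(cost, x, dirs[k])
            fx = cost(x)
            if f_before - fx > big_drop:
                big_k, big_drop = k, f_before - fx
        # drop the direction of largest decrease; add the net step x - x_start
        new_dir = [xi - si for xi, si in zip(x, x_start)]
        if any(abs(v) > 1e-12 for v in new_dir):
            dirs[big_k] = new_dir
            x = line_min(cost, x, new_dir)
            fx = cost(x)
        if f_start - fx <= ftol:
            break
    return x, fx
```

On a quadratic with coupled terms, the replacement step lets the sketch reach the minimizer in very few outer iterations, which is the point of maintaining a conjugate direction set.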

lattice(cost, ndim, nbins=8, args=(), bounds=None, ftol=0.0001, maxiter=None, maxfun=None, full_output=0, disp=1, retall=0, callback=None, **kwds)

Minimize a function using the lattice ensemble solver.

Uses a lattice ensemble algorithm to find the minimum of a function of one or more variables. Mimics the scipy.optimize.fmin interface. Starts N solver instances at regular intervals in parameter space, determined by nbins (N = numpy.prod(nbins); len(nbins) == ndim).

Parameters
• cost (func) – the function or method to be minimized: y = cost(x).

• ndim (int) – dimensionality of the problem.

• nbins (tuple(int), default=8) – total bins, or # of bins in each dimension.

• args (tuple, default=()) – extra arguments for cost.

• bounds (list(tuple), default=None) – list of pairs of bounds (min,max), one for each parameter.

• ftol (float, default=1e-4) – acceptable relative error in cost(xopt) for convergence.

• gtol (float, default=10) – maximum iterations to run without improvement.

• maxiter (int, default=None) – the maximum number of iterations to perform.

• maxfun (int, default=None) – the maximum number of function evaluations.

• full_output (bool, default=False) – True if fval and warnflag are desired.

• disp (bool, default=True) – if True, print convergence messages.

• retall (bool, default=False) – True if allvecs is desired.

• callback (func, default=None) – function to call after each iteration. The interface is callback(xk), with xk the current parameter vector.

• solver (solver, default=None) – override the default nested Solver instance.

• handler (bool, default=False) – if True, enable handling interrupt signals.

• itermon (monitor, default=None) – override the default GenerationMonitor.

• evalmon (monitor, default=None) – override the default EvaluationMonitor.

• constraints (func, default=None) – a function xk' = constraints(xk), where xk is the current parameter vector and xk' is a parameter vector that satisfies the encoded constraints.

• penalty (func, default=None) – a function y = penalty(xk), where xk is the current parameter vector and y == 0 when the encoded constraints are satisfied (and y > 0 otherwise).

• tight (bool, default=False) – enforce bounds and constraints concurrently.

• map (func, default=None) – a (parallel) map function y = map(f, x).

• dist (mystic.math.Distribution, default=None) – generate randomness in ensemble starting position using the given distribution.

• step (bool, default=False) – if True, enable Step within the ensemble.

Returns

(xopt, {fopt, iter, funcalls, warnflag, allfuncalls}, {allvecs})

Notes

• xopt (ndarray): the minimizer of the cost function

• fopt (float): value of cost function at minimum: fopt = cost(xopt)

• iter (int): number of iterations

• funcalls (int): number of function calls

• warnflag (int): warning flag:
• 1 : Maximum number of function evaluations

• 2 : Maximum number of iterations

• allfuncalls (int): total function calls (for all solver instances)

• allvecs (list): a list of solutions at each iteration
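
The "regular intervals determined by nbins" placement can be sketched in plain Python. The helper below, `lattice_starts`, is hypothetical, and placing each start at the center of its bin is an assumption for illustration; mystic's exact spacing may differ. It does show the documented relationships N = numpy.prod(nbins) and len(nbins) == ndim.

```python
from itertools import product

def lattice_starts(bounds, nbins):
    """Regular-grid start points: one per bin, at the bin center (an assumed
    placement). len(nbins) == len(bounds); returns prod(nbins) points."""
    axes = []
    for (lo, hi), n in zip(bounds, nbins):
        width = (hi - lo) / n
        # n evenly spaced bin centers along this dimension
        axes.append([lo + width * (i + 0.5) for i in range(n)])
    return [list(p) for p in product(*axes)]
```

For example, bounds of (0, 2) in each of two dimensions with nbins = [2, 2] yields the four bin centers (0.5, 0.5), (0.5, 1.5), (1.5, 0.5), and (1.5, 1.5).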

sparsity(cost, ndim, npts=8, args=(), bounds=None, ftol=0.0001, maxiter=None, maxfun=None, full_output=0, disp=1, retall=0, callback=None, **kwds)

Minimize a function using the sparsity ensemble solver.

Uses a sparsity ensemble algorithm to find the minimum of a function of one or more variables. Mimics the scipy.optimize.fmin interface. Starts npts solver instances at points in parameter space where existing points are sparse.

Parameters
• cost (func) – the function or method to be minimized: y = cost(x).

• ndim (int) – dimensionality of the problem.

• npts (int, default=8) – number of solver instances.

• args (tuple, default=()) – extra arguments for cost.

• bounds (list(tuple), default=None) – list of pairs of bounds (min,max), one for each parameter.

• ftol (float, default=1e-4) – acceptable relative error in cost(xopt) for convergence.

• gtol (float, default=10) – maximum iterations to run without improvement.

• rtol (float, default=None) – minimum acceptable distance from other points.

• maxiter (int, default=None) – the maximum number of iterations to perform.

• maxfun (int, default=None) – the maximum number of function evaluations.

• full_output (bool, default=False) – True if fval and warnflag are desired.

• disp (bool, default=True) – if True, print convergence messages.

• retall (bool, default=False) – True if allvecs is desired.

• callback (func, default=None) – function to call after each iteration. The interface is callback(xk), with xk the current parameter vector.

• solver (solver, default=None) – override the default nested Solver instance.

• handler (bool, default=False) – if True, enable handling interrupt signals.

• itermon (monitor, default=None) – override the default GenerationMonitor.

• evalmon (monitor, default=None) – override the default EvaluationMonitor.

• constraints (func, default=None) – a function xk' = constraints(xk), where xk is the current parameter vector and xk' is a parameter vector that satisfies the encoded constraints.

• penalty (func, default=None) – a function y = penalty(xk), where xk is the current parameter vector and y == 0 when the encoded constraints are satisfied (and y > 0 otherwise).

• tight (bool, default=False) – enforce bounds and constraints concurrently.

• map (func, default=None) – a (parallel) map function y = map(f, x).

• dist (mystic.math.Distribution, default=None) – generate randomness in ensemble starting position using the given distribution.

• step (bool, default=False) – if True, enable Step within the ensemble.

Returns

(xopt, {fopt, iter, funcalls, warnflag, allfuncalls}, {allvecs})

Notes

• xopt (ndarray): the minimizer of the cost function

• fopt (float): value of cost function at minimum: fopt = cost(xopt)

• iter (int): number of iterations

• funcalls (int): number of function calls

• warnflag (int): warning flag:
• 1 : Maximum number of function evaluations

• 2 : Maximum number of iterations

• allfuncalls (int): total function calls (for all solver instances)

• allvecs (list): a list of solutions at each iteration
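
The idea of starting solvers "where existing points are sparse" can be sketched with greedy farthest-point sampling. The helper below, `sparse_starts`, is hypothetical and illustrates only the concept, not mystic's implementation: each new start maximizes (over random candidates) the minimum squared distance to all points placed so far.

```python
import random

def sparse_starts(existing, bounds, npts, ncand=200, seed=0):
    """Greedy farthest-point sampling: choose npts starts in the bounded
    region where the existing points are sparse. Illustrative only."""
    rng = random.Random(seed)
    chosen = [list(p) for p in existing]
    starts = []
    for _ in range(npts):
        best, best_d = None, -1.0
        for _ in range(ncand):  # random candidates from the bounded region
            c = [rng.uniform(lo, hi) for lo, hi in bounds]
            # minimum squared distance from this candidate to all chosen points
            d = (min(sum((a - b) ** 2 for a, b in zip(c, p)) for p in chosen)
                 if chosen else float("inf"))
            if d > best_d:
                best, best_d = c, d
        chosen.append(best)
        starts.append(best)
    return starts
```

A minimum-distance threshold like the rtol parameter above could be layered on by rejecting candidates whose best_d falls below it.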

strategy module¶

Differential Evolution Strategies

These strategies are to be passed into DifferentialEvolutionSolver’s Solve method, and determine how the candidate parameter values mutate across a population.
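
The Bin/Exp suffixes on the strategies below reflect the two standard DE crossover schemes: binomial ("mutates at random") and exponential ("mutates until random stop"). The hypothetical helpers below, `bin_mask` and `exp_mask`, sketch which parameters each scheme selects for mutation; the details of mystic's own implementation may differ.

```python
import random

def bin_mask(ndim, cross, rng):
    """Binomial ('Bin') crossover: each parameter mutates independently
    with probability `cross` -- i.e. 'mutates at random'."""
    return [rng.random() < cross for _ in range(ndim)]

def exp_mask(ndim, cross, rng):
    """Exponential ('Exp') crossover: starting at a random parameter, mutate
    consecutive parameters (wrapping around) until a random stop."""
    mask = [False] * ndim
    i = rng.randrange(ndim)
    while True:
        mask[i] = True
        i = (i + 1) % ndim
        if rng.random() >= cross or all(mask):
            break
    return mask
```

An exponential mask is always one contiguous (circular) run of parameters, whereas a binomial mask can be any subset; this is the only difference between each Bin/Exp pair below, whose trial formulas are otherwise identical.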

Best1Bin(inst, candidate)

trial solution is current best solution plus scaled difference of two randomly chosen candidates; mutates at random

trial = best + scale*(candidate1 - candidate2)

Best1Exp(inst, candidate)

trial solution is current best solution plus scaled difference of two randomly chosen candidates; mutates until random stop

trial = best + scale*(candidate1 - candidate2)

Best2Bin(inst, candidate)

trial solution is current best solution plus scaled contributions of four randomly chosen candidates; mutates at random

trial = best + scale*(candidate1 + candidate2 - candidate3 - candidate4)

Best2Exp(inst, candidate)

trial solution is current best solution plus scaled contributions from four randomly chosen candidates; mutates until random stop

trial = best + scale*(candidate1 + candidate2 - candidate3 - candidate4)

Rand1Bin(inst, candidate)

trial solution is randomly chosen candidate plus scaled difference of two other randomly chosen candidates; mutates at random

trial = candidate1 + scale*(candidate2 - candidate3)

Rand1Exp(inst, candidate)

trial solution is randomly chosen candidate plus scaled difference of two other randomly chosen candidates; mutates until random stop

trial = candidate1 + scale*(candidate2 - candidate3)

Rand2Bin(inst, candidate)

trial solution is randomly chosen candidate plus scaled contributions of four other randomly chosen candidates; mutates at random

trial = candidate1 + scale*(candidate2 + candidate3 - candidate4 - candidate5)

Rand2Exp(inst, candidate)

trial solution is randomly chosen candidate plus scaled contributions from four other randomly chosen candidates; mutates until random stop

trial = candidate1 + scale*(candidate2 + candidate3 - candidate4 - candidate5)

RandToBest1Bin(inst, candidate)

trial solution is itself plus scaled difference of best solution and tri