autofit.DynestyDynamic#

class autofit.DynestyDynamic(name: Optional[str] = None, path_prefix: Optional[str] = None, unique_tag: Optional[str] = None, prior_passer=None, iterations_per_update: Optional[int] = None, number_of_cores: Optional[int] = None, **kwargs)[source]#
__init__(name: Optional[str] = None, path_prefix: Optional[str] = None, unique_tag: Optional[str] = None, prior_passer=None, iterations_per_update: Optional[int] = None, number_of_cores: Optional[int] = None, **kwargs)[source]#

A Dynesty non-linear search, using a dynamically changing number of live points.

For a full description of Dynesty, check out its GitHub and readthedocs pages:

https://github.com/joshspeagle/dynesty
https://dynesty.readthedocs.io/en/latest/index.html

Parameters
  • name – The name of the search, which forms the last folder in the path where results are output.

  • path_prefix – The path of folders prefixing the name folder where results are output.

  • unique_tag – The name of a unique tag for this model-fit, which is given a unique entry in the sqlite database and acts as the folder between the path prefix and the search name.

  • prior_passer – Controls how priors are passed from the results of this NonLinearSearch to a subsequent non-linear search.

  • iterations_per_update – The number of iterations performed between every Dynesty back-up (via dumping the Dynesty instance as a pickle).

  • number_of_cores – The number of cores over which Dynesty sampling is performed, using a Python multiprocessing Pool instance. If 1, a pool instance is not created and the job runs in serial.

  • session – An SQLalchemy session instance so the results of the model-fit are written to an SQLite database.
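The path_prefix, unique_tag and name parameters together determine where results are written. A minimal stdlib sketch of the folder layout described above (the function name is hypothetical, not part of autofit's API, and the real path logic may differ):

```python
from pathlib import Path

def output_path(path_prefix, unique_tag, name):
    # Illustrative only: the path prefix folders, then the unique tag,
    # then the search name as the final folder.
    return Path(path_prefix) / unique_tag / name

print(output_path("output/example", "dataset_1", "dynesty_search"))
# → output/example/dataset_1/dynesty_search (on POSIX systems)
```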

Methods

__init__([name, path_prefix, unique_tag, ...])

A Dynesty non-linear search, using a dynamically changing number of live points.

check_model(model)

check_pool(uses_pool, pool)

config_dict_with_test_mode_settings_from(...)

copy_with_paths(paths)

exact_fit(factor_approx[, status])

fit(model, analysis[, info, pickle_files, ...])

Fit a model, M, with some function f that takes instances of the class represented by model M and returns a score for their fitness.

fitness_function_from_model_and_analysis(...)

iterations_from(sampler)

Returns the next number of iterations that a dynesty call will use and the total number of iterations that have been performed so far.
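A sketch of the bookkeeping this summary describes, assuming the next dynesty call is capped by iterations_per_update and by the iterations remaining; the function below is a hypothetical stand-in, not autofit's implementation:

```python
def iterations_from(total_so_far, iterations_per_update, max_iterations):
    # Size of the next dynesty call: one update's worth of iterations,
    # or whatever remains if that is smaller.
    remaining = max_iterations - total_so_far
    next_iterations = min(iterations_per_update, remaining)
    # Return the next chunk and the running total after it completes.
    return next_iterations, total_so_far + next_iterations

print(iterations_from(9000, 5000, 12000))  # → (3000, 12000)
```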

live_points_init_from(model, fitness_function)

By default, dynesty live points are generated via the sampler's in-built initialization.

make_pool()

Make the pool instance used to parallelize a NonLinearSearch alongside a set of unique ids for every process in the pool.

make_sneaky_pool(fitness_function)

Create a pool for multiprocessing that uses sleight of hand to avoid copying the fitness function between processes multiple times.

optimise(factor_approx[, status])

Perform optimisation for expectation propagation.

perform_update(model, analysis, during_analysis)

Perform an update of the NonLinearSearch results, which occurs every iterations_per_update of the non-linear search.
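A sketch of this update cycle, assuming a back-up is taken every iterations_per_update iterations by pickling the sampler (the Sampler class and loop below are hypothetical stand-ins, not autofit's code):

```python
import pickle

class Sampler:
    # Minimal stand-in for a dynesty sampler: only tracks iterations.
    def __init__(self):
        self.iterations = 0

    def run(self, n):
        self.iterations += n

def run_with_updates(sampler, max_iterations, iterations_per_update):
    checkpoints = []
    while sampler.iterations < max_iterations:
        n = min(iterations_per_update, max_iterations - sampler.iterations)
        sampler.run(n)
        # Back up the sampler state by dumping it as a pickle.
        checkpoints.append(pickle.dumps(sampler))
    return checkpoints

chks = run_with_updates(Sampler(), 12000, 5000)
print(len(chks))  # → 3 (updates at 5000, 10000 and 12000 iterations)
```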

plot_results(samples)

read_uses_pool()

If a Dynesty fit does not use a parallel pool, and is then resumed using one, this causes significant slow down.

remove_state_files()

run_sampler(sampler)

Run the Dynesty sampler, which could be either the static or dynamic sampler.

sampler_from(model, fitness_function, ...)

Returns an instance of the Dynesty dynamic sampler set up using the input variables of this class.

samples_from(model)

Create a Samples object from this non-linear search's output files on the hard-disk and model.

write_uses_pool(uses_pool)

If a Dynesty fit does not use a parallel pool, and is then resumed using one, this causes significant slow down.

Attributes

checkpoint_file

The path to the file used by dynesty for checkpointing.

config_dict_run

config_dict_search

config_dict_settings

config_type

logger

Log 'msg % args' with severity 'DEBUG'.

name

paths

samples_cls

timer

total_live_points