API Reference

This section details the classes and functions available in the parismc package.

Core Classes

class parismc.Sampler(ndim: int, n_seed: int, log_density_func: Callable[[numpy.ndarray], numpy.ndarray], init_cov_list: List[numpy.ndarray], prior_transform: Callable | None = None, config: SamplerConfig | None = None)[source]

Bases: object

Advanced Monte Carlo sampler with adaptive covariance and clustering.

This sampler implements an importance sampling algorithm with:
  • Adaptive proposal covariance matrices
  • Automatic cluster merging
  • Boundary-aware sampling
  • Optional multiprocessing support

Parameters:
  • ndim (int) – Dimensionality of the parameter space

  • n_seed (int) – Number of initial seed points (processes)

  • log_density_func (callable) – Function that computes log densities for sample points

  • init_cov_list (list of array-like) – Initial covariance matrices for each process

  • prior_transform (callable, optional) – Function to transform from unit cube to parameter space

  • config (SamplerConfig, optional) – Configuration object with sampling parameters

Examples

>>> def log_density(x):
...     return -0.5 * np.sum(x**2, axis=1)
>>> sampler = Sampler(ndim=2, n_seed=3, log_density_func=log_density,
...                   init_cov_list=[np.eye(2)] * 3)
>>> sampler.prepare_lhs_samples(1000, 100)
>>> sampler.run_sampling(500, './results')
apply_prior_transform(points: numpy.ndarray, prior_transform: Callable | None) numpy.ndarray[source]

Apply prior transformation to points in unit hypercube [0,1]^ndim
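The prior_transform callable passed to the Sampler is expected to map points from the unit hypercube into the physical parameter space. A minimal sketch of such a transform for a uniform box prior (the bounds below are illustrative, not part of parismc):

```python
import numpy as np

# Hypothetical bounds: map the unit square [0, 1]^2 onto [-5, 5] x [0, 10].
lower = np.array([-5.0, 0.0])
upper = np.array([5.0, 10.0])

def prior_transform(u):
    """Map points u in [0, 1]^ndim (shape (n, ndim)) to the parameter box."""
    return lower + u * (upper - lower)

u = np.array([[0.0, 0.0], [0.5, 0.5], [1.0, 1.0]])
print(prior_transform(u))
```

A transform like this would be passed as the prior_transform argument when constructing the Sampler.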

get_samples_with_weights(flatten: bool = False) Tuple[List[numpy.ndarray], List[numpy.ndarray]] | Tuple[numpy.ndarray, numpy.ndarray][source]

Get samples and their weights in the parameter space.

Parameters:
  • flatten (bool, optional) – If True, return concatenated arrays of all samples and weights; if False, return lists of per-process arrays. Default is False.

Returns:
  • If flatten=False – tuple (transformed_samples_list, weights_list), where each is a list of per-process arrays.

  • If flatten=True – tuple (all_samples, all_weights), where each is a single concatenated array.
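The flatten=True output amounts to concatenating the per-process arrays along the sample axis. A small numpy sketch with hypothetical per-process outputs:

```python
import numpy as np

# Illustrative stand-ins for the flatten=False output: one array per process.
samples_list = [np.zeros((3, 2)), np.ones((2, 2))]
weights_list = [np.full(3, 0.1), np.full(2, 0.35)]

# What flatten=True amounts to: concatenation along the sample axis.
all_samples = np.concatenate(samples_list, axis=0)  # shape (5, 2)
all_weights = np.concatenate(weights_list)          # shape (5,)
print(all_samples.shape, all_weights.shape)
```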

imp_weights_list() List[numpy.ndarray][source]

Calculate and return importance weights for all samples across all processes, handling the special case of the latest batch of samples.

Returns:
  list of numpy.ndarray – A list where each element is an array of importance weights for one process.
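parismc's actual weights account for the adaptive mixture proposal and the latest batch; the sketch below only illustrates the generic self-normalized importance-weighting recipe in log space, which is the underlying idea:

```python
import numpy as np

rng = np.random.default_rng(0)

# Generic self-normalized importance weighting: w_i proportional to p(x_i)/q(x_i).
# Working in log space and subtracting the max keeps the exponentials stable.
x = rng.normal(scale=1.5, size=500)
log_p = -0.5 * x**2              # unnormalized target log density
log_q = -0.5 * (x / 1.5) ** 2    # unnormalized proposal log density
log_w = log_p - log_q
w = np.exp(log_w - log_w.max())  # shift by max for numerical stability
w /= w.sum()                     # normalize weights to sum to 1
print(w.sum())
```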

initialize_first_iteration(num_iterations: int, external_lhs_points: numpy.ndarray | None = None, external_lhs_log_densities: numpy.ndarray | None = None) None[source]

Initialize the first iteration with LHS samples.

Parameters:
  • num_iterations (int) – Total number of iterations for the sampling run

  • external_lhs_points (np.ndarray, optional) – External LHS points to use instead of internal ones

  • external_lhs_log_densities (np.ndarray, optional) – External LHS log densities corresponding to external_lhs_points

static load_state(filename: str) Sampler[source]

Load a sampler state from a file.

Parameters:
  • filename (str) – The filename to load the state from.

Returns:

Sampler

The loaded Sampler instance.

prepare_lhs_samples(lhs_num: int, batch_size: int) None[source]

Prepare LHS samples and initialize the sampler state.

run_sampling(num_iterations: int, savepath: str, print_iter: int = 1, stop_dlogZ: float | None = None, external_lhs_points: numpy.ndarray | None = None, external_lhs_log_densities: numpy.ndarray | None = None, callback: Callable[[Sampler, int], None] | None = None) None[source]

Run the sampling process for a specified number of iterations.

Parameters:
  • num_iterations (int) – Total number of iterations to execute.

  • savepath (str) – Directory path for saving sampler state.

  • print_iter (int, optional) – Print a progress update every print_iter iterations. Default is 1.

  • stop_dlogZ (float, optional) – Absolute difference threshold |logZ(i) - logZ(i-alpha)| that triggers early stopping; disabled when None.

  • external_lhs_points (np.ndarray, optional) – External LHS points to use instead of internal ones.

  • external_lhs_log_densities (np.ndarray, optional) – External LHS log densities corresponding to external_lhs_points.

  • callback (callable, optional) – Function called at the start of each iteration, with signature callback(sampler, i).
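A callback only needs to accept the sampler and the iteration index, so it can be tested independently of a running sampler. A minimal callback that records iteration numbers (the dummy sampler below stands in for the real object):

```python
# A minimal callback matching the documented signature callback(sampler, i).
# It only records iteration indices, so it works with any sampler object.
history = []

def progress_callback(sampler, i):
    """Called at the start of each iteration; store the iteration index."""
    history.append(i)

# Simulate what run_sampling would do with the callback:
class _DummySampler:
    pass

for i in range(3):
    progress_callback(_DummySampler(), i)
print(history)
```

The real call would be run_sampling(num_iterations, savepath, callback=progress_callback).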

save_state(filename: str | None = None) None[source]

Save the current state of the sampler to a file.

Parameters:
  • filename (str, optional) – The filename to save the state to. If None, uses self.savepath/sampler_state.pkl.
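The default filename sampler_state.pkl suggests pickle-based serialization (an assumption; the format is not documented). The save_state/load_state round-trip follows the usual pattern, sketched here on a stand-in dict rather than a real Sampler:

```python
import os
import pickle
import tempfile

# Stand-in for sampler state; parismc serializes the Sampler itself.
state = {"ndim": 2, "iteration": 42}

path = os.path.join(tempfile.mkdtemp(), "sampler_state.pkl")
with open(path, "wb") as f:
    pickle.dump(state, f)        # what save_state plausibly does
with open(path, "rb") as f:
    restored = pickle.load(f)    # what load_state plausibly does
print(restored)
```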

transformed_log_density_func(x: numpy.ndarray) numpy.ndarray[source]

Apply prior transform before calling the original log density function.

class parismc.SamplerConfig(seed: int | None = None, merge_confidence: float = 0.9, alpha: int = 1000, trail_size: int = 1000, boundary_limiting: bool = True, use_beta: bool = True, integral_num: int = 100000, gamma: int = 100, exclude_scale_z: float = numpy.inf, use_pool: bool = False, n_pool: int = 10, stop_on_merge: bool = False, merge_type: str = 'single', cov_jitter: float = 1e-10, debug: bool = False)[source]

Configuration parameters for the Sampler.

Parameters:
  • seed (int, optional) – Random seed for reproducible runs. Default None.
  • merge_confidence (float) – (Optional, for ‘distance’ merge_type only) Probability mass inside merge radius R_m. Default 0.9.

  • alpha (int) – Number of most recent samples used for importance weighting (sliding window size).

  • trail_size (int) – Maximum number of attempts (MaxResample) after rejecting an invalid sample.

  • boundary_limiting (bool) – Enable boundary constraint handling for [0,1]^d unit cube.

  • use_beta (bool) – Apply Beta correction for boundary truncation effects.

  • integral_num (int) – Number of MC samples for beta coefficient estimation.

  • gamma (int) – Frequency of covariance matrix updates (in iterations).

  • exclude_scale_z (float) – Exclusion scale for outlier detection (default: inf).

  • use_pool (bool) – Enable multiprocessing.

  • n_pool (int) – Number of parallel processes.

  • stop_on_merge (bool) – Stop sampling if a merge occurs.

  • merge_type (str) – Merging strategy: ‘distance’, ‘single’ (default), or ‘multiple’.

  • cov_jitter (float) – Numerical jitter added to covariance diagonal for stability (epsilon).

  • debug (bool) – Enable debug logging.

alpha: int = 1000
boundary_limiting: bool = True
cov_jitter: float = 1e-10
debug: bool = False
exclude_scale_z: float = inf
gamma: int = 100
integral_num: int = 100000
merge_confidence: float = 0.9
merge_type: str = 'single'
n_pool: int = 10
seed: int | None = None
stop_on_merge: bool = False
trail_size: int = 1000
use_beta: bool = True
use_pool: bool = False
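A typical construction overrides only a few defaults and passes the result to the Sampler via its config argument (assuming the package exposes both names at the top level, as the class paths above indicate):

```python
from parismc import SamplerConfig

# Override a few defaults; everything else keeps the values listed above.
config = SamplerConfig(
    seed=42,                 # reproducible runs
    merge_type="distance",   # use the distance-based merging strategy
    merge_confidence=0.95,   # only meaningful for merge_type="distance"
    use_pool=True,           # enable multiprocessing
    n_pool=4,                # number of worker processes
)
```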

Utilities

parismc.utils.find_sigma_level(ndim, prob)[source]

Helper function to compute the sigma level corresponding to a given probability mass in ndim dimensions.
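For a Gaussian, the sigma level r enclosing probability mass prob plausibly satisfies P(chi2_ndim <= r^2) = prob (an assumption about this helper's semantics, consistent with its use for merge_confidence). A scipy sketch of that relation:

```python
import numpy as np
from scipy.stats import chi2

def sigma_level(ndim, prob):
    """Radius r (in sigma units) enclosing probability mass `prob` of a
    standard ndim-dimensional Gaussian: P(chi2_ndim <= r**2) = prob."""
    return float(np.sqrt(chi2.ppf(prob, df=ndim)))

# In 1-D, ~68.27% of the mass lies within 1 sigma:
print(sigma_level(1, 0.6826894921370859))
```

Note that the radius enclosing a fixed mass grows with dimension, which is why a merge_confidence of 0.9 corresponds to a larger merge radius in higher ndim.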

parismc.utils.weighting_seeds_manycov(points_array, means_array, inv_covariances_array, norm_terms_array, proposalcoeff_array)[source]

parismc.utils.weighting_seeds_manypoint(points_array, means_array, inv_covariance, norm_term, proposalcoeff_array)[source]

parismc.utils.weighting_seeds_onepoint_with_onemean(points_array, means_array, inv_covariance, norm_term)[source]
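These helpers are undocumented; judging only from their argument names (inverse covariances, normalization terms, proposal coefficients), they appear to evaluate batched Gaussian proposal densities. A rough numpy sketch of that kind of computation, with all semantics assumed:

```python
import numpy as np

def gaussian_mixture_density(points, means, inv_covs, norm_terms, coeffs):
    """Rough sketch: density of each point under a mixture of Gaussians,
    given precomputed inverse covariances, normalization constants, and
    mixture coefficients. The real parismc helpers are presumably batched
    variants of this idea (an assumption -- they are undocumented)."""
    dens = np.zeros(len(points))
    for mean, inv_cov, norm, c in zip(means, inv_covs, norm_terms, coeffs):
        diff = points - mean                             # (n, d) deviations
        maha = np.einsum("ni,ij,nj->n", diff, inv_cov, diff)  # Mahalanobis^2
        dens += c * norm * np.exp(-0.5 * maha)
    return dens

# Two-component mixture in 2-D, evaluated at the first component's mean:
pts = np.array([[0.0, 0.0]])
means = [np.zeros(2), np.ones(2)]
inv_covs = [np.eye(2), np.eye(2)]
norms = [1.0 / (2 * np.pi), 1.0 / (2 * np.pi)]
coeffs = [0.5, 0.5]
print(gaussian_mixture_density(pts, means, inv_covs, norms, coeffs))
```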