emcee is an MIT-licensed, pure-Python implementation of Goodman & Weare's affine-invariant Markov chain Monte Carlo (MCMC) ensemble sampler, and these pages will show you how to use it.
This documentation won't teach you much about MCMC itself, but there are many good resources available for that. We also published a paper explaining the emcee algorithm and implementation in detail.
emcee has been used in quite a few projects in the astrophysical literature and it is being actively developed on GitHub.
If you wanted to draw samples from a 5-dimensional Gaussian, you would do something like:
```python
import numpy as np
import emcee

def log_prob(x, ivar):
    return -0.5 * np.sum(ivar * x ** 2)

ndim, nwalkers = 5, 100
ivar = 1. / np.random.rand(ndim)
p0 = np.random.randn(nwalkers, ndim)

sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob, args=[ivar])
sampler.run_mcmc(p0, 10000)
```
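For reference, the log_prob function here is just the unnormalized log density of independent zero-mean Gaussians with per-dimension inverse variances ivar, so it is easy to sanity-check by hand before handing it to the sampler (a plain-NumPy check, no emcee required):

```python
import numpy as np

def log_prob(x, ivar):
    # Unnormalized log density of independent zero-mean Gaussians
    # with per-dimension inverse variances ivar.
    return -0.5 * np.sum(ivar * x ** 2)

ivar = np.array([1.0, 2.0, 4.0])

# At the mode (the origin) the log density is maximal and equals 0.
at_mode = log_prob(np.zeros(3), ivar)

# One unit away along the first axis contributes -0.5 * ivar[0] * 1**2.
one_off = log_prob(np.array([1.0, 0.0, 0.0]), ivar)
```

Checks like this catch sign and broadcasting mistakes cheaply; any function with this signature that returns the log of the target density (up to a constant) can be passed to EnsembleSampler in the same way.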
A more complete example is available in the Quickstart tutorial.
How to Use This Guide¶
To start, you’re probably going to need to follow the Installation guide to get emcee installed on your computer. After you finish that, you can probably learn most of what you need from the tutorials listed below (you might want to start with Quickstart and go from there). If you need more details about specific functionality, the User Guide below should have what you need.
We welcome bug reports, patches, feature requests, and other comments via the GitHub issue tracker, but you should check out the contribution guidelines first. If you have a question about the use of emcee, please post it to the users list instead of the issue tracker.
- The Ensemble Sampler
- Autocorrelation Analysis
- Upgrading From Pre-3.0 Versions
License & Attribution¶
Copyright 2010-2019 Dan Foreman-Mackey and contributors.
emcee is free software made available under the MIT License. For details, see the LICENSE file in the repository.
Changelog¶
- Added tutorial for moves interface
- Added information about contributions to documentation
- Improved documentation for installation and testing
- Fixed dtype issues and instability in linear dependence test
- Final release for JOSS submission
- Added support for long double dtypes
- Prepared manuscript to submit to JOSS
- Improved packaging and release infrastructure
- Fixed bug in initial linear dependence test
- Improved autocorrelation time computation.
- Fixed NumPy compatibility issues.
- Fixed deprecated integer division behavior in PTSampler.
- Removed dependence on the acor extension.
- Added arguments to the PTSampler function.
- Added automatic load-balancing for MPI runs.
- Added custom load-balancing for MPI and multiprocessing.
- New default multiprocessing pool that supports keyboard interrupts (^C).
- Re-licensed under the MIT license!
- Clearer, less verbose documentation.
- Added checks for parameters becoming infinite or NaN.
- Added checks for log-probability becoming NaN.
- Improved parallelization and various other tweaks in EnsembleSampler.
- Added a parallel tempering sampler
- Added instructions and utilities for using emcee with MPI.
- Added the flatlnprobability property to the EnsembleSampler object to be consistent with the flatchain property.
- Updated document for publication in PASP.
- Various bug fixes.
- Made the packaging system more robust even when numpy is not installed.
- Another bug fix related to metadata blobs: the shape of the final blobs object was incorrect, and all of the entries would generally be identical because we needed to copy the list that was appended at each step. Thanks goes to Jacqueline Chen (MIT) for catching this problem.
- Fixed bug related to metadata blobs: the sample function was yielding the blobs object even when it wasn't expected.
- Allow the lnprobfn to return arbitrary "blobs" of data as well as the log-probability.
- Python 3 compatible (thanks Alex Conley)!
- Various speed ups and clean ups in the core code base.
- New documentation with better examples and more discussion.
- Fixed transpose bug in the usage of acceptance_fraction.
- Initial release.