Project details for BayesOpt, a Bayesian Optimization toolbox

BayesOpt, a Bayesian Optimization toolbox 0.6

by rmcantin - March 26, 2014, 17:48:17 CET

Description:

BayesOpt is an efficient C++ implementation of the Bayesian optimization methodology for nonlinear optimization, experimental design and stochastic bandits. In the literature it is also called Sequential Kriging Optimization (SKO) or Efficient Global Optimization (EGO).

There are also interfaces for C, Matlab/Octave and Python. The online HTML version of the documentation is at: http://rmcantin.bitbucket.org/html/
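
As a quick orientation, here is a minimal sketch of what using the C++ interface looks like. It follows the pattern described in the project documentation (subclass the continuous model and implement the target function), but header, type and default settings may differ between versions, so treat it as an illustration rather than authoritative code.

    // Sketch of the C++ interface, following the pattern in the project
    // documentation; header and type names may differ between versions.
    #include "bayesopt.hpp"   // main C++ header (name may vary by version)

    class MyProblem : public bayesopt::ContinuousModel {
    public:
      explicit MyProblem(bopt_params params)
          : bayesopt::ContinuousModel(2, params) {}  // 2-dimensional problem

      // The expensive function to minimize; vectord is the library's
      // typedef for a vector of doubles.
      double evaluateSample(const vectord& query) {
        const double x = query(0), y = query(1);
        return (x - 0.3) * (x - 0.3) + (y - 0.7) * (y - 0.7);
      }
    };

    int main() {
      bopt_params params = initialize_parameters_to_default();
      MyProblem problem(params);
      vectord best(2);
      problem.optimize(best);  // runs the full Bayesian optimization loop
      return 0;
    }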

Bayesian optimization uses a distribution over functions to build a metamodel of the unknown function whose extrema we are looking for, and then applies an active learning strategy to select the query points of greatest potential interest for the search. For that reason, it has traditionally been intended for the optimization of expensive functions. However, the efficiency of the library makes it interesting for many other types of functions as well. It is intended to be both fast and clear for development and research. At the same time, it does everything the "right way". For example:

  • Latin hypercube sampling is used for the preliminary design step (see the sketch after this list),
  • extensive use of Cholesky decomposition and related techniques to improve numerical stability and reduce computational cost,
  • kernels, criteria and parametric functions can be combined to produce more advanced functions, etc.
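
As a toy illustration of the first point, here is a small, self-contained Latin hypercube sampler (illustrative code, not taken from the library): each dimension of the unit cube is split into n strata, one point is drawn per stratum, and the strata are shuffled independently per dimension so every one-dimensional projection is evenly covered.

    // Minimal sketch (illustrative, not BayesOpt code): Latin hypercube
    // sampling of n points in [0,1]^d.
    #include <algorithm>
    #include <cstdio>
    #include <numeric>
    #include <random>
    #include <vector>

    std::vector<std::vector<double>> latin_hypercube(std::size_t n,
                                                     std::size_t d,
                                                     std::mt19937& rng) {
      std::uniform_real_distribution<double> unif(0.0, 1.0);
      std::vector<std::vector<double>> samples(n, std::vector<double>(d));
      for (std::size_t j = 0; j < d; ++j) {
        std::vector<std::size_t> strata(n);
        std::iota(strata.begin(), strata.end(), 0);  // 0, 1, ..., n-1
        std::shuffle(strata.begin(), strata.end(), rng);
        for (std::size_t i = 0; i < n; ++i)
          samples[i][j] = (strata[i] + unif(rng)) / n;  // one point per stratum
      }
      return samples;
    }

    int main() {
      std::mt19937 rng(42);  // fixed seed, like the library's debug option
      for (const auto& p : latin_hypercube(5, 2, rng))
        std::printf("(%.3f, %.3f)\n", p[0], p[1]);
      return 0;
    }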

Originally, it was developed as part of a robotics research project, where it used a Gaussian process with hyperpriors on the mean and signal covariance parameters. The metamodel was then constructed using the maximum a posteriori (MAP) estimate of those parameters. However, the library has now grown to support many more surrogate models with different distributions (Gaussian processes, Student's t processes, etc.) and with many kernels and mean functions. It also provides different criteria (even some combined criteria), so the library can be applied to any problem involving bounded optimization, stochastic bandits, active learning for regression, etc.
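
To make the surrogate machinery concrete, the following self-contained sketch (again illustrative, not library code) computes the posterior mean of a toy one-dimensional Gaussian process using a Cholesky factorization instead of an explicit matrix inverse, which is the kind of numerically stable computation the second bullet above refers to; the kernel and its hyperparameters are fixed for simplicity.

    // Minimal sketch (not BayesOpt code): 1-D Gaussian process posterior
    // mean via Cholesky factorization, m(x*) = k(x*,X) K^{-1} y.
    #include <cmath>
    #include <cstdio>
    #include <vector>

    // Squared-exponential kernel with fixed hyperparameters (illustrative).
    double kernel(double a, double b) {
      return std::exp(-0.5 * (a - b) * (a - b));
    }

    // In-place Cholesky factorization K = L * L^T for a small dense matrix.
    void cholesky(std::vector<std::vector<double>>& K) {
      const std::size_t n = K.size();
      for (std::size_t i = 0; i < n; ++i) {
        for (std::size_t j = 0; j <= i; ++j) {
          double s = K[i][j];
          for (std::size_t k = 0; k < j; ++k) s -= K[i][k] * K[j][k];
          K[i][j] = (i == j) ? std::sqrt(s) : s / K[j][j];
        }
        for (std::size_t j = i + 1; j < n; ++j) K[i][j] = 0.0;  // keep lower part
      }
    }

    // Solve (L L^T) x = b by forward then backward substitution.
    std::vector<double> solve_chol(const std::vector<std::vector<double>>& L,
                                   std::vector<double> b) {
      const std::size_t n = L.size();
      for (std::size_t i = 0; i < n; ++i) {   // L y = b
        for (std::size_t k = 0; k < i; ++k) b[i] -= L[i][k] * b[k];
        b[i] /= L[i][i];
      }
      for (std::size_t i = n; i-- > 0;) {     // L^T x = y
        for (std::size_t k = i + 1; k < n; ++k) b[i] -= L[k][i] * b[k];
        b[i] /= L[i][i];
      }
      return b;
    }

    int main() {
      const std::vector<double> X = {0.0, 0.5, 1.0};  // observed inputs
      const std::vector<double> y = {0.1, 0.9, 0.2};  // observed values
      const double noise = 1e-6;                      // jitter for stability

      std::vector<std::vector<double>> K(X.size(), std::vector<double>(X.size()));
      for (std::size_t i = 0; i < X.size(); ++i)
        for (std::size_t j = 0; j < X.size(); ++j)
          K[i][j] = kernel(X[i], X[j]) + (i == j ? noise : 0.0);

      cholesky(K);
      const std::vector<double> alpha = solve_chol(K, y);  // alpha = K^{-1} y

      const double xs = 0.75;  // test point
      double mean = 0.0;
      for (std::size_t i = 0; i < X.size(); ++i)
        mean += kernel(xs, X[i]) * alpha[i];
      std::printf("posterior mean at %.2f: %.4f\n", xs, mean);
      return 0;
    }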

You can also find more details at: http://bitbucket.org/rmcantin/bayesopt/wiki/Home

Important: This code is free to use. However, if you are using the library, especially for research or academic purposes, please send me an email at rmcantin@unizar.es with your name, institution and a brief description of your interest in this code (one or two lines).

Changes to previous version:

-Complete refactoring of the inner parts of the library. The code is easier to understand and modify, and it allows simpler integration with new algorithms.

-Updated to the latest version of NLOPT (2.4.1). Wrapper code simplified.

-Error codes replaced with exceptions in the C++ interface. The library is exception safe.

-API modified to support new learning methods for kernel hyperparameters (e.g., MCMC). Warning: configuration parameters related to learning have changed, so code written for previous versions might not work. Some of the learning methods (like MCMC) are not yet implemented.

-Added configuration of random number generation (the seed can be fixed for debugging). Fixed an issue where random numbers were drawn from different sources, with potential correlations between them. Now all components are guaranteed to use the same instance of the random engine.

-Improved numerical results (e.g., hyperparameter optimization is done in log space).
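
For readers unfamiliar with the trick, optimizing in log space means searching over lambda = log(theta), so a positive hyperparameter theta = exp(lambda) stays positive and scale differences are better conditioned. A minimal illustration (toy code, not from the library):

    // Illustrative only: optimizing a positive hyperparameter in log space.
    // The optimizer works on lambda; the model always sees theta =
    // exp(lambda), so theta stays positive at every step.
    #include <cmath>
    #include <cstdio>

    double objective(double theta) {  // toy objective over theta > 0
      return (std::log(theta) - 1.0) * (std::log(theta) - 1.0);
    }

    int main() {
      double lambda = 0.0;             // log-space parameter
      const double step = 0.1;
      for (int i = 0; i < 100; ++i) {  // crude finite-difference descent
        const double f0 = objective(std::exp(lambda));
        const double f1 = objective(std::exp(lambda + 1e-4));
        lambda -= step * (f1 - f0) / 1e-4;  // gradient step in log space
      }
      std::printf("theta* = %.4f\n", std::exp(lambda));  // expect e = 2.7183
      return 0;
    }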

-More examples and tests.

-Fixed bugs.

-The number of inner iterations has been increased by default, so overall optimization time using the default configuration might be longer, but with improved results.

Supported Operating Systems: Linux, Windows, macOS
Data Formats: Agnostic
Tags: Active Learning, Optimization, Experimental Design, Gaussian Process, Bandits, Bayesian Optimization
