Project details for BayesOpt, a Bayesian Optimization toolbox


by rmcantin - July 26, 2013, 00:51:50 CET [ Project Homepage BibTeX Download ]



BayesOpt is an efficient C++ implementation of the Bayesian optimization methodology for nonlinear optimization, experimental design and stochastic bandits. In the literature it is also called Sequential Kriging Optimization (SKO) or Efficient Global Optimization (EGO).

There are also interfaces for C, Matlab/Octave and Python. The online HTML version of the documentation is available at:

Bayesian optimization uses a distribution over functions to build a metamodel of the unknown function whose extrema we are seeking, and then applies an active learning strategy to select the query points with the most potential value for the search. For that reason, it has traditionally been intended for the optimization of expensive functions. However, the efficiency of the library makes it interesting for many other types of functions as well. It is intended to be both fast and clear for development and research. At the same time, it does everything the "right way". For example:

  • Latin hypercube sampling is used for the preliminary design step,
  • extensive use of the Cholesky decomposition and related techniques improves numerical stability and reduces computational cost,
  • kernels, criteria and parametric functions can be combined to produce more advanced functions, etc.
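Putting the pieces above together, one step of the workflow can be sketched as follows. This is an illustrative, self-contained Python sketch, not the BayesOpt API: it fits a Gaussian-process surrogate to past queries via a Cholesky decomposition and then selects the next query point by maximizing expected improvement over a candidate grid. The kernel, length scale and toy objective are all assumptions for the example.

```python
import numpy as np

def rbf_kernel(a, b, length=0.3):
    """Squared-exponential kernel between two sets of 1-D points."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-6):
    """GP posterior mean/std, solved via Cholesky for numerical stability."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    Ks = rbf_kernel(x_train, x_test)
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.clip(1.0 - np.sum(v * v, axis=0), 1e-12, None)  # k(x,x) = 1
    return mu, np.sqrt(var)

def expected_improvement(mu, sigma, y_best):
    """EI criterion for minimization, using erf for the normal CDF."""
    from math import erf, sqrt
    z = (y_best - mu) / sigma
    cdf = np.array([0.5 * (1 + erf(zi / sqrt(2))) for zi in z])
    pdf = np.exp(-0.5 * z ** 2) / np.sqrt(2 * np.pi)
    return sigma * (z * cdf + pdf)

def f(x):  # stand-in for the expensive black-box function
    return (x - 0.7) ** 2

x_train = np.array([0.1, 0.5, 0.9])        # initial design (e.g. from LHS)
y_train = f(x_train)
candidates = np.linspace(0.0, 1.0, 201)
mu, sigma = gp_posterior(x_train, y_train, candidates)
x_next = candidates[np.argmax(expected_improvement(mu, sigma, y_train.min()))]
```

In a real run this step is repeated: evaluate `f(x_next)`, append the result to the training data, refit, and select again until the budget is exhausted.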

Originally, the library was developed as part of a robotics research project, where a Gaussian process with hyperpriors on the mean and signal covariance parameters was used. The metamodel was then constructed using the Maximum a Posteriori (MAP) estimate of the parameters. However, the library has since grown to support many more surrogate models, with different distributions (Gaussian processes, Student's-t processes, etc.) and with many kernels and mean functions. It also provides different criteria (including some combined criteria), so the library can be applied to any problem involving bounded optimization, stochastic bandits, active learning for regression, etc.
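The MAP construction mentioned above can be sketched in a few lines. This is a hedged illustration, not the library's internals: it places a log-normal hyperprior on a kernel length scale and picks, over a grid, the value maximizing the GP log marginal likelihood plus the log prior. The prior parameters and toy data are assumptions for the example.

```python
import numpy as np

def log_marginal_likelihood(x, y, length):
    """GP log evidence for a squared-exponential kernel, via Cholesky."""
    d = x[:, None] - x[None, :]
    K = np.exp(-0.5 * (d / length) ** 2) + 1e-6 * np.eye(len(x))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha
            - np.sum(np.log(np.diag(L)))
            - 0.5 * len(x) * np.log(2 * np.pi))

def log_prior(length, mu=-1.0, sigma=1.0):
    """Log-normal hyperprior on the length scale (illustrative choice)."""
    return (-0.5 * ((np.log(length) - mu) / sigma) ** 2
            - np.log(length * sigma * np.sqrt(2 * np.pi)))

x = np.linspace(0.0, 1.0, 8)
y = np.sin(2 * np.pi * x)
grid = np.geomspace(0.01, 2.0, 100)
log_post = [log_marginal_likelihood(x, y, l) + log_prior(l) for l in grid]
length_map = grid[int(np.argmax(log_post))]   # MAP estimate of the length scale
```

A full implementation would optimize all hyperparameters jointly with a gradient-based method rather than a grid, but the posterior being maximized is the same.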

You can also find more details at:

Important: This code is free to use. However, if you are using the library, especially for research or academic purposes, please send me an email at with your name, institution and a brief description of your interest in this code (one or two lines).

Changes to previous version:

- Fixed bugs.

- Improved and extended documentation.

- Simplified installation. Separate Python module.

- New demos and examples.

- REMBO algorithm (Bayesian optimization in high dimensions through random embedding).

- Support for Sobol sequences for initial design.

- Support for Matlab in Windows using MinGW (support for Visual Studio was already available).

BibTeX Entry: Download
Supported Operating Systems: Linux, Windows, macOS
Data Formats: Agnostic
Tags: Active Learning, Optimization, Experimental Design, Gaussian Process, Bandits, Bayesian Optimization
Archive: download here

