- Description:
This library implements various Bayesian linear models (Bayesian linear regression) and generalized linear models. A few features of this library are:
A basis function/feature composition framework for combining basis functions, such as radial, sigmoidal, and polynomial basis functions (see the first sketch after this list).
Basis functions that can be used to approximate Gaussian processes with shift-invariant covariance functions (e.g. the squared exponential) when used with linear models [1], [2], [3].
Non-Gaussian likelihoods with Bayesian generalized linear models (GLMs). We infer all of the parameters in the GLMs using stochastic variational inference [4], and we approximate the posterior over the weights with a mixture of Gaussians, following [5] (see the GLM sketch below).
Large-scale learning using stochastic gradient descent (Adam, AdaDelta and more).
Scikit-learn compatibility, i.e. the models are usable with pipelines (see the pipeline sketch below).
A host of decorators for scipy.optimize.minimize and stochastic gradients that enhance the functionality of these optimisers (see the decorator sketch below).
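For example, basis objects can be composed with ordinary arithmetic and handed to a regressor. The sketch below follows the usage shown in revrand's documentation, but the exact names (LinearBasis, RandomRBF, StandardLinearModel, predict_moments) and their signatures are assumptions and may differ between versions:

    import numpy as np

    # Assumed imports, following revrand's documented layout.
    from revrand import StandardLinearModel
    from revrand.basis_functions import LinearBasis, RandomRBF

    # Toy 1-D regression data.
    X = np.linspace(0, 10, 100)[:, np.newaxis]
    y = np.sin(X.ravel()) + 0.1 * np.random.randn(100)

    # Compose a bias/linear basis with random RBF features; the random
    # features approximate a GP with a squared exponential kernel [3].
    basis = LinearBasis(onescol=True) + RandomRBF(nbases=300, Xdim=X.shape[1])

    slm = StandardLinearModel(basis)
    slm.fit(X, y)

    # Predictive mean and variance at query points.
    Xq = np.linspace(0, 10, 200)[:, np.newaxis]
    Ey, Vy = slm.predict_moments(Xq)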
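A non-Gaussian likelihood is paired with a basis in the same way for the GLM. Again a hedged sketch: the GeneralizedLinearModel and Poisson names below follow revrand's documentation, but their signatures are assumptions:

    import numpy as np

    # Assumed imports; names may differ between versions.
    from revrand import GeneralizedLinearModel
    from revrand.basis_functions import RandomRBF
    from revrand.likelihoods import Poisson

    # Toy count data.
    X = np.random.uniform(0, 5, size=(200, 1))
    y = np.random.poisson(np.exp(0.5 * X.ravel()))

    # A Bayesian GLM with a Poisson likelihood and random RBF features.
    # All parameters are learned with stochastic variational inference [4],
    # and the weight posterior is a mixture of Gaussians [5].
    glm = GeneralizedLinearModel(Poisson(), RandomRBF(nbases=100, Xdim=X.shape[1]))
    glm.fit(X, y)

    # Predictive mean and variance.
    Ey, Vy = glm.predict_moments(X)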
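Because the estimators follow the scikit-learn interface, they can be dropped straight into a Pipeline. A minimal sketch, assuming StandardLinearModel exposes the usual fit/predict methods:

    import numpy as np
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import StandardScaler

    # Assumed revrand imports, as in the sketches above.
    from revrand import StandardLinearModel
    from revrand.basis_functions import RandomRBF

    X = np.random.randn(100, 3)
    y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * np.random.randn(100)

    # Standardise the inputs, then fit the Bayesian linear model.
    pipe = Pipeline([
        ("scale", StandardScaler()),
        ("slm", StandardLinearModel(RandomRBF(nbases=50, Xdim=X.shape[1]))),
    ])
    pipe.fit(X, y)
    y_pred = pipe.predict(X)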
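The optimiser decorators are easiest to see by analogy. The following is not revrand's API; it is a hypothetical decorator of the same flavour, which lets scipy.optimize.minimize handle a positivity-constrained parameter by optimising its logarithm:

    import numpy as np
    from functools import wraps
    from scipy.optimize import minimize

    def log_positive(minimizer):
        """Hypothetical decorator: optimise log(x) so that x stays positive."""
        @wraps(minimizer)
        def wrapper(fun, x0, **kwargs):
            # Optimise in log space, mapping back inside the objective.
            res = minimizer(lambda v: fun(np.exp(v)), np.log(x0), **kwargs)
            res.x = np.exp(res.x)  # report the result in the original space
            return res
        return wrapper

    positive_minimize = log_positive(minimize)

    # Minimise (x - 2)^2 with x kept positive, without explicit bounds.
    res = positive_minimize(lambda x: (x[0] - 2.0) ** 2, x0=np.array([0.5]))
    print(res.x)  # approximately [2.]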
[1] Yang, Z., Smola, A. J., Song, L., & Wilson, A. G. "A la Carte: Learning Fast Kernels". Proceedings of the 18th International Conference on Artificial Intelligence and Statistics (AISTATS), pp. 1098-1106, 2015.
[2] Le, Q., Sarlos, T., & Smola, A. "Fastfood: Approximating Kernel Expansions in Loglinear Time". Proceedings of the 30th International Conference on Machine Learning (ICML), 2013.
[3] Rahimi, A., & Recht, B. "Random Features for Large-Scale Kernel Machines". Advances in Neural Information Processing Systems (NIPS), 2007.
[4] Kingma, D. P., & Welling, M. "Auto-Encoding Variational Bayes". Proceedings of the 2nd International Conference on Learning Representations (ICLR), 2014.
[5] Gershman, S., Hoffman, M., & Blei, D. "Nonparametric Variational Inference". Proceedings of the 29th International Conference on Machine Learning (ICML), 2012.
- Changes to previous version:
- 1.0 release!
- A random search phase now precedes the optimisation of all hyperparameters in the regression algorithms. This improves revrand's performance, since the better initialisation makes local optima easier to avoid.
- Regression regularisers (weight variances) are now associated with each basis object, which more closely approximates the addition of GP kernels.
- The random state can be set for all random objects.
- Numerous small improvements to make revrand production-ready.
- Final report
- Documentation improvements
- Supported Operating Systems: Platform Independent
- Data Formats: Numpy
- Tags: Stochastic Gradient Descent, Large Scale Learning, Nonparametric Bayes, Nonlinear Regression, Gaussian Processes, Generalized Linear Models, Spark, Fast Food, Random Features