GPstuff 4.7: Updates and additions to GPstuff (mloss.org), Thu, 09 Jun 2016 17:45:15 -0000
<html>
<p>If you use GPstuff, please use the reference: Jarno Vanhatalo, Jaakko Riihimäki, Jouni Hartikainen, Pasi Jylänki, Ville Tolvanen, and Aki Vehtari (2013). GPstuff: Bayesian Modeling with Gaussian Processes. Journal of Machine Learning Research, 14:1175-1179.</p>
<p>See also the user guide.</p>
<p>GPstuff is a toolbox for Bayesian modeling with Gaussian processes, with the following features and more:</p>
<ul>
<li>Several covariance functions (e.g. squared exponential, exponential, Matérn, periodic, and a compactly supported piecewise polynomial function)
  <ul>
  <li>Sums, products, and scaling of covariance functions</li>
  <li>Euclidean and delta distance</li>
  </ul>
</li>
<li>Several mean functions with marginalized parameters</li>
<li>Several likelihood/observation models
  <ul>
  <li>Continuous observations: Gaussian, Gaussian scale mixture (MCMC only), Student's t, quantile regression</li>
  <li>Classification: logit, probit, multinomial logit (softmax), multinomial probit</li>
  <li>Count data: binomial, Poisson, (zero-truncated) negative binomial, hurdle model, zero-inflated negative binomial, multinomial</li>
  <li>Survival: Cox PH, Weibull, log-Gaussian, log-logistic</li>
  <li>Point process: log-Gaussian Cox process</li>
  <li>Density estimation and regression: logistic GP</li>
  <li>Other: derivative observations (for the squared exponential covariance function only)</li>
  <li>Monotonicity information</li>
  </ul>
</li>
<li>Hierarchical priors for hyperparameters</li>
<li>Sparse models
  <ul>
  <li>Sparse matrix routines for compactly supported covariance functions</li>
  <li>Fully and partially independent conditional (FIC, PIC)</li>
  <li>Compactly supported plus FIC (CS+FIC)</li>
  <li>Variational sparse (VAR), deterministic training conditional (DTC), subset of regressors (SOR) (Gaussian/EP only)</li>
  <li>PASS-GP</li>
  </ul>
</li>
<li>Latent inference
  <ul>
  <li>Exact
  (Gaussian only)</li>
  <li>Laplace, expectation propagation (EP), parallel EP, robust EP</li>
  <li>Marginal posterior corrections (cm2 and fact)</li>
  <li>Scaled Metropolis, Hamiltonian Monte Carlo (HMC), scaled HMC, elliptical slice sampling</li>
  <li>State-space inference (1D, for some covariance functions)</li>
  </ul>
</li>
<li>Hyperparameter inference
  <ul>
  <li>Type II ML/MAP</li>
  <li>Leave-one-out cross-validation (LOO-CV), Laplace/EP LOO-CV</li>
  <li>Metropolis, HMC, No-U-Turn Sampler (NUTS), slice sampling (SLS), surrogate SLS, shrinking-rank SLS, covariance-matching SLS</li>
  <li>Grid, CCD, importance sampling</li>
  </ul>
</li>
<li>Model assessment
  <ul>
  <li>LOO-CV, Laplace/EP LOO-CV, IS-LOO-CV, k-fold CV</li>
  <li>WAIC, DIC</li>
  <li>Average predictive comparison</li>
  </ul>
</li>
</ul>
</html>
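The "exact (Gaussian only)" latent inference listed above has a closed form: with a Gaussian likelihood, the latent posterior and the log marginal likelihood (the Type II ML objective used for hyperparameter inference) can be computed directly from a Cholesky factorization. As an illustration of that math, here is a minimal NumPy sketch using the squared exponential covariance; this is generic GP regression, not GPstuff's MATLAB API, and the function names are hypothetical.

```python
import numpy as np

def sexp_cov(x1, x2, lengthscale=1.0, magn_sigma2=1.0):
    """Squared exponential covariance k(x, x') = s^2 exp(-(x - x')^2 / (2 l^2))."""
    d2 = (x1[:, None] - x2[None, :]) ** 2
    return magn_sigma2 * np.exp(-0.5 * d2 / lengthscale ** 2)

def gp_exact(x, y, xt, sigma2=0.1, lengthscale=1.0, magn_sigma2=1.0):
    """Exact GP regression (Gaussian likelihood only) for 1-D inputs.

    Returns the predictive mean and variance of the latent function at
    test inputs xt, and the log marginal likelihood log p(y | x, theta).
    """
    n = len(x)
    K = sexp_cov(x, x, lengthscale, magn_sigma2)       # train covariance
    Ks = sexp_cov(xt, x, lengthscale, magn_sigma2)     # test-train covariance
    Kss = sexp_cov(xt, xt, lengthscale, magn_sigma2)   # test covariance
    # Cholesky of the noisy covariance K + sigma^2 I
    L = np.linalg.cholesky(K + sigma2 * np.eye(n))
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = Ks @ alpha                                  # predictive mean
    v = np.linalg.solve(L, Ks.T)
    var = np.diag(Kss) - np.sum(v ** 2, axis=0)        # predictive variance
    # Log marginal likelihood: the objective maximized in Type II ML
    lml = (-0.5 * y @ alpha
           - np.sum(np.log(np.diag(L)))
           - 0.5 * n * np.log(2 * np.pi))
    return mean, var, lml
```

With a near-zero noise variance the predictive mean at the training inputs reproduces the observations, which is a quick sanity check on the algebra. For any non-Gaussian likelihood in the list above, this closed form is unavailable and the approximate methods (Laplace, EP, MCMC) take its place.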