mloss.org new software
Updates and additions to mloss.org (http://mloss.org)
Last updated: Mon, 26 Apr 2021 12:57:51 -0000


Alpenglow 1.0.6
http://mloss.org/revision/view/2202/

Alpenglow is an open-source recommender systems research framework aimed at providing tools for rapid prototyping and evaluation of algorithms for time-aware and streaming recommendation tasks. It supports modeling non-stationary environments using prequential evaluation and incremental updating of models. The framework is implemented in C++ and also provides an easy-to-use Python API.

Features:
- various tools for evaluation
- preconfigured experiments
- option for embedding traditional periodic retraining in the prequential framework

Implemented models:
- matrix factorization (SGD, ALS, iALS)
- asymmetric matrix factorization
- SVD++
- factorization machines
- nearest neighbor
- time-aware popularity
- transition probability

Authors: Domokos Kelen, Erzsebet Frigo, Robert Palovics, Levente Kocsis, Andras A. Benczur
Posted: Mon, 26 Apr 2021 12:57:51 -0000
Tags: recommender system, online learning


Stochaskell 1.0.0
http://mloss.org/revision/view/2201/

Stochaskell is a probabilistic programming language (PPL) designed for portability: models and inference strategies are written in a single language, but inference can be performed by a variety of PPLs. This is achieved through runtime code generation, whereby code is automatically produced to perform inference via an external PPL on subsets of the model specified by the user. In this way, users can benefit from the diverse probabilistic programming ecosystem without the cost of manually rewriting models in multiple different languages.

Stochaskell also implements a novel method for automatically deriving a reversible jump Markov chain Monte Carlo (RJMCMC) sampler from probabilistic programs that specify the target and proposal distributions. The main challenge in automatically deriving such an inference procedure, compared with deriving a generic Metropolis-Hastings sampler, is calculating the Jacobian adjustment to the proposal acceptance ratio. To achieve this, the approach relies on the interaction of several components, including automatic differentiation, transformation inversion, and optimised code generation.

Author: David A Roberts
Posted: Mon, 08 Feb 2021 08:21:07 -0000
Tags: probabilistic programming
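For reference, the Jacobian adjustment mentioned above enters the standard RJMCMC acceptance probability (textbook notation, not Stochaskell syntax). For a move that draws auxiliary variables u ~ q(. | x) and applies an invertible map (x', u') = h(x, u):

    \alpha(x \to x') = \min\left\{ 1,\;
        \frac{p(x')\, q'(u' \mid x')}{p(x)\, q(u \mid x)}
        \left| \det \frac{\partial (x', u')}{\partial (x, u)} \right| \right\}

It is the determinant factor that Stochaskell derives automatically, using the automatic differentiation and transformation inversion components named above.
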
contextual 0.9.8.4
http://mloss.org/revision/view/2200/

Over the past decade, contextual bandit algorithms have been gaining popularity due to their effectiveness and flexibility in solving sequential decision problems, from online advertising and finance to clinical trial design and personalized medicine. At the same time, there are as yet surprisingly few options that enable researchers and practitioners to simulate and compare the wealth of new and existing bandit algorithms in a standardized way. To help close this gap between analytical research and empirical evaluation, the accompanying paper introduces the object-oriented R package contextual: a user-friendly and, through its object-oriented design, easily extensible framework that facilitates parallelized comparison of contextual and context-free bandit policies through both simulation and offline analysis.

Authors: Robin van Emden, Maurits Kaptein, Jules Kruijswijk
Posted: Mon, 27 Jul 2020 16:05:32 -0000
Tags: reinforcement learning, simulation, data generator, context aware recommendation, bandits, comparisons


r-cran-Boruta 6.0.0
http://mloss.org/revision/view/2194/

Wrapper Algorithm for All Relevant Feature Selection: an all-relevant feature selection wrapper algorithm. It finds relevant features by comparing the importance of the original attributes with the importance achievable at random, estimated using their permuted copies (shadows).

Authors: Miron Bartosz Kursa [aut, cre], Witold Remigiusz Rudnicki [aut]
Posted: Sat, 01 Sep 2018 00:00:04 -0000
Tags: r-cran
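A minimal sketch of the shadow-feature idea described above, written as generic Python with scikit-learn (an illustration only, not the R package's code; real Boruta iterates with a formal statistical test rather than this simple majority vote, and the forest settings here are arbitrary):

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def shadow_feature_screen(X, y, n_rounds=20, seed=0):
        """Flag features whose importance beats the best permuted 'shadow' copy."""
        rng = np.random.default_rng(seed)
        hits = np.zeros(X.shape[1], dtype=int)
        for _ in range(n_rounds):
            # Shadows: independent column-wise permutations, which destroy any
            # real signal while preserving each attribute's distribution.
            shadows = rng.permuted(X, axis=0)
            forest = RandomForestClassifier(n_estimators=200, random_state=0)
            forest.fit(np.hstack([X, shadows]), y)
            importances = forest.feature_importances_
            real = importances[: X.shape[1]]
            shadow_max = importances[X.shape[1]:].max()
            hits += real > shadow_max
        # Keep features that out-score the best shadow in most rounds.
        return hits > n_rounds / 2

Any attribute that cannot out-score its best randomly permuted competitor is, by this criterion, no better than noise.
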
r-cran-BART 1.9
http://mloss.org/revision/view/2192/

Bayesian Additive Regression Trees: Bayesian Additive Regression Trees (BART) provide flexible nonparametric modeling of covariates for continuous, binary, categorical, and time-to-event outcomes. For more information on BART, see Chipman, George and McCulloch (2010) and Sparapani, Logan, McCulloch and Laud (2016).

Authors: Robert McCulloch [aut], Rodney Sparapani [aut, cre], Robert Gramacy [aut], Charles Spanbauer [aut], Matthew Pratola [aut], Bill Venables [ctb], Brian Ripley [ctb]
Posted: Fri, 17 Aug 2018 00:00:00 -0000
Tags: r-cran


r-cran-bst 0.3-15
http://mloss.org/revision/view/2195/

Gradient Boosting: functional gradient descent algorithm for a variety of convex and non-convex loss functions, for both classical and robust regression and classification problems. See Wang (2011), Wang (2012), and Wang (2018a, 2018b).

Authors: Zhu Wang [aut, cre], Torsten Hothorn [ctb]
Posted: Sun, 22 Jul 2018 00:00:00 -0000
Tags: r-cran
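For reference, the two preceding entries fit additive models in complementary ways (standard notation from the literature, not package documentation). BART models the response as a sum of m regression trees g(x; T_j, M_j) under a regularizing prior:

    y = \sum_{j=1}^{m} g(x;\, T_j, M_j) + \varepsilon,
    \qquad \varepsilon \sim \mathcal{N}(0, \sigma^2)

Functional gradient descent, as in bst, instead builds its ensemble greedily: step m fits a base learner h_m to the negative gradient of the loss L at the current fit and takes a damped step of length \nu:

    h_m \approx -\left[ \frac{\partial L(y, F(x))}{\partial F(x)} \right]_{F = F_{m-1}},
    \qquad F_m(x) = F_{m-1}(x) + \nu\, h_m(x)

The robust variants swap in different (possibly non-convex) losses L; the update rule itself is unchanged.
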
MLPACK 3.0.2
http://mloss.org/revision/view/2191/

mlpack is a fast, flexible C++ machine learning library. Its aim is to make large-scale machine learning possible for novice users by means of a simple, consistent API, while simultaneously exploiting C++ language features to provide maximum performance and maximum flexibility for expert users. mlpack also provides bindings to other languages.

The following methods are provided:
- Approximate furthest neighbor search techniques
- Collaborative Filtering (with NMF)
- Decision Stumps
- DBSCAN
- Density Estimation Trees
- Euclidean Minimum Spanning Trees
- Fast Exact Max-Kernel Search (FastMKS)
- Gaussian Mixture Models (GMMs)
- Hidden Markov Models (HMMs)
- Hoeffding trees (streaming decision trees)
- Kernel Principal Components Analysis (KPCA)
- K-Means Clustering
- Least-Angle Regression (LARS/LASSO)
- Local Coordinate Coding
- Locality-Sensitive Hashing (LSH)
- Logistic regression
- Naive Bayes Classifier
- Neighborhood Components Analysis (NCA)
- Neural Networks (FFNs, CNNs, RNNs)
- Nonnegative Matrix Factorization (NMF)
- Perceptron
- Principal Components Analysis (PCA)
- QUIC-SVD
- RADICAL (ICA)
- Regularized SVD
- Rank-Approximate Nearest Neighbor (RANN)
- Simple Least-Squares Linear Regression (and Ridge Regression)
- Sparse Autoencoder
- Sparse Coding
- Tree-based Neighbor Search (all-k-nearest-neighbors, all-k-furthest-neighbors), using either kd-trees or cover trees
- Tree-based Range Search
- and more not listed here

Command-line executables are provided for each of these, and the C++ classes which define the methods are highly flexible, extensible, and modular. More information (including documentation, tutorials, and bug reports) is available at http://www.mlpack.org/.

Authors: Ryan Curtin, James Cline, Neil Slagle, Matthew Amidon, Ajinkya Kale, Bill March, Nishant Mehta, Parikshit Ram, Dongryeol Lee, Rajendran Mohan, Trironk Kiatkungwanglai, Patrick Mason, Marcus Edel, and others
Posted: Sat, 09 Jun 2018 18:03:57 -0000
Tags: gmm, hmm, machine learning, sparse, dual tree, fast, scalable, tree


Spectra (A Library for Large Scale Eigenvalue Problems) 0.6.2
http://mloss.org/revision/view/2190/

Spectra is a C++ library for large-scale eigenvalue problems, built on top of Eigen (http://eigen.tuxfamily.org).

Spectra is designed to calculate a specified number (k) of eigenvalues of a large square matrix (A). Usually k is much smaller than the size of the matrix (n), so that only a few eigenvalues and eigenvectors are computed, which in general is more efficient than calculating the whole spectral decomposition. Users can choose eigenvalue selection rules to pick the eigenvalues of interest, such as the largest k eigenvalues, or the eigenvalues with largest real parts. (A small SciPy-based illustration of this task appears after the Theano entry below.)

Spectra is implemented as a header-only C++ library whose only dependency, Eigen, is also header-only. Hence Spectra can easily be embedded in C++ projects that require calculating eigenvalues of large matrices.

Key features:
- Calculates a small number of eigenvalues/eigenvectors of a large square matrix.
- Broad application in dimensionality reduction, principal component analysis, community detection, etc.
- High performance; in most cases faster than ARPACK.
- Header-only, so easy to embed into other projects.
- Supports symmetric/general and dense/sparse matrices.
- Elegant and user-friendly API with great flexibility.
- Convenient and powerful R interface, the RSpectra R package.

Author: Yixuan Qiu
Posted: Wed, 23 May 2018 19:40:46 -0000
Tags: singular value decomposition, principal component analysis, factorization, eigenvalue


Theano 1.0.2
http://mloss.org/revision/view/2189/

Theano is a Python library that allows you to define, optimize, and evaluate mathematical expressions involving multi-dimensional arrays efficiently. Theano features:

- tight integration with NumPy: use numpy.ndarray in Theano-compiled functions
- transparent use of a GPU: perform data-intensive computations much faster than on a CPU
- symbolic differentiation: let Theano do your derivatives
- speed and stability optimizations: get the right answer for log(1+x) even when x is really tiny
- dynamic C code generation: evaluate expressions faster
- extensive unit-testing and self-verification: detect and diagnose many types of mistakes

Theano has been powering large-scale computationally intensive scientific investigations since 2007. But it is also approachable enough to be used in the classroom (IFT6266 at the University of Montreal).

Theano has been used primarily to implement large-scale deep learning algorithms. To see how, see the Deep Learning Tutorials (http://www.deeplearning.net/tutorial/).

Authors: mostly LISA lab
Posted: Wed, 23 May 2018 16:34:31 -0000
Tags: python, cuda, gpu, symbolic differentiation, numpy
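A minimal example of the workflow the feature list above describes: define a symbolic expression, differentiate it symbolically, and compile it. This uses the standard public Theano 1.0 API; only the variable names are invented:

    import theano
    import theano.tensor as T

    x = T.dvector("x")            # symbolic vector of doubles
    y = T.log(1 + x).sum()        # the optimizer rewrites log(1+x) into stable log1p
    g = T.grad(y, x)              # symbolic differentiation: dy/dx = 1 / (1 + x)
    f = theano.function([x], g)   # dynamic C (and, if configured, GPU) code generation

    print(f([1e-12, 0.5, 2.0]))   # ~[1.0, 0.6667, 0.3333], accurate even for tiny x

The compiled function behaves like any ordinary Python callable operating on NumPy arrays, so it drops straight into existing NumPy code.
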
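Returning to the Spectra entry above: the task it targets, a few eigenpairs of a large matrix without a full decomposition, can be sketched in Python with SciPy's ARPACK wrapper (ARPACK being the baseline Spectra benchmarks against; this is not Spectra's own C++ API, and the matrix here is synthetic):

    import numpy as np
    from scipy.sparse import random as sparse_random
    from scipy.sparse.linalg import eigsh

    # A large sparse symmetric matrix, n = 5000, built for demonstration.
    n = 5000
    M = sparse_random(n, n, density=1e-3, random_state=0)
    A = (M + M.T) * 0.5  # symmetrize

    # Only the k = 6 eigenvalues of largest magnitude (selection rule "LM"),
    # plus eigenvectors; far cheaper than the full spectral decomposition.
    vals, vecs = eigsh(A, k=6, which="LM")
    print(vals)
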
dlib ml 19.11
http://mloss.org/revision/view/2188/

A C++ toolkit containing machine learning algorithms and tools that facilitate creating complex software in C++ to solve real-world problems.

The library provides efficient implementations of the following algorithms:
- deep neural networks
- support vector machines for classification, regression, and ranking
- reduced-rank methods for large-scale classification and regression, including an SVM implementation and a method for performing kernel ridge regression with efficient LOO cross-validation
- multi-class SVM
- structural SVM (modes: single-threaded, multi-threaded, and fully distributed)
- sequence labeling using structured SVMs
- relevance vector machines for regression and classification
- reduced set approximation of SV decision surfaces
- online kernel RLS regression
- online kernelized centroid estimation/one-class classifier
- online SVM classification
- kernel k-means clustering
- radial basis function networks
- kernelized recursive feature ranking
- Bayesian network inference using junction trees or MCMC
- general-purpose unconstrained non-linear optimization algorithms using the conjugate gradient, BFGS, and L-BFGS techniques
- Levenberg-Marquardt for solving non-linear least squares problems
- a general-purpose cutting plane optimizer

The library also comes with extensive documentation and example programs that walk the user through the use of these machine learning techniques.

Finally, dlib includes a fast matrix library with a simple MATLAB-like syntax. It can use BLAS and LAPACK libraries such as ATLAS or the Intel MKL when available, and does so transparently: the dlib matrix object calls BLAS and LAPACK internally to optimize various operations while the user keeps the same simple syntax.

Author: Davis King
Posted: Fri, 18 May 2018 04:19:52 -0000
Tags: svm, classification, clustering, regression, kernel methods, matrix library, kkmeans, optimization, algorithms, exact bayesian methods, approximate inference, bayesian networks, junction tree
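dlib also ships Python bindings for several of these trainers. Below is a short sketch of binary SVM classification through them, patterned after the python_examples shipped with dlib; the trainer and container names are recalled from the 19.x Python API and worth double-checking against those examples:

    import dlib

    # Toy 3-dimensional training data: two points, one per class.
    x = dlib.vectors()
    y = dlib.array()
    x.append(dlib.vector([1.0, 1.0, 1.0]))
    y.append(+1)
    x.append(dlib.vector([-1.0, -1.0, -1.0]))
    y.append(-1)

    # Kernelized C-SVM trainer with a radial basis function kernel.
    trainer = dlib.svm_c_trainer_radial_basis()
    trainer.set_c(10)
    decision_function = trainer.train(x, y)

    # Positive output means class +1, negative means class -1.
    print(decision_function(dlib.vector([1.0, 0.9, 1.1])))
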