About: DAL is an efficient and flexible MATLAB toolbox for sparse/low-rank learning/reconstruction based on the dual augmented Lagrangian method. Changes:

About: Estimates statistical significance of association between variables and their principal components (PCs). Changes: Initial Announcement on mloss.org.

About: DRVQ is a C++ library implementation of dimensionality-recursive vector quantization, a fast vector quantization method in high-dimensional Euclidean spaces under arbitrary data distributions. It is an approximation of k-means that is practically constant in data size and applies to arbitrarily high dimensions, but can only scale to a few thousand centroids. As a by-product of training, a tree structure performs either exact or approximate quantization on trained centroids, the latter being not very precise but extremely fast. Changes: Initial Announcement on mloss.org.

About: hapFabia is an R package for identification of very short segments of identity by descent (IBD) characterized by rare variants in large sequencing data. It detects segments 100 times smaller than previous methods. Changes: citation update; plot function improved.

About: A library for calculating and accessing generalized Stirling numbers of the second kind, which are used for inference in Poisson-Dirichlet processes. Changes: Initial Announcement on mloss.org.
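For orientation, the classical (undiscounted) Stirling numbers of the second kind satisfy the recurrence S(n, k) = k·S(n-1, k) + S(n-1, k-1). A minimal Python sketch of that special case (illustrative only; not this library's API, which handles the generalized numbers needed for Poisson-Dirichlet inference):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n, k):
    """Stirling number of the second kind: ways to partition n items into k non-empty blocks."""
    if n == k:
        return 1          # covers the base case S(0, 0) = 1
    if k == 0 or k > n:
        return 0
    # the last item either joins one of the k existing blocks or starts a new one
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)
```

For example, stirling2(4, 2) gives 7, the number of ways to split four items into two non-empty groups.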

About: Evolutionary Learning of Globally Optimal Trees. Changes: Fetched by r-cran-robot on 2014-05-01 00:00:05.459097.

About: The glm-ie toolbox contains scalable estimation routines for GLMs (generalised linear models) and SLMs (sparse linear models), as well as an implementation of a scalable convex variational Bayesian inference relaxation. We designed the glm-ie package to be simple, generic and easily extensible. Most of the code is written in Matlab, including some MEX files. The code is fully compatible with both Matlab 7.x and GNU Octave 3.2.x. Probabilistic classification, sparse linear modelling and logistic regression are covered in a common algorithmic framework allowing for both MAP estimation and approximate Bayesian inference. Changes: added factorial mean field inference as a third algorithm, complementing expectation propagation and variational Bayes; generalised non-Gaussian potentials so that affine instead of linear functions of the latent variables can be used.

About: ALgebraic COmbinatorial COmpletion of MAtrices. A collection of algorithms to impute or denoise single entries of an incomplete rank-one matrix, to determine for which entries this is possible with any algorithm, and to provide algorithm-independent error estimates. Includes demo scripts. Changes: Initial Announcement on mloss.org.
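For intuition on why single entries are recoverable: in a rank-one matrix M = u·vᵀ every 2×2 submatrix has zero determinant, i.e. M[i,j]·M[k,l] = M[i,l]·M[k,j], so a missing entry is determined by any observed 2×2 "cross" containing it. A minimal sketch of that idea (a hypothetical helper, not the toolbox's interface):

```python
import numpy as np

def impute_rank_one(M, mask, i, j):
    """Impute M[i, j] of a rank-one matrix from one observed 2x2 cross.

    mask[r, c] is True where M[r, c] is observed. Returns None if no cross
    of observed entries determines the missing value.
    """
    n, m = M.shape
    for k in range(n):
        if k == i:
            continue
        for l in range(m):
            if l == j:
                continue
            # rank one implies M[i, j] * M[k, l] == M[i, l] * M[k, j]
            if mask[i, l] and mask[k, j] and mask[k, l] and M[k, l] != 0:
                return M[i, l] * M[k, j] / M[k, l]
    return None
```

The "for which entries is this possible" question in the description corresponds to whether any such observed cross exists at all.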

About: ClowdFlows is a web-based platform for service-oriented data mining, publicly available at http://clowdflows.org . A web-based interface allows users to construct data mining workflows that are hosted on the web and can (if allowed by the author) be accessed by anyone following the URL of the workflow. Changes: Initial Announcement on mloss.org.

About: [FACTORIE](http://factorie.cs.umass.edu) is a toolkit for deployable probabilistic modeling, implemented as a software library in [Scala](http://scala-lang.org). It provides its users with a succinct language for creating [factor graphs](http://en.wikipedia.org/wiki/Factor_graph), estimating parameters and performing inference. It also has implementations of many machine learning tools and a full NLP pipeline. Changes: Initial Announcement on mloss.org.

About: Data-efficient policy search framework using probabilistic Gaussian process models. Changes: Initial Announcement on mloss.org.

About: PRoNTo is freely available software that aims to facilitate interaction between the neuroimaging and machine learning communities. The toolbox is based on pattern recognition techniques for the analysis of neuroimaging data. PRoNTo supports the analysis of all image modalities, as long as they are NIfTI-format files. However, only the following modalities have been tested for version 1.1: sMRI, fMRI, PET, FA (fractional anisotropy) and Beta (GLM coefficient) images. Changes: Initial Announcement on mloss.org.

About: Approximate Rank One FACtorization of tensors. An algorithm for factorization of three-way tensors and determination of their rank; includes example applications. Changes: Initial Announcement on mloss.org.
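As background, a standard generic way to compute a rank-one approximation T ≈ λ·a⊗b⊗c of a three-way tensor is alternating (higher-order) power iteration. The NumPy sketch below illustrates that generic technique only, not the AROFAC algorithm itself:

```python
import numpy as np

def rank_one_approx(T, n_iter=100, seed=0):
    """Approximate a 3-way tensor T by lam * outer(a, b, c) via alternating power iteration."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    b = rng.standard_normal(J); b /= np.linalg.norm(b)
    c = rng.standard_normal(K); c /= np.linalg.norm(c)
    for _ in range(n_iter):
        # update each factor by contracting T with the other two, then renormalize
        a = np.einsum('ijk,j,k->i', T, b, c); a /= np.linalg.norm(a)
        b = np.einsum('ijk,i,k->j', T, a, c); b /= np.linalg.norm(b)
        c = np.einsum('ijk,i,j->k', T, a, b); c /= np.linalg.norm(c)
    lam = np.einsum('ijk,i,j,k->', T, a, b, c)
    return lam, a, b, c
```

On an exactly rank-one tensor the iteration recovers the factors (up to sign) after a few sweeps; determining the rank of a general tensor, as AROFAC does, is a substantially harder problem.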

About: Apache Mahout is an Apache Software Foundation project with the goal of creating both a community of users and a scalable, Java-based framework consisting of many machine learning algorithm [...] Changes: Apache Mahout 0.8 contains, amongst a variety of performance improvements and bug fixes, an implementation of Streaming KMeans, deeper Lucene/Solr integration and new scalable recommender algorithms. For a full description of the newest release, see http://mahout.apache.org/.

About: This is the core MCMC sampler for the nonparametric sparse factor analysis model presented in David A. Knowles and Zoubin Ghahramani (2011), "Nonparametric Bayesian Sparse Factor Models with application to Gene Expression modelling", Annals of Applied Statistics. Changes: Initial Announcement on mloss.org.

About: Regularization paTH for LASSO problem (thalasso). thalasso solves problems of the following form: minimize (1/2)*||X*beta - y||^2 + lambda*sum_i |beta_i|, where X and y are problem data and beta and lambda are variables. Changes: Initial Announcement on mloss.org.
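The objective above can be minimized coordinate-wise via soft thresholding. The following sketch illustrates the generic coordinate-descent approach for a single fixed lambda (not thalasso's regularization-path algorithm):

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t*|.|: shrink z towards zero by t."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate descent for: minimize 1/2 ||X beta - y||^2 + lam * sum_i |beta_i|."""
    n, d = X.shape
    beta = np.zeros(d)
    col_sq = (X ** 2).sum(axis=0)          # per-column squared norms ||X_j||^2
    for _ in range(n_iter):
        for j in range(d):
            # residual with coordinate j's contribution removed
            r = y - X @ beta + X[:, j] * beta[j]
            beta[j] = soft_threshold(X[:, j] @ r, lam) / col_sq[j]
    return beta
```

With lam = 0 this reduces to ordinary least squares, and for sufficiently large lam the solution is exactly zero; a path algorithm like thalasso instead traces the solution across the whole range of lambda values.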

About: A Java framework for statistical analysis and classification of biological sequences. Changes: new classes; new features and improvements; restructuring; several minor new features, bug fixes, and code cleanups.

About: Regularization for semiparametric additive hazards regression. Changes: Fetched by r-cran-robot on 2015-08-01 00:00:03.948957.

About: A collection of Python code for research in optimization. The aim is to provide reusable components that can be quickly applied to machine learning problems. Used in: ellipsoidal multiple-instance learning; difference-of-convex-functions algorithms for sparse classification; a contextual-bandit upper confidence bound algorithm (using GPs); learning output kernels, that is, kernels between the labels of a classifier. Changes:
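As an illustration of the upper-confidence-bound principle mentioned above, here is the simplest non-contextual variant (UCB1 on stochastic Bernoulli bandits, with a simulated environment); the library's GP-based contextual version builds confidence bounds from a Gaussian process posterior instead:

```python
import math
import random

def ucb1(arm_means, horizon=20000, seed=0):
    """UCB1: pull the arm maximizing empirical mean + sqrt(2 ln t / n_pulls).

    arm_means are the (hidden) true Bernoulli reward probabilities used to
    simulate the environment; returns the pull counts per arm.
    """
    rng = random.Random(seed)
    k = len(arm_means)
    counts = [0] * k
    sums = [0.0] * k
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1                      # initialize: play each arm once
        else:
            arm = max(range(k), key=lambda a: sums[a] / counts[a]
                      + math.sqrt(2.0 * math.log(t) / counts[a]))
        reward = 1.0 if rng.random() < arm_means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
    return counts
```

Over time the exploration bonus shrinks for well-sampled arms, so pulls concentrate on the arm with the highest true mean.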
