About: A scalable, fast C++ machine learning library, with emphasis on usability. Changes: Fixed CoverTree to properly handle single-point datasets.
Fixed a bug in CosineTree (and thus QUIC-SVD) that caused split failures for some datasets (#717).
Added the mlpack_preprocess_describe program, which can be used to print statistics on a given dataset (#742).
Fixed prioritized recursion for k-furthest-neighbor search (mlpack_kfn and the KFN class), leading to orders-of-magnitude speedups in some cases.
Bumped the minimum required version of Armadillo to 4.200.0.
Added a simple Gradient Descent optimizer, found in src/mlpack/core/optimizers/gradient_descent/ (#792).
Added the approximate furthest neighbor search algorithms QDAFN and DrusillaSelect in src/mlpack/methods/approx_kfn/, with the command-line program mlpack_approx_kfn.
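The new Gradient Descent optimizer mentioned above iteratively steps against the gradient of the objective. A minimal pure-Python sketch of the idea (conceptual only, not mlpack's C++ API; the step size, stopping rule and test function are illustrative):

```python
def gradient_descent(grad, x0, step_size=0.1, max_iters=1000, tol=1e-8):
    """Minimize a differentiable function given its gradient `grad`."""
    x = list(x0)
    for _ in range(max_iters):
        g = grad(x)
        # Stop when the gradient is numerically zero.
        if sum(gi * gi for gi in g) < tol * tol:
            break
        # Step in the direction of steepest descent.
        x = [xi - step_size * gi for xi, gi in zip(x, g)]
    return x

# Example: minimize f(x, y) = (x - 3)^2 + (y + 1)^2, whose minimum is (3, -1).
minimum = gradient_descent(lambda p: [2 * (p[0] - 3), 2 * (p[1] + 1)], [0.0, 0.0])
```

mlpack's actual optimizer additionally separates the function and the optimization policy; the sketch above only captures the update rule.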

About: A library of scalable Bayesian generalised linear models with fancy features. Changes:

About: The GPML toolbox is a flexible and generic Octave/Matlab implementation of inference and prediction with Gaussian process models. The toolbox offers exact inference, approximate inference for non-Gaussian likelihoods (Laplace's Method, Expectation Propagation, Variational Bayes), as well as approximations for large datasets (FITC, VFE, KISS-GP). A wide range of covariance, likelihood, mean and hyperprior functions makes it possible to create very complex GP models. Changes: A major code restructuring took place in this release, unifying certain inference functions and allowing more flexibility in covariance function composition. We also redesigned the whole derivative computation pipeline to strongly improve overall runtime. We now include grid-based covariance approximations natively.
More generic sparse approximation using Power EP
Approximate covariance object unifying sparse approximations, grid-based approximations and exact covariance computations
Hierarchical structure of covariance functions
Faster derivative computations for mean and covariance functions
New mean functions
New optimizer
New GLM link function
Smaller fixes
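Exact GP inference, as offered by the toolbox, reduces to solving a linear system with the covariance matrix. A pure-Python sketch of the exact posterior mean under a squared-exponential covariance (conceptual only, not GPML's interface; the length-scale and noise level are illustrative):

```python
import math

def rbf(a, b, ell=1.0):
    """Squared-exponential covariance k(a, b) = exp(-(a - b)^2 / (2 ell^2))."""
    return math.exp(-((a - b) ** 2) / (2 * ell ** 2))

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]  # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def gp_posterior_mean(X, y, x_star, noise=0.1):
    """Exact GP regression mean: k_*^T (K + noise^2 I)^{-1} y."""
    K = [[rbf(xi, xj) + (noise ** 2 if i == j else 0.0)
          for j, xj in enumerate(X)] for i, xi in enumerate(X)]
    alpha = solve(K, y)
    return sum(rbf(x_star, xi) * ai for xi, ai in zip(X, alpha))

# With small noise, the posterior mean nearly interpolates the training labels.
X, y = [0.0, 1.0, 2.0], [0.0, 1.0, 0.0]
pred = gp_posterior_mean(X, y, 1.0, noise=0.01)
```

The sparse and grid-based approximations in the release replace the exact solve above with cheaper structured alternatives.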

About: A Java Toolbox for Scalable Probabilistic Machine Learning. Changes:
Detailed information can be found on the toolbox's web page.

About: Kernel-based Learning Platform (KeLP) is a Java framework that supports the implementation of kernel-based learning algorithms, as well as an agile definition of kernel functions over generic data representations, e.g. vectorial data or discrete structures. The framework has been designed to decouple kernel functions and learning algorithms through the definition of specific interfaces. Once a new kernel function has been implemented, it can be automatically adopted in all the available kernel-machine algorithms. KeLP includes different online and batch learning algorithms for classification, regression and clustering, as well as several kernel functions, ranging from vector-based to structural kernels. It makes it possible to build complex kernel-machine-based systems, leveraging JSON/XML interfaces to instantiate prediction models without writing a single line of code. Changes: In addition to minor bug fixes, this release includes:
Check out this new version from our repositories. The API Javadoc is already available. Your suggestions will be very valuable to us, so download and try KeLP 2.1.0!
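The decoupling of kernels and learning algorithms described above can be illustrated with a toy sketch (in Python rather than Java, and not KeLP's actual interfaces): a kernel perceptron written only against a kernel interface, so any new kernel plugs in unchanged.

```python
class Kernel:
    """Minimal kernel interface; learning algorithms see only `compute`."""
    def compute(self, a, b):
        raise NotImplementedError

class PolynomialKernel(Kernel):
    """Inhomogeneous polynomial kernel k(a, b) = (1 + a . b)^degree."""
    def __init__(self, degree=2):
        self.degree = degree
    def compute(self, a, b):
        return (1 + sum(x * y for x, y in zip(a, b))) ** self.degree

class KernelPerceptron:
    """Online learner written purely against the Kernel interface:
    swapping in another Kernel subclass needs no algorithm changes."""
    def __init__(self, kernel):
        self.kernel = kernel
        self.support = []  # (label, example) pairs stored on mistakes

    def predict(self, x):
        score = sum(y * self.kernel.compute(xi, x) for y, xi in self.support)
        return 1 if score >= 0 else -1

    def train(self, data, epochs=200):
        for _ in range(epochs):
            for x, y in data:
                if self.predict(x) != y:
                    self.support.append((y, x))

# XOR is not linearly separable, but the degree-2 polynomial kernel separates it.
xor = [((0.0, 0.0), -1), ((1.0, 1.0), -1), ((0.0, 1.0), 1), ((1.0, 0.0), 1)]
model = KernelPerceptron(PolynomialKernel(degree=2))
model.train(xor)
```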

About: MLweb is an open source project that aims at bringing machine learning capabilities into web pages and web applications, while keeping all computations on the client side. It includes (i) a JavaScript library to enable scientific computing within web pages, (ii) a JavaScript library implementing machine learning algorithms for classification, regression, clustering and dimensionality reduction, and (iii) a web application providing a MATLAB-like development environment. Changes:

About: Deep architectures are now very popular in machine learning. Deep Belief Networks (DBNs) are deep architectures that use a stack of Restricted Boltzmann Machines (RBMs) to create a powerful generative model from training data. DBNs have many abilities, such as feature extraction and classification, that are used in many applications including image processing, speech processing and text categorization. This toolbox, introduced in an accompanying paper, is object oriented and provides the most important abilities needed for the implementation of DBNs. According to the results of experiments conducted on the MNIST (image), ISOLET (speech) and 20 Newsgroups (text) datasets, the toolbox can automatically learn a good representation of the input from unlabeled data, with better discrimination between different classes. On all the aforementioned datasets, the obtained classification errors are comparable to those of state-of-the-art classifiers. In addition, the toolbox supports different sampling methods (e.g. Gibbs, CD, PCD and our new FEPCD method), different sparsity methods (quadratic, rate distortion and our new normal method), different RBM types (generative and discriminative), GPU-based computation, etc. The toolbox is user-friendly open source software for MATLAB and Octave and is freely available on the website. Changes: New in toolbox
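As an illustration of contrastive divergence (CD), one of the sampling methods the toolbox supports, here is a conceptual pure-Python CD-1 update for a small Bernoulli RBM (not the toolbox's MATLAB interface; the layer sizes and learning rate are illustrative):

```python
import math, random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class RBM:
    """Bernoulli RBM trained with one step of contrastive divergence (CD-1)."""
    def __init__(self, n_visible, n_hidden, seed=0):
        rng = random.Random(seed)
        self.W = [[rng.gauss(0, 0.1) for _ in range(n_hidden)]
                  for _ in range(n_visible)]
        self.b_v = [0.0] * n_visible  # visible biases
        self.b_h = [0.0] * n_hidden   # hidden biases
        self.rng = rng

    def hidden_probs(self, v):
        return [sigmoid(self.b_h[j] + sum(v[i] * self.W[i][j]
                for i in range(len(v)))) for j in range(len(self.b_h))]

    def visible_probs(self, h):
        return [sigmoid(self.b_v[i] + sum(h[j] * self.W[i][j]
                for j in range(len(h)))) for i in range(len(self.b_v))]

    def sample(self, probs):
        return [1.0 if self.rng.random() < p else 0.0 for p in probs]

    def cd1_update(self, v0, lr=0.1):
        # Positive phase: hidden activations driven by the data.
        h0 = self.hidden_probs(v0)
        # Negative phase: one Gibbs step (sample hidden, reconstruct visible).
        v1 = self.visible_probs(self.sample(h0))
        h1 = self.hidden_probs(v1)
        # Update: difference between data and reconstruction correlations.
        for i in range(len(v0)):
            for j in range(len(h0)):
                self.W[i][j] += lr * (v0[i] * h0[j] - v1[i] * h1[j])
        for i in range(len(v0)):
            self.b_v[i] += lr * (v0[i] - v1[i])
        for j in range(len(h0)):
            self.b_h[j] += lr * (h0[j] - h1[j])

rbm = RBM(n_visible=6, n_hidden=2)
for _ in range(100):
    rbm.cd1_update([1, 1, 1, 0, 0, 0])
    rbm.cd1_update([0, 0, 0, 1, 1, 1])
recon = rbm.visible_probs(rbm.sample(rbm.hidden_probs([1, 1, 1, 0, 0, 0])))
```

PCD differs only in that the negative-phase chain persists across updates instead of restarting from the data.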

About: Large-scale, distributed graph processing made easy. Changes: Bug fixes, graph generators

About: The GPstuff toolbox is a versatile collection of Gaussian process models and computational tools required for inference. The tools include, among others, various inference methods, sparse approximations and model assessment methods. Changes: 2016-06-09, Version 4.7. Development and release branches are available at https://github.com/gpstuff-dev/gpstuff
New features
Improvements
Bugfixes

About: ELKI is a framework for implementing data mining algorithms with support for index structures; it includes a wide variety of clustering and outlier detection methods. Changes: Additions and improvements from ELKI 0.7.0 to 0.7.1: Algorithm additions:
Important bug fixes:
UI improvements:
Smaller changes:

About: The apcluster package implements Frey and Dueck's affinity propagation clustering in R. The package further provides leveraged affinity propagation, exemplar-based agglomerative clustering, and various tools for visual analysis of clustering results. Changes:
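Affinity propagation exchanges "responsibility" and "availability" messages between data points until exemplars emerge. A conceptual Python sketch of the update rules (not the apcluster R API; the damping factor and preference value are illustrative):

```python
def affinity_propagation(S, damping=0.9, iters=200):
    """Affinity propagation on a similarity matrix S (conceptual sketch).

    S[i][k] is the similarity of point i to candidate exemplar k; the
    diagonal holds the 'preference' controlling how many exemplars emerge.
    """
    n = len(S)
    R = [[0.0] * n for _ in range(n)]  # responsibilities r(i, k)
    A = [[0.0] * n for _ in range(n)]  # availabilities a(i, k)
    for _ in range(iters):
        # r(i,k) <- s(i,k) - max_{k' != k} (a(i,k') + s(i,k'))
        for i in range(n):
            vals = [A[i][k] + S[i][k] for k in range(n)]
            order = sorted(range(n), key=lambda k: vals[k], reverse=True)
            best_k, best, second = order[0], vals[order[0]], vals[order[1]]
            for k in range(n):
                target = S[i][k] - (second if k == best_k else best)
                R[i][k] = damping * R[i][k] + (1 - damping) * target
        # a(i,k) <- min(0, r(k,k) + sum_{i' not in {i,k}} max(0, r(i',k)))
        # a(k,k) <- sum_{i' != k} max(0, r(i',k))
        for k in range(n):
            pos = [max(0.0, R[ip][k]) for ip in range(n)]
            total = sum(pos)
            for i in range(n):
                if i == k:
                    target = total - pos[k]
                else:
                    target = min(0.0, R[k][k] + total - pos[i] - pos[k])
                A[i][k] = damping * A[i][k] + (1 - damping) * target
    # Each point's exemplar maximizes a(i,k) + r(i,k).
    return [max(range(n), key=lambda k: A[i][k] + R[i][k]) for i in range(n)]

# Two well-separated groups on a line; similarity = negative squared distance.
points = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2]
S = [[-(a - b) ** 2 for b in points] for a in points]
for i in range(len(points)):
    S[i][i] = -10.0  # illustrative shared preference
labels = affinity_propagation(S)
```

A larger (less negative) preference yields more exemplars; the damping avoids message oscillation.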

About: A Java framework for statistical analysis and classification of biological sequences Changes:New classes and packages:
New features and improvements:

About: Kernel-Based Analysis of Biological Sequences Changes:

About: Variational Bayesian inference tools for Python Changes:

About: The Cognitive Foundry is a modular Java software library of machine learning components and algorithms designed for research and applications. Changes:

About: KEEL (Knowledge Extraction based on Evolutionary Learning) is an open source (GPLv3) Java software tool that can be used for a large number of different knowledge data discovery tasks. KEEL provides a simple GUI based on data flow to design experiments with different datasets and computational intelligence algorithms (paying special attention to evolutionary algorithms) in order to assess the behavior of the algorithms. It contains a wide variety of classical knowledge extraction algorithms, preprocessing techniques (training set selection, feature selection, discretization, imputation methods for missing values, among others), computational intelligence based learning algorithms, hybrid models, statistical methodologies for contrasting experiments, and so forth. It makes it possible to perform a complete analysis of new computational intelligence proposals in comparison with existing ones. Moreover, KEEL has been designed with a twofold goal: research and education. KEEL is also coupled with KEEL-dataset, a web page that aims to provide machine learning researchers with a set of benchmarks to analyze the behavior of learning methods. Specifically, it offers benchmarks already formatted in the KEEL format for classification (standard, multi-instance or imbalanced data), semi-supervised classification, regression, time series and unsupervised learning. A set of low-quality data benchmarks is also maintained in the repository. Changes: Initial Announcement on mloss.org.

About: The Universal Java Matrix Package (UJMP) is a data processing tool for Java. Unlike JAMA and Colt, it supports multithreading and is therefore much faster on current hardware. It not only supports matrices of double values, but handles every type of data as a matrix through a common interface, e.g. CSV files, Excel files, images, WAVE audio files, tables in SQL databases, and much more. Changes: Updated to version 0.3.0

About: An R package implementing statistical tests and post hoc tests to compare multiple algorithms across multiple problems. Changes: Initial Announcement on mloss.org.
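A typical omnibus test for this setting is the Friedman test, which ranks the algorithms within each problem and compares mean ranks. A conceptual Python sketch (not the R package's interface; the example scores are made up):

```python
def friedman_statistic(results):
    """Friedman chi-square statistic for comparing k algorithms on N problems.

    `results[p][a]` is the score of algorithm a on problem p (higher is
    better). Returns the statistic and the mean rank of each algorithm.
    """
    N, k = len(results), len(results[0])
    rank_sums = [0.0] * k
    for row in results:
        # Rank algorithms within this problem (rank 1 = best), averaging ties.
        order = sorted(range(k), key=lambda a: row[a], reverse=True)
        ranks = [0.0] * k
        i = 0
        while i < k:
            j = i
            while j + 1 < k and row[order[j + 1]] == row[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1  # average 1-based rank of the tied block
            for t in range(i, j + 1):
                ranks[order[t]] = avg
            i = j + 1
        for a in range(k):
            rank_sums[a] += ranks[a]
    mean_ranks = [rs / N for rs in rank_sums]
    chi2 = (12 * N / (k * (k + 1))) * (
        sum(r ** 2 for r in mean_ranks) - k * (k + 1) ** 2 / 4)
    return chi2, mean_ranks

# Three algorithms on four problems; algorithm 0 dominates.
scores = [[0.9, 0.8, 0.7], [0.85, 0.8, 0.6], [0.95, 0.7, 0.75], [0.9, 0.85, 0.8]]
chi2, mean_ranks = friedman_statistic(scores)
```

When the statistic exceeds the chi-square critical value, post hoc pairwise tests (e.g. with Holm correction) identify which algorithms differ.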
