About: libnabo is a fast K Nearest Neighbor library for low-dimensional spaces. Changes:
|
About: libDAI provides free & open source implementations of various (approximate) inference methods for graphical models with discrete variables, including Bayesian networks and Markov Random Fields. Changes:Release 0.3.2 fixes various bugs and adds GLC (Generalized Loop Corrections) written by Siamak Ravanbakhsh.
|
About: Recur is a collection of GStreamer plugins and language modelling tools based on recurrent neural networks. Changes:Initial Announcement on mloss.org.
|
About: Cluster quality evaluation software. Implements cluster quality metrics based on ground truths such as Purity, Entropy, Negentropy, F1 and NMI. It includes a novel approach to correct for pathological or ineffective clusterings called 'Divergence from a Random Baseline'. Changes:Moved project to GitHub.
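A small illustration of two of the listed metrics (purity and NMI) using numpy and scikit-learn; this is not the package's own interface, just the idea behind the scores:

    # Purity: fraction of points assigned to the majority ground-truth class of their cluster.
    # NMI is taken from scikit-learn as a stand-in for the package's implementation.
    import numpy as np
    from sklearn.metrics import normalized_mutual_info_score

    def purity(labels_true, labels_pred):
        labels_true = np.asarray(labels_true)
        labels_pred = np.asarray(labels_pred)
        total = 0
        for c in np.unique(labels_pred):
            members = labels_true[labels_pred == c]
            total += np.bincount(members).max()   # size of the dominant true class in this cluster
        return total / float(labels_true.size)

    truth    = [0, 0, 0, 1, 1, 1, 2, 2]
    clusters = [0, 0, 1, 1, 1, 1, 2, 2]
    print(purity(truth, clusters))                        # 0.875
    print(normalized_mutual_info_score(truth, clusters))  # NMI against the ground truth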
|
About: xgboost: eXtreme Gradient Boosting. It is an efficient and scalable implementation of the gradient boosting framework. The package includes an efficient linear model solver and tree learning algorithm. It can automatically do parallel computation with OpenMP, and it can be more than 10 times faster than existing gradient boosting packages such as gbm or sklearn.GBM. It supports various objective functions, including regression, classification and ranking. The package is made to be extensible, so that users are also allowed to define their own objectives easily. The newest version of xgboost now supports distributed learning on various platforms such as Hadoop and MPI, and scales to even larger problems. Changes:
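A minimal training sketch with the xgboost Python package, including a user-defined objective to illustrate the extensibility mentioned above; the data and parameter values are placeholders:

    # Train a boosted model from a DMatrix, passing a custom objective via `obj`.
    # Custom objectives return the per-row gradient and hessian of the loss.
    import numpy as np
    import xgboost as xgb

    X = np.random.rand(100, 5)
    y = np.random.rand(100)
    dtrain = xgb.DMatrix(X, label=y)

    def squared_error(preds, dtrain):
        labels = dtrain.get_label()
        grad = preds - labels          # gradient of 0.5*(pred - label)^2
        hess = np.ones_like(preds)     # its (constant) second derivative
        return grad, hess

    params = {'max_depth': 3, 'eta': 0.1}
    bst = xgb.train(params, dtrain, num_boost_round=20, obj=squared_error)
    pred = bst.predict(dtrain)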
|
About: Nilearn is a Python module for fast and easy statistical learning on NeuroImaging data. It leverages the scikit-learn Python toolbox for multivariate statistics with applications such as predictive modelling, classification, decoding, or connectivity analysis. Changes:Initial Announcement on mloss.org.
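A rough decoding sketch assuming nilearn's NiftiMasker plus a scikit-learn classifier; the file names below are placeholders, not real data:

    # Mask 4D fMRI data into a (samples x voxels) matrix, then classify with scikit-learn.
    import numpy as np
    from nilearn.input_data import NiftiMasker
    from sklearn.svm import LinearSVC

    masker = NiftiMasker(standardize=True)            # learns a brain mask and extracts voxel signals
    X = masker.fit_transform('func_images.nii.gz')    # shape (n_volumes, n_voxels); placeholder file
    y = np.loadtxt('labels.txt')                      # one condition label per volume; placeholder file

    clf = LinearSVC()
    clf.fit(X, y)                                     # any scikit-learn estimator works on the masked data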
|
About: A Theano framework for building and training neural networks. Changes:Initial Announcement on mloss.org.
|
About: A streaming inference and query engine for the Cross-Categorization model of tabular data. Changes:Initial Announcement on mloss.org.
|
About: A toolkit for hyperparameter optimization for machine learning algorithms. Changes:Initial Announcement on mloss.org.
|
About: Libcmaes is a multithreaded C++11 library (with Python bindings) for high-performance black-box stochastic optimization of difficult, possibly non-linear and non-convex functions, using CMA-ES (Covariance Matrix Adaptation Evolution Strategy). Libcmaes is useful to minimize / maximize any function, without requiring gradient information or differentiability. Changes:This is a major release, with several novelties, improvements and fixes, among which:
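A toy numpy illustration of the black-box setting: a plain isotropic evolution strategy that needs only function values. This is not libcmaes's API and omits the covariance adaptation that makes CMA-ES effective:

    # Minimize a black-box objective by sampling, ranking and recombining candidates.
    import numpy as np

    def sphere(x):                          # any objective works; no gradients required
        return float(np.sum(x ** 2))

    rng = np.random.default_rng(0)
    mean, sigma, lam, mu = np.ones(5), 0.5, 20, 5
    for _ in range(100):
        pop = mean + sigma * rng.standard_normal((lam, mean.size))   # sample lambda candidates
        best = pop[np.argsort([sphere(p) for p in pop])[:mu]]        # keep the mu best
        mean = best.mean(axis=0)                                     # recombine into the new mean
        sigma *= 0.95                                                # crude step-size decay, not CSA
    print(mean, sphere(mean))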
|
About: MALSS is a python module to facilitate machine learning tasks. Changes:Initial Announcement on mloss.org.
|
About: The fertilized forests project aims to provide an easy-to-use, easy-to-extend, yet fast library for decision forests. It summarizes the research in this field and provides a solid platform to extend it. Offering consistent interfaces to C++, Python and Matlab and being available for all major compilers gives users high flexibility in using the library. Changes:Initial Announcement on mloss.org.
|
About: rabit (Reliable Allreduce and Broadcast Interface) is a lightweight library that provides a fault-tolerant interface of Allreduce and Broadcast for portable, scalable and reliable distributed machine learning programs. Rabit programs can run on various platforms such as Hadoop and MPI, and no installation is needed. Rabit now supports k-means clustering and distributed xgboost: an extremely efficient distributed boosted tree (GBDT) toolkit. Changes:Initial Announcement on mloss.org.
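An allreduce sketch following rabit's basic tutorial pattern; the exact function and reduction-operator names may differ between rabit versions:

    # Each worker contributes a vector; allreduce returns the element-wise max on every worker.
    import numpy as np
    import rabit

    rabit.init()
    rank = rabit.get_rank()
    a = np.arange(3, dtype=np.float64) + rank   # each worker holds different data
    a = rabit.allreduce(a, rabit.MAX)           # element-wise maximum across all workers
    if rank == 0:
        print(a)
    rabit.finalize()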
|
About: Scalable tensor factorization. Changes:
|
About: pyGPs is a Python package for Gaussian process (GP) regression and classification for machine learning. Changes:Changelog pyGPs v1.3.2, December 15th 2014
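A usage sketch following the pattern shown in the pyGPs documentation; treat the exact method names as an assumption against your installed version:

    # GP regression on a toy 1-D problem with the default zero mean and RBF covariance.
    import numpy as np
    import pyGPs

    x = np.random.uniform(0, 10, (20, 1))           # training inputs
    y = np.sin(x) + 0.1 * np.random.randn(20, 1)    # noisy targets
    z = np.linspace(0, 10, 50).reshape(-1, 1)       # test inputs

    model = pyGPs.GPR()        # GP regression model
    model.getPosterior(x, y)   # fit the default model to the data
    model.optimize(x, y)       # optimize hyperparameters
    model.predict(z)           # predictive mean and variance at the test inputs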
|
About: C++ software for statistical classification, probability estimation and interpolation/non-linear regression using variable bandwidth kernel estimation. Changes:New in Version 0.9.8:
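A numpy sketch of the variable-bandwidth ("balloon") kernel idea, where each query point's bandwidth comes from its distance to the k-th nearest sample; this is not the C++ library's interface:

    # Variable-bandwidth kernel density estimate in 1-D: wider kernels where samples are sparse.
    import numpy as np

    def variable_bw_kde(samples, queries, k=10):
        d = np.abs(queries[:, None] - samples[None, :])   # pairwise distances, shape (n_queries, n_samples)
        h = np.sort(d, axis=1)[:, k - 1] + 1e-12          # per-query bandwidth = distance to k-th neighbor
        kern = np.exp(-0.5 * (d / h[:, None]) ** 2) / (h[:, None] * np.sqrt(2 * np.pi))
        return kern.mean(axis=1)                          # average Gaussian kernel contributions

    samples = np.random.randn(500)
    queries = np.linspace(-3, 3, 7)
    print(variable_bw_kde(samples, queries, k=20))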
|
About: Gaussian processes with general nonlinear likelihoods using the unscented transform or Taylor series linearisation. Changes:Initial Announcement on mloss.org.
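A numpy sketch of the unscented transform in one dimension, the building block named above (propagating a Gaussian through a nonlinearity via sigma points); this is not the package's API:

    # Approximate the mean and variance of g(x) for x ~ N(m, v) using sigma points.
    import numpy as np

    def unscented_transform_1d(g, m, v, kappa=2.0):
        n = 1
        spread = np.sqrt((n + kappa) * v)
        points = np.array([m, m + spread, m - spread])       # sigma points
        weights = np.array([kappa, 0.5, 0.5]) / (n + kappa)  # matching weights (sum to 1)
        gy = g(points)
        mean = np.dot(weights, gy)
        var = np.dot(weights, (gy - mean) ** 2)
        return mean, var

    # propagate N(0.5, 0.04) through a nonlinear link, e.g. exp
    print(unscented_transform_1d(np.exp, 0.5, 0.04))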
|
About: pySPACE is the abbreviation for "Signal Processing and Classification Environment in Python using YAML and supporting parallelization". It is modular software for processing large data streams that has been specifically designed to enable distributed execution and empirical evaluation of signal processing chains. Various signal processing algorithms (so-called nodes) are available within the software, ranging from finite impulse response filters through data-dependent spatial filters (e.g. CSP, xDAWN) to established classifiers (e.g. SVM, LDA). pySPACE incorporates the concept of nodes and node chains from the MDP framework. Due to its modular architecture, the software can easily be extended with new processing nodes and more general operations. Large-scale empirical investigations can be configured using simple text configuration files in the YAML format, executed on different (distributed) computing modalities, and evaluated using an interactive graphical user interface. Changes:improved testing, improved documentation, Windows compatibility, more algorithms
|
About: RLPy is a framework for performing reinforcement learning (RL) experiments in Python. RLPy provides a large library of agent and domain components, and a suite of tools to aid in experiments (parallelization, hyperparameter optimization, code profiling, and plotting). Changes:
|
About: Caffe aims to provide computer vision scientists with a clean, modifiable implementation of state-of-the-art deep learning algorithms. We believe that Caffe is the fastest available GPU CNN implementation. Caffe also provides seamless switching between CPU and GPU, which allows one to train models with fast GPUs and then deploy them on non-GPU clusters. Even in CPU mode, computing predictions on an image takes only 20 ms (in batch mode). Changes:LOTS of stuff: https://github.com/BVLC/caffe/releases/tag/v0.9999
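A sketch of the CPU/GPU switching described above, using pycaffe names from later Caffe releases (the v0.9999 Python wrapper may differ slightly); the model files are placeholders:

    # Load a trained network and run a forward pass, choosing GPU or CPU execution globally.
    import caffe

    caffe.set_mode_gpu()      # train / run on the GPU ...
    # caffe.set_mode_cpu()    # ... or switch to CPU-only deployment

    net = caffe.Net('deploy.prototxt', 'model.caffemodel', caffe.TEST)  # placeholder files
    out = net.forward()       # forward pass; outputs are keyed by blob name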
|