About: The DL-Learner framework contains several algorithms for supervised concept learning in Description Logics (DLs) and OWL. Changes: See http://dllearner.org/development/changelog/.

About: DRVQ is a C++ library implementation of dimensionality-recursive vector quantization, a fast vector quantization method in high-dimensional Euclidean spaces under arbitrary data distributions. It is an approximation of k-means that is practically constant in data size and applies to arbitrarily high dimensions, but can only scale to a few thousand centroids. As a by-product of training, a tree structure performs either exact or approximate quantization on the trained centroids; the latter is less precise but extremely fast. Changes: Initial Announcement on mloss.org.
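To illustrate the basic operation DRVQ accelerates, here is a brute-force nearest-centroid quantizer in Python (a textbook sketch, not DRVQ's tree-based C++ implementation; the function name and data are hypothetical):

```python
def quantize(x, centroids):
    # Return the index of the centroid nearest to x (squared Euclidean distance).
    return min(range(len(centroids)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(x, centroids[i])))

centroids = [(0.0, 0.0), (10.0, 10.0), (0.0, 10.0)]
print(quantize((1.0, 1.5), centroids))  # -> 0, nearest to (0.0, 0.0)
```

This brute-force assignment is O(k) per point; tree structures such as the one DRVQ builds during training avoid scanning all centroids, which is what makes approximate quantization so fast.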

About: dysii is a C++ library for distributed probabilistic inference and learning in large-scale dynamical systems. It provides methods such as the Kalman, unscented Kalman, and particle filters and [...] Changes: Initial Announcement on mloss.org.
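For context, the Kalman filter that dysii provides can be sketched in its simplest one-dimensional form (a textbook sketch with identity dynamics, not dysii's C++ API; parameter names are assumptions):

```python
def kalman_1d(z_seq, q=1e-3, r=0.1, x0=0.0, p0=1.0):
    # One-dimensional Kalman filter with identity dynamics.
    # q: process noise variance, r: measurement noise variance,
    # x0/p0: initial state estimate and its variance.
    x, p = x0, p0
    estimates = []
    for z in z_seq:
        p = p + q               # predict: variance grows by process noise
        k = p / (p + r)         # Kalman gain
        x = x + k * (z - x)     # update state with measurement z
        p = (1.0 - k) * p       # update variance
        estimates.append(x)
    return estimates

# Feeding a constant measurement pulls the estimate toward that value.
print(kalman_1d([1.0] * 20)[-1])
```

dysii generalizes this scalar recursion to multivariate state spaces and distributes the computation across machines.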

About: EANT Without Structural Optimization is used to learn a policy in completely or partially observable reinforcement learning domains with continuous state and action spaces. Changes: Initial Announcement on mloss.org.

About: The Easysvm package provides a set of tools, based on the Shogun toolbox, for training and testing SVMs in a simple way. Changes: Fixes for shogun 0.7.3.

About: Eblearn is an object-oriented C++ library that implements various [...] Changes: Initial Announcement on mloss.org.

About: Non-negative Sparse Coding, Discriminative Semi-supervised Learning, sparse probability graph. Changes: Initial Announcement on mloss.org.

About: Elefant is an open source software platform for the Machine Learning community, licensed under the Mozilla Public License (MPL) and developed using Python, C, and C++. We aim to make it the platform [...] Changes: This release contains the Stream module as a first step toward providing C++ library support. Stream aims to be a software framework for implementing large-scale online learning algorithms; "large scale", in this context, means data that does not fit in the memory of a standard desktop computer. Added Bundle Methods for Regularized Risk Minimization (BMRM), allowing the user to choose from a list of loss functions and solvers (linear and quadratic). Added the following loss classes: BinaryClassificationLoss, HingeLoss, SquaredHingeLoss, ExponentialLoss, LogisticLoss, NoveltyLoss, LeastMeanSquareLoss, LeastAbsoluteDeviationLoss, QuantileRegressionLoss, EpsilonInsensitiveLoss, HuberRobustLoss, PoissonRegressionLoss, MultiClassLoss, WinnerTakesAllMultiClassLoss, ScaledSoftMarginMultiClassLoss, SoftmaxMultiClassLoss, MultivariateRegressionLoss. The graphical user interface now provides extensive documentation for each component, explaining state variables and port descriptions. Changed saving and loading of experiments to XML (thereby avoiding storage of large input data structures). Unified automatic input checking via new static typing extending Python properties. Full support for recursive composition of larger components containing arbitrary statically typed state variables.
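To illustrate a few of the loss classes listed above, here are textbook scalar definitions of the hinge, squared hinge, and logistic losses for a label y in {-1, +1} and a prediction f (standalone Python sketches, not Elefant's actual class interfaces):

```python
import math

def hinge_loss(y, f):
    # Standard SVM hinge loss: zero when the margin y*f is at least 1.
    return max(0.0, 1.0 - y * f)

def squared_hinge_loss(y, f):
    # Smooth variant: squares the hinge, penalizing margin violations quadratically.
    return max(0.0, 1.0 - y * f) ** 2

def logistic_loss(y, f):
    # Negative log-likelihood of the logistic model; always positive.
    return math.log(1.0 + math.exp(-y * f))

print(hinge_loss(1, 2.0), hinge_loss(-1, 0.5), logistic_loss(1, 0.0))
```

Bundle methods such as BMRM minimize a regularized sum of losses like these by iteratively building a piecewise-linear lower bound from their subgradients.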

About: ELF provides many well-implemented supervised learners for classification and regression tasks, with support for ensemble learning. Changes: Initial Announcement on mloss.org.

About: ELKI is a framework for implementing data-mining algorithms, with support for index structures, that includes a wide variety of clustering and outlier detection methods. Changes: Additions and improvements from ELKI 0.6.0:
Clustering algorithms: k-means variants, CLARA, X-means, hierarchical clustering, LSDBC. EM clustering was refactored and moved into its own package; the new version is much more extensible. Parallel computation framework, and some parallelized algorithms.
Input:
Classification:
Evaluation: Internal cluster evaluation:
Statistical dependence measures:
Distance functions:
Preprocessing:
Indexing improvements:
Frequent Itemset Mining:
Uncertain clustering:
Outlier detection changes / smaller improvements:
Various:
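As a reference point for the k-means entry in the changelog above, here is a textbook sketch of Lloyd's algorithm in Python (ELKI itself is a Java framework; this is an illustration, not its implementation):

```python
def kmeans(points, k, iters=10):
    # Textbook Lloyd's algorithm; initial centroids are the first k points.
    centroids = [tuple(points[i]) for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign p to the nearest centroid (squared Euclidean distance).
            j = min(range(k),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centroids[i])))
            clusters[j].append(p)
        # Recompute each centroid as the mean of its assigned points.
        for i, cl in enumerate(clusters):
            if cl:
                centroids[i] = tuple(sum(c) / len(cl) for c in zip(*cl))
    return centroids

points = [(0.0, 0.0), (10.0, 10.0), (0.5, 0.5), (10.5, 9.5)]
print(kmeans(points, 2))  # two centroids near (0.25, 0.25) and (10.25, 9.75)
```

Frameworks like ELKI speed up the assignment step with index structures and offer many initialization strategies beyond the naive one used here.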
