Projects running under Platform Independent.

ELKI 0.7.0

by erich - November 27, 2015, 18:23:16 CET [ Project Homepage BibTeX BibTeX for corresponding Paper Download ] 15682 views, 2854 downloads, 4 subscriptions

About: ELKI is a framework for implementing data-mining algorithms with support for index structures; it includes a wide variety of clustering and outlier detection methods.


Additions and Improvements from ELKI 0.6.0:

ELKI is now available on Maven:

    groupId:    de.lmu.ifi.dbs.elki
    artifactId: elki
    version:    0.7.0
    packaging:  jar

Please clone the example project (see the project homepage) for a minimal project example.
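
In a Maven build, these coordinates would be declared roughly as follows (a minimal sketch; only the coordinates above come from the announcement, and jar is the default packaging):

    <dependency>
      <groupId>de.lmu.ifi.dbs.elki</groupId>
      <artifactId>elki</artifactId>
      <version>0.7.0</version>
    </dependency>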

Uncertain data types, and clustering algorithms for uncertain data.

Major refactoring of distances: Distance values were removed, along with support for non-double-valued distance functions (in particular, DoubleDistance is gone). While this reduces the generality of ELKI, it allowed us to remove about 2.5% of the codebase, because separate optimized code paths for double distances are no longer needed. Generics for distances were present in almost every distance-based algorithm, so this also considerably reduces the use of generics. Support for non-double-valued distances can trivially be added again, e.g. by adding the specialization one level higher: at the query level instead of the distance level. In the process, we also removed the generics from NumberVector. The object-based get was deprecated for good reason long ago; methods such as doubleValue are more efficient (even for non-DoubleVectors).
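
As a small illustration of the preferred primitive access, a minimal sketch (the summing helper is hypothetical and not part of ELKI; 0-based dimension indexing is assumed, as in ELKI 0.7):

    import de.lmu.ifi.dbs.elki.data.NumberVector;

    public final class VectorSumExample {
      // Hypothetical helper: sum the attribute values of a vector.
      // Uses the primitive doubleValue(dim) accessor instead of the
      // deprecated object-based get, avoiding autoboxing even for
      // non-DoubleVector implementations.
      public static double sumOfValues(NumberVector vec) {
        double sum = 0.;
        for(int d = 0; d < vec.getDimensionality(); d++) {
          sum += vec.doubleValue(d);
        }
        return sum;
      }
    }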

Dropped some long-deprecated classes.


k-means clustering:

  • Speedups for some initialization heuristics.

  • K-means++ initialization no longer squares distances (again).

  • The farthest-point heuristic now uses the minimum instead of the sum of distances (and was renamed accordingly).

  • Additional evaluation criteria.

  • Elkan's and Hamerly's faster k-means variants.

CLARA clustering.


Hierarchical clustering:

  • Renamed naive algorithm to AGNES.

  • Anderberg's algorithm (faster than AGNES, slower than SLINK).

  • CLINK for complete linkage clustering in O(n²) time, O(n) memory.

  • Simple extraction from HDBSCAN.

  • "Optimal" extraction from HDBSCAN.

  • HDBSCAN, in two variants.

LSDBC clustering.

EM clustering was refactored and moved into its own package. The new version is much more extensible.

OPTICS clustering:

  • Added a list-based variant of OPTICS to complement our heap-based version.

  • FastOPTICS (contributed by Johannes Schneider).

  • Improved OPTICS Xi cluster extraction.

Outlier detection:

  • KDEOS outlier detection (SDM14).

  • k-means-based outlier detection (distance to centroid) and a Silhouette-coefficient-based approach (which does not work too well on the toy data sets: the lowest silhouette values usually occur where two clusters touch).

  • Bug fix in kNN weight when distances are tied and the kNN query yields more than k results.

  • The kNN and kNN weight outlier detectors have their k parameter changed: the old 2NN outlier is now the 1NN outlier, as commonly understood in the classification literature (the 1 nearest neighbor other than the query object), whereas in the database literature the 1NN is usually the query object itself. You can easily get the old result back by decreasing k by one.

  • LOCI implementation is now only O(n^3 log n) instead of O(n^4).

  • Local Isolation Coefficient (LIC).

  • IDOS outlier detection with intrinsic dimensionality.

  • Baseline intrinsic dimensionality outlier detection.

  • Variance-of-Volumes outlier detection (VOV).

Parallel computation framework, and some parallelized algorithms:

  • Parallel k-means.

  • Parallel LOF and variants.

LibSVM format parser.

kNN classification (with index acceleration).

Internal cluster evaluation:

  • Silhouette index (see the formula after this list).

  • Simplified Silhouette index (faster).

  • Davies-Bouldin index.

  • PBM index.

  • Variance-Ratio Criterion.

  • Sum of squared errors.

  • C-Index.

  • Concordant pair indexes (Gamma, Tau).

  • Different noise handling strategies for internal indexes.
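
For reference, the Silhouette index mentioned above follows the standard textbook definition (this recap is not specific to the ELKI implementation): with a(i) the mean distance of object i to the other members of its own cluster, and b(i) the smallest mean distance of i to the members of any other cluster,

    s(i) = ( b(i) - a(i) ) / max( a(i), b(i) ),   -1 <= s(i) <= 1

The index is the mean of s(i) over all objects; values near 1 indicate compact, well-separated clusters.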

Statistical dependence measures:

  • Distance correlation dCor.

  • Hoeffding's D.

  • Some divergence / mutual information measures.

Distance functions:

  • Big refactoring.

  • Time series distances refactored; they now allow variable-length series.

  • Hellinger distance and kernel function.


Faster MDS implementation using power iterations.

Indexing improvements:

  • Precomputed distance matrix "index".

  • iDistance index (static only).

  • Inverted-list index for sparse data and cosine/arccosine distance.

  • Cover tree index (static only).

  • Additional LSH hash functions.

Frequent Itemset Mining:

  • Improved APRIORI implementation.

  • FP-Growth added.

  • Eclat (basic version only) added.

Uncertain clustering:

  • Discrete and continuous data models.

  • FDBSCAN clustering.

  • UKMeans clustering.

  • CKMeans clustering.

  • Representative Uncertain Clustering (Meta-algorithm).

  • Center-of-mass meta-clustering (allows using other clustering algorithms on uncertain objects).


Several estimators for intrinsic dimensionality.

The MiniGUI has two "secret" new options, -minigui.last and -minigui.autorun, which load the last saved configuration and run it, for convenience.
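
For example, assuming the bundled release jar is named elki.jar (its default main class launches the MiniGUI), restoring and immediately running the last configuration might look like this:

    java -jar elki.jar -minigui.last -minigui.autorun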

The logging API has been extended to make logging more convenient in a number of places (saving some lines of code for progress logging and timing).

KeLP 2.0.0

by kelpadmin - November 26, 2015, 16:14:53 CET [ Project Homepage BibTeX BibTeX for corresponding Paper Download ] 3963 views, 989 downloads, 3 subscriptions

About: Kernel-based Learning Platform (KeLP) is a Java framework that supports the implementation of kernel-based learning algorithms, as well as an agile definition of kernel functions over generic data representations, e.g. vectorial data or discrete structures. The framework has been designed to decouple kernel functions and learning algorithms through the definition of specific interfaces. Once a new kernel function has been implemented, it can be automatically adopted in all the available kernel-machine algorithms. KeLP includes different online and batch learning algorithms for classification, regression and clustering, as well as several kernel functions, ranging from vector-based to structural kernels. It allows building complex kernel-machine-based systems, leveraging JSON/XML interfaces to instantiate classifiers without writing a single line of code.


This is a major release that includes brand new features as well as a renewed architecture of the entire project.

Now KeLP is organized in four Maven projects (a sample dependency declaration is sketched after this list):

  • kelp-core: it contains the infrastructure of abstract classes and interfaces to work with KeLP. Furthermore, some implementations of algorithms, kernels and representations are included, to provide a base operative environment.

  • kelp-additional-kernels: it contains several kernel functions that extend the set of kernels made available in the kelp-core project. Moreover, this project implements the specific representations required to enable the application of such kernels. In this project the following kernel functions are considered: Sequence kernels, Tree kernels and Graph kernels.

  • kelp-additional-algorithms: it contains several learning algorithms extending the set of algorithms provided in the kelp-core project, e.g. the C-Support Vector Machine or ν-Support Vector Machine learning algorithms. In particular, advanced learning algorithms for classification and regression can be found in this package. The algorithms are grouped in: 1) Batch Learning, where the complete training dataset is supposed to be entirely available during the learning phase; 2) Online Learning, where individual examples are exploited one at a time to incrementally acquire the model.

  • kelp-full: this is the complete package of KeLP. It aggregates the previous modules in one jar. It also contains a set of fully functioning examples showing how to implement a learning system with KeLP. The usage of batch as well as online learning algorithms is shown here. Different examples cover the usage of standard kernels, Tree Kernels and Sequence Kernels, with caching mechanisms.
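
To depend on the complete package from a Maven build, a declaration along these lines should work (a sketch: the group id it.uniroma2.sag.kelp is an assumption based on the project's naming, so verify the exact coordinates on the project homepage):

    <dependency>
      <!-- group id assumed; check the project homepage -->
      <groupId>it.uniroma2.sag.kelp</groupId>
      <artifactId>kelp-full</artifactId>
      <version>2.0.0</version>
    </dependency>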

Furthermore this new release includes:

  • CsvDatasetReader: allows reading datasets in CSV format.

  • DCDLearningAlgorithm: an implementation of the Dual Coordinate Descent learning algorithm.

  • Methods for checking the consistency of a dataset.

Check out this new version from our repositories. The API Javadoc is already available. Your suggestions will be very valuable to us, so download and try KeLP 2.0.0!

KeBABS 1.4.1

by UBod - November 3, 2015, 11:33:46 CET [ Project Homepage BibTeX BibTeX for corresponding Paper Download ] 8215 views, 1472 downloads, 3 subscriptions

About: Kernel-Based Analysis of Biological Sequences

  • new method to compute prediction profiles from models trained with mixture kernels
  • correction for position specific kernel with offsets
  • corrections for prediction profile of motif kernel
  • additional hint on help page of kbsvm

BayesPy 0.4.1

by jluttine - November 2, 2015, 13:40:09 CET [ Project Homepage BibTeX Download ] 9842 views, 2331 downloads, 3 subscriptions

About: Variational Bayesian inference tools for Python

  • Define extra dependencies needed to build the documentation

Cognitive Foundry 3.4.2

by Baz - October 30, 2015, 06:53:03 CET [ Project Homepage BibTeX BibTeX for corresponding Paper Download ] 23279 views, 3909 downloads, 4 subscriptions

About: The Cognitive Foundry is a modular Java software library of machine learning components and algorithms designed for research and applications.

  • General:
    • Upgraded MTJ to 1.0.3.
  • Common:
    • Added package for hash function computation including Eva, FNV-1a, MD5, Murmur2, Prime, SHA1, SHA2
    • Added callback-based forEach implementations to Vector and InfiniteVector, which can be faster for iterating through some vector types.
    • Optimized DenseVector by removing a layer of indirection.
    • Added method to compute set of percentiles in UnivariateStatisticsUtil and fixed issue with percentile interpolation.
    • Added utility class for enumerating combinations.
    • Adjusted ScalarMap implementation hierarchy.
    • Added method for copying a map to VectorFactory and moved createVectorCapacity up from SparseVectorFactory.
    • Added method for creating square identity matrix to MatrixFactory.
    • Added Random implementation that uses a cached set of values.
  • Learning:
    • Implemented feature hashing.
    • Added factory for random forests.
    • Implemented uniform distribution over integer values.
    • Added Chi-squared similarity.
    • Added KL divergence.
    • Added general conditional probability distribution.
    • Added interfaces for Regression, UnivariateRegression, and MultivariateRegression.
    • Fixed null pointer exception that can happen in K-means with an empty cluster.
    • Fixed name of maxClusters property on AgglomerativeClusterer (was called maxMinDistance).
  • Text:
    • Improvements to LDA Gibbs sampler.

MLweb 0.1.2

by lauerfab - October 9, 2015, 11:55:52 CET [ Project Homepage BibTeX Download ] 1463 views, 401 downloads, 3 subscriptions

About: MLweb is an open source project that aims at bringing machine learning capabilities into web pages and web applications, while keeping all computations on the client side. It includes (i) a JavaScript library to enable scientific computing within web pages, (ii) a JavaScript library implementing machine learning algorithms for classification, regression, clustering and dimensionality reduction, and (iii) a web application providing a Matlab-like development environment.

  • Add Regression:AutoReg method
  • Add KernelRidgeRegression tuning function
  • More efficient predictions for KRR, SVM, SVR
  • Add BFGS optimization method
  • Faster QR, SVD and eigendecomposition
  • Better support for sparse vectors and matrices
  • Add linear algebra benchmark at
  • Fix plots in LALOlib/ML.js
  • Fix cross-origin issues in new MLlab()
  • Small bug fixes

KEEL (Knowledge Extraction based on Evolutionary Learning) 3.0

by keel - September 18, 2015, 12:38:54 CET [ Project Homepage BibTeX Download ] 618 views, 184 downloads, 1 subscription

About: KEEL (Knowledge Extraction based on Evolutionary Learning) is an open source (GPLv3) Java software tool that can be used for a large number of different knowledge data discovery tasks. KEEL provides a simple GUI based on data flow to design experiments with different datasets and computational intelligence algorithms (paying special attention to evolutionary algorithms) in order to assess the behavior of the algorithms. It contains a wide variety of classical knowledge extraction algorithms, preprocessing techniques (training set selection, feature selection, discretization, imputation methods for missing values, among others), computational intelligence based learning algorithms, hybrid models, statistical methodologies for contrasting experiments, and so forth. It allows performing a complete analysis of new computational intelligence proposals in comparison with existing ones. Moreover, KEEL has been designed with a two-fold goal: research and education. KEEL is also coupled with KEEL-dataset, a webpage that aims at providing machine learning researchers with a set of benchmarks to analyze the behavior of the learning methods. Concretely, it is possible to find benchmarks already formatted in the KEEL format for classification (such as standard, multi-instance or imbalanced data), semi-supervised classification, regression, time series and unsupervised learning. Also, a set of low-quality data benchmarks is maintained in the repository.


Initial Announcement on mloss.org.

About: Deep architectures are nowadays very popular in machine learning. Deep Belief Networks (DBNs) are deep architectures that use a stack of Restricted Boltzmann Machines (RBMs) to create a powerful generative model from training data. DBNs offer capabilities such as feature extraction and classification that are used in many applications, e.g. image and speech processing. According to the results of experiments conducted on the MNIST (image), ISOLET (speech), and 20 Newsgroups (text) datasets, the toolbox can automatically learn a good representation of the input from unlabeled data, with better discrimination between different classes. In addition, the toolbox supports different sampling methods (e.g. Gibbs, CD, PCD and our new FEPCD method), different sparsity methods (quadratic, rate distortion and our new normal method), different RBM types (generative and discriminative), GPU computation, etc. The toolbox is a user-friendly open source software and is freely available on the website.


New in the toolbox:

  • Fixed a bug in the computeBatchSize function on Linux.
  • Revised some demo scripts.

Java Data Mining Package 0.3.0

by arndt - August 19, 2015, 15:44:46 CET [ Project Homepage BibTeX BibTeX for corresponding Paper Download ] 1129 views, 208 downloads, 3 subscriptions

About: A Java library for machine learning and data analytics


Initial Announcement on mloss.org.

Universal Java Matrix Package 0.3.0

by arndt - July 31, 2015, 14:23:14 CET [ Project Homepage BibTeX BibTeX for corresponding Paper Download ] 12332 views, 2339 downloads, 3 subscriptions

About: The Universal Java Matrix Package (UJMP) is a data processing tool for Java. Unlike JAMA and Colt, it supports multi-threading and is therefore much faster on current hardware. It not only supports matrices with double values, but handles every type of data as a matrix through a common interface, e.g. CSV files, Excel files, images, WAVE audio files, tables in SQL databases, and much more.
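
As a small illustration of this common interface, a sketch along these lines (method names recalled from the UJMP API; treat them as assumptions and check the Javadoc):

    import org.ujmp.core.Matrix;

    public class UjmpDemo {
      public static void main(String[] args) {
        // dense 3x3 matrix of random values and a 3x3 identity matrix
        Matrix a = Matrix.Factory.rand(3, 3);
        Matrix b = Matrix.Factory.eye(3, 3);
        // matrix multiplication through the common Matrix interface
        Matrix c = a.mtimes(b);
        System.out.println(c);
      }
    }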


Updated to version 0.3.0

JMLR GPstuff 4.6

by avehtari - July 15, 2015, 15:08:06 CET [ Project Homepage BibTeX BibTeX for corresponding Paper Download ] 26562 views, 6268 downloads, 2 subscriptions

Rating: 5.0/5 stars (based on 1 vote)

About: The GPstuff toolbox is a versatile collection of Gaussian process models and computational tools required for inference. The tools include, among others, various inference methods, sparse approximations and model assessment methods.


2015-07-09 Version 4.6

Development and release branches available at

New features

  • Pareto smoothed importance sampling (Vehtari & Gelman, 2015) is now used for:

    • importance sampling leave-one-out cross-validation (gpmc_loopred.m)

    • importance sampling integration over hyperparameters (gp_ia.m)

    • the importance sampling part of logistic Gaussian process density estimation (lgpdens.m)

  • references:

    • Aki Vehtari and Andrew Gelman (2015). Pareto smoothed importance sampling. arXiv preprint arXiv:1507.02646.
    • Aki Vehtari, Andrew Gelman and Jonah Gabry (2015). Efficient implementation of leave-one-out cross-validation and WAIC for evaluating fitted Bayesian models.
  • New covariance functions

    • gpcf_additive creates a mixture over products of kernels for each dimension. Reference: Duvenaud, D. K., Nickisch, H., & Rasmussen, C. E. (2011). Additive Gaussian processes. In Advances in Neural Information Processing Systems, pp. 226-234.
    • gpcf_linearLogistic corresponds to the logistic mean function
    • gpcf_linearMichelismenten corresponds to the Michaelis-Menten mean function

Improvements: faster EP moment calculation for lik_logit.

Several minor bugfixes

JMLR GPML: Gaussian Processes for Machine Learning Toolbox 3.6

by hn - July 6, 2015, 12:31:28 CET [ Project Homepage BibTeX Download ] 28998 views, 6772 downloads, 4 subscriptions

Rating: 5.0/5 stars (based on 2 votes)

About: The GPML toolbox is a flexible and generic Octave 3.2.x and Matlab 7.x implementation of inference and prediction in Gaussian Process (GP) models.

  • added a new inference function infGrid_Laplace, allowing the use of non-Gaussian likelihoods for large grids

  • fixed a bug due to Octave evaluating norm([]) to a tiny nonzero value; modified all lik/lik*.m functions (reported by Philipp Richter)

  • small bugfixes in covGrid and infGrid

  • bugfix in predictive variance of likNegBinom due to Seth Flaxman

  • bugfix in infFITC_Laplace as suggested by Wu Lin

  • bugfix in covPP{iso,ard}

About: R package implementing statistical tests and post hoc tests to compare multiple algorithms on multiple problems.


Initial Announcement on mloss.org.

Simple Generalized Learning Vector Quantization 1.0

by fmschleif - June 4, 2015, 10:49:49 CET [ Project Homepage BibTeX BibTeX for corresponding Paper Download ] 1306 views, 348 downloads, 2 subscriptions

About: Simple and hopefully clean and easy to follow implementation of the Generalized Learning Vector Quantizer (GLVQ) with variants for metric adaptation (RGLVQ, GMLVQ, LiRaM).


Initial Announcement on mloss.org.

MIPS, The migrant implementation system 1.0

by thomasfannes - April 28, 2015, 15:07:05 CET [ Project Homepage BibTeX Download ] 1160 views, 352 downloads, 3 subscriptions

About: MIPS is a software library for state-of-the-art graph mining algorithms. The library is platform independent, written in C++03, and aims at implementing generic and efficient graph mining algorithms.


description update

Choquistic Utilitaristic Regression 1.00

by AliFall - April 17, 2015, 11:31:20 CET [ BibTeX BibTeX for corresponding Paper Download ] 981 views, 401 downloads, 2 subscriptions

About: This Matlab package implements a method for learning a choquistic regression model (represented by a corresponding Moebius transform of the underlying fuzzy measure), using the maximum likelihood approach proposed in [2], equipped with sigmoid normalization; see [1].


Initial Announcement on mloss.org.

Blocks 0.1

by bartvm - March 30, 2015, 22:25:02 CET [ Project Homepage BibTeX Download ] 1173 views, 335 downloads, 3 subscriptions

About: A Theano framework for building and training neural networks


Initial Announcement on mloss.org.

apsis 0.1.1

by fdiehl - March 17, 2015, 08:27:02 CET [ Project Homepage BibTeX BibTeX for corresponding Paper Download ] 1369 views, 316 downloads, 2 subscriptions

About: A toolkit for hyperparameter optimization for machine learning algorithms.


Initial Announcement on mloss.org.

JMLR Mulan 1.5.0

by lefman - February 23, 2015, 21:19:05 CET [ Project Homepage BibTeX BibTeX for corresponding Paper Download ] 20147 views, 7255 downloads, 2 subscriptions

About: Mulan is an open-source Java library for learning from multi-label datasets. Multi-label datasets consist of training examples of a target function that has multiple binary target variables. This means that each item of a multi-label dataset can be a member of multiple categories or annotated by many labels (classes). This is actually the nature of many real-world problems such as semantic annotation of images and video, web page categorization, direct marketing, functional genomics and music categorization into genres and emotions.
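
Training a multi-label learner in Mulan then follows a simple pattern; a minimal sketch based on Mulan's documented API (the data file names are placeholders):

    import mulan.classifier.lazy.MLkNN;
    import mulan.data.MultiLabelInstances;

    public class MulanDemo {
      public static void main(String[] args) throws Exception {
        // ARFF file with features and labels; XML file declaring the label attributes
        MultiLabelInstances dataset =
            new MultiLabelInstances("emotions.arff", "emotions.xml");
        MLkNN learner = new MLkNN(); // multi-label kNN classifier
        learner.build(dataset);      // train on the multi-label dataset
      }
    }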



  • Added the MLCSSP algorithm (from ICML 2013)
  • Enhancements of multi-target regression capabilities
  • Improved CLUS support
  • Added pairwise classifier and pairwise transformation


  • Providing training data in the Evaluator is unnecessary in the case of specific measures.
  • Examples with missing ground truth are not skipped for measures that handle missing values.
  • Added logistic and squared error losses and measures.

Bug fixes

  • IndexOutOfBounds in calculation of MiAP and GMiAP
  • Bug fix in
  • When in rank/score mode, the meta-data contained additional unnecessary attributes. (Newton Spolaor)

API changes

  • Upgrade to Java 7
  • Upgrade to Weka 3.7.10


  • Small changes and improvements in the wrapper classes for the CLUS library
  • (new experiment)
  • Enumeration is now used for specifying the type of meta-data. (Newton Spolaor)

Machine Learning Support System MALSS 0.5.0

by canard0328 - February 20, 2015, 15:56:02 CET [ Project Homepage BibTeX Download ] 1164 views, 310 downloads, 1 subscription

About: MALSS is a Python module to facilitate machine learning tasks.


Initial Announcement on mloss.org.
