All entries.

JMLR DLLearner 1.0

by Jens - February 13, 2015, 11:39:46 CET [ Project Homepage BibTeX BibTeX for corresponding Paper Download ] 17039 views, 4201 downloads, 6 subscriptions

Rating: 4.5 / 5 (based on 3 votes)

About: The DL-Learner framework contains several algorithms for supervised concept learning in Description Logics (DLs) and OWL.

Changes:

See http://dl-learner.org/development/changelog/.


Auto encoder Based Data Clustering Toolkit 1.0

by openpr_nlpr - February 10, 2015, 08:30:55 CET [ Project Homepage BibTeX BibTeX for corresponding Paper Download ] 1093 views, 196 downloads, 2 subscriptions

About: The auto-encoder based data clustering toolkit provides a quick start for clustering based on deep auto-encoder nets. This toolkit can cluster data in feature space with deep nonlinear nets.
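
As a rough, hedged illustration of the underlying recipe (not this toolkit's API), the sketch below trains a small autoencoder and then runs k-means on the encoded features using scikit-learn; the bottleneck size and cluster count are arbitrary choices.

    # Sketch of auto-encoder based clustering: learn a nonlinear encoding by
    # reconstructing the input, then cluster in the encoded feature space.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 20))                     # toy data; replace with your own

    ae = MLPRegressor(hidden_layer_sizes=(5,),         # 5-dimensional bottleneck
                      activation="relu", max_iter=2000, random_state=0)
    ae.fit(X, X)                                       # train to reconstruct the input

    # Encode: apply the first (input -> hidden) layer by hand.
    Z = np.maximum(0, X @ ae.coefs_[0] + ae.intercepts_[0])

    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(Z)
    print(labels[:10])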

Changes:

Initial Announcement on mloss.org.


Histogram of Oriented Gradient 1.0

by openpr_nlpr - February 10, 2015, 08:27:55 CET [ Project Homepage BibTeX BibTeX for corresponding Paper Download ] 910 views, 176 downloads, 2 subscriptions

About: This is an exact implementation of the Histogram of Oriented Gradients (HOG) descriptor as described in the paper by Dalal and Triggs.
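
For a quick reference point, the following sketch computes a Dalal-Triggs style HOG descriptor with scikit-image's independent implementation; it is not the code distributed by this project, and the cell/block parameters simply follow the original paper's defaults.

    # Sketch: a Dalal-Triggs style HOG descriptor via scikit-image.
    import numpy as np
    from skimage.feature import hog

    image = np.random.rand(128, 64)                # toy grayscale 128x64 detection window
    descriptor = hog(image,
                     orientations=9,               # 9 orientation bins, as in the paper
                     pixels_per_cell=(8, 8),
                     cells_per_block=(2, 2),
                     block_norm="L2-Hys")
    print(descriptor.shape)                        # flattened block-normalized histograms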

Changes:

Initial Announcement on mloss.org.


JMLR Information Theoretical Estimators 0.61

by szzoli - February 8, 2015, 14:04:27 CET [ Project Homepage BibTeX BibTeX for corresponding Paper Download ] 72180 views, 14501 downloads, 2 subscriptions

About: ITE (Information Theoretical Estimators) is capable of estimating many different variants of entropy, mutual information, divergence, association measures, cross quantities and kernels on distributions. Thanks to its highly modular design, ITE additionally supports (i) combinations of the estimation techniques, (ii) the easy construction and embedding of novel information theoretical estimators, and (iii) their immediate application in information theoretical optimization problems.
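
As a hedged, self-contained illustration of the estimator family ITE covers (not ITE's own code), the sketch below implements the classical k-nearest-neighbor (Kozachenko-Leonenko / Kraskov et al.) Shannon entropy estimator in NumPy/SciPy.

    # k-NN Shannon entropy estimator: H ~ psi(N) - psi(k) + log(c_d) + (d/N) sum log eps_i,
    # where eps_i is twice the distance to the k-th neighbor and c_d the unit-ball volume.
    import numpy as np
    from scipy.spatial import cKDTree
    from scipy.special import digamma, gammaln

    def knn_entropy(x, k=3):
        """Differential entropy estimate (in nats) for samples x of shape (n, d)."""
        n, d = x.shape
        tree = cKDTree(x)
        # distance to the k-th neighbor (index 0 of the query is the point itself)
        eps = tree.query(x, k=k + 1)[0][:, k]
        log_c_d = (d / 2.0) * np.log(np.pi) - gammaln(1 + d / 2.0)
        return digamma(n) - digamma(k) + log_c_d + d * np.mean(np.log(2 * eps))

    x = np.random.default_rng(0).normal(size=(2000, 1))
    print(knn_entropy(x))        # ~0.5 * log(2*pi*e) = 1.42 for a standard Gaussian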

Changes:
  • Explicit additive constant computation in generalized kNN-based Rényi entropy estimators: an enhancement suggestion has been added.

  • Analytical value computation of the exponentiated Jensen-Rényi kernel-2: simplified.


JMLR SHOGUN 4.0.0

by sonne - February 5, 2015, 09:09:37 CET [ Project Homepage BibTeX BibTeX for corresponding Paper Download ] 96299 views, 13549 downloads, 6 subscriptions

Rating: 3 / 5 (based on 6 votes)

About: The SHOGUN machine learning toolbox focuses on large-scale learning methods, in particular Support Vector Machines (SVMs), and provides interfaces to Python, Octave, MATLAB, R and the command line.
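
As a hedged sketch of the Python route (class names and constructor signatures follow the 4.x modular "modshogun" examples and may need adjustment for a particular installation), training a Gaussian-kernel SVM looks roughly like this:

    # Sketch: Gaussian-kernel SVM through SHOGUN's modular Python interface.
    import numpy as np
    from modshogun import RealFeatures, BinaryLabels, GaussianKernel, LibSVM

    rng = np.random.default_rng(0)
    X = rng.normal(size=(2, 100))                  # SHOGUN expects one example per column
    y = np.sign(X[0] + X[1])                       # labels in {-1, +1}

    feats = RealFeatures(X)
    labels = BinaryLabels(y)
    kernel = GaussianKernel(feats, feats, 2.0)     # kernel width 2.0
    svm = LibSVM(1.0, kernel, labels)              # regularization constant C = 1.0
    svm.train()

    predictions = svm.apply(RealFeatures(X)).get_labels()
    print(np.mean(predictions == y))               # training accuracy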

Changes:

This release features the work of our 8 GSoC 2014 students [student; mentors]:

  • OpenCV Integration and Computer Vision Applications [Abhijeet Kislay; Kevin Hughes]
  • Large-Scale Multi-Label Classification [Abinash Panda; Thoralf Klein]
  • Large-scale structured prediction with approximate inference [Jiaolong Xu; Shell Hu]
  • Essential Deep Learning Modules [Khaled Nasr; Sergey Lisitsyn, Theofanis Karaletsos]
  • Fundamental Machine Learning: decision trees, kernel density estimation [Parijat Mazumdar; Fernando Iglesias]
  • Shogun Missionary & Shogun in Education [Saurabh Mahindre; Heiko Strathmann]
  • Testing and Measuring Variable Interactions With Kernels [Soumyajit De; Dino Sejdinovic, Heiko Strathmann]
  • Variational Learning for Gaussian Processes [Wu Lin; Heiko Strathmann, Emtiyaz Khan]

It also contains several cleanups and bugfixes:

Features

  • New Shogun project description [Heiko Strathmann]
  • ID3 algorithm for decision tree learning [Parijat Mazumdar]
  • New modes for PCA matrix factorizations: SVD & EVD, in-place or reallocating [Parijat Mazumdar]
  • Add Neural Networks with linear, logistic and softmax neurons [Khaled Nasr]
  • Add kernel multiclass strategy examples in multiclass notebook [Saurabh Mahindre]
  • Add decision trees notebook containing examples for ID3 algorithm [Parijat Mazumdar]
  • Add sudoku recognizer ipython notebook [Alejandro Hernandez]
  • Add in-place subsets on features, labels, and custom kernels [Heiko Strathmann]
  • Add Principal Component Analysis notebook [Abhijeet Kislay]
  • Add Multiple Kernel Learning notebook [Saurabh Mahindre]
  • Add Multi-Label classes to enable Multi-Label classification [Thoralf Klein]
  • Add rectified linear neurons, dropout and max-norm regularization to neural networks [Khaled Nasr]
  • Add C4.5 algorithm for multiclass classification using decision trees [Parijat Mazumdar]
  • Add support for arbitrary acyclic graph-structured neural networks [Khaled Nasr]
  • Add CART algorithm for classification and regression using decision trees [Parijat Mazumdar]
  • Add CHAID algorithm for multiclass classification and regression using decision trees [Parijat Mazumdar]
  • Add Convolutional Neural Networks [Khaled Nasr]
  • Add Random Forests algorithm for ensemble learning using CART [Parijat Mazumdar]
  • Add Restricted Boltzmann Machines [Khaled Nasr]
  • Add Stochastic Gradient Boosting algorithm for ensemble learning [Parijat Mazumdar]
  • Add Deep contractive and denoising autoencoders [Khaled Nasr]
  • Add Deep belief networks [Khaled Nasr]

Bugfixes

  • Fix reference counting bugs in CList when reference counting is on [Heiko Strathmann, Thoralf Klein, lambday]
  • Fix memory problem in PCA::apply_to_feature_matrix [Parijat Mazumdar]
  • Fix crash in LeastAngleRegression for the case D greater than N [Parijat Mazumdar]
  • Fix memory violations in bundle method solvers [Thoralf Klein]
  • Fix fail in library_mldatahdf5.cpp example when http://mldata.org is not working properly [Parijat Mazumdar]
  • Fix memory leaks in Vowpal Wabbit, LibSVMFile and KernelPCA [Thoralf Klein]
  • Fix memory and control flow issues discovered by Coverity [Thoralf Klein]
  • Fix R modular interface SWIG typemap (Requires SWIG >= 2.0.5) [Matt Huska]

Cleanup and API Changes

  • PCA now depends on Eigen3 instead of LAPACK [Parijat Mazumdar]
  • Removing redundant and fixing implicit imports [Thoralf Klein]
  • Hide many methods from SWIG, reducing compile memory by 500MiB [Heiko Strathmann, Fernando Iglesias, Thoralf Klein]

Somoclu 1.4.1

by peterwittek - January 28, 2015, 13:19:36 CET [ Project Homepage BibTeX BibTeX for corresponding Paper Download ] 7882 views, 1549 downloads, 2 subscriptions

About: Somoclu is a massively parallel implementation of self-organizing maps. It relies on OpenMP for multicore execution and MPI for distributing the workload, and it can be accelerated by CUDA on a GPU cluster. A sparse kernel is also included, which is useful for training maps on vector spaces generated in text mining processes. Apart from a command line interface, Python, R, and MATLAB are supported.
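
A hedged sketch of the Python route follows; the wrapper's exact signatures have varied between releases, so treat the calls below as indicative rather than definitive.

    # Sketch: training a self-organizing map with somoclu's Python wrapper.
    import numpy as np
    import somoclu

    data = np.random.rand(1000, 10).astype(np.float32)   # 1000 samples, 10 features

    n_rows, n_columns = 30, 50
    som = somoclu.Somoclu(n_columns, n_rows)              # map size: columns x rows
    som.train(data)                                       # parallel execution handled by the library

    print(som.codebook.shape)                             # learned map weights
    print(som.bmus.shape)                                 # best-matching unit per sample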

Changes:
  • Better support for ICC.
  • Faster code when compiling with GCC.
  • Building instructions and documentation improved.
  • Bug fixes: portability for R, using native R random number generator.

Distributed Frank Wolfe Algorithm 0.02

by alirezabagheri - January 28, 2015, 00:35:03 CET [ Project Homepage BibTeX BibTeX for corresponding Paper Download ] 1034 views, 286 downloads, 2 subscriptions

About: Distributed optimization: Support Vector Machines and LASSO regression on distributed data
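
As a hedged, single-machine illustration of the underlying algorithm (not the distributed implementation released here), the sketch below applies plain Frank-Wolfe to the constrained LASSO problem min_x ||Ax - b||^2 subject to ||x||_1 <= tau:

    # Frank-Wolfe for l1-constrained least squares: at each step, move toward the
    # vertex of the l1 ball that best correlates (negatively) with the gradient.
    import numpy as np

    def frank_wolfe_lasso(A, b, tau, iters=200):
        n = A.shape[1]
        x = np.zeros(n)
        for k in range(iters):
            grad = A.T @ (A @ x - b)
            i = np.argmax(np.abs(grad))            # linear minimization oracle on the l1 ball
            s = np.zeros(n)
            s[i] = -tau * np.sign(grad[i])
            gamma = 2.0 / (k + 2.0)                # standard diminishing step size
            x = (1 - gamma) * x + gamma * s
        return x

    rng = np.random.default_rng(0)
    A = rng.normal(size=(100, 50))
    x_true = np.zeros(50); x_true[:3] = [2.0, -1.5, 1.0]
    b = A @ x_true + 0.01 * rng.normal(size=100)
    print(np.round(frank_wolfe_lasso(A, b, tau=4.5), 2)[:5])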

Changes:

Initial Upload


fertilized forests 1.0beta

by Chrisl_S - January 23, 2015, 16:04:31 CET [ Project Homepage BibTeX BibTeX for corresponding Paper Download ] 1026 views, 250 downloads, 1 subscription

About: The fertilized forests project aims to provide an easy-to-use, easy-to-extend, yet fast library for decision forests. It summarizes the research in this field and provides a solid platform for extending it. Consistent interfaces to C++, Python and Matlab, together with availability for all major compilers, give the user high flexibility in using the library.
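
The library's own interfaces are not reproduced here; as a rough stand-in, the scikit-learn sketch below shows the kind of fit/predict decision-forest workflow such a library targets.

    # Generic decision-forest workflow (scikit-learn stand-in, not fertilized's API).
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    X, y = load_iris(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    forest = RandomForestClassifier(n_estimators=100, max_depth=None, random_state=0)
    forest.fit(X_tr, y_tr)
    print(forest.score(X_te, y_te))          # held-out accuracy
    print(forest.feature_importances_)       # per-feature importance scores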

Changes:

Initial Announcement on mloss.org.


About: Learns dynamic network changes across conditions and visualizes the results in Cytoscape.

Changes:

Initial Announcement on mloss.org.


Hub Miner 1.1

by nenadtomasev - January 22, 2015, 16:33:51 CET [ Project Homepage BibTeX BibTeX for corresponding Paper Download ] 1936 views, 406 downloads, 2 subscriptions

About: Hubness-aware Machine Learning for High-dimensional Data
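
As a hedged illustration of the hubness phenomenon this library addresses (not Hub Miner's own code), the sketch below counts how often each point occurs among other points' k nearest neighbors and measures the skewness of that k-occurrence distribution:

    # Hubness in high-dimensional data: the k-occurrence distribution becomes skewed,
    # with a few "hub" points appearing in many neighbor lists.
    import numpy as np
    from scipy.stats import skew
    from sklearn.neighbors import NearestNeighbors

    rng = np.random.default_rng(0)
    X = rng.normal(size=(2000, 100))                             # high-dimensional data

    k = 10
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    neighbors = nn.kneighbors(X, return_distance=False)[:, 1:]   # drop the self-neighbor

    k_occurrence = np.bincount(neighbors.ravel(), minlength=len(X))
    print("skewness of k-occurrence:", skew(k_occurrence))       # > 0 indicates hubs
    print("largest hub appears in", k_occurrence.max(), "neighbor lists")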

Changes:
  • BibTeX support for all algorithm implementations, making all of them easy to reference (via the algref package).

  • Two more hubness-aware approaches (meta-metric-learning and feature construction)

  • An implementation of Hit-Miss networks for analysis.

  • Several minor bug fixes.

  • The following instance selection methods were added: HMScore, Carving, Iterative Case Filtering, ENRBF.

  • The following clustering quality indexes were added: Fowlkes-Mallows, Calinski-Harabasz, PBM, G+, Tau, Point-Biserial, Hubert's statistic, McClain-Rao, C-root-k.

  • Some more experimental scripts have been included.

  • Extensions in the estimation of hubness risk.

  • Alias and weighted reservoir methods for weight-proportional random selection.

