Projects tagged with tree.


JMLR MLPACK 1.0.9

by rcurtin - July 28, 2014, 20:52:10 CET [ Project Homepage | BibTeX | BibTeX for corresponding Paper | Download ] 29883 views, 6008 downloads, 5 subscriptions

Rating: 4.5/5 (based on 1 vote)

About: A scalable, fast C++ machine learning library, with emphasis on usability.

Changes:
  • GMM initialization is now safer and provides a working GMM when constructed with only the dimensionality and number of Gaussians (#314); see the first sketch after this list.
  • Added a check for division by zero in the forward-backward algorithm in HMMs (#314).
  • Fix MaxVarianceNewCluster (used when re-initializing clusters for k-means) (#314).
  • Fixed implementation of Viterbi algorithm in HMM::Predict() (#316).
  • Significant speedups for dual-tree algorithms using the cover tree (#243, #329), including a faster implementation of FastMKS.
  • Fix for LRSDP optimizer so that it compiles and can be used (#325).
  • CF (collaborative filtering) now expects users and items to be zero-indexed, not one-indexed (#324).
  • CF::GetRecommendations() API change: the number of recommendations is now the first parameter, and the size of the local neighborhood is set separately with CF::NumUsersForSimilarity() (see the second sketch after this list).
  • Removed incorrect PeriodicHRectBound (#30).
  • Refactor LRSDP into LRSDP class and standalone function to be optimized (#318).
  • Fix for centering in kernel PCA (#355).
  • Added simulated annealing (SA) optimizer, contributed by Zhihao Lou.
  • HMMs now support initial state probabilities; these can be set in the constructor, trained, or set manually with HMM::Initial() (#315). See the third sketch after this list.
  • Added Nyström method for kernel matrix approximation by Marcus Edel.
  • Kernel PCA now supports using Nyström method for approximation.
  • Ball trees now work with dual-tree algorithms, via the BallBound<> bound structure (#320); fixed by Yash Vadalia.
  • The NMF class is now AMF<> and supports far more types of factorizations (contributed by Sumedh Ghaisas).
  • A QUIC-SVD implementation has returned, written by Siddharth Agrawal and based on older code from Mudit Gupta.
  • Added perceptron and decision stump by Udit Saxena (these are weak learners for an eventual AdaBoost class).
  • Sparse autoencoder added by Siddharth Agrawal.
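
First, a minimal sketch of the safer GMM construction mentioned above (#314). This is a sketch only: the header paths, the GMM<> template, the (gaussians, dimensionality) argument order, and the Probability() call are assumptions based on the mlpack 1.0.x C++ API, not something the changelog spells out.

    // Sketch only: header paths and the (gaussians, dimensionality)
    // constructor order are assumptions based on the mlpack 1.0.x API.
    #include <mlpack/core.hpp>
    #include <mlpack/methods/gmm/gmm.hpp>

    using namespace mlpack::gmm;

    int main()
    {
      // As of 1.0.9, constructing with only the number of Gaussians and the
      // dimensionality yields a valid (if uninformative) model (#314).
      GMM<> gmm(3 /* gaussians */, 2 /* dimensionality */);

      // The model is usable immediately, e.g. to evaluate a density.
      const double p = gmm.Probability(arma::vec("0.0 0.0"));
      return (p >= 0.0) ? 0 : 1;
    }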
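Second, a sketch of the new CF calling convention (zero-indexing, #324, and the GetRecommendations() change). The coordinate-list data layout and the exact method signatures are assumptions drawn from the changelog, not verified against the 1.0.9 headers.

    // Sketch only: data layout and method signatures are assumptions
    // based on the changelog entries for mlpack 1.0.9.
    #include <mlpack/core.hpp>
    #include <mlpack/methods/cf/cf.hpp>

    using namespace mlpack::cf;

    int main()
    {
      // Ratings in coordinate-list form: each column is (user, item, rating).
      // Users and items are now zero-indexed (#324).
      arma::mat data("0 0 1 1 2;"   // user ids
                     "0 1 0 2 1;"   // item ids
                     "5 3 4 2 4");  // ratings

      CF cf(data);

      // The size of the local neighborhood is set separately...
      cf.NumUsersForSimilarity(2);

      // ...and the number of recommendations is now the first parameter.
      arma::Mat<size_t> recommendations;
      cf.GetRecommendations(3, recommendations);
      return 0;
    }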
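Third, a sketch of the new initial state probability support in HMMs (#315). The DiscreteDistribution emission type and a modifiable Initial() accessor are assumptions consistent with the changelog wording.

    // Sketch only: the constructor and the modifiable Initial() accessor
    // are assumptions based on the mlpack 1.0.9 changelog.
    #include <mlpack/core.hpp>
    #include <mlpack/methods/hmm/hmm.hpp>

    using namespace mlpack::hmm;
    using namespace mlpack::distribution;

    int main()
    {
      // Two hidden states with discrete emissions over four symbols.
      HMM<DiscreteDistribution> hmm(2, DiscreteDistribution(4));

      // New in 1.0.9 (#315): initial state probabilities can be set
      // manually via Initial().
      hmm.Initial() = arma::vec("0.7 0.3");
      return 0;
    }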

XGBoost v0.2

by crowwork - May 17, 2014, 07:27:59 CET [ Project Homepage | BibTeX | Download ] 1457 views, 231 downloads, 1 subscription

About: eXtreme gradient boosting (tree) library. Features:
  • Sparse feature format allows easy handling of missing values and improves computational efficiency.
  • Efficient parallel implementation that optimizes memory and computation.
  • Python interface.

Changes:

New features:
  • Python interface.
  • New objectives: weighted training, pairwise ranking, multiclass softmax.
  • Comes with an example script for the Kaggle Higgs competition, which runs 20 times faster than scikit-learn's GBRT.