Showing Items 401-420 of 676
About: Multivariate partitioning Changes:Fetched by r-cran-robot on 2013-04-01 00:00:06.387032
|
About: Matlab SVM toolbox for learning large margin filters in signals or images. Changes:Initial Announcement on mloss.org.
|
About: Locally Weighted Projection Regression (LWPR) is a recent algorithm that achieves nonlinear function approximation in high-dimensional spaces with redundant and irrelevant input dimensions. At its [...] Changes:Version 1.2.4
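As a rough, hypothetical sketch of the "locally weighted" idea behind LWPR (the actual library builds incremental receptive fields with partial least squares, which is not reproduced here), a prediction from a kernel-weighted local linear fit might look like:

```python
# Conceptual sketch only: kernel-weighted local linear regression at a query
# point, illustrating the "locally weighted" idea; not the LWPR algorithm itself.
import numpy as np

def locally_weighted_predict(X, y, x_query, bandwidth=0.3):
    # Gaussian weights centred on the query point
    w = np.exp(-np.sum((X - x_query) ** 2, axis=1) / (2 * bandwidth ** 2))
    Xb = np.hstack([X, np.ones((len(X), 1))])      # add a bias column
    sw = np.sqrt(w)
    # Weighted least squares: minimise sum_i w_i * (x_i beta - y_i)^2
    beta = np.linalg.lstsq(Xb * sw[:, None], y * sw, rcond=None)[0]
    return np.append(x_query, 1.0) @ beta

X = np.linspace(0, 1, 200).reshape(-1, 1)
y = np.sin(2 * np.pi * X[:, 0]) + 0.1 * np.random.default_rng(0).normal(size=200)
print(locally_weighted_predict(X, y, np.array([0.25])))   # roughly sin(pi/2) = 1
```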
|
About: Infrastructure for representing, manipulating and analyzing transaction data and frequent patterns. Changes:Initial Announcement on mloss.org.
|
About: Trees WIth eXtra splits Changes:Fetched by r-cran-robot on 2012-02-01 00:00:12.077735
|
About: svmpath Changes:Fetched by r-cran-robot on 2012-02-01 00:00:11.755984
|
About: The SSA Toolbox is an efficient, platform-independent, standalone implementation of the Stationary Subspace Analysis algorithm with a friendly graphical user interface and a bridge to Matlab. Stationary Subspace Analysis (SSA) is a general purpose algorithm for the explorative analysis of non-stationary data, i.e. data whose statistical properties change over time. SSA helps to detect, investigate and visualize temporal changes in complex high-dimensional data sets. Changes:
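As a hedged illustration of the idea behind SSA (not the toolbox's actual objective or optimizer), one can split data into epochs and score a candidate projection by how much its mean and covariance drift across epochs:

```python
# Simplified surrogate for the SSA idea described above: data is split into
# epochs and a projection is scored by how much the projected mean and
# covariance vary across epochs. Not the toolbox's exact objective.
import numpy as np

def nonstationarity_score(X, P, n_epochs=10):
    """X: (samples, channels); P: (channels, d) projection onto a candidate subspace."""
    Z = X @ P
    epochs = np.array_split(Z, n_epochs)
    mu0, C0 = Z.mean(axis=0), np.cov(Z.T)
    score = 0.0
    for e in epochs:
        mu, C = e.mean(axis=0), np.cov(e.T)
        score += np.sum((mu - mu0) ** 2) + np.sum((C - C0) ** 2)
    return score / n_epochs  # small => approximately stationary subspace

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))
X[:, 0] += np.linspace(0, 5, 1000)                 # one channel with a mean drift
print(nonstationarity_score(X, np.eye(4)[:, :1]))  # high: includes the drifting channel
print(nonstationarity_score(X, np.eye(4)[:, 1:2])) # low: a stationary direction
```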
|
About: Shrinkage Discriminant Analysis and CAT Score Variable Selection Changes:Fetched by r-cran-robot on 2012-02-01 00:00:11.559491
|
About: Neural Networks in R using the Stuttgart Neural Network Simulator (SNNS) Changes:Fetched by r-cran-robot on 2012-02-01 00:00:11.194183
|
About: R/Weka interface Changes:Fetched by r-cran-robot on 2012-02-01 00:00:11.330277
|
About: MATLAB toolbox for advanced Brain-Computer Interface (BCI) research. Changes:Initial Announcement on mloss.org.
|
About: This is a Matlab/C++ "toolbox" of code for learning and inference with graphical models. It is focused on parameter learning using marginalization in the high-treewidth setting. Changes:Initial Announcement on mloss.org.
|
About: Nonnegative Sparse Coding, Discriminative Semi-supervised Learning, sparse probability graph Changes:Initial Announcement on mloss.org.
|
About: The Kernel-Machine Library is a free (released under the LGPL) C++ library to promote the use of and progress of kernel machines. Changes:Updated mloss entry (minor fixes).
|
About: Python module to ease pattern classification analyses of large datasets. It provides high-level abstraction of typical processing steps (e.g. data preparation, classification, feature selection, [...] Changes:
This release aggregates all the changes that occurred between the official releases in the 0.4 series and the various snapshot releases (in the 0.5 and 0.6 series). For a better overview of the high-level changes, see the changelog.
Also incorporates the changes from 0.4.6 and 0.4.7 (see the corresponding changelogs).
This is a special release, as it was never made available to the general public.
A summary of the fundamental changes introduced in this development version can be found in the changelog. Most notably, this version was the first to come with a comprehensive two-day workshop/tutorial.
A bugfix release.
A bugfix release.
|
About: Bayesian treed Gaussian process models Changes:Fetched by r-cran-robot on 2012-02-01 00:00:11.834310
|
About: An annotated Java framework for machine learning, aimed at making it easy to access analytical functions. Changes:Now supports OLS and GLS regression and NaiveBayes classification
|
About: In this paper, we propose an improved principal component analysis based on maximum entropy (MaxEnt) preservation, called MaxEnt-PCA, which is derived from a Parzen window estimation of Renyi’s quadratic entropy. Instead of minimizing the reconstruction error based on either the L2-norm or the L1-norm, MaxEnt-PCA attempts to preserve as much as possible of the uncertainty information of the data, as measured by entropy. The optimal solution of MaxEnt-PCA consists of the eigenvectors of a Laplacian probability matrix corresponding to the MaxEnt distribution. MaxEnt-PCA (1) is rotation invariant, (2) is free from any distribution assumption, and (3) is robust to outliers. Extensive experiments on real-world datasets demonstrate the effectiveness of the proposed linear method compared to other related robust PCA methods. Changes:Initial Announcement on mloss.org.
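As a hedged sketch of one ingredient of the method described above, a Parzen-window (Gaussian kernel) estimate of Renyi's quadratic entropy of projected data could be computed as follows; the eigendecomposition of the Laplacian probability matrix from the paper is not reproduced here:

```python
# Illustration only: Parzen-window estimate of Renyi's quadratic entropy,
# H_2 = -log( (1/N^2) * sum_ij G(z_i - z_j; 2*sigma^2) ), for 1-D samples z
# obtained by projecting the data onto a candidate direction w.
import numpy as np

def renyi_quadratic_entropy(z, sigma=1.0):
    diff = z[:, None] - z[None, :]
    kernel = np.exp(-diff**2 / (4 * sigma**2)) / np.sqrt(4 * np.pi * sigma**2)
    return -np.log(kernel.mean())

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
w = rng.normal(size=5)
w /= np.linalg.norm(w)                       # candidate projection direction
print(renyi_quadratic_entropy(X @ w))
```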
|
About: The Metropolis-Hastings algorithm is a Markov chain Monte Carlo method for obtaining a sequence of random samples from a probability distribution for which direct sampling is difficult. This sequence can be used to approximate the distribution. Changes:Initial Announcement on mloss.org.
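As a minimal, hypothetical sketch of the Metropolis-Hastings procedure described above (not necessarily this package's interface), a Gaussian random-walk sampler might look like:

```python
# Random-walk Metropolis-Hastings for an unnormalized 1-D target density.
import numpy as np

def metropolis_hastings(log_target, n_samples=10000, step=1.0, x0=0.0, seed=0):
    rng = np.random.default_rng(seed)
    samples = np.empty(n_samples)
    x = x0
    log_p = log_target(x)
    for i in range(n_samples):
        x_new = x + step * rng.normal()                 # symmetric proposal
        log_p_new = log_target(x_new)
        if np.log(rng.uniform()) < log_p_new - log_p:   # accept w.p. min(1, p_new/p)
            x, log_p = x_new, log_p_new
        samples[i] = x                                   # keep current state either way
    return samples

# Target known only up to a normalizing constant: exp(-x^2 / 2)
draws = metropolis_hastings(lambda x: -0.5 * x**2)
print(draws.mean(), draws.std())   # roughly 0 and 1
```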
|
About: This code is based on Uriel Roque's active set algorithm for the linear least squares problem with nonnegative variables, described in: Portugal, L.; Judice, J.; and Vicente, L. 1994. A comparison of block pivoting and interior-point algorithms for linear least squares problems with nonnegative variables. Mathematics of Computation 63(208):625-643. See also: Ran He, Wei-Shi Zheng and Baogang Hu, "Maximum Correntropy Criterion for Robust Face Recognition," IEEE TPAMI, in press, 2011. Changes:Initial Announcement on mloss.org.
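As a hedged sketch of the underlying problem, min ||Ax - b||^2 subject to x >= 0, solved here with SciPy's generic NNLS routine rather than the block-pivoting active-set method of Portugal et al. that this code follows:

```python
# Nonnegative least squares on synthetic data, using SciPy's solver as a
# stand-in for the active-set implementation described in the entry above.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
A = rng.normal(size=(20, 5))
x_true = np.array([0.0, 1.5, 0.0, 2.0, 0.5])        # nonnegative ground truth
b = A @ x_true + 0.01 * rng.normal(size=20)

x_hat, residual = nnls(A, b)
print(x_hat)   # close to x_true, with exact zeros on inactive coordinates
```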
|