About: The Kernel-Machine Library is a free C++ library (released under the LGPL) intended to promote the use and progress of kernel machines. Changes: Updated mloss entry (minor fixes).

About: Bayesian treed Gaussian process models. Changes: Fetched by r-cran-robot on 2012-02-01 00:00:11.

About: An annotated Java framework for machine learning, aimed at making analytic functions easy to access. Changes: Now supports OLS and GLS regression and Naive Bayes classification.

About: In this paper, we propose an improved principal component analysis based on maximum entropy (MaxEnt) preservation, called MaxEnt-PCA, which is derived from a Parzen window estimation of Rényi's quadratic entropy. Instead of minimizing the reconstruction error based on either the L2-norm or the L1-norm, MaxEnt-PCA attempts to preserve as much as possible of the uncertainty information of the data as measured by entropy. The optimal solution of MaxEnt-PCA consists of the eigenvectors of a Laplacian probability matrix corresponding to the MaxEnt distribution. MaxEnt-PCA (1) is rotation invariant, (2) is free from any distribution assumption, and (3) is robust to outliers. Extensive experiments on real-world datasets demonstrate the effectiveness of the proposed linear method as compared to other related robust PCA methods. Changes: Initial Announcement on mloss.org.

About: The Metropolis-Hastings algorithm is a Markov chain Monte Carlo method for obtaining a sequence of random samples from a probability distribution for which direct sampling is difficult. This sequence can be used to approximate the distribution. Changes: Initial Announcement on mloss.org.
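A minimal sketch of the Metropolis-Hastings loop described above, using a symmetric Gaussian random-walk proposal; the target density, step size, and sample count here are illustrative assumptions, not part of the listed package:

```python
import math
import random

def metropolis_hastings(log_target, x0, n_samples, proposal_scale=1.0):
    """Sample from an unnormalized density via a Gaussian random-walk proposal."""
    samples = []
    x = x0
    log_p = log_target(x)
    for _ in range(n_samples):
        # Propose a new state from a symmetric random walk.
        x_new = x + random.gauss(0.0, proposal_scale)
        log_p_new = log_target(x_new)
        # Accept with probability min(1, p(x_new)/p(x)); in log space,
        # accept if log(u) < log p(x_new) - log p(x).
        if math.log(random.random()) < log_p_new - log_p:
            x, log_p = x_new, log_p_new
        samples.append(x)
    return samples

# Example: sample from a standard normal, whose log density is -x^2/2
# up to an additive constant.
random.seed(0)
draws = metropolis_hastings(lambda x: -0.5 * x * x, x0=0.0, n_samples=20000)
```

Because the proposal is symmetric, the Hastings correction ratio cancels and only the target ratio appears in the acceptance test.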

About: This code is developed based on Uriel Roque's active-set algorithm for the linear least squares problem with non-negative variables, described in: Portugal, L.; Judice, J.; and Vicente, L. 1994. A comparison of block pivoting and interior-point algorithms for linear least squares problems with nonnegative variables. Mathematics of Computation 63(208):625–643. It accompanies: Ran He, Wei-Shi Zheng and Baogang Hu, "Maximum Correntropy Criterion for Robust Face Recognition," IEEE TPAMI, in press, 2011. Changes: Initial Announcement on mloss.org.
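The underlying subproblem, least squares with non-negativity constraints, can be sketched with SciPy's `nnls`, which implements the classic Lawson-Hanson active-set method (a relative of the block-pivoting and interior-point methods compared in the cited paper); the matrix and vector below are made-up illustration data:

```python
import numpy as np
from scipy.optimize import nnls

# Toy problem: minimize ||A x - b||_2 subject to x >= 0.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 2.0]])
b = np.array([1.0, 2.0, 1.0])

# nnls returns the solution and the residual norm ||A x - b||_2.
x, residual = nnls(A, b)
```

For this particular data the unconstrained least-squares solution is already non-negative, so `nnls` recovers it exactly; the active set only comes into play when some unconstrained coefficients would be negative.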

About: Urheen is a toolkit for Chinese word segmentation, Chinese POS tagging, English tokenization, and English POS tagging. The Chinese word segmentation and POS tagging modules are trained on the Chinese Treebank 7.0. The English POS tagging module is trained on the WSJ English treebank (02–23). Changes: Initial Announcement on mloss.org.

About: OpenPR-NBEM is a C++ implementation of the Naive Bayes classifier, a well-known generative classification algorithm for applications such as text classification. The Naive Bayes algorithm requires the probability distribution to be discrete; OpenPR-NBEM uses the multinomial event model for representation. The maximum likelihood estimate is used for supervised learning, and the expectation-maximization estimate is used for semi-supervised and unsupervised learning. Changes: Initial Announcement on mloss.org.
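The supervised (maximum-likelihood) case of the multinomial event model described above can be sketched as follows; this is a generic Python illustration, not the library's C++ API, and the Laplace smoothing parameter `alpha` is an added assumption:

```python
import numpy as np

def train_multinomial_nb(X, y, alpha=1.0):
    """Maximum-likelihood fit (with Laplace smoothing alpha) of a
    multinomial Naive Bayes model on a document-term count matrix X."""
    classes = np.unique(y)
    # Class priors P(c) from label frequencies.
    log_prior = np.log(np.array([(y == c).mean() for c in classes]))
    # Smoothed per-class word counts, normalized into P(w|c).
    counts = np.array([X[y == c].sum(axis=0) + alpha for c in classes])
    log_likelihood = np.log(counts / counts.sum(axis=1, keepdims=True))
    return classes, log_prior, log_likelihood

def predict(X, classes, log_prior, log_likelihood):
    # argmax over classes of log P(c) + sum_w n_w * log P(w|c).
    return classes[np.argmax(log_prior + X @ log_likelihood.T, axis=1)]
```

The semi-supervised EM variant mentioned in the entry would alternate this M-step with an E-step that fills in soft class posteriors for the unlabeled documents.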

About: This is a class to calculate the histogram of LBP (local binary patterns) from an input image, histograms of LBP-TOP (local binary patterns on three orthogonal planes) from an image sequence, and the histogram of the rotation-invariant VLBP (volume local binary patterns), or uniform rotation-invariant VLBP, from an image sequence. Changes: Initial Announcement on mloss.org.
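The single-image LBP histogram is the simplest of the descriptors listed above; a minimal sketch of the basic 3x3, 8-neighbour formulation follows (a generic illustration, since the entry does not show the class's actual interface or neighbourhood parameters):

```python
import numpy as np

def lbp_histogram(image):
    """Basic 3x3 LBP: threshold each interior pixel's 8 neighbours against
    the centre pixel and return the normalized 256-bin histogram of codes."""
    img = np.asarray(image, dtype=np.int32)
    center = img[1:-1, 1:-1]
    # Neighbour offsets in clockwise order starting at the top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(center)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy: img.shape[0] - 1 + dy,
                        1 + dx: img.shape[1] - 1 + dx]
        # Set this bit wherever the neighbour is >= the centre pixel.
        codes |= (neighbour >= center).astype(np.int32) << bit
    hist = np.bincount(codes.ravel(), minlength=256)
    return hist / hist.sum()
```

LBP-TOP extends this by computing the same histogram on the XY, XT, and YT planes of an image sequence and concatenating the three histograms.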

About: This program implements a novel robust sparse representation method, called two-stage sparse representation (TSR), for robust recognition on large-scale databases. Following a divide-and-conquer strategy, TSR splits the robust recognition procedure into an outlier-detection stage and a recognition stage. Extensive numerical experiments on several public databases demonstrate that the proposed TSR approach generally obtains better classification accuracy than the state-of-the-art Sparse Representation Classification (SRC). At the same time, TSR reduces computational cost by over fifty times in comparison with SRC, which makes TSR better suited to deployment on large-scale datasets. Changes: Initial Announcement on mloss.org.
