All entries.

About: This letter proposes a new multiple linear regression model using regularized correntropy for robust pattern recognition. First, we motivate the use of correntropy to improve the robustness of the classical mean square error (MSE) criterion, which is sensitive to outliers. An l1 regularization scheme is then imposed on the correntropy to learn robust and sparse representations, and, based on the half-quadratic optimization technique, we propose a novel algorithm to solve the resulting nonlinear optimization problem. Second, we develop a new correntropy-based classifier based on the learned regularization scheme for robust object recognition. Extensive experiments over several applications confirm that correntropy-based l1 regularization can improve recognition accuracy and receiver operating characteristic curves under noise corruption and occlusion.
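As an illustration of the approach described above (not the authors' released code), the following sketch runs half-quadratic iterations for correntropy-based regression with an l1 penalty: each pass reweights samples by a Gaussian kernel of their residuals, then solves a weighted lasso via row scaling. The kernel width sigma, penalty alpha, and iteration count are hypothetical choices.

```python
import numpy as np
from sklearn.linear_model import Lasso

def correntropy_lasso(X, y, sigma=1.0, alpha=0.1, n_iter=20):
    """Half-quadratic sketch: reweight samples by a Gaussian kernel of
    their residuals, then solve a weighted lasso via row scaling."""
    w = np.ones(len(y))                      # HQ auxiliary sample weights
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        sw = np.sqrt(w)
        lasso = Lasso(alpha=alpha, fit_intercept=False)
        lasso.fit(sw[:, None] * X, sw * y)   # weighted lasso by scaling rows
        beta = lasso.coef_
        r = y - X @ beta                     # residuals
        w = np.exp(-r**2 / (2 * sigma**2))   # correntropy (Welsch) weights
    return beta
```

Outlying samples receive exponentially small weights, so they contribute little to the next lasso fit, which is how the correntropy criterion gains robustness over plain MSE.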

Changes:

Initial Announcement on mloss.org.


About: Robust sparse representation has shown significant potential in solving challenging problems in computer vision, such as biometrics and visual surveillance. Although several robust sparse models have been proposed with promising results, each addresses either error correction or error detection; a general framework that systematically unifies the two aspects and explores their relation remains an open problem. In this paper, we develop a half-quadratic (HQ) framework to solve the robust sparse representation problem. By defining different kinds of half-quadratic functions, the proposed HQ framework can perform both error correction and error detection. More specifically, using the additive form of HQ, we propose an L1-regularized error correction method that iteratively recovers corrupted data from errors incurred by noise and outliers; using the multiplicative form of HQ, we propose an L1-regularized error detection method that learns iteratively from uncorrupted data. We also show that L1 regularization solved by the soft-thresholding function has a dual relationship to the Huber M-estimator, which theoretically guarantees the performance of robust sparse representation in terms of M-estimation. Experiments on robust face recognition under severe occlusion and corruption validate our framework and findings.
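The stated duality between soft-thresholding and the Huber M-estimator can be checked numerically. The sketch below (illustrative, with hypothetical names) shows that minimizing 0.5*(r - e)^2 + lam*|e| over the error term e, solved in closed form by soft-thresholding, yields exactly the Huber loss of the residual r:

```python
import numpy as np

def soft_threshold(r, lam):
    """Closed-form minimizer of 0.5*(r - e)**2 + lam*|e| over e."""
    return np.sign(r) * np.maximum(np.abs(r) - lam, 0.0)

def huber(r, lam):
    """Huber M-estimator loss with threshold lam."""
    a = np.abs(r)
    return np.where(a <= lam, 0.5 * r**2, lam * a - 0.5 * lam**2)

r = np.linspace(-3.0, 3.0, 13)
lam = 1.0
e = soft_threshold(r, lam)                    # optimal error estimate
val = 0.5 * (r - e)**2 + lam * np.abs(e)      # objective at the minimizer
assert np.allclose(val, huber(r, lam))        # equals the Huber loss of r
```

This is the sense in which the L1-regularized error term acts as an M-estimator: absorbing part of the residual into e leaves a Huber-type penalty on what remains.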

Changes:

Initial Announcement on mloss.org.


Logo JMLR Jstacs 2.1

by keili - June 3, 2013, 07:32:55 CET [ Project Homepage BibTeX BibTeX for corresponding Paper Download ] 16116 views, 3877 downloads, 2 subscriptions

About: A Java framework for statistical analysis and classification of biological sequences

Changes:

New classes:

  • MultipleIterationsCondition: Requires another TerminationCondition to fail a specified number of consecutive times
  • ClassifierFactory: Allows for creating standard classifiers
  • SeqLogoPlotter: Plots sequence logos as PNGs from within Jstacs
  • MultivariateGaussianEmission: Multivariate Gaussian emission density for a Hidden Markov Model
  • MEManager: Maximum entropy model

New features and improvements:

  • Alignment: Added free shift alignment
  • PerformanceMeasure and sub-classes: Extension to weighted test data
  • AbstractClassifier, ClassifierAssessment and sub-classes: Adaptation to weighted PerformanceMeasures
  • DNAAlphabet: Parser speed-up
  • PFMComparator: Extension to PFM from other sources/databases
  • ToolBox: New convenience methods for computing several statistics (e.g., median, correlation)
  • SignificantMotifOccurrencesFinder: New methods for computing PWMs and statistics from predictions
  • SequenceScore and sub-classes: New method toString(NumberFormat)
  • DataSet: Adaptation to weighted data, e.g., partitioning
  • REnvironment: Changed several methods from String to CharSequence

Restructuring:

  • Changed MultiDimensionalSequenceWrapperDiffSM to MultiDimensionalSequenceWrapperDiffSS

Several minor new features, bug fixes, and code cleanups


Logo r-cran-CoxBoost 1.4

by r-cran-robot - May 1, 2015, 00:00:04 CET [ Project Homepage BibTeX Download ] 19447 views, 3915 downloads, 3 subscriptions

About: Cox models fitted by likelihood-based boosting, for a single survival endpoint or competing risks

Changes:

Fetched by r-cran-robot on 2015-05-01 00:00:04.536435


About: Fast and robust learning of Bayesian networks

Changes:

Initial Announcement on mloss.org.


Logo HLearn 1.0

by mikeizbicki - May 9, 2013, 05:58:18 CET [ Project Homepage BibTeX Download ] 3524 views, 878 downloads, 1 subscription

About: HLearn makes simple machine learning routines available in Haskell by expressing them according to their algebraic structure

Changes:

Updated to version 1.0


Logo OptWok 0.3.1

by ong - May 2, 2013, 10:46:11 CET [ Project Homepage BibTeX Download ] 8105 views, 1557 downloads, 1 subscription

About: A collection of Python code for research in optimization. The aim is to provide reusable components that can be quickly applied to machine learning problems. Used in:

  • Ellipsoidal multiple instance learning
  • Difference of convex functions algorithms for sparse classification
  • Contextual bandits upper confidence bound algorithm (using GPs)
  • Learning output kernels, i.e., kernels between the labels of a classifier
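Since OptWok's own API is not documented here, the following is a generic, self-contained sketch of the GP-based upper confidence bound rule mentioned above: score each candidate arm by its Gaussian-process posterior mean plus a multiple of the posterior standard deviation, and pick the maximizer. The RBF kernel, length scale, beta, and noise level are all illustrative assumptions.

```python
import numpy as np

def rbf(A, B, ell=1.0):
    """RBF kernel matrix between row sets A (n x d) and B (m x d)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * ell**2))

def gp_ucb_choice(X_obs, y_obs, X_cand, beta=2.0, noise=1e-2):
    """Index of the candidate maximizing posterior mean + beta * std."""
    K = rbf(X_obs, X_obs) + noise * np.eye(len(X_obs))
    Ks = rbf(X_cand, X_obs)
    mu = Ks @ np.linalg.solve(K, y_obs)              # posterior mean
    V = np.linalg.solve(K, Ks.T)
    var = 1.0 - np.sum(Ks * V.T, axis=1)             # posterior variance
    return int(np.argmax(mu + beta * np.sqrt(np.maximum(var, 0.0))))
```

Larger beta favors exploration of uncertain arms; smaller beta exploits the current mean estimate.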

Changes:
  • Minor bugfix

Logo KNIME 2.7.4

by toldo - April 29, 2013, 09:14:39 CET [ Project Homepage BibTeX BibTeX for corresponding Paper Download ] 3237 views, 612 downloads, 1 subscription

About: A comprehensive data mining environment, with a variety of machine learning components.

Changes:

Modifications following feedback from the main KNIME author.


Logo Intelligent Parameter Utilization Tool 0.4

by feldob - April 28, 2013, 18:05:45 CET [ Project Homepage BibTeX BibTeX for corresponding Paper Download ] 1937 views, 477 downloads, 1 subscription

About: A descriptive, programming-language-independent format and API for simplified configuration, documentation, and design of computer experiments.

Changes:

Initial Announcement on mloss.org.


Logo HDDM 0.5

by Wiecki - April 24, 2013, 02:53:07 CET [ Project Homepage BibTeX Download ] 4073 views, 1016 downloads, 1 subscription

About: HDDM is a Python toolbox for hierarchical Bayesian parameter estimation of the Drift Diffusion Model (via PyMC). Drift Diffusion Models are widely used in psychology and cognitive neuroscience to study decision making.

Changes:
  • New and improved HDDM model with the following changes:
    • Priors: By default, the model uses informative priors (see http://ski.clps.brown.edu/hddm_docs/methods.html#hierarchical-drift-diffusion-models-used-in-hddm). If you want uninformative priors, set informative=False (see the usage sketch after this list).
    • Sampling: This model uses slice sampling, which converges faster even though each individual sample is slower to generate. In our experiments, a burn-in of 20 samples is often sufficient.
    • Inter-trial variability parameters are only estimated at the group level, not for individual subjects.
    • The old model has been renamed to HDDMTransformed.
    • HDDMRegression and HDDMStimCoding are also using this model.
  • HDDMRegression takes patsy model specification strings. See http://ski.clps.brown.edu/hddm_docs/howto.html#estimate-a-regression-model and http://ski.clps.brown.edu/hddm_docs/tutorial_regression_stimcoding.html#chap-tutorial-hddm-regression
  • Improved online documentation at http://ski.clps.brown.edu/hddm_docs
  • A new HDDM demo at http://ski.clps.brown.edu/hddm_docs/demo.html
  • Ratcliff's quantile optimization for single subjects and groups via the new .optimize() method
  • Maximum likelihood optimization.
  • Many bugfixes and better test coverage.
  • The hddm_fit.py command line utility is deprecated.
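A minimal usage sketch tying these changes together (the CSV file name is hypothetical; HDDM expects trial-level columns such as rt, response, and subj_idx):

```python
import hddm

# Load trial-level reaction-time data (hypothetical file).
data = hddm.load_csv('experiment.csv')

# New default: informative priors; pass informative=False to disable.
model = hddm.HDDM(data, informative=True)

# Slice sampling converges quickly; a burn-in of ~20 is often enough.
model.sample(2000, burn=20)
model.print_stats()
```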
