All entries.

Jstacs 2.1 (JMLR)

by keili - June 3, 2013, 07:32:55 CET [ Project Homepage BibTeX BibTeX for corresponding Paper Download ] 13220 views, 3099 downloads, 2 subscriptions

About: A Java framework for statistical analysis and classification of biological sequences

Changes:

New classes:

  • MultipleIterationsCondition: Requires another TerminationCondition to fail a specified number of consecutive times before stopping (see the sketch after this list)
  • ClassifierFactory: Allows for creating standard classifiers
  • SeqLogoPlotter: Plot PNG sequence logos from within Jstacs
  • MultivariateGaussianEmission: Multivariate Gaussian emission density for a Hidden Markov Model
  • MEManager: Maximum entropy model
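
To illustrate the MultipleIterationsCondition behaviour, here is a minimal, self-contained Java sketch; the interface and method signature below are assumptions made for the example and do not reproduce the actual Jstacs TerminationCondition API.

    // Hypothetical sketch only: this interface and its method are assumptions,
    // not the real Jstacs TerminationCondition API.
    interface TerminationCondition {
        /** Returns true while the optimization should continue. */
        boolean doNextIteration(int iteration, double previousValue, double currentValue);
    }

    /** Signals termination only after the wrapped condition has voted to stop
     *  the required number of times in a row. */
    class MultipleIterationsCondition implements TerminationCondition {
        private final TerminationCondition inner;
        private final int required;
        private int consecutiveStops = 0;

        MultipleIterationsCondition(int required, TerminationCondition inner) {
            this.required = required;
            this.inner = inner;
        }

        @Override
        public boolean doNextIteration(int iteration, double previousValue, double currentValue) {
            if (inner.doNextIteration(iteration, previousValue, currentValue)) {
                consecutiveStops = 0;   // the inner condition still wants to continue
                return true;
            }
            consecutiveStops++;         // the inner condition failed again
            return consecutiveStops < required;
        }
    }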

New features and improvements:

  • Alignment: Added free shift alignment
  • PerformanceMeasure and sub-classes: Extension to weighted test data
  • AbstractClassifier, ClassifierAssessment and sub-classes: Adaptation to weighted PerformanceMeasures
  • DNAAlphabet: Parser speed-up
  • PFMComparator: Extension to PFM from other sources/databases
  • ToolBox: New convenience methods for computing several statistics (e.g., median, correlation)
  • SignificantMotifOccurrencesFinder: New methods for computing PWMs and statistics from predictions (a generic PWM sketch follows this list)
  • SequenceScore and sub-classes: New method toString(NumberFormat)
  • DataSet: Adaptation to weighted data, e.g., partitioning
  • REnvironment: Changed several methods from String to CharSequence
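
For the PWM-related additions, the following is a generic Java sketch of how a position weight matrix can be computed from aligned, equal-length DNA sequences; the class and method names are illustrative and are not part of the SignificantMotifOccurrencesFinder API.

    import java.util.Arrays;
    import java.util.List;

    // Generic PWM construction from aligned DNA sequences; not Jstacs code.
    final class PwmExample {
        private static final String ALPHABET = "ACGT";

        /** Returns probabilities[position][symbol], using a pseudocount of 1. */
        static double[][] pwmFrom(List<String> alignedSequences) {
            int length = alignedSequences.get(0).length();
            double[][] pwm = new double[length][ALPHABET.length()];
            for (double[] row : pwm) {
                Arrays.fill(row, 1.0);                      // pseudocounts
            }
            for (String seq : alignedSequences) {
                for (int p = 0; p < length; p++) {
                    pwm[p][ALPHABET.indexOf(seq.charAt(p))] += 1.0;
                }
            }
            for (double[] row : pwm) {                      // normalize each position
                double total = Arrays.stream(row).sum();
                for (int b = 0; b < row.length; b++) {
                    row[b] /= total;
                }
            }
            return pwm;
        }
    }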

Restructuring:

  • Changed MultiDimensionalSequenceWrapperDiffSM to MultiDimensionalSequenceWrapperDiffSS

Several minor new features, bug fixes, and code cleanups


r-cran-ahaz 1.14

by r-cran-robot - June 3, 2013, 00:00:00 CET [ Project Homepage BibTeX Download ] 4271 views, 933 downloads, 0 subscriptions

About: Regularization for semiparametric additive hazards regression
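
For context, the semiparametric additive hazards model of Lin and Ying specifies the conditional hazard as a baseline plus a linear predictor term; the package presumably adds a penalty (such as the lasso) to the corresponding estimating equations, though the exact penalties are not stated here:

    \lambda(t \mid Z) = \lambda_0(t) + \beta^\top Z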

Changes:

Fetched by r-cran-robot on 2014-07-01 00:00:03.834643


Cognitive Foundry 3.3.3

by Baz - May 21, 2013, 05:59:37 CET [ Project Homepage BibTeX BibTeX for corresponding Paper Download ] 15276 views, 2426 downloads, 2 subscriptions

About: The Cognitive Foundry is a modular Java software library of machine learning components and algorithms designed for research and applications.

Changes:
  • General:
    • Made code able to compile under both Java 1.6 and 1.7. This required removing some potentially unsafe methods that used varargs with generics.
    • Upgraded XStream dependency to 1.4.4.
    • Improved support for regression algorithms in learning.
    • Added general-purpose adapters to make it easier to compose learning algorithms and adapt their input or output.
  • Common Core:
    • Added isSparse, toArray, dotDivide, and dotDivideEquals methods for Vector and Matrix.
    • Added scaledPlus, scaledPlusEquals, scaledMinus, and scaledMinusEquals to Ring (and thus Vector and Matrix) for potentially faster such operations.
    • Fixed an issue where the equals methods of matrices and dense vectors did not check for equal dimensionality.
    • Added transform, transformEquals, transformNonZeros, and transformNonZerosEquals to Vector.
    • Made LogNumber into a signed version of a log number and moved the prior unsigned implementation into UnsignedLogNumber.
    • Added EuclideanRing interface that provides methods for times, timesEquals, divide, and divideEquals. Also added Field interface that provides methods for inverse and inverseEquals. These interfaces are now implemented by the appropriate number classes such as ComplexNumber, MutableInteger, MutableLong, MutableDouble, LogNumber, and UnsignedLogNumber.
    • Added interface for Indexer and DefaultIndexer implementation for creating a zero-based indexing of values.
    • Added interfaces for MatrixFactoryContainer and DivergenceFunctionContainer.
    • Added ReversibleEvaluator, which various identity functions implement, as well as a new utility class ForwardReverseEvaluatorPair for creating a reversible evaluator from a pair of other evaluators.
    • Added a method to CollectionUtil for creating an ArrayList from a pair of values.
    • ArgumentChecker now properly throws assertion errors for NaN values. Also added checks for long types.
    • Fixed handling of Infinity in subtraction for LogMath.
    • Fixed issue with angle method that would cause a NaN if cosine had a rounding error.
    • Added new createMatrix methods to MatrixFactory that initialize the Matrix with a given value.
    • Added copy, reverse, and isEmpty methods for several array types to ArrayUtil.
    • Added utility methods for creating a HashMap, LinkedHashMap, HashSet, or LinkedHashSet with an expected size to CollectionUtil.
    • Added getFirst and getLast methods for List types to CollectionUtil.
    • Removed some calls to System.out and Exception.printStackTrace.
  • Common Data:
    • Added create method for IdentityDataConverter.
    • ReversibleDataConverter now is an extension of ReversibleEvaluator.
  • Learning Core:
    • Added general learner transformation capability to make it easier to adapt and compose algorithms. InputOutputTransformedBatchLearner provides this capability for supervised learning algorithms by composing together a triplet (input transform, learner, output transform); CompositeBatchLearnerPair does the same for a pair of algorithms. A generic sketch of this composition follows the change list.
    • Added constant and identity learners.
    • Added Chebyshev, Identity, and Minkowski distance metrics.
    • Added methods to DatasetUtil to get the output values for a dataset and to compute the sum of weights.
    • Made generics more permissive for supervised cost functions.
    • Added ClusterDistanceEvaluator, which takes a clustering that encodes the distance from an input value to all clusters and returns the result as a vector.
    • Fixed potential round-off issue in decision tree splitter.
    • Added random subspace technique, implemented in RandomSubspace.
    • Separated functionality from LinearFunction into IdentityScalarFunction. LinearFunction by default is the same, but has parameters that can change the slope and offset of the function.
    • Default squashing function for GeneralizedLinearModel and DifferentiableGeneralizedLinearModel is now a linear function instead of an atan function.
    • Added a weighted estimator for the Poisson distribution.
    • Added Regressor interface for evaluators that are the output of (single-output) regression learning algorithms. Existing such evaluators have been updated to implement this interface.
    • Added support for regression ensembles including additive and averaging ensembles with and without weights. Added a learner for regression bagging in BaggingRegressionLearner.
    • Added a simple univariate regression class in UnivariateLinearRegression.
    • MultivariateDecorrelator now is a VectorInputEvaluator and VectorOutputEvaluator.
    • Added bias term to PrimalEstimatedSubGradient.
  • Text Core:
    • Fixed an issue where the start position of tokens from LetterNumberTokenizer was off by one for all but the first token.
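
To make the learner-composition change above more concrete, here is a minimal generic Java sketch of the input/output-transformed learner idea; the interface, class, and method names are assumptions made for illustration and do not match the actual Cognitive Foundry API.

    import java.util.List;
    import java.util.function.Function;
    import java.util.stream.Collectors;

    // Generic sketch of composing a learner with input/output transforms;
    // all names and signatures here are assumptions, not the Foundry API.
    interface SimpleBatchLearner<I, O> {
        /** Learns a predictor from paired inputs and outputs. */
        Function<I, O> learn(List<I> inputs, List<O> outputs);
    }

    /** Composes an input transform, a wrapped learner, and an output decoder
     *  into one supervised learner, mirroring the "triplet" idea above. */
    class InputOutputTransformedLearnerSketch<I, TI, TO, O> implements SimpleBatchLearner<I, O> {
        private final Function<I, TI> inputTransform;
        private final SimpleBatchLearner<TI, TO> learner;
        private final Function<O, TO> outputEncoder;   // training outputs, forward
        private final Function<TO, O> outputDecoder;   // predictions, backward

        InputOutputTransformedLearnerSketch(Function<I, TI> inputTransform,
                                            SimpleBatchLearner<TI, TO> learner,
                                            Function<O, TO> outputEncoder,
                                            Function<TO, O> outputDecoder) {
            this.inputTransform = inputTransform;
            this.learner = learner;
            this.outputEncoder = outputEncoder;
            this.outputDecoder = outputDecoder;
        }

        @Override
        public Function<I, O> learn(List<I> inputs, List<O> outputs) {
            List<TI> ti = inputs.stream().map(inputTransform).collect(Collectors.toList());
            List<TO> to = outputs.stream().map(outputEncoder).collect(Collectors.toList());
            Function<TI, TO> inner = learner.learn(ti, to);
            // Learned evaluator: transform the input, apply the inner model,
            // then map the prediction back to the original output space.
            return x -> outputDecoder.apply(inner.apply(inputTransform.apply(x)));
        }
    }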

About: Fast and robust learning of Bayesian networks

Changes:

Initial Announcement on mloss.org.


HLearn 1.0

by mikeizbicki - May 9, 2013, 05:58:18 CET [ Project Homepage BibTeX Download ] 2337 views, 527 downloads, 1 subscription

About: HLearn makes simple machine learning routines available in Haskell by expressing them according to their algebraic structure
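
As a rough illustration of the algebraic-structure idea (expressed here in Java for consistency with the rest of this page; HLearn itself is a Haskell library), a running mean can be treated as a commutative monoid, so partial results trained on chunks of the data can be merged in any order, enabling parallel and online training.

    // Illustrative only: a running mean as a commutative monoid.
    final class MeanMonoid {
        final double sum;
        final long count;

        MeanMonoid(double sum, long count) { this.sum = sum; this.count = count; }

        static MeanMonoid identity() { return new MeanMonoid(0.0, 0L); }

        static MeanMonoid of(double x) { return new MeanMonoid(x, 1L); }

        /** Associative, commutative combine: partial results from data chunks
         *  (computed in parallel or online) can be merged in any grouping. */
        MeanMonoid mappend(MeanMonoid other) {
            return new MeanMonoid(this.sum + other.sum, this.count + other.count);
        }

        double mean() { return count == 0 ? Double.NaN : sum / count; }
    }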

Changes:

Updated to version 1.0


r-cran-bigRR 1.3-8

by r-cran-robot - May 8, 2013, 00:00:00 CET [ Project Homepage BibTeX Download ] 890 views, 248 downloads, 0 subscriptions

About: Generalized Ridge Regression (with special advantage for p >> n cases)
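
The advantage for p >> n presumably comes from the standard reformulation of the ridge solution as an n-by-n system (this is an inference from the description, not a statement about bigRR's internals):

    \hat{\beta} = (X^\top X + \lambda I_p)^{-1} X^\top y = X^\top (X X^\top + \lambda I_n)^{-1} y

so only an n-by-n matrix has to be factorized when the number of predictors p far exceeds the sample size n.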

Changes:

Fetched by r-cran-robot on 2014-07-01 00:00:04.317556


OptWok 0.3.1

by ong - May 2, 2013, 10:46:11 CET [ Project Homepage BibTeX Download ] 6441 views, 1248 downloads, 1 subscription

About: A collection of Python code for research in optimization. The aim is to provide reusable components that can be quickly applied to machine learning problems. Used in: ellipsoidal multiple instance learning; difference-of-convex-functions algorithms for sparse classification; a contextual bandits upper confidence bound algorithm (using GPs); and learning output kernels, i.e., kernels between the labels of a classifier.

Changes:
  • minor bugfix

KNIME 2.7.4

by toldo - April 29, 2013, 09:14:39 CET [ Project Homepage BibTeX BibTeX for corresponding Paper Download ] 2098 views, 410 downloads, 1 subscription

About: A comprehensive data mining environment, with a variety of machine learning components.

Changes:

Modifications following feedback from the main KNIME author.


Intelligent Parameter Utilization Tool 0.4

by feldob - April 28, 2013, 18:05:45 CET [ Project Homepage BibTeX BibTeX for corresponding Paper Download ] 1193 views, 275 downloads, 1 subscription

About: A descriptive, programming-language-independent format and API for the simplified configuration, documentation, and design of computer experiments.

Changes:

Initial Announcement on mloss.org.


HDDM 0.5

by Wiecki - April 24, 2013, 02:53:07 CET [ Project Homepage BibTeX Download ] 2986 views, 796 downloads, 1 subscription

About: HDDM is a Python toolbox for hierarchical Bayesian parameter estimation of the Drift Diffusion Model (via PyMC). Drift Diffusion Models are used widely in psychology and cognitive neuroscience to study decision making.
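
For readers unfamiliar with the model, the following is a minimal, generic Java simulation of a single drift-diffusion trial (Euler-Maruyama with unit diffusion); it only illustrates the model that HDDM fits and is unrelated to HDDM's Python implementation.

    import java.util.Random;

    // Generic one-trial drift-diffusion simulation; parameter names follow the
    // usual DDM convention and are not HDDM's API: v = drift rate, a = boundary
    // separation, z = relative start point in (0, 1), t = non-decision time,
    // dt = step size in seconds.
    final class DdmTrialSketch {
        /** Returns the response time in seconds; the sign marks the boundary
         *  reached (positive = upper, negative = lower). */
        static double simulate(double v, double a, double z, double t,
                               double dt, Random rng) {
            double x = z * a;                    // starting evidence in (0, a)
            double time = 0.0;
            double noiseScale = Math.sqrt(dt);   // diffusion coefficient fixed to 1
            while (x > 0.0 && x < a) {
                x += v * dt + noiseScale * rng.nextGaussian();
                time += dt;
            }
            double rt = t + time;
            return (x >= a) ? rt : -rt;
        }

        public static void main(String[] args) {
            Random rng = new Random(42);
            System.out.println(simulate(0.5, 2.0, 0.5, 0.3, 1e-3, rng));
        }
    }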

Changes:
  • New and improved HDDM model with the following changes:
    • Priors: by default, the model uses informative priors (see http://ski.clps.brown.edu/hddm_docs/methods.html#hierarchical-drift-diffusion-models-used-in-hddm). If you want uninformative priors, set informative=False.
    • Sampling: This model uses slice sampling, which converges in fewer iterations even though each individual sample is slower to generate. In our experiments, a burn-in of 20 is often sufficient (a generic slice-sampling sketch follows this list).
    • Inter-trial variability parameters are only estimated at the group level, not for individual subjects.
    • The old model has been renamed to HDDMTransformed.
    • HDDMRegression and HDDMStimCoding are also using this model.
  • HDDMRegression takes patsy model specification strings. See http://ski.clps.brown.edu/hddm_docs/howto.html#estimate-a-regression-model and http://ski.clps.brown.edu/hddm_docs/tutorial_regression_stimcoding.html#chap-tutorial-hddm-regression
  • Improved online documentation at http://ski.clps.brown.edu/hddm_docs
  • A new HDDM demo at http://ski.clps.brown.edu/hddm_docs/demo.html
  • Ratcliff's quantile optimization method for single subjects and groups using the .optimize() method
  • Maximum likelihood optimization.
  • Many bugfixes and better test coverage.
  • The hddm_fit.py command-line utility is deprecated.
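
Regarding the sampling change mentioned above, here is a compact, generic sketch of univariate slice sampling with stepping-out and shrinkage (Neal, 2003); it is illustrative only and is not HDDM or PyMC code.

    import java.util.Random;
    import java.util.function.DoubleUnaryOperator;

    // Generic univariate slice sampler; not HDDM or PyMC code.
    final class SliceSamplerSketch {
        /** Draws the next state from a density proportional to exp(logDensity),
         *  starting from the current state x0 with an initial interval width. */
        static double next(double x0, DoubleUnaryOperator logDensity,
                           double width, Random rng) {
            // 1. Draw the auxiliary height that defines the horizontal slice.
            double logY = logDensity.applyAsDouble(x0) + Math.log(rng.nextDouble());

            // 2. Step out an interval [left, right] until it brackets the slice.
            double left = x0 - width * rng.nextDouble();
            double right = left + width;
            while (logDensity.applyAsDouble(left) > logY)  { left -= width; }
            while (logDensity.applyAsDouble(right) > logY) { right += width; }

            // 3. Sample uniformly from the interval, shrinking it on rejection.
            while (true) {
                double x1 = left + rng.nextDouble() * (right - left);
                if (logDensity.applyAsDouble(x1) > logY) {
                    return x1;
                }
                if (x1 < x0) { left = x1; } else { right = x1; }
            }
        }
    }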
