All entries.
Showing Items 21-30 of 609 on page 3 of 61.

Logo Milk 0.5

by luispedro - November 7, 2012, 13:08:28 CET [ Project Homepage BibTeX Download ] 27531 views, 6866 downloads, 1 subscription

Rating: 3 of 5 stars (based on 2 votes)

About: Python Machine Learning Toolkit

Changes:

Added LASSO (using coordinate descent optimization). Made SVM classification (learning and applying) much faster: 2.5x speedup on the UCI yeast dataset.
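
For readers unfamiliar with the technique, here is a minimal Python sketch of LASSO fitted by cyclic coordinate descent with soft-thresholding updates. The function, data and regularisation value are illustrative only and do not reflect milk's actual API.

    import numpy as np

    def lasso_cd(X, y, lam, n_iter=100):
        """Minimise 0.5*||y - X w||^2 + lam*||w||_1 by cyclic coordinate descent."""
        n, d = X.shape
        w = np.zeros(d)
        col_sq = (X ** 2).sum(axis=0)               # precomputed squared column norms
        for _ in range(n_iter):
            for j in range(d):
                # partial residual that excludes feature j's current contribution
                r_j = y - X @ w + X[:, j] * w[j]
                rho = X[:, j] @ r_j
                # soft-thresholding update for coordinate j
                w[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
        return w

    # illustrative usage on synthetic data with two informative features
    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 10))
    y = X @ np.array([2.0, -1.0] + [0.0] * 8) + 0.1 * rng.normal(size=50)
    print(lasso_cd(X, y, lam=1.0))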


Logo JMLR GPstuff 4.6

by avehtari - July 15, 2015, 15:08:06 CET [ Project Homepage BibTeX BibTeX for corresponding Paper Download ] 28893 views, 6798 downloads, 2 subscriptions

Rating: 5 of 5 stars (based on 1 vote)

About: The GPstuff toolbox is a versatile collection of Gaussian process models and computational tools required for inference. The tools include, among others, various inference methods, sparse approximations and model assessment methods.

Changes:

2015-07-09 Version 4.6

Development and release branches available at https://github.com/gpstuff-dev/gpstuff

New features

  • Use Pareto smoothed importance sampling (Vehtari & Gelman, 2015) for:

    • importance sampling leave-one-out cross-validation (gpmc_loopred.m); a sketch of the idea follows this changelog
    • importance sampling integration over hyperparameters (gp_ia.m)
    • the importance sampling part of logistic Gaussian process density estimation (lgpdens.m)
    • References:
      • Aki Vehtari and Andrew Gelman (2015). Pareto smoothed importance sampling. arXiv preprint arXiv:1507.02646.
      • Aki Vehtari, Andrew Gelman and Jonah Gabry (2015). Efficient implementation of leave-one-out cross-validation and WAIC for evaluating fitted Bayesian models.
  • New covariance functions

    • gpcf_additive creates a mixture over products of kernels for each dimension. Reference: Duvenaud, D. K., Nickisch, H., & Rasmussen, C. E. (2011). Additive Gaussian processes. In Advances in Neural Information Processing Systems, pp. 226-234.
    • gpcf_linearLogistic corresponds to a logistic mean function
    • gpcf_linearMichelismenten corresponds to a Michaelis-Menten mean function

Improvements - faster EP moment calculation for lik_logit

Several minor bugfixes
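
GPstuff itself is a Matlab/Octave toolbox; the Python sketch below only illustrates the importance-sampling LOO idea referenced in the feature list, using plain self-normalised weights. Pareto smoothed importance sampling additionally stabilises the largest weights of each observation with a fitted generalized Pareto tail (Vehtari & Gelman, 2015). All names are illustrative, not GPstuff identifiers.

    import numpy as np

    def is_loo(log_lik):
        """Importance-sampling LOO log predictive densities.

        log_lik: (S, N) array of log p(y_i | theta_s) for S posterior draws and N points.
        Plain IS weights w_si = 1 / p(y_i | theta_s); PSIS would further smooth the
        largest weights per point using a generalized Pareto fit to their tail.
        """
        log_w = -log_lik                               # raw log importance weights
        log_w -= log_w.max(axis=0, keepdims=True)      # stabilise before exponentiating
        w = np.exp(log_w)
        w /= w.sum(axis=0, keepdims=True)              # self-normalise per data point
        loo_dens = (w * np.exp(log_lik)).sum(axis=0)   # sum_s w_si * p(y_i | theta_s)
        return np.log(loo_dens)                        # per-point contributions to elpd_loo

    # toy check: posterior draws for the mean of a normal model (illustrative only)
    rng = np.random.default_rng(1)
    y = rng.normal(size=20)
    theta = rng.normal(loc=y.mean(), scale=0.2, size=500)
    log_lik = -0.5 * (y[None, :] - theta[:, None]) ** 2 - 0.5 * np.log(2 * np.pi)
    print(is_loo(log_lik).sum())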


Logo JMLR Sally 1.0.0

by konrad - March 26, 2015, 17:01:35 CET [ Project Homepage BibTeX BibTeX for corresponding Paper Download ] 34295 views, 6647 downloads, 3 subscriptions

About: A Tool for Embedding Strings in Vector Spaces

Changes:

Added support for explicit selection of granularity. Several minor bug fixes. We have reached version 1.0.
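
Sally itself is a standalone C tool; purely to illustrate the underlying idea of embedding strings in a vector space, the Python sketch below maps a string to a hashed bag of character n-grams. The n-gram size and dimensionality are arbitrary choices for the example, not Sally defaults.

    import zlib
    import numpy as np

    def ngram_embed(s, n=3, dim=4096):
        """Embed a string as a hashed bag of character n-grams, L2-normalised."""
        v = np.zeros(dim)
        for i in range(len(s) - n + 1):
            v[zlib.crc32(s[i:i + n].encode()) % dim] += 1.0
        norm = np.linalg.norm(v)
        return v / norm if norm > 0 else v

    a = ngram_embed("GET /index.html HTTP/1.1")
    b = ngram_embed("GET /index.php HTTP/1.1")
    print(float(a @ b))   # cosine similarity of the two embedded strings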


Logo r-cran-mboost 2.2-2

by r-cran-robot - February 8, 2013, 00:00:00 CET [ Project Homepage BibTeX Download ] 36255 views, 6578 downloads, 1 subscription

About: Model-Based Boosting

Changes:

Fetched by r-cran-robot on 2013-04-01 00:00:06.324985


Logo JMLR CARP 3.3

by volmeln - November 7, 2013, 15:48:06 CET [ Project Homepage BibTeX BibTeX for corresponding Paper Download ] 19651 views, 6256 downloads, 1 subscription

About: CARP: The Clustering Algorithms’ Referee Package

Changes:

Generalized overlap error and some bugs have been fixed


Logo r-cran-party 1.0-6

by r-cran-robot - January 9, 2013, 00:00:00 CET [ Project Homepage BibTeX Download ] 26616 views, 6233 downloads, 1 subscription

About: A Laboratory for Recursive Partytioning

Changes:

Fetched by r-cran-robot on 2013-04-01 00:00:06.775432


Logo PyMVPA Multivariate Pattern Analysis in Python 2.0.0

by yarikoptic - December 22, 2011, 01:36:32 CET [ Project Homepage BibTeX BibTeX for corresponding Paper Download ] 34803 views, 6208 downloads, 1 subscription

Rating: 4 of 5 stars (based on 2 votes)

About: Python module to ease pattern classification analyses of large datasets. It provides high-level abstraction of typical processing steps (e.g. data preparation, classification, feature selection, [...]

Changes:
  • 2.0.0 (Mon, Dec 19 2011)

This release aggregates all the changes that occurred between the official releases in the 0.4 series and the various snapshot releases (in the 0.5 and 0.6 series). For a better overview of the high-level changes, see the :ref:release notes for 0.5 <chap_release_notes_0.5> and :ref:0.6 <chap_release_notes_0.6>, as well as the summaries of release candidates below.

  • Fixes (23 BF commits)

    • the significance level in the right tail was fixed to include the value being tested -- otherwise it resulted in an optimistic bias (or absurdly high significance in the improbable case of all estimates having the same value); a sketch of the corrected estimate follows this changelog
    • compatible with the upcoming IPython 0.12 and renamed sklearn (Fixes #57)
    • do not double-train slave classifiers while assessing sensitivities (Fixes #53)
  • Enhancements (30 ENH + 3 NF commits)

    • resolving voting ties in kNN based on mean distance, and randomly in SMLR
    • :class:kNN's ca.estimates now contains dictionaries with votes for each class
    • consistent zscoring in :class:Hyperalignment
  • 2.0.0~rc5 (Wed, Oct 19 2011)

  • Major: to allow easy co-existence of stable PyMVPA 0.4.x and the 0.6 development series, the mvpa module was renamed to :mod:mvpa2.

  • Fixes

    • compatible with the new Shogun 1.x series
    • compatible with the new h5py 2.x series
    • mvpa-prep-fmri -- various compatibility fixes and smoke testing
    • deepcopying :class:SummaryStatistics during add
  • Enhancements

    • tutorial uses :mod:mvpa2.tutorial_suite now
    • better suppression of R warnings when needed
    • internal attributes of many classes were exposed as properties
    • more unification of __repr__ for many classes
  • 0.6.0~rc4 (Wed, Jun 14 2011)

  • Fixes

    • Finished transition to :mod:nibabel conventions in plot_lightbox
    • Addressed :mod:matplotlib.hist API change
    • Various adjustments in the test batteries (:mod:nibabel 1.1.0 compatibility, etc.)
  • New functionality

    • Explicit new argument flatten to from_wizard -- default behavior changed if mapper was provided as well
  • Enhancements

    • Elaborated __str__ and __repr__ for some Classifiers and Measures
  • 0.6.0~rc3 (Thu, Apr 12 2011)

  • Fixes

    • Bugfixes regarding the interaction of FlattenMapper and BoxcarMapper that affected event-related analyses.
    • Splitter now handles attribute value None for splitting properly.
    • GNBSearchlight handling of roi_ids.
    • More robust detection of :mod:scikits.learn and :mod:nipy externals.
  • New functionality

    • Added a Repeater node to yield a dataset multiple times and a Sifter node to exclude some datasets. Consequently, the "nosplitting" mode of Splitter was removed.
    • :file:tools/niils -- little tool to list details (dimensionality, scaling, etc) of the files in nibabel-supported formats.
  • Enhancements

    • Numerous documentation fixes.
    • Various improvements and increased flexibility of null distribution estimation of Measures.
    • All attributes are now reported in sorted order when printing a dataset.
    • fmri_dataset now also stores the input image type.
    • CrossValidation can now take a custom Splitter instance. Moreover, the default splitter of CrossValidation is more robust in terms of the number and type of created splits for common usage patterns (i.e. together with partitioners).
    • CrossValidation takes any custom Node as the errorfx argument.
    • ConfusionMatrix can now be used as an errorfx in CrossValidation.
    • LOE(ACC): Linear Order Effect in ACC was added to ConfusionMatrix to detect trends in performance across splits.
    • A Node's postproc is now accessible as a property.
    • RepeatedMeasure has a new 'concat_as' argument that allows results to be concatenated along the feature axis. The default behavior, stacking as multiple samples, is unchanged.
    • Searchlight now has the ability to mark the center/seed of an ROI with a feature attribute in the generated datasets.
    • debug takes an args parameter for delayed string formatting. This should reduce the run-time impact of debug() calls in regular, non -O mode of Python operation.
    • String summaries and representations (provided by __str__ and __repr__) were made more exhaustive and more coherent. Additional properties to access initial constructor arguments were added to a variety of classes.
  • Internal changes

    • New debug target STDOUT to allow attaching metrics (e.g. traceback, timestamps) to regular output printed to stdout

    • New set of decorators to help with unittests:

      • @nodebug to disable specific debug targets for the duration of the test.
      • @reseed_rng to guarantee consistent random data given initial seeding.
      • @with_tempfile to provide a tempfile name which gets removed upon completion (test success or failure).

    • Dropping daily testing of maint/0.5 branch -- RIP.

    • Collections were provided with adequate copy/deepcopy support, and Dataset was refactored to use Collection's copy method.

    • update-* Makefile rules should automatically fast-forward the corresponding website-updates branch

    • MVPA_TESTS_VERBOSITY controls also :mod:numpy warnings now.

    • Dataset.__array__ provides the original array instead of a copy (unless dtype is provided)

Also adapts changes from 0.4.6 and 0.4.7 (see corresponding changelogs).

  • 0.6.0~rc2 (Thu, Mar 3 2011)

  • Various fixes in the mvpa.atlas module.

  • 0.6.0~rc1 (Thu, Feb 24 2011)

  • Many, many, many

  • For an overview of the most drastic changes, see the constantly evolving :ref:release notes for 0.6 <chap_release_notes_0.6>

  • 0.5.0 (sometime in March 2010)

This is a special release, because it has never seen the general public. A summary of fundamental changes introduced in this development version can be seen in the :ref:release notes <chap_release_notes_0.5>.

Most notably, this version was the first to come with a comprehensive two-day workshop/tutorial.

  • 0.4.7 (Tue, Mar 07 2011) (Total: 12 commits)

A bugfix release

  • Fixed

    • Addressed the issue with input NIfTI files having scl_ fields set: it could result in incorrect analyses and incorrect map2nifti-produced NIfTI files. Input files now account for scaling/offset if the scl_ fields direct to do so; moreover, upon map2nifti those fields get reset.
    • :file:doc/examples/searchlight_minimal.py - best error is the minimal one
  • Enhancements

    • :class:~mvpa.clfs.gnb.GNB can now tolerate training datasets with a single label
    • :class:~mvpa.clfs.meta.TreeClassifier can have trailing nodes with no classifier assigned
  • 0.4.6 (Tue, Feb 01 2011) (Total: 20 commits)

A bugfix release

  • Fixed (few BF commits):

    • Compatibility with numpy 1.5.1 (histogram) and scipy 0.8.0 (workaround for a regression in legendre)
    • Compatibility with libsvm 3.0
    • :class:~mvpa.clfs.plr.PLR robustification
  • Enhancements

    • Enforce suppression of numpy warnings while running unittests. Also setting verbosity >= 3 enables all warnings (Python, NumPy, and PyMVPA)
    • :file:doc/examples/nested_cv.py example (adopted from 0.5)
    • Introduced base class :class:~mvpa.clfs.base.LearnerError for classifiers' exceptions (adopted from 0.5)
    • Adjusted example data to live up to nibabel's warranty of NIfTI standard-compliance
    • More robust operation of MC iterations -- skip iterations where the classifier experienced difficulties and raised an exception (e.g. due to degenerate data)
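
The right-tail significance fix mentioned under the 2.0.0 fixes above can be illustrated with a small sketch: counting the observed statistic itself among the null samples avoids a zero p-value and the optimistic bias, and yields p = 1 in the degenerate case where every null sample equals the observed value. The function and data below are illustrative, not PyMVPA's API.

    import numpy as np

    def right_tail_p(observed, null_samples):
        """Right-tail p-value that includes the tested value among the null samples."""
        null_samples = np.asarray(null_samples)
        hits = np.sum(null_samples >= observed)
        return (hits + 1) / (null_samples.size + 1)

    null = np.random.default_rng(2).normal(size=1000)
    print(right_tail_p(2.0, null))   # roughly P(Z >= 2) under a standard normal null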

Logo glm-ie

About: The glm-ie toolbox contains scalable estimation routines for GLMs (generalised linear models) and SLMs (sparse linear models), as well as an implementation of a scalable convex variational Bayesian inference relaxation. We designed the glm-ie package to be simple, generic and easily extensible. Most of the code is written in Matlab, including some MEX files. The code is fully compatible with both Matlab 7.x and GNU Octave 3.2.x. Probabilistic classification, sparse linear modelling and logistic regression are covered in a common algorithmic framework allowing for both MAP estimation and approximate Bayesian inference.

Changes:

Added factorial mean field inference as a third algorithm, complementing expectation propagation and variational Bayes.

Generalised non-Gaussian potentials so that affine instead of linear functions of the latent variables can be used.
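
glm-ie is a Matlab/Octave package; the Python sketch below only illustrates the difference between evaluating a non-Gaussian potential on a linear function B*u of the latent variables and on its affine generalisation B*u + b. All names are generic placeholders, not glm-ie identifiers.

    import numpy as np

    def laplace_potential(s, tau=1.0):
        """A sparse (Laplace) potential evaluated elementwise: exp(-tau * |s|), up to a constant."""
        return np.exp(-tau * np.abs(s))

    rng = np.random.default_rng(3)
    B = rng.normal(size=(5, 3))      # transform matrix applied to the latent variables
    u = rng.normal(size=3)           # latent variables
    b = rng.normal(size=5)           # offset that makes the argument affine

    pot_linear = laplace_potential(B @ u)       # potentials on a linear function of u
    pot_affine = laplace_potential(B @ u + b)   # generalisation: affine function of u
    print(pot_linear, pot_affine)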


Logo JMLR Shark 2.3.0

by igel - October 24, 2009, 22:12:48 CET [ Project Homepage BibTeX Download ] 29962 views, 5927 downloads, 1 subscription

Rating: 4 of 5 stars (based on 3 votes)

About: SHARK is a modular C++ library for the design and optimization of adaptive systems. It provides various machine learning and computational intelligence techniques.

Changes:
  • new build system
  • minor bug fixes

Logo JMLR MSVMpack 1.5

by lauerfab - July 3, 2014, 16:02:49 CET [ Project Homepage BibTeX BibTeX for corresponding Paper Download ] 19172 views, 5906 downloads, 2 subscriptions

About: MSVMpack is a Multi-class Support Vector Machine (M-SVM) package. It is dedicated to SVMs which can handle more than two classes without relying on decomposition methods and implements the four M-SVM models from the literature: Weston and Watkins M-SVM, Crammer and Singer M-SVM, Lee, Lin and Wahba M-SVM, and the M-SVM2 of Guermeur and Monfrini.
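
As a small illustration of one of the formulations listed above, the Python snippet below computes the Crammer and Singer multi-class hinge loss for a single example. The inputs are illustrative; this is not MSVMpack's interface.

    import numpy as np

    def crammer_singer_hinge(scores, y):
        """Crammer & Singer multi-class hinge loss for one example.

        scores: length-K array of class scores f_k(x); y: index of the true class.
        Loss = max(0, 1 + max_{k != y} f_k(x) - f_y(x)).
        """
        scores = np.asarray(scores, dtype=float)
        margin = np.delete(scores, y).max() - scores[y]
        return max(0.0, 1.0 + margin)

    print(crammer_singer_hinge([0.2, 1.5, -0.3], y=1))   # true class wins with margin > 1 -> 0.0
    print(crammer_singer_hinge([1.0, 0.4, 0.9], y=1))    # margin violated -> 1.6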

Changes:
  • Windows binaries are now included (by Emmanuel Didiot)
  • MSVMpack can now be compiled on Windows (by Emmanuel Didiot)
  • Fixed polynomial kernel
  • Minor bug fixes
