Projects that are tagged with regression.
Showing items 1-20 of 39.

JMLR dlib ml 18.12

by davis685 - December 20, 2014, 22:38:51 CET [ Project Homepage BibTeX BibTeX for corresponding Paper Download ] 87834 views, 15198 downloads, 2 subscriptions

About: This project is a C++ toolkit containing machine learning algorithms and tools for creating complex software in C++ to solve real world problems.

Changes:

This release adds tools for computing 2D FFTs, Hough transforms, and image skeletonizations, as well as a simple, type-safe API for calling C++ code from MATLAB.


WEKA 3.7.12

by mhall - December 17, 2014, 03:04:17 CET [ Project Homepage BibTeX BibTeX for corresponding Paper Download ] 41650 views, 6152 downloads, 2 subscriptions

Rating: 4/5 (based on 6 votes)

About: The Weka workbench contains a collection of visualization tools and algorithms for data analysis and predictive modelling, together with graphical user interfaces for easy access to this [...]

Changes:

In core weka:

  • GUIChooser now has a plugin extension point that allows implementations of GUIChooser.GUIChooserMenuPlugin to appear as entries in either the Tools or Visualization menus
  • SubsetByExpression filter now has support for regexp matching
  • weka.classifiers.IterativeClassifierOptimizer - a classifier that can efficiently optimize the number of iterations for a base classifier that implements IterativeClassifier
  • Speedup for LogitBoost in the two class case
  • weka.filters.supervised.instance.ClassBalancer - a simple filter to balance the weight of classes
  • New class hierarchy for stopwords algorithms. Includes new methods to read custom stopwords from a file and apply multiple stopwords algorithms
  • Ability to turn off capabilities checking in Weka algorithms. Improves runtime for ensemble methods that create a lot of simple base classifiers
  • Memory savings in weka.core.Attribute
  • Improvements in runtime for SimpleKMeans and EM
  • weka.estimators.UnivariateMixtureEstimator - new mixture estimator

In packages:

  • New discriminantAnalysis package. Provides an implementation of Fisher's linear discriminant analysis
  • Quartile estimators, correlation matrix heat map and k-means++ clustering in distributed Weka
  • Support for default settings for GridSearch via a properties file
  • Improvements in scripting with the addition of the official Groovy console (kfGroovy package) from the Groovy project and TigerJython (new tigerjython package) as the Jython console via the GUIChooser
  • Support for the latest version of MLR in the RPlugin package
  • EAR4 package contributed by Vahid Jalali
  • StudentFilters package contributed by Chris Gearhart
  • graphgram package contributed by Johannes Schneider

JMLR GPML Gaussian Processes for Machine Learning Toolbox 3.5

by hn - December 8, 2014, 13:54:38 CET [ Project Homepage BibTeX Download ] 20578 views, 4806 downloads, 3 subscriptions

Rating: 5/5 (based on 2 votes)

About: The GPML toolbox is a flexible and generic Octave 3.2.x and Matlab 7.x implementation of inference and prediction in Gaussian Process (GP) models.

Changes:
  • mechanism for specifying hyperparameter priors (together with Roman Garnett and José Vallet)
  • new inference method inf/infGrid allowing efficient inference for data defined on a Cartesian grid (together with Andrew Wilson)
  • new mean/cov functions for preference learning: meanPref/covPref
  • new mean/cov functions for non-vectorial data: meanDiscrete/covDiscrete
  • new piecewise constant nearest neighbor mean function: meanNN
  • new mean functions being predictions from GPs: meanGP and meanGPexact
  • new covariance function for standard additive noise: covEye
  • new covariance function for factor analysis: covSEfact
  • new covariance function with varying length scale: covSEvlen (see the note after this list)
  • generalized covScale to allow scaling with a function instead of a scalar
  • bugfix in covGabor* and covSM (due to Andrew Gordon Wilson)
  • bugfix in lik/likBeta.m (suggested by Dali Wei)
  • bugfix in solve_chol.c (due to Todd Small)
  • bugfix in FITC inference mode (due to Joris Mooij) where the wrong mode for post.L was chosen when using infFITC and post.L being a diagonal matrix
  • bugfix in infVB marginal likelihood for likLogistic with nonzero mean function (reported by James Lloyd)
  • removed the combination likErf/infVB as it yields a bad posterior approximation and lacks theoretical justification
  • Matlab and Octave compilation for L-BFGS-B v2.4 and the more recent L-BFGS-B v3.0 (contributed by José Vallet)
  • smaller bugfixes in gp.m (due to Joris Mooij and Ernst Kloppenburg)
  • bugfix in lik/likBeta.m (due to Dali Wei)
  • updated use of logphi in lik/likErf
  • bugfix in util/solve_chol.c where a typing issue occurred on OS X (due to Todd Small)
  • bugfix due to Bjørn Sand Jensen noticing that cov_deriv_sq_dist.m was missing in the distribution
  • bugfix in infFITC_EP for ttau->inf (suggested by Ryan Turner)
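
A note on the covSEvlen item above: assuming it implements the usual nonstationary squared exponential with an input-dependent length scale ℓ(x) (the Gibbs construction; the symbols below are illustrative and not taken from the toolbox documentation), the one-dimensional covariance would read

$$ k(x,x') = \sigma_f^2 \sqrt{\frac{2\,\ell(x)\,\ell(x')}{\ell(x)^2+\ell(x')^2}} \exp\!\left(-\frac{(x-x')^2}{\ell(x)^2+\ell(x')^2}\right), $$

where $\sigma_f^2$ denotes the signal variance; for a constant ℓ this reduces to the standard isotropic squared exponential (covSEiso).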

The Statistical ToolKit 0.8.4

by joblion - December 5, 2014, 13:21:47 CET [ Project Homepage BibTeX Download ] 675 views, 197 downloads, 2 subscriptions

About: STK++: A Statistical Toolkit Framework in C++

Changes:

Integrated OpenMP into the current release. Many enhancements in the clustering project. Bug fixes.


pyGPs 1.3.1

by mn - December 1, 2014, 17:36:32 CET [ Project Homepage BibTeX Download ] 3026 views, 722 downloads, 3 subscriptions

About: pyGPs is a Python package for Gaussian process (GP) regression and classification for machine learning.

Changes:

Changelog pyGPs v1.3.1

November 25th 2014

structural updates:

  • full inline documentation with input parameter and output specified

  • added checks for the inputs, with diagnostic messages for some of them

  • consistent naming in inline and online documentation

  • string representations for dnlZStruct and postStruct. Now you can do something like:

nlZ, dnlZ, post = model.getPosterior(x,y)

print post

  • instead of a plain Python object, a more informative description is now printed (a fuller sketch appears after this list)

  • added optimization to the unit test routines, along with checks for the Cholesky decomposition and for positive-definiteness of the kernel matrix

  • added jitter to the diagonal of the linear, linARD, and poly covariances for numerical stability

  • fixed several minor problems in the unit test framework

  • hierarchically rearranged the online documentation

  • added several supplementary instructions to the online documentation
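
A minimal usage sketch of the getPosterior() call shown above, assuming the standard pyGPs regression class GPR with its default mean, covariance, and likelihood; the toy data are illustrative only:

import numpy as np
import pyGPs

x = np.random.randn(20, 1)                      # toy inputs
y = np.sin(x) + 0.1 * np.random.randn(20, 1)    # toy targets

model = pyGPs.GPR()                             # GP regression with default mean/cov/lik
nlZ, dnlZ, post = model.getPosterior(x, y)      # neg. log marginal likelihood, its
                                                # derivatives, and the posterior struct
print(post)                                     # now prints an informative summary
print(dnlZ)                                     # likewise for the derivative struct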


linearizedGP 1.0

by dsteinberg - November 28, 2014, 07:02:54 CET [ Project Homepage BibTeX BibTeX for corresponding Paper Download ] 303 views, 48 downloads, 1 subscription

About: Gaussian processes with general nonlinear likelihoods using the unscented transform or Taylor series linearisation.

Changes:

Initial Announcement on mloss.org.


Boosted Decision Trees and Lists 1.0.4

by melamed - July 25, 2014, 23:08:32 CET [ BibTeX Download ] 3182 views, 953 downloads, 3 subscriptions

About: Boosting algorithms for classification and regression, with many variations. Features include: Scalable and robust; Easily customizable loss functions; One-shot training for an entire regularization path; Continuous checkpointing; and much more.

Changes:
  • added ElasticNets as a regularization option
  • fixed some segfaults, memory leaks, and out-of-range errors that had crept into some corner cases
  • added a couple of I/O optimizations

JMLR GPstuff 4.5

by avehtari - July 22, 2014, 14:03:11 CET [ Project Homepage BibTeX BibTeX for corresponding Paper Download ] 16111 views, 3893 downloads, 2 subscriptions

Rating: 5/5 (based on 1 vote)

About: The GPstuff toolbox is a versatile collection of Gaussian process models and computational tools required for inference. The tools include, among others, various inference methods, sparse approximations and model assessment methods.

Changes:

2014-07-22 Version 4.5

New features

  • Input dependent noise and signal variance.

    • Tolvanen, V., Jylänki, P. and Vehtari, A. (2014). Expectation Propagation for Nonstationary Heteroscedastic Gaussian Process Regression. In Proceedings of IEEE International Workshop on Machine Learning for Signal Processing, accepted for publication. Preprint http://arxiv.org/abs/1404.5443
  • Sparse stochastic variational inference model.

    • Hensman, J., Fusi, N. and Lawrence, N. D. (2013). Gaussian processes for big data. arXiv preprint http://arxiv.org/abs/1309.6835.
  • Option 'autoscale' in the gp_rnd.m to get split normal approximated samples from the posterior predictive distribution of the latent variable.

    • Geweke, J. (1989). Bayesian Inference in Econometric Models Using Monte Carlo Integration. Econometrica, 57(6):1317-1339.

    • Villani, M. and Larsson, R. (2006). The Multivariate Split Normal Distribution and Asymmetric Principal Components Analysis. Communications in Statistics - Theory and Methods, 35(6):1123-1140.

Improvements

  • New unit test environment using the Matlab built-in test framework (the old Xunit package is still also supported).
  • Precomputed demo results (including the figures) are now available in the folder tests/realValues.
  • New demos demonstrating new features etc.
    • demo_epinf, demonstrating the input dependent noise and signal variance model
    • demo_svi_regression, demo_svi_classification
    • demo_modelcomparison2, demo_survival_comparison

Several minor bugfixes


Kernel Adaptive Filtering Toolbox 1.4

by steven2358 - May 26, 2014, 18:24:23 CET [ Project Homepage BibTeX BibTeX for corresponding Paper Download ] 3701 views, 616 downloads, 1 subscription

About: A Matlab benchmarking toolbox for online and adaptive regression with kernels.

Changes:
  • Improvements and demo script for profiler
  • Initial version of documentation
  • Several new algorithms

Hivemall 0.1

by myui - October 25, 2013, 08:43:12 CET [ Project Homepage BibTeX BibTeX for corresponding Paper Download ] 3958 views, 608 downloads, 1 subscription

About: Hivemall is a scalable machine learning library running on Hive/Hadoop, licensed under the LGPL 2.1.

Changes:
  • Enhancement

    • Added AROW regression
    • Added AROW with a hinge loss (arowh_regress())
  • Bugfix

    • Fixed a bug of null feature handling in classification/regression

MLDemos 0.5.1

by basilio - March 2, 2013, 16:06:13 CET [ Project Homepage BibTeX Download ] 19273 views, 4547 downloads, 2 subscriptions

About: MLDemos is a user-friendly visualization interface for various machine learning algorithms for classification, regression, clustering, projection, dynamical systems, reward maximisation and reinforcement learning.

Changes:

New Visualization and Dataset Features

  • Added 3D visualization of samples and of classification, regression and maximization results
  • Added Visualization panel with individual plots, correlations, density, etc.
  • Added Editing tools to drag/magnet data, change class, increase or decrease dimensions of the dataset
  • Added categorical dimensions (indexed dimensions with non-numerical values)
  • Added Dataset Editing panel to swap, delete and rename dimensions, classes or categorical values
  • Several bug fixes for display, import/export of data, classification performance

New Algorithms and Methodologies

  • Added Projections to pre-process data (which can then be classified/regressed/clustered), with LDA, PCA, KernelPCA, ICA, CCA
  • Added Grid-Search panel for batch-testing ranges of values for up to two parameters at a time
  • Added One-vs-All multi-class classification for non-multi-class algorithms
  • Trained models can now be kept and tested on new data (training on one dataset, testing on another)
  • Added a dataset generator panel for standard toy datasets (e.g. swissroll, checkerboard, ...)
  • Added a number of clustering, regression and classification algorithms (FLAME, DBSCAN, LOWESS, CCA, KMEANS++, GP Classification, Random Forests)
  • Added Save/Load Model option for GMMs and SVMs
  • Added Growing Hierarchical Self Organizing Maps (original code by Michael Dittenbach)
  • Added Automatic Relevance Determination for SVM with RBF kernel (thanks to Ashwini Shukla!)


Orange 2.6

by janez - February 14, 2013, 18:15:08 CET [ Project Homepage BibTeX BibTeX for corresponding Paper Download ] 12188 views, 2354 downloads, 1 subscription

Rating: 4/5 (based on 1 vote)

About: Orange is a component-based machine learning and data mining software. It includes a friendly yet powerful and flexible graphical user interface for visual programming. For more advanced use(r)s, [...]

Changes:

The core of the system (except the GUI) no longer includes any GPL code and can be licensed under the terms of BSD upon request. The graphical part remains under GPL.

Changed the BibTeX reference to the paper recently published in JMLR MLOSS.


GPLP

About: This local and parallel computation toolbox is the Octave and Matlab implementation of several localized Gaussian process regression methods: the domain decomposition method (Park et al., 2011, DDM), partial independent conditional (Snelson and Ghahramani, 2007, PIC), localized probabilistic regression (Urtasun and Darrell, 2008, LPR), and bagging for Gaussian process regression (Chen and Ren, 2009, BGP). Most of the localized regression methods can be applied to general machine learning problems, although DDM is only applicable to spatial datasets. In addition, GPLP provides two parallel computation versions of the domain decomposition method. Ease of parallelization is one of the advantages of localized regression, and the two parallel implementations provide guidance on how to realize this advantage in software.

Changes:

Initial Announcement on mloss.org.


MLPY Machine Learning Py 3.5.0

by albanese - March 15, 2012, 09:52:41 CET [ Project Homepage BibTeX Download ] 51872 views, 9874 downloads, 2 subscriptions

Rating: 3.5/5 (based on 3 votes)

About: mlpy is a Python module for Machine Learning built on top of NumPy/SciPy and GSL.

Changes:

New features:

  • LibSvm(): pred_probability() now returns probability estimates; pred_values() added (a short usage sketch follows the Fix list below)
  • LibLinear(): pred_values() and pred_probability() added
  • dtw_std: squared Euclidean option added
  • LCS for series composed of real values (lcs_real()) added
  • Documentation

Fix:

  • wavelet submodule: cwt() returned only real values for the morlet and paul wavelets
  • IRelief(): remove np. in learn()
  • fix rfe_kfda and rfe_w2 when p=1
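
A hedged sketch of the new prediction calls and the dtw_std option; the toy data and the constructor arguments (in particular kernel_type='rbf' and probability=True, which the LibSVM-style interface presumably requires for probability estimates) are illustrative assumptions:

import numpy as np
import mlpy

x = np.random.randn(40, 2)                        # toy two-class data
y = np.where(x[:, 0] > 0, 1, -1)

svm = mlpy.LibSvm(kernel_type='rbf', probability=True)
svm.learn(x, y)
print(svm.pred_probability(x[:5]))                # probability estimates
print(svm.pred_values(x[:5]))                     # decision values (newly added)

s1 = np.array([0.0, 1.0, 2.0, 3.0])
s2 = np.array([0.0, 1.0, 1.0, 2.0, 3.0])
print(mlpy.dtw_std(s1, s2, dist_only=True, squared=True))  # squared Euclidean local distance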

JMLR LWPR 1.2.4

by sklanke - February 6, 2012, 19:55:41 CET [ Project Homepage BibTeX BibTeX for corresponding Paper Download ] 27880 views, 3473 downloads, 1 subscription

About: Locally Weighted Projection Regression (LWPR) is a recent algorithm that achieves nonlinear function approximation in high dimensional spaces with redundant and irrelevant input dimensions. At its [...]

Changes:

Version 1.2.4

  • Corrected a typo in lwpr.c (wrong function name for the multi-threaded helper function on Unix systems). Thanks to Jose Luis Rivero.

Kernel Machine Library 0.2

by pawelm - December 27, 2011, 17:14:01 CET [ Project Homepage BibTeX BibTeX for corresponding Paper ] 3636 views, 143 downloads, 1 subscription

About: The Kernel-Machine Library is a free (released under the LGPL) C++ library to promote the use and progress of kernel machines.

Changes:

Updated mloss entry (minor fixes).


PyMVPA Multivariate Pattern Analysis in Python 2.0.0

by yarikoptic - December 22, 2011, 01:36:32 CET [ Project Homepage BibTeX BibTeX for corresponding Paper Download ] 30203 views, 5526 downloads, 1 subscription

Rating: 4/5 (based on 2 votes)

About: Python module to ease pattern classification analyses of large datasets. It provides high-level abstraction of typical processing steps (e.g. data preparation, classification, feature selection, [...]

Changes:
  • 2.0.0 (Mon, Dec 19 2011)

This release aggregates all the changes that occurred between official releases in the 0.4 series and the various snapshot releases (in the 0.5 and 0.6 series). For a better overview of high-level changes see :ref:release notes for 0.5 <chap_release_notes_0.5> and :ref:0.6 <chap_release_notes_0.6>, as well as the summaries of release candidates below.

  • Fixes (23 BF commits)

    • significance level in the right tail was fixed to include the value tested -- otherwise it resulted in an optimistic bias (or absurdly high significance in the improbable case of all estimates having the same value)
    • compatible with the upcoming IPython 0.12 and renamed sklearn (Fixes #57)
    • do not double-train slave classifiers while assessing sensitivities (Fixes #53)
  • Enhancements (30 ENH + 3 NF commits)

    • resolving voting ties in kNN based on mean distance, and randomly in SMLR
    • :class:kNN's ca.estimates now contains dictionaries with votes for each class
    • consistent zscoring in :class:Hyperalignment
  • 2.0.0~rc5 (Wed, Oct 19 2011)

  • Major: to allow easy co-existence of the stable PyMVPA 0.4.x and the 0.6 development series, the mvpa module was renamed to :mod:mvpa2.

  • Fixes

    • compatible with the new Shogun 1.x series
    • compatible with the new h5py 2.x series
    • mvpa-prep-fmri -- various compatibility fixes and smoke testing
    • deepcopying :class:SummaryStatistics during add
  • Enhancements

    • tutorial uses :mod:mvpa2.tutorial_suite now
    • better suppression of R warnings when needed
    • internal attributes of many classes were exposed as properties
    • more unification of __repr__ for many classes
  • 0.6.0~rc4 (Wed, Jun 14 2011)

  • Fixes

    • Finished transition to :mod:nibabel conventions in plot_lightbox
    • Addressed :mod:matplotlib.hist API change
    • Various adjustments in the tests batteries (:mod:nibabel 1.1.0 compatibility, etc)
  • New functionality

    • Explicit new argument flatten to from_wizard -- default behavior changed if mapper was provided as well
  • Enhancements

    • Elaborated __str__ and __repr__ for some Classifiers and Measures
  • 0.6.0~rc3 (Thu, Apr 12 2011)

  • Fixes

    • Bugfixes regarding the interaction of FlattenMapper and BoxcarMapper that affected event-related analyses.
    • Splitter now handles attribute value None for splitting properly.
    • GNBSearchlight handling of roi_ids.
    • More robust detection of mod:scikits.learn and :mod:nipy externals.
  • New functionality

    • Added a Repeater node to yield a dataset multiple times and
      Sifter node to exclude some datasets. Consequently, the "nosplitting" mode of Splitter got removed at the same time.
    • :file:tools/niils -- little tool to list details (dimensionality, scaling, etc) of the files in nibabel-supported formats.
  • Enhancements

    • Numerous documentation fixes.
    • Various improvements and increased flexibility of null distribution estimation of Measures.
    • All attributes are now reported in sorted order when printing a dataset.
    • fmri_dataset now also stores the input image type.
    • Crossvalidation can now take a custom Splitter instance. Moreover, the default splitter of CrossValidation is more robust in terms of number and type of created splits for common usage patterns (i.e. together with partitioners).
    • CrossValidation takes any custom Node as errorfx argument (a short usage sketch appears after this changelog).
    • ConfusionMatrix can now be used as an errorfx in Crossvalidation.
    • LOE(ACC): Linear Order Effect in ACC was added to
      ConfusionMatrix to detect trends in performances across splits.
    • A Node's postproc is now accessible as a property.
    • RepeatedMeasure has a new 'concat_as' argument that allows results to be concatenated along the feature axis. The default behavior, stacking as multiple samples, is unchanged.
    • Searchlight now has the ability to mark the center/seed of an ROI with a feature attribute in the generated datasets.
    • debug takes args parameter for delayed string comprehensions. It should reduce run-time impact of debug() calls in regular, non -O mode of Python operation.
    • String summaries and representations (provided by __str__ and __repr__) were made more exhaustive and more coherent. Additional properties to access initial constructor arguments were added to variety of classes.
  • Internal changes

    • New debug target STDOUT to allow attaching metrics (e.g. traceback, timestamps) to regular output printed to stdout

    • New set of decorators to help with unittests

    • @nodebug to disable specific debug targets for the duration of the test.

    • @reseed_rng to guarantee consistent random data given initial seeding.

    • @with_tempfile to provide a tempfile name which would get removed upon completion (test success or failure)

    • Dropping daily testing of maint/0.5 branch -- RIP.

    • Collections were provided with adequate (deep|)copy, and Dataset was refactored to use Collection's copy method.

    • update-* Makefile rules should now automatically fast-forward the corresponding website-updates branch

    • MVPA_TESTS_VERBOSITY now also controls :mod:numpy warnings.

    • Dataset.__array__ provides original array instead of copy (unless dtype is provided)

Also adapts changes from 0.4.6 and 0.4.7 (see corresponding changelogs).

  • 0.6.0~rc2 (Thu, Mar 3 2011)

  • Various fixes in the mvpa.atlas module.

  • 0.6.0~rc1 (Thu, Feb 24 2011)

  • Many, many, many changes.

  • For an overview of the most drastic changes see the :ref:constantly evolving release notes for 0.6 <chap_release_notes_0.6>

  • 0.5.0 (sometime in March 2010)

This is a special release, because it has never seen the general public. A summary of fundamental changes introduced in this development version can be seen in the :ref:release notes <chap_release_notes_0.5>.

Most notably, this version was the first to come with a comprehensive two-day workshop/tutorial.

  • 0.4.7 (Tue, Mar 07 2011) (Total: 12 commits)

A bugfix release

  • Fixed

    • Addressed the issue with input NIfTI files having scl_ fields set: it could result in incorrect analyses and map2nifti-produced NIfTI files. Input files now account for scaling/offset if the scl_ fields direct to do so; moreover, upon map2nifti those fields get reset.
    • :file:doc/examples/searchlight_minimal.py - best error is the minimal one
  • Enhancements

    • :class:~mvpa.clfs.gnb.GNB can now tolerate training datasets with a single label
    • :class:~mvpa.clfs.meta.TreeClassifier can have trailing nodes with no classifier assigned
  • 0.4.6 (Tue, Feb 01 2011) (Total: 20 commits)

A bugfix release

  • Fixed (few BF commits):

    • Compatibility with numpy 1.5.1 (histogram) and scipy 0.8.0 (workaround for a regression in legendre)
    • Compatibility with libsvm 3.0
    • :class:~mvpa.clfs.plr.PLR robustification
  • Enhancements

    • Enforce suppression of numpy warnings while running unittests. Also setting verbosity >= 3 enables all warnings (Python, NumPy, and PyMVPA)
    • :file:doc/examples/nested_cv.py example (adopted from 0.5)
    • Introduced base class :class:~mvpa.clfs.base.LearnerError for classifiers' exceptions (adopted from 0.5)
    • Adjusted example data to live up to nibabel's warranty of NIfTI standard-compliance
    • More robust operation of MC iterations -- skip iterations where the classifier experienced difficulties and raised an exception (e.g. due to degenerate data)
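
A short, hedged sketch of the renamed mvpa2 package and the more flexible CrossValidation/errorfx interface noted above; the toy dataset construction and the choice of kNN (whose tie-breaking also changed in this release) are illustrative only:

import numpy as np
from mvpa2.suite import *                          # canonical PyMVPA 2.x import

# toy dataset: 40 samples, 10 features, 2 target classes, 4 chunks
ds = Dataset(np.random.randn(40, 10),
             sa={'targets': np.repeat([0, 1], 20),
                 'chunks': np.tile(np.arange(4), 10)})

clf = kNN(k=3)
cv = CrossValidation(clf, NFoldPartitioner(),
                     errorfx=mean_mismatch_error)  # any Node-like errorfx is accepted
res = cv(ds)
print(res.samples)                                 # per-fold error rates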

Rudder 0.1

by dmcnelis - December 16, 2011, 22:00:45 CET [ Project Homepage BibTeX Download ] 3578 views, 1174 downloads, 1 subscription

About: An annotated Java framework for machine learning, aimed at making it really easy to access analytical functions.

Changes:

Now supports OLS and GLS regression and NaiveBayes classification


RRforest 2002-03-13

by zenog - September 21, 2011, 14:23:44 CET [ Project Homepage BibTeX Download ] 1860 views, 479 downloads, 1 subscription

About: Regression forests, Random Forests for regression. Original implementation by Leo Breiman.

Changes:

Initial Announcement on mloss.org.


Cubist 2.07

by zenog - September 2, 2011, 14:52:17 CET [ Project Homepage BibTeX Download ] 2452 views, 638 downloads, 1 subscription

About: Cubist is the regression counterpart to the C5.0 decision tree tool.

Changes:

Initial Announcement on mloss.org.

