All entries.
Showing items 301-310 of 622 (page 31 of 63).

EANT Without Structural Optimization 1.0

by yk - September 28, 2009, 12:34:38 CET [ Project Homepage BibTeX BibTeX for corresponding Paper Download ] 5827 views, 1890 downloads, 1 subscription

About: EANT Without Structural Optimization learns a policy in either fully or partially observable reinforcement learning domains with continuous state and action spaces.
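
Since structural optimization is switched off, only the weights of a fixed-topology policy network are adapted. Below is a generic, weight-only neuroevolution sketch in Python/NumPy (a simple truncation-selection evolution strategy on a toy control task); the network, fitness function and hyperparameters are illustrative assumptions, not the package's API or algorithm.

    import numpy as np

    def policy(weights, state, hidden=8):
        """Small fixed-topology network mapping a state to a continuous action."""
        w1 = weights[:len(state) * hidden].reshape(len(state), hidden)
        w2 = weights[len(state) * hidden:].reshape(hidden, 1)
        return np.tanh(np.tanh(state @ w1) @ w2)

    def episode_return(weights, steps=50):
        """Toy fitness: drive a 1-D point mass toward the origin."""
        pos, vel, total = 1.0, 0.0, 0.0
        for _ in range(steps):
            action = policy(weights, np.array([pos, vel])).item()
            vel += 0.1 * action
            pos += 0.1 * vel
            total -= pos ** 2
        return total

    dim = 2 * 8 + 8                          # weight count of the fixed topology above
    pop = 0.5 * np.random.randn(20, dim)
    for generation in range(50):
        fitness = np.array([episode_return(w) for w in pop])
        parents = pop[np.argsort(fitness)[-5:]]                 # keep the best 5
        pop = np.repeat(parents, 4, axis=0) + 0.1 * np.random.randn(20, dim)
    print("best return:", max(episode_return(w) for w in pop))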

Changes:

Initial Announcement on mloss.org.


JINSECT 1.0

by ggianna - February 25, 2010, 19:03:06 CET [ Project Homepage BibTeX BibTeX for corresponding Paper Download ] 5791 views, 1984 downloads, 1 subscription

About: JINSECT is a Java-based toolkit and library that supports and demonstrates the use of n-gram graphs within Natural Language Processing applications, ranging from summarization and summary evaluation to text classification and indexing.
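
As a rough illustration of the n-gram graph idea (a Python sketch, not JINSECT's Java API): nodes are character n-grams, edges link n-grams that co-occur within a small window, and edge weights count the co-occurrences. The similarity at the end is a crude shared-edge ratio, a simplification of the size/value similarity measures described in the JINSECT literature.

    from collections import defaultdict

    def ngram_graph(text, n=3, window=3):
        """Edge weights of a character n-gram graph for a single text."""
        grams = [text[i:i + n] for i in range(len(text) - n + 1)]
        edges = defaultdict(int)
        for i, g in enumerate(grams):
            for j in range(i + 1, min(i + 1 + window, len(grams))):
                edges[(g, grams[j])] += 1
        return edges

    g1 = ngram_graph("machine learning")
    g2 = ngram_graph("machine translation")
    shared = set(g1) & set(g2)                    # edges present in both graphs
    print(len(shared) / max(len(g1), 1))          # crude graph similarity in [0, 1]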

Changes:
  • Added java doc to downloadable files.
  • Created SourceForge wiki page at http://sourceforge.net/apps/mediawiki/jinsect/index.php?title=Main_Page.

HDDM 0.5

by Wiecki - April 24, 2013, 02:53:07 CET [ Project Homepage BibTeX Download ] 5777 views, 1426 downloads, 1 subscription

About: HDDM is a python toolbox for hierarchical Bayesian parameter estimation of the Drift Diffusion Model (via PyMC). Drift Diffusion Models are used widely in psychology and cognitive neuroscience to study decision making.

Changes:
  • New and improved HDDM model with the following changes:
    • Priors: by default the model will use informative priors (see http://ski.clps.brown.edu/hddm_docs/methods.html#hierarchical-drift-diffusion-models-used-in-hddm). If you want uninformative priors, set informative=False.
    • Sampling: This model uses slice sampling, which leads to faster convergence while being slower to generate an individual sample. In our experiments, a burn-in of 20 is often good enough.
    • Inter-trial variability parameters are only estimated at the group level, not for individual subjects.
    • The old model has been renamed to HDDMTransformed.
    • HDDMRegression and HDDMStimCoding are also using this model.
  • HDDMRegression takes patsy model specification strings. See http://ski.clps.brown.edu/hddm_docs/howto.html#estimate-a-regression-model and http://ski.clps.brown.edu/hddm_docs/tutorial_regression_stimcoding.html#chap-tutorial-hddm-regression
  • Improved online documentation at http://ski.clps.brown.edu/hddm_docs
  • A new HDDM demo at http://ski.clps.brown.edu/hddm_docs/demo.html
  • Ratcliff's quantile optimization method for single subjects and groups via the .optimize() method (see the usage sketch after this list).
  • Maximum likelihood optimization.
  • Many bugfixes and better test coverage.
  • The hddm_fit.py command line utility is deprecated.
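
A minimal usage sketch of the points above (Python; it assumes a CSV with the 'rt', 'response' and 'subj_idx' columns that HDDM expects, and the method strings follow the online documentation rather than this announcement):

    import hddm

    # Load reaction-time data with 'rt', 'response' (and optionally 'subj_idx') columns.
    data = hddm.load_csv('my_data.csv')

    # Informative priors are now the default; pass informative=False to disable them.
    model = hddm.HDDM(data, informative=False)

    # Slice sampling converges quickly, so a short burn-in is often sufficient.
    model.sample(2000, burn=20)
    model.print_stats()

    # Quantile-based (chi-square) or maximum-likelihood point estimates via .optimize().
    model.optimize('chisquare')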

dANN AI Library 1.3

by freemo - April 25, 2010, 05:36:46 CET [ Project Homepage BibTeX Download ] 5772 views, 1346 downloads, 1 subscription

About: dANN is an Artificial Intelligence and Artificial Genetics library targeted at employing conventional techniques as well as acting as a platform for research and development of novel techniques. As new techniques are developed and proven effective, they are integrated into the core library. It is currently written in Java, C++, and C#; however, only the Java version is in active development. If you want a version other than the Java one, you will need to get it directly from Git.

Changes:

Please get the version from Git; the released version is old.


Chalearn gesture challenge code by jun wan 2.0

by joewan - September 29, 2015, 08:50:22 CET [ BibTeX BibTeX for corresponding Paper Download ] 5718 views, 1381 downloads, 2 subscriptions

About: This code is provided by Jun Wan. It was used in the ChaLearn one-shot-learning gesture challenge (round 2). It includes: bag of features, 3D MoSIFT-based features (i.e. 3D MoSIFT, 3D EMoSIFT and 3D SMoSIFT), and the MFSK feature.
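
For orientation, the bag-of-features step can be sketched as follows (Python/scikit-learn, standing in for the released code; the descriptor dimensionality, codebook size and random data are assumptions): local descriptors such as 3D MoSIFT vectors are clustered into a codebook, and each video is represented by a histogram of codeword assignments.

    import numpy as np
    from sklearn.cluster import KMeans

    def build_codebook(descriptors, k=128):
        """Cluster pooled local descriptors into k visual words."""
        return KMeans(n_clusters=k, n_init=10, random_state=0).fit(descriptors)

    def bag_of_features(codebook, descriptors):
        """L1-normalized histogram of codeword assignments for one video."""
        words = codebook.predict(descriptors)
        hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
        return hist / max(hist.sum(), 1.0)

    train_desc = np.random.rand(5000, 64)        # random stand-ins for 3D MoSIFT features
    codebook = build_codebook(train_desc)
    video_desc = np.random.rand(300, 64)
    print(bag_of_features(codebook, video_desc))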

Changes:

Initial Announcement on mloss.org.


Kernel Multiple Logistic Regression 1.0

by mseeger - November 10, 2007, 22:16:50 CET [ Project Homepage BibTeX Download ] 5697 views, 1308 downloads, 0 subscriptions

About: Efficient implementation of penalized multiple (multi-class) logistic regression with Mercer kernels, i.e. a MAP approximation to the multi-class Gaussian process model. This includes [...]
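
For context, the MAP objective behind such a kernelized multi-class logistic regression can be sketched as follows (the notation here is an assumption, not taken from the package: k is the Mercer kernel with RKHS H_k, C the number of classes, lambda the regularization parameter; the penalty corresponds to the Gaussian process log-prior up to constants):

    \min_{f_1,\dots,f_C \in \mathcal{H}_k} \;
        -\sum_{i=1}^{n} \log \frac{\exp\!\big(f_{y_i}(x_i)\big)}{\sum_{c=1}^{C} \exp\!\big(f_c(x_i)\big)}
        \;+\; \frac{\lambda}{2} \sum_{c=1}^{C} \lVert f_c \rVert_{\mathcal{H}_k}^{2},
    \qquad
    f_c(x) = \sum_{i=1}^{n} \alpha_{ci}\, k(x_i, x)

where the second expression is the representer-theorem form of each class function, so training reduces to optimizing the coefficients alpha over the training points.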

Changes:

Initial Announcement on mloss.org.


Neural network designer 1.1.1

by bragi - December 28, 2012, 11:38:10 CET [ Project Homepage BibTeX Download ] 5655 views, 1327 downloads, 1 subscription

About: A DBMS for resonating neural networks. Create and use different types of machine learning algorithms.

Changes:

AIML compatible (AIML files can be imported); new 'Grid channel' for developing board games; improved topics editor; new demo project: ALICE (from AIML); lots of bug fixes and speed improvements.


PALMA

About: PALMA computes the optimal spliced alignment of an mRNA sequence to a genomic sequence. The main Python script takes two FASTA files containing the target (e.g. a DNA sequence, part of the genome) [...]

Changes:

Initial Announcement on mloss.org.


Java Optimized Processor for Embedded Machine Learning 1

by rasped - December 15, 2009, 12:51:26 CET [ Project Homepage BibTeX BibTeX for corresponding Paper Download ] 5614 views, 1163 downloads, 1 subscription

Rating: 3.5 / 5 (based on 1 vote)

About: JOP is a Java virtual machine implemented in hardware. It is a hard real-time, open-source multicore processor capable of worst-case execution time analysis of Java code.

Changes:

Initial Announcement on mloss.org.


About: Stochastic neighbor embedding originally aims at reconstructing given distance relations in a low-dimensional Euclidean space. This can be regarded as a general approach to multi-dimensional scaling, but the reconstruction is based on the definition of input (and output) neighborhood probabilities alone. The present implementation also allows for handling dissimilarity- or score-induced neighborhood topologies and makes use of quasi-second-order gradient-based (l-)BFGS optimization.
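
As a rough illustration of the objective being optimized (a symmetric-SNE sketch in Python/NumPy with a fixed input bandwidth and SciPy's generic l-BFGS routine; this is not the announced code, and the fixed bandwidth and toy data are simplifying assumptions):

    import numpy as np
    from scipy.optimize import minimize
    from scipy.spatial.distance import pdist, squareform

    def neighborhood_probs(D2, sigma=1.0):
        """Symmetric input neighborhood probabilities from squared distances."""
        P = np.exp(-D2 / (2.0 * sigma ** 2))
        np.fill_diagonal(P, 0.0)
        return np.maximum(P / P.sum(), 1e-12)

    def sne_objective(y_flat, P, n, dim):
        """KL divergence between input and embedding neighborhoods, and its gradient."""
        Y = y_flat.reshape(n, dim)
        Q = np.exp(-squareform(pdist(Y, 'sqeuclidean')))
        np.fill_diagonal(Q, 0.0)
        Q = np.maximum(Q / Q.sum(), 1e-12)
        kl = np.sum(P * np.log(P / Q))
        PQ = P - Q
        grad = 4.0 * np.array([(PQ[:, i, None] * (Y[i] - Y)).sum(axis=0) for i in range(n)])
        return kl, grad.ravel()

    rng = np.random.default_rng(0)
    X = rng.random((100, 10))                                   # toy high-dimensional data
    P = neighborhood_probs(squareform(pdist(X, 'sqeuclidean')))
    y0 = 1e-2 * rng.standard_normal((100, 2))
    res = minimize(sne_objective, y0.ravel(), args=(P, 100, 2),
                   jac=True, method='L-BFGS-B')
    Y = res.x.reshape(100, 2)                                   # 2-D embedding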

Changes:
  • Gradient in xsne_fun.m fixed (a constant factor m was missing).

  • Symmetry option re-introduced, enabling symmetric and asymmetric versions of SNE and t-SNE.

