41 projects found that use the GNU GPL v3 license.
Showing Items 1-20 of 41 on page 1 of 3.

Somoclu 1.7.5

by peterwittek - March 1, 2018, 23:30:34 CET [ Project Homepage BibTeX BibTeX for corresponding Paper Download ] 39660 views, 7179 downloads, 3 subscriptions

About: Somoclu is a massively parallel implementation of self-organizing maps. It relies on OpenMP for multicore execution, MPI for distributing the workload, and it can be accelerated by CUDA on a GPU cluster. A sparse kernel is also included, which is useful for training maps on vector spaces generated in text mining processes. Apart from a command line interface, Python, Julia, R, and MATLAB are supported.
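
Since the Python interface is mentioned above, a minimal sketch of training a map with the somoclu package (installable from PyPI) might look as follows; the 30x20 grid size and the random input data are purely illustrative and not taken from the project:

```python
import numpy as np
import somoclu

# Illustrative data: 1000 random samples in a 10-dimensional space.
data = np.random.rand(1000, 10).astype(np.float32)

# Grid size is arbitrary here; n_columns x n_rows defines the map dimensions.
som = somoclu.Somoclu(n_columns=30, n_rows=20)
som.train(data)

# Inspect the trained codebook and plot the U-matrix with best-matching units.
print(som.codebook.shape)
som.view_umatrix(bestmatches=True)
```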

Changes:
  • New: A Makefile for MinGW to build on Windows.
  • Changed: PR #94 added a much more efficient sparse kernel.
  • Changed: Boilerplate code for Julia greatly improved.
  • Changed: Code cleanup, pre-processor macros simplified.
  • Changed: Adapted to Seaborn API changes in plotting heatmaps.

python-weka-wrapper3 0.1.4

by fracpete - February 18, 2018, 04:54:03 CET [ Project Homepage BibTeX Download ] 6381 views, 1503 downloads, 3 subscriptions

About: A thin Python3 wrapper that uses the javabridge Python library to communicate with a Java Virtual Machine executing Weka API calls.
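
As an illustration of how the wrapper is typically driven (start the JVM via javabridge, then issue Weka API calls), here is a hedged usage sketch; the iris.arff path is hypothetical and the chosen classifier is just an example:

```python
import weka.core.jvm as jvm
from weka.core.converters import Loader
from weka.core.classes import Random
from weka.classifiers import Classifier, Evaluation

# Start the JVM; packages=True enables Weka package support.
jvm.start(packages=True)
try:
    loader = Loader(classname="weka.core.converters.ArffLoader")
    data = loader.load_file("iris.arff")  # hypothetical path to an ARFF file
    data.class_is_last()

    # Cross-validate a J48 decision tree on the loaded data.
    cls = Classifier(classname="weka.classifiers.trees.J48")
    evl = Evaluation(data)
    evl.crossvalidate_model(cls, data, 10, Random(1))
    print(evl.summary())
finally:
    jvm.stop()
```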

Changes:
  • upgraded to Weka 3.9.2
  • properly initializing package support now, rather than adding package jars to classpath
  • added weka.core.ClassHelper Java class for obtaining classes and static fields, as javabridge only uses the system class loader

python-weka-wrapper 0.3.12

by fracpete - February 18, 2018, 04:29:24 CET [ Project Homepage BibTeX Download ] 62930 views, 12845 downloads, 3 subscriptions

About: A thin Python wrapper that uses the javabridge Python library to communicate with a Java Virtual Machine executing Weka API calls.

Changes:
  • upgraded to Weka 3.9.2
  • properly initializing package support now, rather than adding package jars to classpath
  • added weka.core.ClassHelper Java class for obtaining classes and static fields, as javabridge only uses the system class loader

ADAMS 17.12.0

by fracpete - December 20, 2017, 09:38:32 CET [ Project Homepage BibTeX BibTeX for corresponding Paper Download ] 36314 views, 6474 downloads, 3 subscriptions

About: The Advanced Data mining And Machine learning System (ADAMS) is a flexible workflow engine aimed at quickly building and maintaining data-driven, reactive workflows, easily integrated into business processes.

Changes:

Some highlights:

  • Code base was moved to GitHub
  • Nearly 90 new actors, 25 new conversions
  • Much improved deeplearning4j module
  • Experimental support for Microsoft's CNTK deep learning framework
  • rsync module
  • MEKA webservice module
  • Improved support for image annotations
  • Improved LaTeX support
  • WebSocket support

About: A non-iterative, incremental and hyperparameter-free learning method for one-layer feedforward neural networks without hidden layers. This method efficiently obtains the optimal parameters of the network, regardless of whether the data contains more samples than variables or vice versa. It does this by using a squared loss function that measures the errors before the output activation functions and scales them by the slope of these functions at each data point. The outcome is a system of linear equations whose solution gives the network's weights, and which is solved using a Singular Value Decomposition.
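
To make the construction above concrete, the following is a hedged NumPy sketch of the idea rather than the authors' released code: invert the output activation to obtain the desired pre-activation value for each target, scale each sample's equation by the activation's slope at that point, and solve the resulting linear system with an SVD-based pseudoinverse. All function and variable names are illustrative.

```python
import numpy as np

def fit_one_layer(X, T, f_inv, f_prime):
    """Sketch of a non-iterative one-layer fit as described above.
    X: (n_samples, n_inputs) inputs, T: (n_samples,) targets in the range of f."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])  # append a bias column
    z = f_inv(T)                                   # desired pre-activation values
    s = f_prime(z)                                 # slope of the activation at each point
    A = Xb * s[:, None]                            # scale each sample's equation by the slope
    b = z * s
    # Solve the weighted linear system with the SVD-based pseudoinverse.
    return np.linalg.pinv(A) @ b

# Example with a logistic output activation (all names here are illustrative).
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
sigmoid_inv = lambda t: np.log(t / (1.0 - t))
sigmoid_slope = lambda z: sigmoid(z) * (1.0 - sigmoid(z))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
T = rng.uniform(0.05, 0.95, size=200)
w = fit_one_layer(X, T, sigmoid_inv, sigmoid_slope)
print(w.shape)  # (6,) -> five input weights plus a bias
```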

Changes:

Initial Announcement on mloss.org.


About: A non-iterative learning method for one-layer (no hidden layer) neural networks, where the weights can be calculated in closed form, thereby avoiding a low convergence rate as well as hyperparameter tuning. The proposed learning method, LANN-SVD for short, offers good computational efficiency for large-scale data analytics.

Changes:

Initial Announcement on mloss.org.


About: An open-source framework for benchmarking feature selection algorithms and cost functions.

Changes:

Initial Announcement on mloss.org.


OpenNN 3.1

by Sergiointelnics - March 3, 2017, 17:17:45 CET [ Project Homepage BibTeX BibTeX for corresponding Paper Download ] 14026 views, 2141 downloads, 4 subscriptions

About: OpenNN is a software library written in C++ for advanced analytics. It implements neural networks, one of the most successful machine learning methods. The library has been designed to learn from both data sets and mathematical models.

Changes:

New algorithms and bug fixes.


opusminer 0.1-0

by opusminer - February 23, 2017, 01:01:18 CET [ Project Homepage BibTeX BibTeX for corresponding Paper Download ] 2105 views, 361 downloads, 3 subscriptions

About: The new R package opusminer provides an R interface to the OPUS Miner algorithm (implemented in C++) for finding the key associations in transaction data efficiently, in the form of self-sufficient itemsets, using either leverage or lift.

Changes:

Initial Announcement on mloss.org.


LogRegCrowds, Logistic Regression from Crowds 1.0

by fmpr - January 16, 2017, 18:10:57 CET [ Project Homepage BibTeX Download ] 5323 views, 1338 downloads, 3 subscriptions

About: LogReg-Crowds is a collection of Julia implementations of various approaches for learning a logistic regression model from multiple annotators and crowds, namely the works of Raykar et al. (2010), Rodrigues et al. (2013) and Dawid and Skene (1979).

Changes:

Initial Announcement on mloss.org. Added GitHub page.


Java Statistical Analysis Tool 0.0.7

by EdwardRaff - January 15, 2017, 22:21:50 CET [ Project Homepage BibTeX Download ] 4832 views, 1162 downloads, 2 subscriptions

About: General-purpose Java machine learning library for classification, regression, and clustering.

Changes:

See the GitHub releases tab for change information.


slim for matlab 0.2

by ustunb - August 23, 2016, 20:27:00 CET [ Project Homepage BibTeX BibTeX for corresponding Paper Download ] 3713 views, 730 downloads, 3 subscriptions

About: Learn optimized scoring systems using MATLAB and the CPLEX Optimization Studio.

Changes:

Initial Announcement on mloss.org.


Sparse Compositional Metric Learning v1.11

by bellet - August 2, 2016, 11:43:03 CET [ BibTeX BibTeX for corresponding Paper Download ] 8253 views, 2451 downloads, 3 subscriptions

About: Scalable learning of global, multi-task and local metrics from data

Changes:

Minor bug fix in multi-task objective computation (thanks to Junjie Hu).


Multiagent Decision Process Toolbox 0.4

by faoliehoek - June 2, 2016, 17:38:59 CET [ Project Homepage BibTeX BibTeX for corresponding Paper Download ] 5958 views, 1352 downloads, 3 subscriptions

About: The Multiagent Decision Process (MADP) Toolbox is a free C++ toolbox for scientific research in decision-theoretic planning and learning in multiagent systems.

Changes:

  • Includes freshly written Spirit parser for .pomdp files.
  • Includes new code for pruning POMDP vectors; obviates dependence on Cassandra's code and the old LP solve version.
  • Includes new factor graph solution code.
  • Generalized firefighting CGBG domain added.
  • Simulation class for Factored Dec-POMDPs and TOI Dec-MDPs.
  • Approximate BG clustering methods and kGMAA with clustering.


AutoWEKA 2.0

by larsko - May 19, 2016, 19:58:41 CET [ Project Homepage BibTeX BibTeX for corresponding Paper Download ] 3698 views, 1114 downloads, 3 subscriptions

About: Automatically finds the best model with its best parameter settings for a given classification or regression task.

Changes:

Initial Announcement on mloss.org.


JaTeCS 1.0.0

by aesuli - April 5, 2016, 17:23:12 CET [ Project Homepage BibTeX Download ] 3707 views, 828 downloads, 2 subscriptions

About: JaTeCS is an open source Java library focused on automatic text categorization.

Changes:

Initial Announcement on mloss.org.


A Pattern Recognizer In Lua with ANNs v0.4.1

by pakozm - December 3, 2015, 15:01:36 CET [ Project Homepage BibTeX Download ] 12698 views, 2851 downloads, 2 subscriptions

About: APRIL-ANN toolkit (A Pattern Recognizer In Lua with Artificial Neural Networks). This toolkit incorporates ANN algorithms (such as dropout, stacked denoising auto-encoders, and convolutional neural networks), together with other pattern recognition methods such as hidden Markov models (HMMs), among others.

Changes:
  • Updated home repository link to follow the april-org GitHub organization.
  • Improved serialize/deserialize functions, reimplementing the whole serialization procedure.
  • Added exceptions support to LuaPkg and APRIL-ANN, allowing C++ errors to be captured in Lua code.
  • Added set class.
  • Added series class.
  • Added data_frame class, similar to Python Pandas DataFrame.
  • Serialization and deserialization have been updated with a more robust and reusable API, implemented in the util.serialize() and util.deserialize() functions.
  • Added matrix.ext.broadcast utility (similar to broadcast in numpy).
  • Added ProbabilisticMatrixANNComponent, which allows implementing probabilistic mixtures of posteriors and/or likelihoods.
  • Added batch normalization ANN component.
  • Allowing matrix.join to add new axis.
  • Added methods prod(), cumsum() and cumprod() at matrix classes.
  • Added methods count_eq() and count_neq() at matrix classes.
  • Serializable objects API has been augmented with methods ctor_name() and ctor_params() in Lua, which correspond to luaCtorName() and luaCtorParams() in C++.
  • Added cast.to to dynamically cast C++ objects pushed into Lua, allowing base class objects to be converted into any of their derived classes.
  • Added matrix.sparse as a valid value for targets in ann.loss.mse and ann.loss.cross_entropy.
  • Changed matrix metamethods __index and __newindex, allowing matrix objects to be used with the standard Lua operator[].
  • Added matrix.masked_fill and matrix.masked_copy methods.
  • Added matrix.indexed_fill and matrix.indexed_copy methods.
  • Added ann.components.probabilistic_matrix, and its corresponding specializations ann.components.left_probabilistic_matrix and
    ann.components.right_probabilistic_matrix.
  • Added operator[] in the right side of matrix operations.
  • Added ann.components.transpose.
  • Added max_gradients_norm in trainable.supervised_trainer to avoid exploding gradients.
  • Added ann.components.actf.sparse_logistic, a logistic activation function with a sparsity penalty.
  • Simplified math.add, math.sub, ... and other math extensions for reductions; their original behavior can be emulated by using the bind function.
  • Added bind function to freeze any positional argument of any Lua function.
  • Function stats.boot uses multiple_unpack to allow a table of sizes and the generation of multiple index matrices.
  • Added multiple_unpack Lua function.
  • Added __tostring metamethod to numeric memory blocks in Lua.
  • Added dataset.token.sparse_matrix, a dataset which allows traversing a sparse matrix instance by rows.
  • Added matrix.sparse.builders.dok, a builder which uses the Dictionary-of-Keys format to construct a sparse matrix from scratch.
  • Added method data to numeric matrix classes.
  • Added methods values, indices, first_index to sparse matrix class.
  • Fixed bugs when reading badly formed CSV files.
  • Fixed bugs in statistical distributions.
  • Solved FloatRGB bug in compound assignment (+=, -=, ...) operators. This bug affected ImageRGB operations such as resize.
  • Solved problems when chaining methods in Lua, where some objects ended up being garbage collected.
  • Improved support of strings in auto-completion (rlcompleter package).
  • Solved bug at SparseMatrix<T> when reading it from a file.
  • Solved bug in Image<T>::rotate90_cw methods.
  • Solved bug in SparseMatrix::toDense() method.

C/C++

  • Better LuaTable accessors, using [] operator.
  • Implementation of matrix __index, __newindex and __call metamethods in C++.
  • Implementation of matProd(), matCumSum() and matCumProd() functions.
  • Implementation of matCountEq() and matCountNeq() functions for
    Matrix<T>.
  • Updated matrix_ext_operations.h to change the API of matrix operations. All functions have been overloaded to accept an in-place operation, plus another version which receives a destination matrix.
  • Added iterators to language models.
  • Added MatrixScalarMap2, which receives a SparseMatrix instance as input2. This function needs to be generalized to work with CPU and CUDA.
  • The method SparseMatrix<T>::fromDenseMatrix() uses a DOKBuilder object to build the sparse matrix.
  • The conversion of a Matrix<T> into a SparseMatrix<T> has been changed from a constructor overload to the static method
    SparseMatrix<T>::fromDenseMatrix().
  • Added support for IPyLua.
  • Optimized matrix access for confusion matrix.
  • Minor changes in class.lua.
  • Improved binding to avoid multiple object copies when pushing C++ objects.
  • Added Git commit hash and compilation time.

A Library for Online Streaming Feature Selection 1.0

by ykui713 - November 25, 2015, 13:23:01 CET [ BibTeX Download ] 2425 views, 988 downloads, 1 subscription

About: LOFS is a software toolbox for online streaming feature selection

Changes:

Initial Announcement on mloss.org.


SALSA.jl 0.0.5

by jumutc - September 28, 2015, 17:28:56 CET [ Project Homepage BibTeX Download ] 2969 views, 649 downloads, 1 subscription

About: SALSA (Software lab for Advanced machine Learning with Stochastic Algorithms) is an implementation of well-known stochastic algorithms for machine learning, developed in the high-level technical computing language Julia. The SALSA software package is designed to address challenges in sparse linear modelling and in linear and non-linear Support Vector Machines applied to large data samples, with an emphasis on being user-centric and user-friendly.

Changes:

Initial Announcement on mloss.org.


KEEL Knowledge Extraction based on Evolutionary Learning 3.0

by keel - September 18, 2015, 12:38:54 CET [ Project Homepage BibTeX Download ] 3553 views, 851 downloads, 1 subscription

About: KEEL (Knowledge Extraction based on Evolutionary Learning) is an open source (GPLv3) Java software tool that can be used for a large number of different knowledge discovery tasks. KEEL provides a simple GUI based on data flow to design experiments with different datasets and computational intelligence algorithms (paying special attention to evolutionary algorithms) in order to assess the behavior of the algorithms. It contains a wide variety of classical knowledge extraction algorithms, preprocessing techniques (training set selection, feature selection, discretization, imputation methods for missing values, among others), computational intelligence based learning algorithms, hybrid models, statistical methodologies for contrasting experiments, and so forth. It allows performing a complete analysis of new computational intelligence proposals in comparison to existing ones. Moreover, KEEL has been designed with a two-fold goal: research and education. KEEL is also coupled with KEEL-dataset, a webpage that aims to provide machine learning researchers with a set of benchmarks to analyze the behavior of the learning methods. Concretely, it is possible to find benchmarks already formatted in KEEL format for classification (such as standard, multi-instance or imbalanced data), semi-supervised classification, regression, time series and unsupervised learning. Also, a set of low quality data benchmarks is maintained in the repository.

Changes:

Initial Announcement on mloss.org.

