33 projects found that use the Apache 2.0 license.
Showing items 1-20 of 33 on page 1 of 2.

Logo revrand 0.4.1

by dsteinberg - June 24, 2016, 05:58:05 CET [ Project Homepage BibTeX Download ] 2881 views, 564 downloads, 3 subscriptions

About: A library of scalable Bayesian generalised linear models with fancy features

Changes:
  • Allow for non-learnable likelihood arguments (per datum) in the glm
  • Hotfix for glm prediction sampling functions

Logo AMIDST Toolbox 0.4.1

by ana - April 20, 2016, 09:44:23 CET [ Project Homepage BibTeX BibTeX for corresponding Paper Download ] 1868 views, 262 downloads, 4 subscriptions

About: A Java library for the analysis of data streams with probabilistic graphical models. AMIDST provides parallel multi-core implementations of Bayesian parameter learning algorithms using variational message passing and importance sampling for static and dynamic Bayesian networks. Additionally, AMIDST efficiently leverages existing functionalities and algorithms by interfacing with software tools such as Weka, MOA, HUGIN and R.
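To make the importance-sampling idea mentioned above concrete, here is a minimal standalone sketch in plain Python/NumPy (it does not use AMIDST or its API; the two-node network and its probabilities are invented for illustration). It estimates a posterior in a tiny Bayesian network by likelihood weighting, i.e. importance sampling with the prior as the proposal:

    import numpy as np

    # Toy Bayesian network A -> B with made-up probabilities (illustration only,
    # not AMIDST): P(A=1) and P(B=1 | A).
    p_a = 0.3
    p_b_given_a = {0: 0.2, 1: 0.9}

    rng = np.random.default_rng(0)
    n = 100_000

    # Likelihood weighting: sample A from its prior and weight each sample by the
    # likelihood of the evidence B = 1 under that sampled value of A.
    a = rng.random(n) < p_a
    weights = np.where(a, p_b_given_a[1], p_b_given_a[0])

    # Self-normalised importance-sampling estimate of P(A=1 | B=1).
    print(weights[a].sum() / weights.sum())   # exact answer: 0.27 / 0.41 ≈ 0.659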

Changes:
  • Bugs fixed.
  • Improved usability.

Logo Toupee 0.1

by nitbix - March 7, 2016, 20:29:59 CET [ Project Homepage BibTeX Download ] 660 views, 159 downloads, 3 subscriptions

About: A Python-based library for running experiments with Deep Learning and Ensembles on GPUs.

Changes:

Initial Announcement on mloss.org.


Logo KeLP 2.0.2

by kelpadmin - February 17, 2016, 09:03:46 CET [ Project Homepage BibTeX BibTeX for corresponding Paper Download ] 8081 views, 2059 downloads, 3 subscriptions

About: Kernel-based Learning Platform (KeLP) is a Java framework that supports the implementation of kernel-based learning algorithms, as well as an agile definition of kernel functions over generic data representations, e.g. vectorial data or discrete structures. The framework has been designed to decouple kernel functions and learning algorithms through the definition of specific interfaces. Once a new kernel function has been implemented, it can be automatically adopted in all the available kernel-machine algorithms. KeLP includes different online and batch learning algorithms for classification, regression and clustering, as well as several kernel functions, ranging from vector-based to structural kernels. It allows building complex kernel-machine-based systems, leveraging JSON/XML interfaces to instantiate prediction models without writing a single line of code.

Changes:

In addition to minor bug fixes, this release includes:

  • the Nyström method for linearizing instances, enabling large-scale kernel learning

  • New examples for the usage of the Smoothed Partial Tree Kernel and the Compositionally Smoothed Partial Tree Kernel.

Check out this new version from our repositories. API Javadoc is already available. Your suggestions will be very valuable to us, so download and try KeLP 2.0.2!


Logo PROFET 1.0.0

by Hamda - November 26, 2015, 13:20:28 CET [ Project Homepage BibTeX Download ] 974 views, 263 downloads, 2 subscriptions

About: Software for Automatic Construction and Inference of DBNs Based on Mathematical Models

Changes:

Initial Announcement on mloss.org.


Logo MXNet

About: Efficient and Flexible Distributed/Mobile Deep Learning Framework, for Python, R, Julia and more

Changes:

This version comes with Distributed and Mobile Examples


Logo Apache Mahout 0.11.1

by gsingers - November 9, 2015, 16:12:06 CET [ Project Homepage BibTeX Download ] 21330 views, 5558 downloads, 3 subscriptions

About: Apache Mahout is an Apache Software Foundation project with the goal of creating both a community of users and a scalable, Java-based framework consisting of many machine learning algorithm [...]

Changes:

Apache Mahout introduces a new math environment we call Samsara, for its theme of universal renewal. It reflects a fundamental rethinking of how scalable machine learning algorithms are built and customized. Mahout-Samsara is here to help people create their own math while providing some off-the-shelf algorithm implementations. At its core are general linear algebra and statistical operations along with the data structures to support them. You can use it as a library or customize it in Scala with Mahout-specific extensions that look something like R.

Mahout-Samsara comes with an interactive shell that runs distributed operations on a Spark cluster. This makes prototyping or task submission much easier and allows users to customize algorithms with a whole new degree of freedom.

Mahout algorithms include many new implementations built for speed on Mahout-Samsara. They run on Spark 1.3+ and some on H2O, which means as much as a 10x speed increase. You’ll find robust matrix decomposition algorithms as well as a Naive Bayes classifier and collaborative filtering. The new spark-itemsimilarity enables the next generation of co-occurrence recommenders that can use entire user click streams and context in making recommendations.


Logo XGBoost v0.4.0

by crowwork - May 12, 2015, 08:57:16 CET [ Project Homepage BibTeX Download ] 13356 views, 2467 downloads, 3 subscriptions

About: xgboost (eXtreme Gradient Boosting) is an efficient and scalable implementation of the gradient boosting framework. The package includes an efficient linear model solver and tree learning algorithm. It can automatically run parallel computation with OpenMP, and it can be more than 10 times faster than existing gradient boosting packages such as gbm or sklearn.GBM. It supports various objective functions, including regression, classification and ranking. The package is designed to be extensible, so that users can easily define their own objectives. The newest version of xgboost now supports distributed learning on various platforms such as Hadoop and MPI, and scales to even larger problems.
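As a sketch of the user-defined objective mechanism mentioned above, using xgboost's Python interface (the data is synthetic, and the custom objective simply re-implements the built-in binary logistic loss for illustration):

    import numpy as np
    import xgboost as xgb

    # Synthetic binary classification data (illustration only).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 5))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
    dtrain = xgb.DMatrix(X, label=y)

    def logistic_obj(preds, dtrain):
        """Custom objective: gradient and hessian of the binary logistic loss."""
        labels = dtrain.get_label()
        p = 1.0 / (1.0 + np.exp(-preds))
        return p - labels, p * (1.0 - p)

    bst = xgb.train({"max_depth": 3, "eta": 0.1}, dtrain,
                    num_boost_round=50, obj=logistic_obj)
    scores = bst.predict(dtrain)   # raw margins; apply a sigmoid for probabilities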

Changes:
  • Distributed version of xgboost that runs on YARN, scales to billions of examples

  • Direct save/load data and model from/to S3 and HDFS

  • Feature importance visualization in R module, by Michael Benesty

  • Predict leaf index

  • Poisson regression for count data

  • Early stopping option in training

  • Native save/load support in R and Python

  • xgboost models can now be saved using save/load in R

  • xgboost Python models are now picklable

  • sklearn wrapper is supported in python module

  • Experimental external-memory version


Logo streamDM 0.0.1

by abifet - April 28, 2015, 12:34:00 CET [ Project Homepage BibTeX Download ] 1735 views, 662 downloads, 1 subscription

About: streamDM is a new open source data mining and machine learning library, designed on top of Spark Streaming, an extension of the core Spark API that enables scalable, high-throughput, fault-tolerant stream processing of data streams.

Changes:

Initial Announcement on mloss.org.


Logo Hivemall 0.3

by myui - March 13, 2015, 17:08:22 CET [ Project Homepage BibTeX BibTeX for corresponding Paper Download ] 9420 views, 1640 downloads, 3 subscriptions

About: Hivemall is a scalable machine learning library running on Hive/Hadoop.

Changes:
  • Supported Matrix Factorization
  • Added support for TF-IDF computation
  • Supported AdaGrad/AdaDelta
  • Supported AdaGradRDA classification
  • Added normalization scheme

Logo Rabit 0.1.0

by crowwork - January 21, 2015, 18:48:46 CET [ Project Homepage BibTeX Download ] 1566 views, 521 downloads, 1 subscription

About: Rabit (Reliable Allreduce and Broadcast Interface) is a lightweight library that provides a fault-tolerant interface for Allreduce and Broadcast, for portable, scalable and reliable distributed machine learning programs. Rabit programs can run on various platforms such as Hadoop and MPI, and no installation is needed. Rabit now supports k-means clustering and distributed xgboost, an extremely efficient distributed boosted tree (GBDT) toolkit.
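To illustrate the Allreduce semantics described above (plain NumPy, not Rabit's actual API): after an Allreduce with a sum operation, every worker holds the element-wise sum of all workers' local vectors, so each can apply the same aggregated update.

    import numpy as np

    # Three simulated "workers", each holding a local gradient vector
    # (illustration of Allreduce-sum semantics only; Rabit itself is not used).
    local_grads = [np.array([1.0, 2.0]),
                   np.array([0.5, -1.0]),
                   np.array([2.0, 0.0])]

    # Allreduce(sum): the reduction is computed over all workers and every
    # worker receives the same result.
    reduced = np.sum(local_grads, axis=0)
    after_allreduce = [reduced.copy() for _ in local_grads]
    print(after_allreduce[0])   # element-wise sum [3.5, 1.0], identical on every worker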

Changes:

Initial Announcement on mloss.org.


Logo ExtRESCAL 0.7.2

by nzhiltsov - January 20, 2015, 00:35:15 CET [ Project Homepage BibTeX BibTeX for corresponding Paper Download ] 7499 views, 1425 downloads, 2 subscriptions

About: Scalable tensor factorization

Changes:
  • Improve (speed up) initialization of A by summation
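For context on the factor matrix A referenced in the changelog: ExtRESCAL builds on the RESCAL model, which factorizes each frontal slice X_k of a three-way tensor with a shared factor matrix A and per-slice core matrices R_k. This is the standard RESCAL formulation from the literature; ExtRESCAL's exact objective may include additional terms for attribute data.

    \min_{A,\, R_k} \; \sum_k \left\| X_k - A R_k A^\top \right\|_F^2 \; + \; \text{regularization}

Speeding up the initialization of A therefore benefits the factorization of every slice.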

Logo Semi Stochastic Gradient Descent 1.0

by konkey - July 9, 2014, 04:28:47 CET [ BibTeX BibTeX for corresponding Paper Download ] 2772 views, 775 downloads, 1 subscription

About: Efficient implementation of the Semi-Stochastic Gradient Descent (S2GD) algorithm for training L2-regularized logistic regression.
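A minimal NumPy sketch of the S2GD scheme on synthetic data, under simplifying assumptions (fixed inner-loop length rather than the geometrically distributed one used in the paper; illustrative step size and regularization, not the package's own code): each outer iteration computes a full gradient at a snapshot, and the inner loop takes cheap variance-reduced stochastic steps.

    import numpy as np

    # Sketch of S2GD for L2-regularised logistic regression (synthetic data).
    rng = np.random.default_rng(0)
    n, d = 1000, 10
    X = rng.normal(size=(n, d))
    y = np.sign(X @ rng.normal(size=d) + 0.1 * rng.normal(size=n))

    lam, h = 1e-3, 0.1   # L2 penalty and step size (illustrative values)

    def grad_i(w, i):
        """Gradient of the i-th regularised logistic loss term."""
        margin = y[i] * (X[i] @ w)
        return -y[i] * X[i] / (1.0 + np.exp(margin)) + lam * w

    w = np.zeros(d)
    for epoch in range(10):                       # outer loop
        y_snap = w.copy()
        full_grad = np.mean([grad_i(y_snap, i) for i in range(n)], axis=0)
        for _ in range(2 * n):                    # semi-stochastic inner loop
            i = rng.integers(n)
            w -= h * (grad_i(w, i) - grad_i(y_snap, i) + full_grad)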

Changes:

Initial Announcement on mloss.org.


Logo Encog Machine Learning Framework 3.2

by jeffheaton - July 5, 2014, 23:47:06 CET [ Project Homepage BibTeX Download ] 6225 views, 2214 downloads, 1 subscription

About: Encog is a machine learning framework for Java, C#, JavaScript and C/C++ that supports SVMs, genetic programming, Bayesian networks, hidden Markov models and other algorithms.

Changes:

Changes for Encog 3.2:

  • Issue #53: Fix Out Of Range Bug In BasicMLSequenceSet
  • Issue #52: Unhandled exception in Encog.Util.File.ResourceLoader.CreateStream (ResourceLoader.cs)
  • Issue #50: Concurrency bugs in PruneIncremental
  • Issue #48: Unit Tests Failing - TestHessian
  • Issue #46: Couple of small fixes - Temporal DataSet and SCG training
  • Issue #45: Fixed EndMinutesStrategy to correctly evaluate ShouldStop after the specified number of minutes have elapsed
  • Issue #44: Encog.ML.Data.Basic.BasicMLDataPairCentroid.Add() & .Remove()
  • Issue #43: Unit Tests Failing - Matrix not full rank
  • Issue #42: Nuget - NuSpec
  • Issue #36: Load Examples easier


Logo RLLib

About: RLLib is a lightweight C++ template library that implements incremental, standard, and gradient temporal-difference learning algorithms in Reinforcement Learning. It is an optimized library for robotic applications and embedded devices that operate under fast duty cycles (e.g., < 30 ms). RLLib has been tested and evaluated on RoboCup 3D soccer simulation agents, physical NAO V4 humanoid robots, and Tiva C series launchpad microcontrollers to predict, control, learn behaviors, and represent learnable knowledge. The implementation of the RLLib library is inspired by the RLPark API, which is a library of temporal-difference learning algorithms written in Java.

Changes:

Current release version is v2.0.


Logo MShadow 1.0

by antinucleon - April 10, 2014, 02:57:54 CET [ Project Homepage BibTeX Download ] 2304 views, 666 downloads, 1 subscription

About: Lightweight CPU/GPU matrix/tensor template library in C++/CUDA. It supports high-performance expansion of element-wise expressions: code once, run smoothly on both GPU and CPU.

Changes:

Initial Announcement on mloss.org.


Logo CXXNET 0.1

by antinucleon - April 10, 2014, 02:47:08 CET [ Project Homepage BibTeX Download ] 2983 views, 697 downloads, 1 subscription

About: CXXNET (spelled as: C plus plus net) is a neural network toolkit built on mshadow (https://github.com/tqchen/mshadow). It is yet another implementation of (convolutional) neural networks, written in C++ with about 1000 lines of network layer implementations; it is easily configured via config files and can reach state-of-the-art performance.

Changes:

Initial Announcement on mloss.org.


Logo SAMOA 0.0.1

by gdfm - April 2, 2014, 17:09:08 CET [ Project Homepage BibTeX Download ] 1985 views, 566 downloads, 2 subscriptions

About: SAMOA is a platform for mining big data streams. It is a distributed streaming machine learning (ML) framework that contains a programming abstraction for distributed streaming ML algorithms.

Changes:

Initial Announcement on mloss.org.


Logo HierLearning 1.0

by neville - March 2, 2014, 04:24:37 CET [ BibTeX BibTeX for corresponding Paper Download ] 2373 views, 665 downloads, 1 subscription

About: HierLearning is a C++11 implementation of a general-purpose, multi-agent, hierarchical reinforcement learning system for sequential decision problems.

Changes:

Initial Announcement on mloss.org.


Logo A Parallel LDA Learning Toolbox 1.0

by yanjianfeng - January 24, 2014, 11:48:07 CET [ BibTeX Download ] 2707 views, 1020 downloads, 1 subscription

About: We introduce PLL, a parallel LDA learning toolbox for big topic modeling.

Changes:

Fixed some compilation errors.

