mloss.org new software (http://mloss.org)
Updates and additions to mloss.org (en)
Fri, 20 Oct 2017 11:39:59 -0000

bufferkdtree 1.3
http://mloss.org/revision/view/2123/
<html><p>The bufferkdtree package is a Python library that aims at accelerating nearest neighbor computations using both k-d trees and modern many-core devices such as graphics processing units (GPUs). The implementation is based on OpenCL.
</p>
<p>The buffer k-d tree technique can be seen as an intermediate version between a standard parallel k-d tree traversal (on multi-core systems) and a massively-parallel brute-force implementation for nearest neighbor search. In particular, it makes use of the top of a standard k-d tree (which induces a spatial subdivision of the space) and resorts to a simple yet efficient brute-force implementation for processing chunks of "big" leaves. The implementation is well-suited for data sets with a large reference set (e.g., 1,000,000 points) and a huge query set (e.g., 10,000,000 points) given a moderate dimensionality of the search space (e.g., from d=5 to d=50).
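The hybrid traversal described above can be sketched in plain NumPy. This is a conceptual illustration only, not the package's actual API; the function names and the `depth` parameter are invented for this sketch, and the real implementation batches many queries per leaf on the GPU rather than answering them one at a time.

```python
import numpy as np

def build_top_tree(points, depth):
    """Build only the top `depth` levels of a k-d tree; each leaf keeps
    a chunk of reference-point indices that is later processed by brute
    force (the part a GPU handles well)."""
    def build(idx, level):
        if level == depth or len(idx) <= 2:
            return ("leaf", idx)
        axis = level % points.shape[1]
        order = idx[np.argsort(points[idx, axis])]
        mid = len(order) // 2
        split_val = points[order[mid], axis]
        return ("node", axis, split_val,
                build(order[:mid], level + 1), build(order[mid:], level + 1))
    return build(np.arange(len(points)), 0)

def query_1nn(points, tree, q):
    """Descend the shallow tree, then brute-force the leaf chunk.
    No backtracking is done here, so results can be approximate for
    queries that fall near a splitting plane."""
    node = tree
    while node[0] == "node":
        _, axis, split_val, left, right = node
        node = left if q[axis] < split_val else right
    idx = node[1]
    d = np.linalg.norm(points[idx] - q, axis=1)
    return int(idx[np.argmin(d)])
```

The point of the split is that the tree top induces the spatial subdivision while each "big" leaf becomes a dense, massively parallel brute-force problem.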
</p></html>
Authors: fgieseke, christian igel, Cosmin Oancea, Justin Heinermann
Published: Fri, 20 Oct 2017 11:39:59 -0000
Comments: http://mloss.org/software/rss/comments/2123
Tags: large scale, nearest neighbors, gpu, high performance computing, big data

Aboleth 0.6.2
http://mloss.org/revision/view/2122/
<html><p>A bare-bones TensorFlow framework for Bayesian deep learning and Gaussian process approximation with stochastic gradient variational Bayes inference.
</p>
<p>Some of the features of Aboleth:
</p>
<ul>
<li><p>Bayesian fully-connected, embedding and convolutional layers using SGVB for inference.
</p>
</li>
<li><p>Random Fourier and arc-cosine features for approximate Gaussian processes. Optional variational optimisation of these feature weights.
</p>
</li>
<li><p>Imputation layers with parameters that are learned as part of a model.
</p>
</li>
<li><p>Very flexible construction of networks, e.g. multiple inputs, ResNets etc.
</p>
</li>
<li><p>Optional maximum-likelihood type II inference for model parameters such as weight priors/regularizers and regression observation noise.
</p>
</li>
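The random Fourier feature idea listed above admits a compact sketch: draw random frequencies so that inner products of the mapped data approximate an RBF kernel. This is plain NumPy, conceptual only; the function name and defaults are invented here and this is not Aboleth's TensorFlow API.

```python
import numpy as np

def rff_transform(X, n_features=500, lengthscale=1.0, seed=0):
    """Map X to random Fourier features: with W ~ N(0, 1/lengthscale^2)
    and uniform phases b, the inner products Phi(x) . Phi(y) approximate
    the RBF kernel exp(-||x - y||^2 / (2 * lengthscale^2))."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.standard_normal((d, n_features)) / lengthscale
    b = rng.uniform(0.0, 2.0 * np.pi, n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)
```

Aboleth's variational optimisation of these feature weights would then treat `W` as a learnable (distributional) parameter rather than a fixed random draw.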
</ul></html>
Authors: daniel steinberg, lachlan mccalman, louis tiao, simon ocallaghan, alistair reid
Published: Fri, 13 Oct 2017 01:21:35 -0000
Comments: http://mloss.org/software/rss/comments/2122
Tags: deep learning, variational inference, gaussian process, tensorflow

r-cran-CoxBoost 1.4
http://mloss.org/revision/view/1313/
<html><p>Cox models by likelihood based boosting for a single survival endpoint or competing risks: this package provides routines for fitting Cox models by likelihood-based boosting for a single endpoint or in the presence of competing risks.
</p></html>
Authors: Harald Binder
Published: Sun, 01 Oct 2017 00:00:08 -0000
Comments: http://mloss.org/software/rss/comments/1313
Tags: r-cran

r-cran-e1071 1.6-8
http://mloss.org/revision/view/2061/
<html><p>Misc Functions of the Department of Statistics, Probability Theory Group (Formerly: E1071), TU Wien: functions for latent class analysis, short-time Fourier transform, fuzzy clustering, support vector machines, shortest path computation, bagged clustering, naive Bayes classifier, ...
</p></html>
Authors: David Meyer [aut, cre], Evgenia Dimitriadou [aut, cph], Kurt Hornik [aut], Andreas Weingessel [aut], Friedrich Leisch [aut], Chih-Chung Chang [ctb, cph] (libsvm C++-code), Chih-Chen Lin [ctb, cph] (li
Published: Sun, 01 Oct 2017 00:00:08 -0000
Comments: http://mloss.org/software/rss/comments/2061
Tags: r-cran

r-cran-Boruta 5.2.0
http://mloss.org/revision/view/2053/
<html><p>Wrapper Algorithm for All Relevant Feature Selection: an all-relevant feature selection wrapper algorithm. It finds relevant features by comparing the importance of the original attributes with the importance achievable at random, estimated using their permuted copies.
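The permuted-copy ("shadow feature") idea can be sketched in a few lines of NumPy. This is a simplified illustration, not the CRAN package's code: |correlation with y| stands in for the random forest importance Boruta actually uses, and the function name is invented.

```python
import numpy as np

def shadow_feature_test(X, y, n_rounds=50, seed=0):
    """Boruta-style relevance check: a feature counts as relevant only
    if its importance exceeds the best importance achieved by any
    shadow (row-permuted) copy of the features across all rounds."""
    rng = np.random.default_rng(seed)
    imp = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    shadow_best = 0.0
    for _ in range(n_rounds):
        Xs = rng.permuted(X, axis=0)  # permuting rows destroys the link to y
        s = np.abs([np.corrcoef(Xs[:, j], y)[0, 1] for j in range(X.shape[1])])
        shadow_best = max(shadow_best, float(s.max()))
    return imp > shadow_best
```

The shadow maximum estimates the importance "achievable at random" that the description refers to.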
</p></html>
Authors: Miron Bartosz Kursa [aut, cre], Witold Remigiusz Rudnicki [aut]
Published: Sun, 01 Oct 2017 00:00:05 -0000
Comments: http://mloss.org/software/rss/comments/2053
Tags: r-cran

r-cran-caret 6.0-77
http://mloss.org/revision/view/2120/
<html><p>Classification and Regression Training: misc functions for training and plotting classification and regression models.
</p></html>
Authors: Max Kuhn, Jed Wing, Steve Weston, Andre Williams, Chris Keefer, Allan Engelhardt, Tony Cooper, Zachary Mayer, Brenton Kenkel, the R Core Team, Michael Benesty, Reynald Lescarbeau, Andrew Ziem, Luca Scr
Published: Sun, 01 Oct 2017 00:00:05 -0000
Comments: http://mloss.org/software/rss/comments/2120
Tags: r-cran

Aika 0.8
http://mloss.org/revision/view/2117/
<html><p>Aika is a Java library that automatically extracts and annotates semantic information in text. If this information is ambiguous, Aika generates several hypothetical interpretations of the meaning of the text and picks the most likely one. The Aika algorithm is based on ideas and approaches from several fields of AI, such as artificial neural networks, frequent pattern mining, and logic-based expert systems, and combines these concepts in a single algorithm that can be applied to a broad spectrum of text analysis tasks.
</p>
<p>Aika allows linguistic concepts such as words, word meanings (entities), categories (e.g. person name, city), and grammatical word types to be modeled as neurons in a neural network. By choosing appropriate synapse weights, these neurons can take on different functions within the network. For instance, neurons whose synapse weights mimic a logical AND can be used to match an exact phrase, while neurons with an OR characteristic can connect a large list of word-entity neurons to determine a category such as 'city' or 'profession'.
</p>
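The AND-like versus OR-like distinction above comes down to how the bias relates to the synapse weights. A minimal Python sketch (not Aika's Java API; all names and weights here are invented for illustration):

```python
def neuron(weights, bias, inputs):
    """Fire (return 1.0) if the weighted sum of binary inputs plus the
    bias is positive, otherwise stay inactive."""
    s = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1.0 if s > 0.0 else 0.0

# AND-like: the bias is only overcome when every input is active,
# e.g. a phrase neuron fed by the word neurons "new" and "york".
and_weights, and_bias = [1.0, 1.0], -1.5

# OR-like: any single active input suffices, e.g. a 'city' category
# neuron fed by a long list of city-entity neurons.
or_weights, or_bias = [1.0, 1.0], -0.5
```

The same unit type thus covers both exact-phrase matching and broad category detection purely through the choice of weights and bias.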
<p>Aika is based on non-monotonic logic, meaning that it first draws tentative conclusions only. In other words, Aika is able to generate multiple mutually exclusive interpretations of a word, phrase, or sentence, and select the most likely interpretation. For example a neuron representing a specific meaning of a given word can be linked through a negatively weighted synapse to a neuron representing an alternative meaning of this word. In this case these neurons will exclude each other. These synapses might even be cyclic. Aika can resolve such recurrent feedback links by making tentative assumptions and starting a search for the highest ranking interpretation.
</p>
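The mutual-exclusion mechanism described above can be pictured as choosing the highest-scoring subset of candidate meanings that violates no negative link. The sketch below does this by exhaustive search for clarity; Aika instead searches the space with tentative assumptions, which is what makes cyclic exclusions tractable. Names and scores are invented for illustration.

```python
from itertools import combinations

def best_interpretation(scores, exclusions):
    """Pick the highest-scoring subset of candidate meanings such that
    no two mutually exclusive meanings co-occur."""
    names = list(scores)
    best, best_score = set(), 0.0
    for r in range(1, len(names) + 1):
        for subset in combinations(names, r):
            chosen = set(subset)
            if any({a, b} <= chosen for a, b in exclusions):
                continue  # violates a negatively weighted link
            s = sum(scores[n] for n in subset)
            if s > best_score:
                best, best_score = chosen, s
    return best, best_score
```

For example, two senses of an ambiguous word linked by an exclusion cannot both appear in the winning interpretation, so the cheaper sense is dropped.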
<p>In contrast to conventional neural networks, Aika propagates activation objects through its network, not just activation values. Each activation object refers to a text segment and an interpretation.
</p>
<p>Aika consists of two layers: the neural layer, which contains all the neurons and their continuously weighted synapses, and underneath it the discrete logic layer, which contains a boolean representation of all the neurons. The logic layer uses a frequent pattern lattice to store the individual logic nodes efficiently. This architecture allows Aika to process extremely large networks, since only neurons that are activated by a logic node need to compute their weighted sum and activation value. This means that the vast majority of neurons stay inactive during the processing of a given text.
</p>
<p>To avoid keeping the whole network in memory during processing, Aika uses the provider pattern to suspend individual neurons or logic nodes to external storage such as MongoDB.
</p></html>
Authors: Lukas Molzberger
Published: Tue, 19 Sep 2017 18:10:43 -0000
Comments: http://mloss.org/software/rss/comments/2117
Tags: information extraction, inference, neural network, text mining

dlib ml 19.7
http://mloss.org/revision/view/2116/
<html><p>A C++ toolkit containing machine learning algorithms and tools that facilitate creating complex software in C++ to solve real world problems.
</p>
<p>The library provides efficient implementations of the following algorithms:
</p>
<ul>
<li>
deep neural networks
</li>
<li>
support vector machines for classification, regression, and ranking
</li>
<li>
reduced-rank methods for large-scale classification and regression, including an SVM implementation and a method for performing kernel ridge regression with efficient leave-one-out (LOO) cross-validation
</li>
<li>
multi-class SVMs
</li>
<li>
structural SVMs (single-threaded, multi-threaded, and fully distributed modes)
</li>
<li>
sequence labeling using structured SVMs
</li>
<li>
relevance vector machines for regression and classification
</li>
<li>
reduced set approximation of SV decision surfaces
</li>
<li>
online kernel RLS regression
</li>
<li>
online kernelized centroid estimation / one-class classification
</li>
<li>
online SVM classification
</li>
<li>
kernel k-means clustering
</li>
<li>
radial basis function networks
</li>
<li>
kernelized recursive feature ranking
</li>
<li>
Bayesian network inference using junction trees or MCMC
</li>
<li>
general-purpose unconstrained non-linear optimization using the conjugate gradient, BFGS, and L-BFGS techniques
</li>
<li>
Levenberg-Marquardt for solving non-linear least squares problems
</li>
<li>
a general-purpose cutting plane optimizer
</li>
</ul>
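One entry above, kernel ridge regression with efficient LOO cross-validation, rests on a closed-form identity: the leave-one-out residuals of KRR can be read off a single fit, with no n refits. A NumPy sketch of that identity follows (conceptual only, not dlib's C++ API; all names are invented here):

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian kernel matrix: k(x, y) = exp(-gamma * ||x - y||^2)."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def krr_loo_residuals(X, y, lam=0.1, gamma=1.0):
    """Fit kernel ridge regression once and return its exact
    leave-one-out residuals via e_i = alpha_i / (K + lam*I)^{-1}_{ii},
    where alpha = (K + lam*I)^{-1} y are the dual coefficients."""
    G_inv = np.linalg.inv(rbf_kernel(X, X, gamma) + lam * np.eye(len(X)))
    alpha = G_inv @ y
    return alpha / np.diag(G_inv), alpha
```

This is why LOO model selection is cheap for KRR: one matrix inverse gives all n held-out errors at once.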
<p>The library also comes with extensive documentation and example programs that walk the user through the use of these machine learning techniques.
</p>
<p>Finally, dlib includes a fast matrix library with a simple MATLAB-like syntax. It can use BLAS and LAPACK libraries such as ATLAS or the Intel MKL when available, and it does so transparently: the dlib matrix object calls BLAS and LAPACK internally to optimize various operations while still letting the user write simple MATLAB-like expressions.
</p></html>
Authors: Davis King
Published: Sun, 17 Sep 2017 15:10:23 -0000
Comments: http://mloss.org/software/rss/comments/2116
Tags: svm, classification, clustering, regression, kernel methods, matrix library, kkmeans, optimization, algorithms, exact bayesian methods, approximate inference, bayesian networks, junction tree

r-cran-effects 4.0-0
http://mloss.org/revision/view/2121/
<html><p>Effect Displays for Linear, Generalized Linear, and Other Models: graphical and tabular effect displays, e.g., of interactions, for various statistical models with linear predictors.
</p></html>
Authors: John Fox [aut, cre], Sanford Weisberg [aut], Michael Friendly [aut], Jangman Hong [aut], Robert Andersen [ctb], David Firth [ctb], Steve Taylor [ctb], R Core Team [ctb]
Published: Thu, 14 Sep 2017 00:00:00 -0000
Comments: http://mloss.org/software/rss/comments/2121
Tags: r-cran

Jstacs 2.3
http://mloss.org/revision/view/2115/
<html><p>Sequence analysis is one of the major subjects of bioinformatics. Several existing libraries combine the representation of biological sequences with exact and approximate pattern matching as well as alignment algorithms. Jstacs is an open source Java library that instead focuses on the statistical analysis of biological sequences. It comprises an efficient representation of sequence data and provides implementations of many statistical models with generative and discriminative approaches to parameter learning. Using Jstacs, classifiers can be assessed and compared on test datasets or in cross-validation experiments evaluating several performance measures. Due to its strictly object-oriented design, Jstacs is easy to use and readily extensible.
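As a generic illustration of generative parameter learning on sequences, of the kind used in motif discovery, one can estimate a position weight matrix with pseudocounts and score sequences against a uniform background. This is a plain NumPy sketch, unrelated to Jstacs's actual Java API; all names are invented here.

```python
import numpy as np

ALPHABET = "ACGT"

def learn_pwm(seqs, pseudocount=0.5):
    """Generative learning of a position weight matrix: per-position
    nucleotide probabilities estimated from aligned sequences with
    additive (Laplace-style) pseudocounts."""
    length = len(seqs[0])
    counts = np.full((length, 4), pseudocount)
    for s in seqs:
        for i, c in enumerate(s):
            counts[i, ALPHABET.index(c)] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def log_odds(seq, pwm, background=0.25):
    """Log-likelihood ratio of a sequence under the PWM versus a
    uniform background model; positive values favor the motif."""
    return sum(np.log(pwm[i, ALPHABET.index(c)] / background)
               for i, c in enumerate(seq))
```

A discriminative approach would instead tune the same parameters to maximize class separation rather than the likelihood of the motif sequences.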
</p></html>
Authors: Jens Keilwagen, Jan Grau, Andre Gohr
Published: Wed, 13 Sep 2017 14:25:38 -0000
Comments: http://mloss.org/software/rss/comments/2115
Tags: bioinformatics, r, classification, machine learning, bayesian networks, markov random fields, supervised learning, em, mixture models, java, learning principles, probabilistic models, motif discovery