All entries.
Showing Items 641-648 of 648 on page 65 of 65.

r-cran-deepnet 0.2

by r-cran-robot - March 20, 2014, 00:00:00 CET [ Project Homepage BibTeX Download ] 260 views, 77 downloads, 0 subscriptions

About: A deep learning toolkit in R

Changes:

Fetched by r-cran-robot on 2017-08-01 00:00:04.472713


r-cran-darch 0.12.0

by r-cran-robot - July 19, 2016, 00:00:00 CET [ Project Homepage BibTeX Download ] 247 views, 72 downloads, 0 subscriptions

About: Package for Deep Architectures and Restricted Boltzmann Machines

Changes:

Fetched by r-cran-robot on 2017-08-01 00:00:04.361485


About: An open-source framework for benchmarking feature selection algorithms and cost functions.

Changes:

Initial Announcement on mloss.org.


About: A non-iterative learning method for one-layer (no hidden layer) neural networks, where the weights can be calculated in closed form, avoiding both slow convergence and hyperparameter tuning. The proposed learning method, LANN-SVD in short, offers good computational efficiency for large-scale data analytics.

Changes:

Initial Announcement on mloss.org.
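The closed-form idea described above can be illustrated with a minimal sketch (names and setup are illustrative, not the package's API): for a one-layer linear network, the optimal weights are the solution of an ordinary least-squares problem, computable in one shot via an SVD-based solver rather than iterative training.

```python
import numpy as np

# Hypothetical sketch: fit a one-layer (no hidden layer) network in closed
# form by solving min_W ||X W - Y||^2 with an SVD-based least-squares solver.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))            # 100 samples, 5 input features
true_W = rng.normal(size=(5, 2))         # ground-truth weights, 2 outputs
Y = X @ true_W + 0.01 * rng.normal(size=(100, 2))

# np.linalg.lstsq uses an SVD internally; no learning rate, no epochs.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)
```

Because the problem is convex and quadratic, this single linear solve replaces the entire gradient-descent loop, which is where the method's speed on large-scale data comes from.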


diffpriv 0.4.2

by brubinstein - July 18, 2017, 16:09:59 CET [ Project Homepage BibTeX BibTeX for corresponding Paper Download ] 413 views, 57 downloads, 3 subscriptions

About: Easy differential privacy

Changes:

Initial Announcement on mloss.org.
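For readers unfamiliar with differential privacy, the core primitive such toolkits build on can be sketched as follows. This is a generic Laplace-mechanism illustration in Python, not diffpriv's R interface; the function name and parameters are hypothetical.

```python
import numpy as np

def laplace_mean(values, epsilon, lo=0.0, hi=1.0, rng=None):
    """Release an epsilon-differentially-private mean of bounded values.

    Illustrative only: adds Laplace noise calibrated to the sensitivity
    of the mean, i.e. how much one record can shift the result.
    """
    rng = rng or np.random.default_rng()
    n = len(values)
    sensitivity = (hi - lo) / n              # one record moves the mean by at most this
    noise = rng.laplace(scale=sensitivity / epsilon)
    return float(np.mean(values)) + noise

data = [0.2, 0.4, 0.6, 0.8]
private_mean = laplace_mean(data, epsilon=1.0, rng=np.random.default_rng(0))
```

Smaller epsilon means stronger privacy but larger noise; the scale of the Laplace draw is sensitivity divided by epsilon.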


r-cran-effects 3.1-2

by r-cran-robot - September 16, 2016, 00:00:00 CET [ Project Homepage BibTeX Download ] 108 views, 27 downloads, 0 subscriptions

About: Effect Displays for Linear, Generalized Linear, and Other Models

Changes:

Fetched by r-cran-robot on 2017-08-01 00:00:04.701471


LHOTSE 0.14

by mseeger - November 26, 2007, 21:12:19 CET [ Project Homepage BibTeX ] 5047 views, 27 downloads, 0 comments, 2 subscriptions

About: *LHOTSE* is a C++ class library designed for the implementation of large, efficient scientific applications in Machine Learning and Statistics.

Changes:

Initial Announcement on mloss.org.


About: A non-iterative, incremental and hyperparameter-free learning method for one-layer feedforward neural networks without hidden layers. This method efficiently obtains the optimal parameters of the network, regardless of whether the data contains more samples than variables or vice versa. It does this by using a square loss function that measures errors before the output activation functions and scales them by the slope of these functions at each data point. The outcome is a system of linear equations that yields the network's weights, solved using Singular Value Decomposition.

Changes:

Initial Announcement on mloss.org.
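The description above (errors measured before the output activation, scaled by the activation's slope, then an SVD-based linear solve) can be sketched as follows. This is a hedged reconstruction of the stated idea for a logistic output unit; the function name `fit_one_layer` and the exact weighting are illustrative assumptions, not the package's implementation.

```python
import numpy as np

def fit_one_layer(X, t, eps=1e-6):
    # Measure error before the logistic output activation: the desired
    # pre-activation for target t is the logit of t.
    t = np.clip(t, eps, 1 - eps)
    z = np.log(t / (1 - t))
    # Scale each sample's error by the logistic slope at that point.
    s = t * (1 - t)
    sw = np.sqrt(s)[:, None]
    A = sw * X                    # row-weighted design matrix
    b = sw[:, 0] * z              # matching weighted targets
    # Weighted least squares solved via an SVD-based pseudoinverse.
    return np.linalg.pinv(A) @ b

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
w_true = np.array([1.0, -2.0, 0.5])
t = 1.0 / (1.0 + np.exp(-(X @ w_true)))   # noiseless logistic targets
w = fit_one_layer(X, t)
```

Because the loss is quadratic in the weights once the error is measured before the activation, the fit reduces to one linear system, which works whether samples outnumber variables or the reverse.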

