- Description:
In the machine learning literature, and especially in work on artificial neural networks, most methods are iterative and operate in batch mode. Many of these standard algorithms, however, cannot efficiently handle the large-scale data sets arising from new real-world applications. Recent proposals to address this challenge are mainly iterative approaches based on incremental or distributed learning. Non-iterative methods, which offer certain efficiency advantages over iterative models on these challenges, remain comparatively rare. We have developed a non-iterative, incremental, and hyperparameter-free learning method for one-layer feedforward neural networks (networks without hidden layers). The method efficiently obtains the optimal parameters of the network regardless of whether the data contain more samples than variables or vice versa. It uses a square loss function that measures errors before the output activation functions and scales them by the slope of these functions at each data point. The result is a system of linear equations whose solution yields the network's weights, and which is solved using the Singular Value Decomposition. Experimental results show that the proposed method solves a wide range of classification problems and can efficiently deal with large-scale tasks.
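As an illustration of the idea, the sketch below shows how such a network could be trained non-iteratively in MATLAB. This is not the released code: it assumes a logistic output activation, and the function and variable names (train_sketch, X, y) are invented for the example. It reconstructs the procedure described above: targets are mapped back through the inverse activation, errors are weighted by the activation's slope at each data point, and the resulting linear least-squares system is solved via the SVD (MATLAB's pinv).

    % Minimal sketch (assumption, not the released code) for a one-layer
    % network with a logistic output activation.
    % X: n-by-d input matrix, y: n-by-1 targets in (0,1).
    function w = train_sketch(X, y)
        Xb  = [X, ones(size(X, 1), 1)];   % append a bias column
        tol = 1e-6;
        y   = min(max(y, tol), 1 - tol);  % keep targets strictly inside (0,1)
        d   = log(y ./ (1 - y));          % desired pre-activation: inverse logistic
        s   = y .* (1 - y);               % slope of the logistic at each target
        A   = bsxfun(@times, Xb, s);      % weight each sample (row) by its slope
        b   = s .* d;                     % weight the targets the same way
        w   = pinv(A) * b;                % least-squares solution; pinv is SVD-based
    end

    % Prediction for new inputs Xnew (same invented names):
    % yhat = 1 ./ (1 + exp(-([Xnew, ones(size(Xnew, 1), 1)] * w)));

Because pinv is computed from the SVD, the same call handles both the overdetermined case (more samples than variables) and the underdetermined one, which matches the claim in the description; an incremental variant would instead update the accumulated linear system as new samples arrive.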
- Changes to previous version:
Initial Announcement on mloss.org.
- BibTeX Entry: Download
- Corresponding Paper BibTeX Entry: Download
- Supported Operating Systems: Linux, Windows, macOS
- Data Formats: Any format supported by MATLAB
- Tags: Neural Networks, Singular Value Decomposition, Incremental Learning, Non-iterative Learning
- Archive: download here