- Description:
EANT Without Structural Optimization learns a policy in fully or partially observable domains with continuous state and action spaces using evolution strategies. The structure of the policy is fixed in advance by the domain expert, who specifies the number of hidden neurons the evolved network should have. The algorithm has been tested on pole-balancing benchmark problems, where its performance is significantly better than that of state-of-the-art neuroevolution methods. (A rough code sketch of the fixed-topology neuroevolution idea follows the listing below.)
- Changes to previous version:
Initial Announcement on mloss.org.
- BibTeX Entry: Download
- Corresponding Paper BibTeX Entry: Download
- Supported Operating Systems: Linux, Windows
- Data Formats: None
- Tags: Reinforcement Learning, Genetic Algorithms
- Archive: download here
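As a rough illustration of the approach described above, the sketch below evolves the weights of a small fixed-topology feed-forward policy with a simple evolution strategy on a toy control task. It is not the released EANT implementation; the helper names (make_policy, evolve), the toy dynamics, and all parameter values are hypothetical.

```python
import numpy as np

def make_policy(n_in, n_hidden, n_out):
    """Return parameter count and forward function for a fixed-topology MLP.

    The topology (number of hidden neurons) is chosen by the user, as in the
    description above; only the weights are evolved.
    """
    n_params = n_hidden * (n_in + 1) + n_out * (n_hidden + 1)

    def forward(params, obs):
        w1 = params[: n_hidden * n_in].reshape(n_hidden, n_in)
        b1 = params[n_hidden * n_in : n_hidden * (n_in + 1)]
        rest = params[n_hidden * (n_in + 1):]
        w2 = rest[: n_out * n_hidden].reshape(n_out, n_hidden)
        b2 = rest[n_out * n_hidden:]
        h = np.tanh(w1 @ obs + b1)
        return np.tanh(w2 @ h + b2)   # continuous action in [-1, 1]

    return n_params, forward

def evolve(fitness, n_params, generations=100, pop_size=50, sigma=0.1, seed=0):
    """Minimal (1+lambda)-style evolution strategy over the weight vector."""
    rng = np.random.default_rng(seed)
    best = rng.normal(0.0, 1.0, n_params)
    best_fit = fitness(best)
    for _ in range(generations):
        for _ in range(pop_size):
            cand = best + sigma * rng.normal(size=n_params)
            f = fitness(cand)
            if f > best_fit:
                best, best_fit = cand, f
    return best, best_fit

if __name__ == "__main__":
    # Toy episodic task standing in for a pole-balancing benchmark:
    # keep a 1-D point mass near the origin using a continuous force.
    n_params, forward = make_policy(n_in=2, n_hidden=4, n_out=1)

    def fitness(params):
        x, v = 1.0, 0.0
        total = 0.0
        for _ in range(200):
            a = forward(params, np.array([x, v]))[0]
            v += 0.05 * a
            x += 0.05 * v
            total -= x * x          # reward staying close to the origin
        return total

    best, best_fit = evolve(fitness, n_params)
    print("best fitness:", best_fit)
```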