Project details for RL-Glue and Codecs

RL-Glue and Codecs -- Glue 3.0 RC3 and Codecs R402

by btanner - February 14, 2009, 04:41:30 CET



RL-Glue allows agents, environments, and experiments written in Java, C/C++, Matlab, Python, and Lisp to interoperate, accelerating research by promoting software re-use in the community.

Note: This is a release candidate, not a final release. We are actively soliciting feedback from the community about any problems with the software or documentation, so that they can be fixed before the final release on January 31, 2009.

Update posted: Dec 10, 2008. Some updates to the protocol, notably removing the seed/key methods. This should be the last release candidate before the final release on January 31, 2009.

Update posted: Oct 11, 2008. Big change: main RL-Glue and the C codec now use const pointers instead of structs passed by value in parameters and return types. This breaks backward compatibility; see the technical manual in docs.

Update posted: Oct 8, 2008. Fixed a memory leak in RL-Glue, fixed the skeleton experiment build on Linux, and improved Cygwin compatibility.


Inspired by related psychological theory, in computer science, reinforcement learning is a sub-area of machine learning concerned with how an agent ought to take actions in an environment so as to maximize some notion of long-term reward. Reinforcement learning algorithms attempt to find a policy that maps states of the world to the actions the agent ought to take in those states... -- Wikipedia Reinforcement Learning Article

RL-Glue is a set of common guidelines for the reinforcement learning community to follow to allow us to share and compare agents and environments with greater ease. The software implementation of RL-Glue is the reusable glue interface to connect the basic parts of a learning experiment.

RL-Glue supports interaction between agents, environments, and experiment programs in two different modes. In direct-compile mode, all three modules are written in C/C++ and compiled together into a single executable program.

In the more flexible socket mode, agents, environments, and experiments use inter-process communication through sockets, either locally on one computer or over a network or the Internet. In socket mode, agents, environments, and experiments written in a variety of languages can interact with each other transparently. The language-specific software that allows programs written in a particular language to connect to RL-Glue is called a codec. We currently have codecs for:

  • C/C++
  • Java
  • Python
  • Matlab
  • Lisp

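To make the division of roles concrete, here is a self-contained Python sketch of the interaction loop that RL-Glue mediates between the three modules. The callback names (env_start, env_step, agent_start, agent_step, agent_end) follow RL-Glue's interface conventions, but the toy CorridorEnv, RandomAgent, and run_episode below are hypothetical illustrations, not part of RL-Glue itself: in real socket mode the glue forwards each of these calls between separate processes through a codec, whereas this sketch wires them together in-process purely to show the contract.

```python
# Illustrative sketch of the agent/environment/experiment loop that
# RL-Glue mediates. CorridorEnv, RandomAgent, and run_episode are
# hypothetical stand-ins; in socket mode each call below would cross
# a process boundary through a codec.
import random

class CorridorEnv:
    """A 5-cell corridor; reaching cell 4 ends the episode with reward +1."""
    def env_start(self):
        self.state = 0
        return self.state                      # first observation

    def env_step(self, action):
        self.state = max(0, self.state + (1 if action == 1 else -1))
        terminal = (self.state == 4)
        reward = 1.0 if terminal else 0.0
        return reward, self.state, terminal    # (reward, observation, terminal)

class RandomAgent:
    """Picks action 0 (left) or 1 (right) uniformly at random."""
    def agent_start(self, observation):
        return random.choice([0, 1])

    def agent_step(self, reward, observation):
        return random.choice([0, 1])

    def agent_end(self, reward):
        pass                                   # receives the episode's final reward

def run_episode(env, agent, max_steps=1000):
    """Runs one episode and returns (total_reward, steps) --
    the role an experiment program delegates to the glue."""
    observation = env.env_start()
    action = agent.agent_start(observation)
    total, steps = 0.0, 0
    while steps < max_steps:
        reward, observation, terminal = env.env_step(action)
        total += reward
        steps += 1
        if terminal:
            agent.agent_end(reward)
            break
        action = agent.agent_step(reward, observation)
    return total, steps

if __name__ == "__main__":
    random.seed(0)
    ret, steps = run_episode(CorridorEnv(), RandomAgent())
    print("return=%.1f steps=%d" % (ret, steps))
```

Because the agent and environment only ever see each other through these few callbacks, either side can be swapped out, or moved to another language behind a codec, without touching the other.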
Members of the reinforcement learning community are welcome to write their own language- or project-specific codecs to use with RL-Glue; the Lisp codec is an example of a user-contributed codec. Codecs are currently in development to connect projects as diverse as a real-time strategy game, an Atari emulator, and a robot to RL-Glue.

The RL-Glue software project, combined with the RL-Glue codecs, is a powerful tool that allows members of the reinforcement learning community to re-use each other's agents, environments, and experiment programs, quickening the pace of research. Before RL-Glue, most researchers implemented their own experiment protocols, making collaboration difficult.

RL-Glue has been the basis for the last few reinforcement learning competitions, and that trend will continue with the 2009 Reinforcement Learning Competition.

What's new in RL-Glue 3.0

  • A new homepage

  • Revamped build system (autotools) for maximum cross-platform install compatibility (Linux, Unix, Mac OS X, Cygwin). Installing has never been simpler:

        $ ./configure
        $ make
        $ sudo make install

  • RL-Glue now installs to /usr/local

    • Headers and libs in standard search paths
    • Compiling agents/environments/experiments has never been easier:

          $ gcc MyAgent.c -lrlagent -o myAgent.exe
  • Codecs for C/C++, Java, Python, Matlab, and Lisp (yes, Matlab and Lisp!)

  • charArray (String) observation and action types!

  • Documentation

    • General RL-Glue Overview
    • RL-Glue Technical Manual
    • A manual for each codec!

History of RL-Glue

We can trace RL-Glue back as far as 1996, to a project by Rich Sutton and Juan Carlos Santamaria called RL-Interface. Since then, the project has gone through several designs and languages. Over time the objectives of the project became more ambitious: it grew from a convenient calling convention within a single language into a complete protocol that lets programs in many different languages communicate with each other.

Changes to previous version: Initial Announcement.

Supported Operating Systems: Cygwin, Linux, Mac OS X, Unix
Data Formats: None
Tags: Control, Reinforcement Learning, Nips2008

Other available revisions

  • Glue 3.x and Codecs (October 10, 2009, 22:44:00): RL-Glue paper has been published in JMLR.
  • Glue 3.0 RC3 and Codecs R402 (October 6, 2008, 07:12:05): Initial Announcement.

