Research Projects

The Python Experiment Suite

Another by-product of my scientific work is a useful tool called the Python Experiment Suite. It is an open-source tool written in Python that helps scientists, engineers and others conduct automated software experiments on a larger scale. It offers numerous features: parameter ranges and combinations can be evaluated automatically, with different experiment architectures (e.g. grid search) available. The suite also takes care of logging results to files, can handle experiment interruption and continuation (for instance after process termination due to a power failure), supports execution on multiple cores, and provides a convenient Python interface for retrieving the stored results. Configuration files ease the setup of complex experiments without modifying code, and various run-time options cover a variety of use cases.
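To give an idea of what configuring a grid search without touching code could look like, here is a hypothetical configuration-file sketch; the section names, keys and list syntax shown are illustrative assumptions, not necessarily the suite's actual format.

```
[DEFAULT]
repetitions = 5
iterations = 1000
path = results

[gridsearch_example]
# each list is expanded into a grid of parameter combinations,
# and every combination is run as a separate experiment
learning_rate = [0.001, 0.01, 0.1]
momentum = [0.0, 0.9]
```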

There is also a small example available in the Snippets section, demonstrating how to implement the main methods for the Experiment Suite.
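A minimal sketch of what such an implementation might look like: an experiment subclass with a setup method and an iteration method whose returned results the suite logs to file. The base class here is a bare stand-in, and the method names and signatures are assumptions based on the description above, not the suite's verified API.

```python
# Hypothetical Experiment Suite subclass (sketch, not the real API).

class ExperimentSuite(object):
    """Bare stand-in for the suite's base class (assumption for illustration)."""
    pass


class MyExperiment(ExperimentSuite):
    def reset(self, params, rep):
        # Called once before each repetition: initialize state from the
        # parameter combination chosen by the suite.
        self.learning_rate = params.get('learning_rate', 0.1)
        self.value = 0.0

    def iterate(self, params, rep, n):
        # Called once per iteration n; the returned dict is what the
        # suite would log to its result files.
        self.value += self.learning_rate * (1.0 - self.value)
        return {'rep': rep, 'iter': n, 'value': self.value}
```

The point of the split is that the suite, not the experiment, drives the loop: it picks a parameter combination, calls the setup method, then calls the iteration method repeatedly and persists each returned result, which is what makes interruption and continuation possible.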


PyBrain

PyBrain is a modular machine learning library for Python. The acronym stands for "Python-based Reinforcement Learning, Artificial Intelligence and Neural Networks". Its goal is to offer flexible, easy-to-use yet powerful algorithms for machine learning tasks, together with a variety of predefined environments to test and compare your algorithms. PyBrain is open source and I'm one of the main developers, mostly responsible for the reinforcement learning part. More information on how to download and use it can be found on the PyBrain website.

Artificial Curiosity

I am currently working on artificial curiosity for robots. The goal is to let the robot explore its environment independently in order to improve its model of the world. It receives positive feedback whenever the model improves, which leads it to situations where it can still learn something new about the world. Already familiar places quickly become boring and are left for more interesting new territory. You can learn more about this on Jürgen Schmidhuber's website about artificial curiosity.
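The idea of rewarding model improvement can be sketched in a few lines: the intrinsic reward is the decrease in prediction error caused by a model update, so states the model already predicts well yield no reward. All names, and the simple running-mean world model, are illustrative assumptions, not the actual research code.

```python
# Sketch of a curiosity signal: reward = improvement of the world model,
# measured as the reduction in prediction error (illustrative assumptions).

class CuriousAgent(object):
    def __init__(self):
        self.model = {}    # state -> predicted outcome (running mean)
        self.counts = {}   # state -> number of visits

    def intrinsic_reward(self, state, outcome):
        # Prediction error before the model update.
        pred = self.model.get(state, 0.0)
        error_before = abs(outcome - pred)
        # Update the model (incremental mean), then measure the error again.
        c = self.counts.get(state, 0) + 1
        self.counts[state] = c
        self.model[state] = pred + (outcome - pred) / c
        error_after = abs(outcome - self.model[state])
        # Familiar, well-predicted states improve the model little,
        # so they quickly become "boring" (reward near zero).
        return error_before - error_after
```

Revisiting the same state with the same outcome drives the reward toward zero, which is exactly the "boredom" effect described above.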

Policy Gradients for Robots

In order to use reinforcement learning on real robots, a method for dealing with continuous states and actions is needed. Policy Gradients is one such method, and it works quite well in our simulations (check out the videos). However, the current exploration strategy is somewhat inefficient, as it adds independent randomness to the action in every single time-step. I am currently developing a more global exploration strategy that explores with random strategies rather than random single actions. In the videos below, you can see the two methods compared to each other. Or download the 12-minute uncut learning video (QuickTime movie format, 30 MB).

Random Exploration

State-Dependent Exploration