EVALUATING SIMPLE REACTIVE AGENTS IN VISUAL REINFORCEMENT LEARNING TASKS
Abstract
Visual formulations of reinforcement learning tasks are potentially challenging because (1) the state space is large and composed of raw pixels, which are unlikely to correlate directly with actions, (2) the underlying task may be partially observable despite the high dimensionality of the input, and (3) rewards can be sparse and therefore do not necessarily discriminate between useful and useless decisions. In this thesis we compare the classic deep Q-network (DQN, a temporal difference reinforcement learning approach) with tangled program graphs (TPG, a genetic programming approach) on fully and partially observable visual reinforcement learning tasks from ViZDoom. We demonstrate that TPG is particularly effective at imposing structure on the partially observable task, resulting in a general policy for navigating a labyrinth, but is relatively poor at solving the fully observable (aiming) task. Conversely, DQN is very effective on the fully observable aiming task, but is unable to discover general solutions to the partially observable navigation task. We attribute these preferences to the different approaches TPG and DQN take to representation/feature construction versus credit assignment.