Evolving Policies to Solve the Rubik's Cube: Experiments with Ideal and Approximate Performance Functions
Date
2016-08-26T15:50:57Z
Authors
Smith, Robert
Abstract
This work reports on an approach to direct policy discovery (a form of reinforcement learning) using genetic programming (GP) for the 3 × 3 × 3 Rubik's Cube. Specifically, a synthesis of two approaches is proposed: 1) a previous group-theoretic formulation is used to suggest a sequence of objectives for developing solutions to different stages of the overall task; and 2) a hierarchical formulation of GP policy search is utilized in which policies adapted for an earlier objective are explicitly transferred to aid the construction of policies for the next objective. The resulting hierarchical organization of policies into a policy tree explicitly demonstrates task decomposition and policy reuse. Algorithmically, the process makes a recursive call to a common approach for maintaining a diverse population of GP individuals and then learns how to reuse subsets of programs (policies) developed against the earlier objective. Other than the two objectives, we do not explicitly identify how to decompose the task or mark specific policies for transfer. Moreover, at the end of evolution we return a population solving 100% of 17,675,698 different initial Cubes for the two objectives currently in use.
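As a purely illustrative sketch, not the implementation described in this work, the layered reuse of policies can be pictured as follows. All names (MOVES, random_program, evolve, the two toy objectives) are hypothetical stand-ins, the evolutionary loop is deliberately simplified, and no diversity maintenance is shown; the point is only how stage-1 policies become callable primitives for stage-2 evolution, yielding a two-level policy tree.

```python
# Hypothetical sketch of hierarchical policy reuse (not the thesis code).
# Stage-1 policies are evolved against objective 1; stage-2 evolution may then
# call retained stage-1 policies as primitive actions, forming a "policy tree".
# State, moves, and objectives here are toy stand-ins.
import random

MOVES = ["U", "D", "L", "R", "F", "B"]          # atomic cube moves (labels only)

def random_program(primitives, length=4):
    """A 'policy' here is just a fixed sequence built from the primitives."""
    return [random.choice(primitives) for _ in range(length)]

def flatten(program):
    """Expand nested (stage-2) programs into atomic moves."""
    out = []
    for step in program:
        out.extend(flatten(step) if isinstance(step, list) else [step])
    return out

def evolve(primitives, objective, pop_size=30, generations=50):
    """Truncation-selection loop standing in for the GP population."""
    population = [random_program(primitives) for _ in range(pop_size)]
    for _ in range(generations):
        parents = sorted(population, key=objective, reverse=True)[: pop_size // 2]
        children = []
        for p in parents:
            child = list(p)
            child[random.randrange(len(child))] = random.choice(primitives)
            children.append(child)
        population = parents + children
    return sorted(population, key=objective, reverse=True)

# Hypothetical objectives standing in for the two sub-goals of the cube task.
objective_1 = lambda prog: flatten(prog).count("U")
objective_2 = lambda prog: flatten(prog).count("U") + flatten(prog).count("R")

stage1 = evolve(MOVES, objective_1)                 # policies for objective 1
# Stage 2 reuses stage-1 policies as additional primitives (policy reuse).
stage2 = evolve(MOVES + stage1[:5], objective_2)
print("best stage-2 program:", flatten(stage2[0]))
```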
A second set of experiments is then performed to qualify the relative contribution of two components for discovering policy trees: policy diversity maintenance and competitive coevolution. Both components prove fundamental; without each, performance reaches only ~55% and ~23%, respectively.
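As a hypothetical illustration of these two components rather than the specific mechanisms evaluated here, fitness sharing (one common form of diversity maintenance) and a discriminating-test score (one common form of competitive coevolution) could be sketched as below; `solves`, the policy sets, and the test ids are toy stand-ins.

```python
def shared_fitness(policy, policies, tests, solves):
    """Fitness sharing: credit for each solved test is split among all
    policies that also solve it, so policies covering rare tests persist."""
    score = 0.0
    for t in tests:
        if solves(policy, t):
            solvers = sum(solves(p, t) for p in policies)
            score += 1.0 / max(solvers, 1)
    return score

def test_utility(test, policies, solves):
    """Competitive coevolution: a test Cube is rewarded for separating the
    policy population (solved by some policies but not by all)."""
    wins = sum(solves(p, test) for p in policies)
    return min(wins, len(policies) - wins)

# Toy stand-ins: a 'policy' is simply the set of test ids it can solve.
solves = lambda policy, test: test in policy
policies = [{0, 1}, {1, 2}, {2, 3}]
tests = [0, 1, 2, 3, 4]
print([round(shared_fitness(p, policies, tests, solves), 2) for p in policies])
print([test_utility(t, policies, solves) for t in tests])
```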
Keywords
Machine learning