
Pitfalls of Learning a Reward Function Online

Workshop Paper by Stuart Armstrong, Jan Leike, Laurent Orseau, Shane Legg

In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence (IJCAI), Main Track, pages 1592–1600.

Abstract: In some agent designs, like inverse reinforcement learning, an agent needs to learn its own reward function. Learning the reward function and optimising for it are typically two different processes, usually performed at different stages. We consider a continual ('one life') learning approach where the agent both learns the reward function and optimises for it at the same time. We show that this comes with a number of pitfalls, such as deliberately manipulating the learning process in one direction, refusing to learn, 'learning' facts already known to the agent, and making decisions that are strictly dominated (for all relevant reward functions). We formally introduce two desirable properties: the first is 'unriggability', which prevents the agent from steering the learning process in the direction of a reward function that is easier to optimise. The second is 'uninfluenceability', whereby the reward-function learning process operates by learning facts about the environment. We show that an uninfluenceable process is automatically unriggable, and if the set of possible environments is sufficiently large, the converse is true too.
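The contrast between riggable and unriggable learning processes can be illustrated with a toy sketch (not from the paper; all names and probabilities here are hypothetical). Two candidate reward functions exist, and the learning process maps the agent's action to a belief over them. If that belief depends on the action, the agent can steer learning toward the reward that is easier to optimise:

```python
# Toy illustration of riggable vs. unriggable reward learning.
# Two candidate reward functions, R0 and R1; R1 is assumed
# easier for the agent to optimise. The numbers are made up.

def riggable_learning(action):
    # Belief over rewards depends directly on the agent's action:
    # choosing "rig" steers the belief toward the easy reward R1.
    if action == "honest":
        return {"R0": 0.9, "R1": 0.1}
    return {"R0": 0.1, "R1": 0.9}

def unriggable_learning(action):
    # Belief is independent of the agent's action, so the policy
    # cannot steer the expected outcome of the learning process.
    return {"R0": 0.5, "R1": 0.5}

for learn in (riggable_learning, unriggable_learning):
    beliefs = [learn(a) for a in ("honest", "rig")]
    steerable = beliefs[0] != beliefs[1]
    print(learn.__name__, "-> steerable by policy:", steerable)
```

In this sketch the riggable process is steerable (different actions yield different beliefs), while the unriggable one is not; the paper's formal definitions work with expectations over environments rather than this deterministic simplification.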

Download Workshop Paper