Optimal Farsighted Agents Tend to Seek Power

12/03/2019
by Alexander Matt Turner et al.

Some researchers have speculated that capable reinforcement learning (RL) agents pursuing misspecified objectives are often incentivized to seek resources and power in pursuit of those objectives. An agent seeking power is incentivized to behave in undesirable ways, including rationally preventing its own deactivation and correction. Others have voiced skepticism: humans seem idiosyncratic in their drive for power, and that drive need not be present in the agents we design. We formalize a notion of power within the context of finite deterministic Markov decision processes (MDPs). We prove that, with respect to a wide class of reward function distributions, optimal policies tend to seek power over the environment.
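The paper's formal definition of power is not reproduced on this page. As a rough illustration only, one natural proxy for a state's power is its expected optimal value when the reward function is drawn at random. The Python sketch below uses that proxy with an assumed Uniform[0, 1] reward distribution over a hypothetical three-state MDP; the function names, the toy environment, and the reward distribution are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def optimal_values(T, R, gamma=0.99, iters=1000):
    """Value iteration on a finite deterministic MDP.

    T[s] lists the successor state for each action available in s;
    R[s] is the reward collected in state s.
    """
    n = len(T)
    V = np.zeros(n)
    for _ in range(iters):
        V = R + gamma * np.array(
            [max(V[nxt] for nxt in T[s]) for s in range(n)]
        )
    return V

def estimate_power(T, state, gamma=0.99, samples=500, seed=0):
    """Proxy for the power of `state`: the normalized expected optimal
    value when rewards are drawn i.i.d. Uniform[0, 1] (an assumed
    distribution, not necessarily the one used in the paper)."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(samples):
        R = rng.uniform(0.0, 1.0, size=len(T))
        total += optimal_values(T, R, gamma)[state]
    return (1 - gamma) * total / samples  # rescale to reward units

# Toy 3-state deterministic MDP: from state 0 the agent can move to
# absorbing state 1 or absorbing state 2; state 0 keeps options open.
T = [[1, 2], [1], [2]]
for s in range(len(T)):
    print(f"state {s}: estimated power ~ {estimate_power(T, s):.3f}")
```

In this toy MDP, state 0 can still reach either absorbing state, so its estimated power (about 0.66, roughly the mean of the maximum of two uniform draws) exceeds that of either absorbing state (about 0.50). Under this proxy, keeping options open is itself a form of power, which matches the abstract's intuition about why optimal policies tend to seek it.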
