Continuous control of an underground loader using deep reinforcement learning

03/01/2021
by Sofi Backman, et al.

Reinforcement learning control of an underground loader is investigated in a simulated environment, using a multi-agent deep neural network approach. At the start of each loading cycle, one agent selects the dig position from a depth camera image of the pile of fragmented rock. A second agent is responsible for continuous control of the vehicle, with the goal of filling the bucket at the selected loading point while avoiding collisions, getting stuck, or losing ground traction. It relies on motion and force sensors, as well as on camera and lidar. Using a soft actor-critic algorithm, the agents learn policies for efficient bucket filling over many subsequent loading cycles, with a clear ability to adapt to the changing environment. The best results, on average 75% of the maximum bucket capacity, are obtained when a penalty for energy usage is included in the reward.
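The abstract's structure, a high-level agent choosing a dig point and a low-level agent doing continuous control, with a reward that balances bucket filling against energy use and failure events, can be illustrated with a short sketch. The weights, function names, and environment interface below are hypothetical and not taken from the paper; this only shows one plausible way to organize a single loading cycle under those assumptions.

```python
# Hypothetical reward weights; the paper only reports that adding an
# energy-usage penalty to the reward gave the best average bucket fill
# (about 75% of maximum capacity).
W_FILL = 1.0     # reward per unit of bucket fill (fraction of capacity)
W_ENERGY = 0.1   # penalty per unit of energy used during the cycle
W_FAIL = 5.0     # penalty for collisions, getting stuck, or lost traction


def loading_cycle_reward(bucket_fill: float, energy_used: float, failed: bool) -> float:
    """Shaped reward for one loading cycle (illustrative form only)."""
    reward = W_FILL * bucket_fill - W_ENERGY * energy_used
    if failed:
        reward -= W_FAIL
    return reward


def run_loading_cycle(env, dig_agent, control_agent) -> float:
    """One cycle: a high-level agent picks the dig point from a depth image,
    then a low-level agent drives the loader with continuous actions."""
    depth_image = env.observe_pile()            # depth-camera view of the rock pile
    dig_point = dig_agent.select(depth_image)   # dig-position choice for this cycle

    obs = env.reset_cycle(dig_point)            # motion/force sensors, camera, lidar
    done, total_energy = False, 0.0
    while not done:
        action = control_agent.act(obs)         # continuous vehicle and bucket commands
        obs, info, done = env.step(action)
        total_energy += info["energy"]

    return loading_cycle_reward(info["bucket_fill"], total_energy, info["failed"])
```

In the paper both policies are trained with a soft actor-critic algorithm; the sketch leaves the learning loop out and only shows how the per-cycle reward could combine the fill, energy, and failure terms mentioned in the abstract.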
