Speeding Up the Convergence of Value Iteration in Partially Observable Markov Decision Processes

06/01/2011
by N. L. Zhang, et al.

Partially observable Markov decision processes (POMDPs) have recently become popular among AI researchers because they provide a natural model for planning under uncertainty. Value iteration is a well-known algorithm for finding optimal policies for POMDPs, but it typically takes a large number of iterations to converge. This paper proposes a method for accelerating the convergence of value iteration. The method was evaluated on an array of benchmark problems and found to be very effective: it enabled value iteration to converge after only a few iterations on all the test problems.
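
The abstract does not spell out either the standard algorithm or the proposed acceleration. For orientation only, below is a minimal Python sketch of plain value iteration for the fully observable (MDP) special case; exact POMDP value iteration performs the analogous backup over belief states and piecewise-linear value functions and is substantially more involved. The function value_iteration and the placeholder arrays P, R, and the discount gamma are illustrative assumptions, not artifacts of the paper.

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-6, max_iters=10_000):
    """Plain value iteration for a fully observable MDP (illustrative sketch).

    P: (A, S, S) array with P[a, s, s'] = transition probability.
    R: (S, A) array of expected immediate rewards.
    Returns the value function, a greedy policy, and the sweep count.
    """
    n_states = P.shape[1]
    V = np.zeros(n_states)
    for i in range(1, max_iters + 1):
        # Bellman backup: Q(s,a) = R(s,a) + gamma * sum_s' P(s'|s,a) V(s')
        Q = R + gamma * np.einsum("ast,t->sa", P, V)
        V_new = Q.max(axis=1)
        # Stop when the Bellman residual is small; the backup is a
        # gamma-contraction, so the residual shrinks by roughly a factor
        # of gamma per sweep, which is slow when gamma is close to 1.
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1), i
        V = V_new
    return V, Q.argmax(axis=1), max_iters

# Illustrative random MDP with 2 actions and 4 states.
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(4), size=(2, 4))
R = rng.standard_normal((4, 2))
V, policy, sweeps = value_iteration(P, R, gamma=0.99)
print(sweeps, V, policy)
```

The slow geometric convergence visible in the stopping test, a factor of roughly gamma per sweep, is the behavior the abstract refers to when it notes that value iteration typically needs a large number of iterations.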
