D3PI: Data-Driven Distributed Policy Iteration for Homogeneous Interconnected Systems
Control of large-scale networked systems often requires complex models of the interactions among the agents. While building accurate models of these interactions can be prohibitive in many applications, data-driven control methods circumvent model complexity by synthesizing a controller directly from observed data. In this paper, we propose the Data-Driven Distributed Policy Iteration (D3PI) algorithm to design a feedback mechanism for a potentially large system whose underlying graph structure characterizes the communications among the agents. Rather than requiring access to system parameters, our algorithm relies on temporary "auxiliary" links that boost information exchange within a small portion of the graph during the learning phase. There, the costs are partitioned between learning and non-learning agents to ensure consistent control of the entire network. After the learning process terminates, a distributed policy for the entire networked system is constructed from the components estimated during the learning phase. We provide stability and convergence guarantees for the proposed distributed controller throughout the learning phase by exploiting the structure that the graph topology and the temporary links impose on the system parameters. The practicality of our method is then illustrated with a simulation.
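To make the data-driven policy iteration idea concrete, the following is a minimal, illustrative sketch of model-free Q-function based policy iteration for a single LQR system, the building block that D3PI extends to the networked setting. The system matrices A, B, the cost weights Q, R, and the initial gain K below are toy placeholders (assumptions), and the sketch does not reproduce the distributed structure, auxiliary links, or cost partitioning of the paper's algorithm.

```python
# Illustrative model-free policy iteration for LQR (not the authors' D3PI):
# evaluate the Q-function of the current gain from data, then improve the gain.
import numpy as np

rng = np.random.default_rng(0)

# Toy open-loop system and quadratic stage cost (assumed for illustration).
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = np.eye(1)
n, m = B.shape

def features(z):
    """Quadratic features for the symmetric parametrization of Q(x,u) = z' H z."""
    outer = np.outer(z, z)
    idx = np.triu_indices(len(z))
    scale = np.where(idx[0] == idx[1], 1.0, 2.0)   # off-diagonal entries appear twice
    return scale * outer[idx]

def unpack(theta, d):
    """Rebuild the symmetric matrix H from its upper-triangular parameters."""
    H = np.zeros((d, d))
    H[np.triu_indices(d)] = theta
    return H + H.T - np.diag(np.diag(H))

def evaluate_policy(K, n_samples=400, noise=0.5):
    """Estimate H of Q_K by least squares on the Bellman equation along a trajectory."""
    Phi, c = [], []
    x = rng.normal(size=n)
    for _ in range(n_samples):
        u = -K @ x + noise * rng.normal(size=m)         # exploratory input
        x_next = A @ x + B @ u
        z = np.concatenate([x, u])
        z_next = np.concatenate([x_next, -K @ x_next])  # on-policy successor
        Phi.append(features(z) - features(z_next))      # z'Hz - z_next'Hz_next = stage cost
        c.append(x @ Q @ x + u @ R @ u)
        x = x_next if np.linalg.norm(x_next) < 1e3 else rng.normal(size=n)
    theta, *_ = np.linalg.lstsq(np.array(Phi), np.array(c), rcond=None)
    return unpack(theta, n + m)

K = np.array([[0.5, 1.0]])            # an initially stabilizing gain (assumed)
for it in range(8):
    H = evaluate_policy(K)
    Hux, Huu = H[n:, :n], H[n:, n:]
    K = np.linalg.solve(Huu, Hux)     # policy improvement: K <- Huu^{-1} H_ux
    print(f"iteration {it}: K = {K.ravel()}")
```

In D3PI, an analogous evaluation/improvement loop is carried out with data gathered only from the learning portion of the graph, and the resulting components are then propagated to a distributed policy for the whole network.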