Distributed Convex Optimization of Time-Varying Cost Functions with Swarm Tracking Behavior for Continuous-time Dynamics

Salar Rahili, Wei Ren, and Sheida Ghapani are with the Department of Electrical Engineering, University of California, Riverside, CA, USA.

Abstract

In this paper, a distributed convex optimization problem with swarm tracking behavior is studied for continuous-time multi-agent systems. The agents' task is to drive their center to track an optimal trajectory that minimizes the sum of local time-varying cost functions through local interaction, while maintaining connectivity and avoiding inter-agent collision. Each local cost function is known to only an individual agent and the team's optimal solution is time-varying. Two cases are considered: single-integrator dynamics and double-integrator dynamics. For each case, a distributed convex optimization algorithm with swarm tracking behavior is proposed in which each agent relies only on its own position and the relative positions (and velocities in the double-integrator case) between itself and its neighbors. It is shown that the center of the agents tracks the optimal trajectory, the connectivity of the agents is maintained, and inter-agent collision is avoided. Finally, numerical examples are included for illustration.

I Introduction

Flocking or swarm tracking of a leader has received significant attention in the literature [1, 2, 3]. The goal is for a group of agents to track a leader with only local interaction while maintaining connectivity and avoiding inter-agent collision. A swarm tracking algorithm is studied in [1], where it is assumed that the leader has a constant velocity and is a neighbor of all followers. The result in [1] has been extended in [2] to a leader with a time-varying velocity, but the leader is still required to be a neighbor of all followers. In [3], a swarm tracking algorithm via a variable structure approach is introduced, where the leader is a neighbor of only a subset of the followers. In the aforementioned studies, the leader plans the trajectory for the team and no optimality criterion is defined. However, in many multi-agent applications it is relevant for the agents to cooperatively optimize a certain criterion.

In the distributed convex optimization literature, there is significant interest in a class of problems where the goal is to minimize the sum of local cost functions, each of which is known to only an individual agent. Recently, some remarkable results based on the combination of consensus and subgradient algorithms have been published [4, 5, 6]. For example, this combination is used in [4] to solve a coupled optimization problem with a fixed undirected topology. A projected subgradient algorithm is proposed in [5], where each agent is required to lie in its own convex set. It is shown that all agents can reach an optimal point in the intersection of all agents' convex sets for a time-varying communication graph with doubly stochastic edge weight matrices.

However, all the aforementioned works are based on discrete-time algorithms. Recently, new research has been conducted on distributed optimization problems for multi-agent systems with continuous-time dynamics. Such a scheme has applications in motion coordination of multi-agent systems. For example, multiple physical vehicles modelled by continuous-time dynamics might need to rendezvous at a team-optimal location. In [7], a generalized class of zero-gradient-sum controllers for twice differentiable strongly convex functions with an undirected graph is introduced. In [8], a continuous version of [5] is studied, where it is assumed that each agent is aware of the convex solution set of its own cost function and that the intersection of all these sets is nonempty. In [9], the convergence rate and error bounds of a continuous-time distributed optimization algorithm have been derived. In [10], an approach is given to address the problem of distributed convex optimization with equality and inequality constraints. A proportional-integral algorithm is introduced in [11, 12, 13], where [12] considers strongly connected weight-balanced directed graphs and [13] extends these results using a discrete-time communication scheme. A distributed optimization problem for single-integrator agents is studied in [14] with adaptivity and finite-time convergence properties.

Having time-invariant cost functions is a common assumption in the literature. However, in many applications the local cost functions are time-varying, reflecting the fact that the optimal point may change over time and hence trace out a trajectory. In addition, in continuous-time optimization problems, the agents are usually assumed to have single-integrator dynamics. However, a broad class of vehicles requires double-integrator dynamic models. In our early work [15], a preliminary attempt at addressing time-varying cost functions was made. However, in all the articles on distributed optimization mentioned above, the agents eventually approach a common optimal point. While those algorithms can be applied to rendezvous problems, they are not applicable to more complicated swarm tracking problems.

In this paper, we study a distributed convex optimization problem with swarm tracking behavior for continuous-time multi-agent systems. There exist significant challenges in the study due to the coexistence of nonlinear swarm behavior, optimization objectives, and time-varying cost functions under the constraints of local information and local interaction. The center of the agents will track an optimal trajectory which minimizes the sum of local time-varying cost functions through local interaction. The agents will maintain connectivity and avoid inter-agent collision. Each local cost function is only known to an individual agent and the team's optimal solution is time-varying. Both cases of single-integrator dynamics and double-integrator dynamics will be considered.

The remainder of this paper is organized as follows. In Section II, the notation and preliminaries used throughout this paper are introduced. In Sections III and IV, two distributed convex optimization algorithms with swarm tracking behavior and time-varying cost functions are designed for single-integrator and double-integrator dynamics, respectively. Finally, in Section V, numerical examples are given for illustration.

II Notation and Preliminaries

The following notation is adopted throughout this paper. $\mathbb{R}^+$ denotes the set of positive real numbers. $\mathcal{I}$ denotes the index set $\{1, \ldots, N\}$. The transposes of the matrix $A$ and the vector $x$ are denoted by $A^T$ and $x^T$, respectively. $\|x\|_p$ denotes the $p$-norm of the vector $x$. Let $\mathbf{1}_n$ and $\mathbf{0}_n$ denote the $n \times 1$ column vectors of all ones and all zeros, respectively, and let $I_n$ denote the $n \times n$ identity matrix. For matrices $A$ and $B$, the Kronecker product is denoted by $A \otimes B$. The gradient and Hessian of a function $f$ are denoted by $\nabla f$ and $\nabla^2 f$, respectively. The matrix inequality $A \geq B$ ($A > B$) means that $A - B$ is positive semi-definite (positive definite).

Let the triplet $\mathcal{G} = (\mathcal{V}, \mathcal{E}, \mathcal{A})$ be an undirected graph, where $\mathcal{V} = \{1, \ldots, N\}$ is the node set, $\mathcal{E} \subseteq \mathcal{V} \times \mathcal{V}$ is the edge set, and $\mathcal{A} = [a_{ij}] \in \mathbb{R}^{N \times N}$ is a weighted adjacency matrix. An edge between agents $i$ and $j$, denoted by $(i,j) \in \mathcal{E}$, means that they can obtain information from each other. The weighted adjacency matrix is defined such that $a_{ij} = a_{ji} > 0$ if $(i,j) \in \mathcal{E}$, and $a_{ij} = 0$ otherwise. The set of neighbors of agent $i$ is denoted by $N_i = \{j \mid (i,j) \in \mathcal{E}\}$. A sequence of edges of the form $(i_1,i_2), (i_2,i_3), \ldots$ is called a path. The graph is connected if there is a path from every node to every other node. The incidence matrix associated with the graph is denoted by $D$. Let the Laplacian matrix $L = [l_{ij}] \in \mathbb{R}^{N \times N}$ associated with the graph be defined as $l_{ii} = \sum_{j \neq i} a_{ij}$ and $l_{ij} = -a_{ij}$ for $i \neq j$. The Laplacian matrix is symmetric positive semi-definite. We denote the eigenvalues of $L$ by $\lambda_1, \ldots, \lambda_N$. The undirected graph is connected if and only if $L$ has a simple zero eigenvalue with the corresponding eigenvector $\mathbf{1}_N$ and all other eigenvalues are positive [16]. When the graph is connected, we order the eigenvalues of $L$ as $\lambda_1 = 0 < \lambda_2 \leq \cdots \leq \lambda_N$. Note that the Laplacian can also be expressed in terms of the incidence matrix $D$.
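As a quick illustration of these graph-theoretic preliminaries (not part of the original paper), the following Python snippet builds the Laplacian of a small undirected graph and checks connectivity through its second-smallest eigenvalue; the adjacency matrix used here is an arbitrary example.

    import numpy as np

    # Weighted adjacency matrix of an arbitrary 4-node undirected graph
    # (a_ij = a_ji > 0 when nodes i and j are neighbors, 0 otherwise).
    A = np.array([[0., 1., 1., 0.],
                  [1., 0., 1., 0.],
                  [1., 1., 0., 1.],
                  [0., 0., 1., 0.]])

    # Laplacian: l_ii = sum_j a_ij, and l_ij = -a_ij for i != j.
    L = np.diag(A.sum(axis=1)) - A

    eigvals = np.sort(np.linalg.eigvalsh(L))
    print("eigenvalues of L:", eigvals)           # lambda_1 = 0 always
    print("graph connected:", eigvals[1] > 1e-9)  # lambda_2 > 0 iff connected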

Lemma II.1

[17] Let $f(x): \mathbb{R}^m \to \mathbb{R}$ be a continuously differentiable convex function. $f(x)$ is minimized at $x^*$ if and only if $\nabla f(x^*) = \mathbf{0}$.

Definition II.1

[17] The function $f(x): \mathbb{R}^m \to \mathbb{R}$ is $m$-strongly convex if and only if

$(x - y)^T \left( \nabla f(x) - \nabla f(y) \right) \geq m \|x - y\|_2^2$

for all $x, y \in \mathbb{R}^m$. If $f$ is $m$-strongly convex and twice differentiable on $\mathbb{R}^m$, then $\nabla^2 f(x) \geq m I_m$.
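As a concrete, hypothetical instance of such a cost (not taken from the paper), consider a quadratic time-varying local cost with a moving center $c_i(t)$:

$f_i(x,t) = \|x - c_i(t)\|_2^2, \qquad \nabla f_i(x,t) = 2\,(x - c_i(t)), \qquad \nabla^2 f_i(x,t) = 2 I_m,$

so $f_i(\cdot,t)$ is $2$-strongly convex for every $t$ and its instantaneous minimizer is $c_i(t)$.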

Lemma II.2

[18] The second smallest eigenvalue $\lambda_2$ of the Laplacian matrix $L$ associated with the undirected connected graph satisfies $\lambda_2 = \min_{x \neq \mathbf{0},\ \mathbf{1}_N^T x = 0} \frac{x^T L x}{x^T x}$.

Lemma II.3

[19] The symmetric matrix

$\begin{bmatrix} A & B \\ B^T & C \end{bmatrix}$

is positive definite if and only if one of the following conditions holds: (i) $A > 0$ and $C - B^T A^{-1} B > 0$; or (ii) $C > 0$ and $A - B C^{-1} B^T > 0$.
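As a scalar sanity check (not from the paper), taking the blocks to be scalars $a$, $b$, and $c$ gives

$\begin{bmatrix} a & b \\ b & c \end{bmatrix} > 0 \;\Longleftrightarrow\; a > 0 \ \text{and} \ c - b^2/a > 0,$

which recovers the familiar conditions $a > 0$ and $ac - b^2 > 0$.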

Assumption II.2

The local cost function $f_i(x,t)$ is $m$-strongly convex and twice continuously differentiable with respect to $x$ for every $t$.

III Time-Varying Convex Optimization with Swarm Tracking for Single-Integrator Dynamics

Consider a multi-agent system consisting of $N$ physical agents with an interaction topology described by the undirected graph $\mathcal{G}$. Suppose that the agents satisfy the continuous-time single-integrator dynamics

$\dot{x}_i(t) = u_i(t), \qquad i = 1, \ldots, N,$   (1)

where $x_i(t) \in \mathbb{R}^m$ is the position of agent $i$ and $u_i(t) \in \mathbb{R}^m$ is the control input of agent $i$. As $x_i(t)$ and $u_i(t)$ are functions of time, we will write them as $x_i$ and $u_i$ for ease of notation. A time-varying local cost function $f_i(x_i,t)$ is assigned to agent $i$, which is known to only agent $i$. The team cost function is denoted by

$F(x,t) \triangleq \sum_{i=1}^{N} f_i(x,t).$   (2)
Our objective is to design $u_i$ for (1) using only local information and local interaction with neighbors such that the center of all agents tracks the optimal state $x^*(t)$, and the agents maintain connectivity while avoiding inter-agent collision. Here $x^*(t)$ is the minimizer of the time-varying convex optimization problem

$\min_{x \in \mathbb{R}^m} \; F(x,t) = \sum_{i=1}^{N} f_i(x,t).$   (3)
The problem defined in (3) is equivalent to

Intuitively, the problem is recast as a combination of a consensus problem and a minimization problem on the team cost function (2).

In our proposed algorithm, each agent has access to only its own position and the relative positions between itself and its neighbors. To solve this problem, we propose the algorithm

where

$V_{ij}$ is a potential function between agents $i$ and $j$ to be designed, one gain in (5) is non-negative, the other is positive, and $\mathrm{sgn}(\cdot)$ is the signum function defined componentwise. It is worth mentioning that this term depends on only agent $i$'s own position. We assume that each agent has a communication/sensing radius $R$, where agents $i$ and $j$ become neighbors if $\|x_i - x_j\| < R$. Our proposed algorithm guarantees connectivity maintenance, which means that if the graph is initially connected, then it will remain connected for all $t$. Before presenting our main result, we need to define the potential function $V_{ij}$.
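Since the closed-form expression of algorithm (5) does not survive this extraction, the following Python fragment is only a structural sketch: it assumes a control that combines the potential-function gradients over neighbors, a componentwise signum term, and a local gradient-based tracking term, as described above. The function names, the gain value, and the particular local term are illustrative assumptions, not the paper's exact law.

    import numpy as np

    def control_single_integrator(i, x, neighbors, grad_fi, hess_fi, dV, beta=1.0):
        """Structural sketch (assumed form) of a swarm-tracking input for agent i.

        x         : dict or array of current positions
        neighbors : indices of agent i's neighbors
        grad_fi   : callable returning the gradient of agent i's local cost at a position
                    (evaluated at the current time)
        hess_fi   : callable returning the corresponding Hessian
        dV        : callable dV(xi, xj) giving the partial derivative of the potential
                    function V_ij with respect to x_i
        """
        # Swarm term: sum of potential-function gradients over the neighbors of agent i
        swarm = sum(dV(x[i], x[j]) for j in neighbors)
        # Nonsmooth term: componentwise signum of the swarm term
        nonsmooth = beta * np.sign(swarm)
        # Local tracking term along an assumed Newton-like direction of agent i's own cost
        local = -np.linalg.solve(hess_fi(x[i]), grad_fi(x[i]))
        return -swarm - nonsmooth + local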

Definition III.1

[3] The potential function $V_{ij}$ is a differentiable nonnegative function of the distance $\|x_i - x_j\|$ between agents $i$ and $j$ which satisfies the following conditions (an illustrative numerical choice is sketched after this definition):

  • $V_{ij}$ has a unique minimum at $\|x_i - x_j\| = d_{ij}$, where $d_{ij}$ is a desired distance between agents $i$ and $j$ and $d_{ij} < R$,

  • $V_{ij} \to \infty$ if $\|x_i - x_j\| \to 0$.

  • , where is a constant.
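The following Python fragment sketches one possible choice of the potential-function derivative with this flavor: the resulting force vanishes at the desired spacing and grows steeply near collision and near the communication radius. It is an illustrative assumption, not Eqs. (36) and (37) of [3] used later in the simulations.

    import numpy as np

    def dV(xi, xj, d_des=1.0, R=2.5, eps=1e-6):
        """Illustrative partial derivative of a potential V_ij with respect to x_i."""
        diff = xi - xj
        dist = np.linalg.norm(diff) + eps
        # The scalar weight vanishes at dist = d_des, is positive (so -dV attracts)
        # for dist > d_des, negative (so -dV repels) for dist < d_des, and grows
        # without bound as dist approaches 0 (collision) or R (connectivity loss).
        weight = (dist - d_des) / (dist**2 * (R - dist)**2 + eps)
        return weight * diff / dist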

Theorem III.2

Suppose that the graph $\mathcal{G}$ is connected, Assumption II.2 holds for each agent's cost function, and the gradient of the cost functions can be written as . If , and , then for system (1) with algorithm (5), the center of the agents tracks the optimal trajectory while maintaining connectivity and avoiding inter-agent collision.

Proof: Define the positive semi-definite Lyapunov function candidate

The time derivative of is obtained as

where in the second equality, Lemma 3.1 in [3] has been used. Now, rewriting it along the closed-loop system formed by (1) and (5), we have

It is easy to see that if then is negative. Therefore, having and , we can conclude that . Since is bounded, based on Definition III.1, it is guaranteed that there will not be an inter-agent collision and connectivity is maintained.

In what follows, we focus on finding the relation between the optimal trajectory and the agents' positions. Define the Lyapunov function candidate

where is positive semi-definite. The time derivative of can be obtained as

Because , we know and we obtain

Based on Definition III.1, we can obtain

Now, by summing both sides of the closed-loop system (1) with control algorithm (5) over all agents, we have . Hence we can rewrite (6) as

Therefore, for . This guarantees that will asymptotically converge to zero. Because , we have . On the other hand, using Lemma II.1, we know . Hence, the optimal trajectory is

which implies that

which shows that the center of the agents tracks the team cost function minimizer.
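To make the tracking conclusion concrete, consider the hypothetical quadratic local costs used earlier as an illustration, $f_i(x,t) = \|x - c_i(t)\|_2^2$ (these are not the paper's cost functions). Setting the gradient of the team cost to zero gives

$\nabla F(x,t) = \sum_{i=1}^{N} 2\,(x - c_i(t)) = 0 \quad \Longrightarrow \quad x^*(t) = \frac{1}{N}\sum_{i=1}^{N} c_i(t),$

so in this case the theorem states that the agents' center $\frac{1}{N}\sum_{i=1}^{N} x_i(t)$ tracks the moving average of the individual target trajectories $c_i(t)$.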

IV Time-Varying Convex Optimization with Swarm Tracking Behavior for Double-Integrator Dynamics

In this section, we study distributed convex optimization of time-varying cost functions with swarm tracking behavior for double-integrator dynamics. Suppose that the agents satisfy the continuous-time double-integrator dynamics

$\dot{x}_i(t) = v_i(t), \qquad \dot{v}_i(t) = u_i(t), \qquad i = 1, \ldots, N,$   (10)

where $x_i(t) \in \mathbb{R}^m$ and $v_i(t) \in \mathbb{R}^m$ are, respectively, the position and velocity of agent $i$, and $u_i(t) \in \mathbb{R}^m$ is the control input of agent $i$.

We now propose an algorithm in which each agent has access to only its own position and the relative positions and velocities between itself and its neighbors. The algorithm is given by

where

The potential function $V_{ij}$ is introduced in Definition III.1, and the remaining coefficients in (11) are positive.
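Since (11) itself also does not survive the extraction, the fragment below only sketches, under the same assumptions as the single-integrator sketch, how such a control could combine potential-function gradients over relative positions, damping over relative velocities, and a local tracking term; the gains and the local term are illustrative.

    import numpy as np

    def control_double_integrator(i, x, v, neighbors, dV, local_term, gamma=1.0, beta=1.0):
        """Structural sketch (assumed form) of a swarm-tracking input for agent i."""
        # Relative-position shaping through the potential function V_ij
        swarm = sum(dV(x[i], x[j]) for j in neighbors)
        # Relative-velocity damping toward velocity consensus with the neighbors
        damping = gamma * sum(v[i] - v[j] for j in neighbors)
        # Nonsmooth term, as in the single-integrator sketch
        nonsmooth = beta * np.sign(swarm + damping)
        # Local term built from agent i's own time-varying cost (assumed available)
        return -swarm - damping - nonsmooth + local_term(x[i], v[i])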

Theorem IV.1

Suppose that the graph $\mathcal{G}$ is connected, Assumption II.2 holds for each agent's cost function, and the gradient of the cost functions can be written as . If , then for system (10) with algorithm (11), the center of the agents tracks the optimal trajectory and the agents' velocities track the optimal velocity while maintaining connectivity and avoiding inter-agent collision.

Proof: The closed-loop system (10) with control input (11) can be recast into a compact form as

where the stacked vectors collect, respectively, the positions and velocities of all agents. It is preferred to rewrite Eq. (12) in terms of the consensus error. Therefore, we define the position and velocity consensus errors by subtracting the team average from each stacked vector. Note that the associated averaging matrix has one simple zero eigenvalue with $\mathbf{1}_N$ as its right eigenvector and has $1$ as its other eigenvalue with multiplicity $N-1$. Then it is easy to see that the velocity consensus error is zero if and only if all agents' velocities are equal. Thus the agents' velocities reach consensus if and only if the velocity consensus errors converge to zero asymptotically. Rewriting (12), we have

where the components of the consensus errors corresponding to agent $i$ are denoted with subscript $i$. Define the positive definite Lyapunov function candidate

The time derivative of along (13) can be obtained as

where in the first equality, Lemma 3.1 in [3] has been used. We also have

where in the last inequality Lemma II.2 has been used. Now, it is easy to see that if then is negative semi-definite. Therefore, having and , we can conclude that . By integrating both sides of (14), we can see that . Now, applying Barbalat's lemma [20], we obtain that asymptotically converges to zero, which means that the agents' velocities reach consensus as $t \to \infty$. On the other hand, since is bounded, based on Definition III.1, it is guaranteed that there will be no inter-agent collision and connectivity is maintained.
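A standard way to realize the consensus-error decomposition used at the start of this proof, consistent with the eigenvalue structure described there (the paper's exact symbols do not survive the extraction, so the names below are assumptions), is the projection

$M \triangleq I_N - \frac{1}{N}\mathbf{1}_N\mathbf{1}_N^T, \qquad \tilde{x} \triangleq (M \otimes I_m)\,x, \qquad \tilde{v} \triangleq (M \otimes I_m)\,v,$

where $M$ has a simple zero eigenvalue with eigenvector $\mathbf{1}_N$ and the eigenvalue $1$ with multiplicity $N-1$, so $\tilde{v} = \mathbf{0}$ exactly when all $v_i$ are equal.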

In what follows, we focus on finding the relation between the optimal trajectory of the team cost function and the agents' states. Define the Lyapunov function candidate

where . The time derivative of along the system defined in (10) and (11) can be obtained as

where in the second equality we have used the fact that, by summing both sides of the closed-loop system (10) with controller (11) over all agents and using (7), we obtain that . Because , we know . Hence, we have

Therefore, for . Having and , we can conclude that . By integrating both sides of , we can see that . Now, applying Barbalat's lemma, we obtain that will asymptotically converge to zero. Now, under the assumption that , we have and .

On the other hand, using Lemma II.1, we know . Hence, the optimal trajectory is

Now using (18), we can conclude that

In particular, we have shown that the average of the agents' states, both positions and velocities, tracks the optimal trajectory. We have also shown that the agents' velocities reach consensus asymptotically. Thus each agent's velocity approaches the optimal velocity as $t \to \infty$. This completes the proof.

V Simulation and Discussion

In this section, we present two simulations to illustrate the theoretical results of the previous sections. Consider a team of six agents. We assume a communication/sensing radius $R$, which means that two agents are neighbors if the distance between them is less than $R$. The agents' goal is to have their center minimize the team cost function $\sum_{i=1}^{6} f_i(x,t)$, where $x_i$ is the coordinate of agent $i$ in the plane.

In our first example, we apply algorithm (5) for the single-integrator dynamics (1). The local cost function for agent $i$ is chosen as

For the local cost functions (20), Assumption II.2 and the conditions on the agents' cost functions in Remark III.4 hold, and the gradient of the cost functions can be rewritten in the required form. To guarantee collision avoidance and connectivity maintenance, the partial derivatives of the potential function are chosen as in Eqs. (36) and (37) of [3]. With the coefficients in algorithm (5) chosen accordingly, the results are shown in Fig. 1.
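Because the specific cost functions (20), the radius, and the gain values do not survive the extraction, the following self-contained Python script only sketches a simulation of the same flavor: six single-integrator agents, an attractive/repulsive potential between neighbors, and hypothetical quadratic time-varying local costs whose minimizers move along a circle around a drifting team optimum. All numerical values and the control structure are illustrative assumptions, not the paper's setup (in particular, the nonsmooth term of (5) is omitted here).

    import numpy as np

    N, dim, R, d_des = 6, 2, 2.5, 1.0   # team size, plane dimension, sensing radius, desired spacing (assumed)
    dt, T = 0.005, 20.0                 # Euler step and horizon (assumed)

    def c(i, t):
        """Hypothetical moving minimizer of agent i's local cost f_i(x,t) = ||x - c_i(t)||^2."""
        ang = 2 * np.pi * i / N + 0.2 * t
        return np.array([0.05 * t + np.cos(ang), np.sin(0.1 * t) + np.sin(ang)])

    def c_dot(i, t):
        """Time derivative of c_i(t), used as a feedforward term."""
        ang = 2 * np.pi * i / N + 0.2 * t
        return np.array([0.05 - 0.2 * np.sin(ang), 0.1 * np.cos(0.1 * t) + 0.2 * np.cos(ang)])

    def dV(xi, xj):
        """Illustrative potential derivative (see the sketch after Definition III.1)."""
        diff = xi - xj
        dist = np.linalg.norm(diff) + 1e-6
        weight = (dist - d_des) / (dist**2 * (R - dist)**2 + 1e-6)
        return np.clip(weight, -20.0, 20.0) * diff / dist   # clipped for numerical robustness

    np.random.seed(0)
    x = np.random.uniform(-0.5, 0.5, (N, dim))   # initial positions, close enough to be connected
    for k in range(int(T / dt)):
        t = k * dt
        u = np.zeros_like(x)
        for i in range(N):
            neighbors = [j for j in range(N) if j != i and np.linalg.norm(x[i] - x[j]) < R]
            swarm = sum(dV(x[i], x[j]) for j in neighbors)
            local = -(x[i] - c(i, t)) + c_dot(i, t)   # gradient descent on f_i plus feedforward
            u[i] = -swarm + local
        x = x + dt * u

    center = x.mean(axis=0)
    x_star = np.mean([c(i, T) for i in range(N)], axis=0)   # minimizer of the quadratic team cost
    print("center of agents at t = T :", center)
    print("optimal trajectory x*(T)  :", x_star)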


Fig. 1: Trajectories of all agents along with the optimal trajectory using algorithm (5) for the local cost functions (20)

In our next illustration, the swarm control algorithm (11) is employed for the double-integrator dynamics (10) to minimize the team cost function, where the local cost functions are defined as

In this case, the parameters of control algorithm (11) are chosen appropriately. Fig. 2 shows that the center of the agents' positions tracks the optimal trajectory while the agents remain connected and avoid collisions.


Fig. 2: Trajectories of all agents using algorithm (11) for the local cost functions (21)

VI Conclusions

In this paper, a distributed convex optimization problem with swarm tracking behavior was studied for continuous-time multi-agent systems. The agents' task was to drive their center to minimize the sum of local time-varying cost functions through local interaction, while maintaining connectivity and avoiding inter-agent collision. Each local cost function is known to only an individual agent. Two cases were considered: single-integrator dynamics and double-integrator dynamics. In each case, a distributed algorithm was proposed where each agent relies only on its own position and the relative positions (and velocities in the double-integrator case) between itself and its neighbors. Using these algorithms, it was proved that in both cases the center of the agents tracks the optimal trajectory while connectivity is maintained and inter-agent collision is avoided.

References

  • [1] R. Olfati-Saber, "Flocking for multi-agent dynamic systems: algorithms and theory," Automatic Control, IEEE Transactions on, vol. 51, no. 3, pp. 401–420, March 2006.
  • [2] H. Su, X. Wang, and Z. Lin, "Flocking of multi-agents with a virtual leader," Automatic Control, IEEE Transactions on, vol. 54, no. 2, pp. 293–307, Feb 2009.
  • [3] Y. Cao and W. Ren, "Distributed coordinated tracking with reduced interaction via a variable structure approach," Automatic Control, IEEE Transactions on, vol. 57, no. 1, pp. 33–48, Jan 2012.
  • [4] B. Johansson, T. Keviczky, M. Johansson, and K. Johansson, "Subgradient methods and consensus algorithms for solving convex optimization problems," in Decision and Control, 2008. CDC 2008. 47th IEEE Conference on, Dec 2008, pp. 4185–4190.
  • [5] A. Nedic, A. Ozdaglar, and P. Parrilo, "Constrained consensus and optimization in multi-agent networks," Automatic Control, IEEE Transactions on, vol. 55, no. 4, pp. 922–938, April 2010.
  • [6] D. Yuan, S. Xu, and H. Zhao, "Distributed primal-dual subgradient method for multiagent optimization via consensus algorithms," Systems, Man, and Cybernetics, Part B: Cybernetics, IEEE Transactions on, vol. 41, no. 6, pp. 1715–1724, Dec 2011.
  • [7] J. Lu and C. Y. Tang, "Zero-gradient-sum algorithms for distributed convex optimization: The continuous-time case," in American Control Conference (ACC), 2011, June 2011, pp. 5474–5479.
  • [8] G. Shi, K. Johansson, and Y. Hong, "Reaching an optimal consensus: Dynamical systems that compute intersections of convex sets," Automatic Control, IEEE Transactions on, vol. 58, no. 3, pp. 610–622, March 2013.
  • [9] K. Kvaternik and L. Pavel, "A continuous-time decentralized optimization scheme with positivity constraints," in Decision and Control (CDC), 2012 IEEE 51st Annual Conference on, Dec 2012, pp. 6801–6807.
  • [10] J. Wang and N. Elia, "A control perspective for centralized and distributed convex optimization," in Decision and Control and European Control Conference (CDC-ECC), 2011 50th IEEE Conference on, Dec 2011, pp. 3800–3805.
  • [11] G. Droge, H. Kawashima, and M. Egerstedt, "Proportional-integral distributed optimization for networked systems," CoRR, vol. abs/1309.6613, 2013.
  • [12] B. Gharesifard and J. Cortes, "Distributed continuous-time convex optimization on weight-balanced digraphs," Automatic Control, IEEE Transactions on, vol. 59, no. 3, pp. 781–786, March 2014.
  • [13] S. Kia, J. Cortes, and S. Martinez, "Distributed convex optimization via continuous-time coordination algorithms with discrete-time communication," Automatica, to appear, 2014.
  • [14] P. Lin, W. Ren, Y. Song, and J. Farrell, "Distributed optimization with the consideration of adaptivity and finite-time convergence," in American Control Conference (ACC), 2014, June 2014, pp. 3177–3182.
  • [15] S. Rahili, W. Ren, and P. Lin, "Distributed convex optimization of time-varying cost functions for double-integrator systems using nonsmooth algorithms," in American Control Conference (ACC), July 2015.
  • [16] F. Chung, Spectral Graph Theory, ser. CBMS Regional Conference Series.   Conference Board of the Mathematical Sciences, no. 92, 1997.
  • [17] S. Boyd and L. Vandenberghe, Convex Optimization.   New York, NY, USA: Cambridge University Press, 2004.
  • [18] R. Olfati-Saber and R. Murray, "Consensus problems in networks of agents with switching topology and time-delays," Automatic Control, IEEE Transactions on, vol. 49, no. 9, pp. 1520–1533, Sept 2004.
  • [19] S. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishnan, Linear Matrix Inequalities in System and Control Theory, ser. Studies in Applied Mathematics.   Philadelphia, PA: SIAM, Jun. 1994, vol. 15.
  • [20] J. Slotine and W. Li, Applied Nonlinear Control.   Prentice Hall, 1991.
