Control and Signal Processing Lab Seminar - Acceleration of Distributed Optimisation Methods
Richard Newton Room, Level 5
Electrical and Electronic Engineering Building, Building Number 193
T: (03) 9035 8028
In this talk we consider settings in which an optimisation problem is to be solved in a distributed fashion. In particular, we address the problem of choosing the algorithm parameters so as to achieve the best convergence properties. Initially, we explore the application of multi-step methods to networked optimisation and develop techniques to formulate accelerated primal and dual algorithms for network-constrained optimisation of strongly convex objective functions with Lipschitz-continuous gradients. Given the condition number of the Hessian of the cost function and the structure of the underlying network, we determine the algorithm parameters that guarantee the best convergence factor and characterise scenarios in which significant speedups can be obtained over standard gradient descent. Moreover, we study the effects of uncertainty in the problem parameters on the performance of both gradient and accelerated iterations, and conclude that in most cases our proposed method outperforms gradient descent. We provide three case studies in which accelerated methods are applicable and compare their performance with other techniques in the literature.

Later, we shift our focus to the alternating direction method of multipliers (ADMM). While applications of this technique have received a lot of attention, there is a lack of theoretical support for how to set its algorithm parameters, and the step-size is typically tuned experimentally. We consider three different formulations of the algorithm and present explicit expressions for the step-size that improves the convergence properties of the algorithm.
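The kind of acceleration discussed in the first part of the talk can be illustrated, in its simplest centralised form, by Polyak's heavy-ball (multi-step) iteration on a strongly convex quadratic. The sketch below is not the speaker's networked algorithm: it is a minimal example, with a randomly generated Hessian assumed for illustration, of how knowing the smallest and largest curvatures (mu and L, hence the condition number) lets one pick step-size and momentum parameters that outperform optimally tuned gradient descent.

```python
import numpy as np

# Hypothetical strongly convex quadratic f(x) = 0.5 * x^T A x,
# chosen only to illustrate parameter tuning from curvature bounds.
rng = np.random.default_rng(0)
Q = rng.standard_normal((20, 20))
A = Q.T @ Q + np.eye(20)                 # symmetric positive definite Hessian
mu, L = np.linalg.eigvalsh(A)[[0, -1]]   # strong-convexity and smoothness constants

# Optimally tuned step-sizes for quadratics:
alpha_gd = 2.0 / (mu + L)                                  # gradient descent
alpha_hb = 4.0 / (np.sqrt(L) + np.sqrt(mu)) ** 2           # heavy-ball step
beta_hb = ((np.sqrt(L) - np.sqrt(mu)) /
           (np.sqrt(L) + np.sqrt(mu))) ** 2                # momentum term

x_gd = np.ones(20)
x_hb = x_prev = np.ones(20)
for _ in range(200):
    # Gradient descent: x_{k+1} = x_k - alpha * grad f(x_k)
    x_gd = x_gd - alpha_gd * (A @ x_gd)
    # Heavy-ball: x_{k+1} = x_k - alpha * grad f(x_k) + beta * (x_k - x_{k-1})
    x_hb, x_prev = (x_hb - alpha_hb * (A @ x_hb)
                    + beta_hb * (x_hb - x_prev)), x_hb

# The minimiser is x* = 0, so the iterate norm is the distance to optimum.
print("gradient descent:", np.linalg.norm(x_gd))
print("heavy-ball:      ", np.linalg.norm(x_hb))
```

With these parameters the gradient method contracts roughly like (kappa - 1)/(kappa + 1) per step while heavy-ball achieves about (sqrt(kappa) - 1)/(sqrt(kappa) + 1), where kappa = L/mu, which is the source of the speedup for ill-conditioned problems mentioned in the abstract.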