This paper provides a unified stochastic operator framework to analyze the convergence of iterative optimization algorithms for both static problems and online optimization and learning. In the online context, the operator changes at each iteration to reflect changes in the underlying optimization problem. Convergence results in mean and in high probability are presented when the errors affecting the operator follow a sub-Weibull distribution and when updates $T_i x$ are performed based on a Bernoulli random variable. The results do not assume vanishing errors or vanishing parameters of the operator, as is typical in the literature; that case is subsumed by the proposed framework, and links with existing results on almost sure convergence are provided. In particular, results are derived for the cases where $T$ is contractive and averaged, in terms of convergence to the unique fixed point and of the cumulative fixed-point residual, respectively. These results are based on an online inexact Banach-Picard iteration, and similar results are provided for the averaged case. Bounds on the convergence further depend on the evolution of the fixed points (i.e., the optimal solutions of the time-varying optimization problems).
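To make the setting concrete, below is a minimal Python sketch (not the authors' implementation) of an inexact online Banach-Picard iteration: at each step a Bernoulli coin decides whether the operator is applied, the applied operator is perturbed by an additive error with heavy-tailed (Weibull-generated) magnitude standing in for the sub-Weibull assumption, and the operator itself drifts over time so its fixed point moves. The specific operator, step size, drift, and noise scale are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical time-varying problem: minimize 0.5*||x - c_k||^2, whose
# gradient-step operator T_k(x) = x - gamma*(x - c_k) is a contraction
# for gamma in (0, 2).  The target c_k drifts, so the fixed point
# (the current optimizer) moves over time, as in the online setting.
gamma = 0.5          # step size; T_k is a (1 - gamma)-contraction
p_update = 0.8       # Bernoulli probability of actually applying T_k
weibull_shape = 0.7  # shape < 1 gives heavier-than-exponential tails
noise_scale = 1e-2   # illustrative error magnitude

def T(x, c):
    """Contractive operator: one gradient step toward the current target c."""
    return x - gamma * (x - c)

x = np.zeros(2)
for k in range(200):
    c_k = np.array([np.sin(0.01 * k), np.cos(0.01 * k)])  # drifting fixed point
    if rng.random() < p_update:                            # Bernoulli-triggered update
        # additive error with Weibull-distributed magnitude and random sign
        e_k = noise_scale * rng.weibull(weibull_shape, size=2) \
              * rng.choice([-1.0, 1.0], size=2)
        x = T(x, c_k) + e_k                                 # inexact Banach-Picard step
    # fixed-point residual: distance from the current optimizer c_k
    residual = np.linalg.norm(x - c_k)
```

Under this kind of assumption, the iterate tracks the moving fixed point up to an error floor governed by the noise scale, the update probability, and how fast the fixed point drifts, which mirrors the flavor of the bounds described in the abstract.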

Author(s) : Nicola Bastianello, Liam Madden, Ruggero Carli, Emiliano Dall'Anese

Links : PDF - Abstract

Code :

Keywords : results - operator - optimization - convergence - online
