We consider the problem of finding a saddle point of the convex-concave objective min_x max_y f(x) + ⟨Ax, y⟩ − g*(y), where f is a convex function with locally Lipschitz gradient and g is convex and possibly non-smooth. We propose an adaptive version of the Condat–Vũ algorithm, which alternates between primal gradient steps and dual proximal steps. The method achieves stepsize adaptivity through a simple rule involving ‖A‖ and the norms of recently computed gradients of f. Under standard assumptions, we prove an O(1/k) ergodic convergence rate. Furthermore, when f is also locally strongly convex and A has full row rank, we show that our method converges with a linear rate. Numerical experiments illustrate the practical performance of the algorithm.
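The primal-dual alternation described above (a gradient step on f, then a proximal step on g* at an extrapolated primal point) can be sketched as follows. This is a minimal fixed-stepsize sketch, not the paper's adaptive rule; the toy problem, step-size choice, and all function names are illustrative assumptions:

```python
import numpy as np

def condat_vu(grad_f, prox_gstar, A, x0, y0, tau, sigma, iters):
    """Fixed-stepsize Condat-Vu sketch: primal gradient step on f,
    then dual proximal step on g* with primal extrapolation."""
    x, y = x0.copy(), y0.copy()
    for _ in range(iters):
        x_new = x - tau * (grad_f(x) + A.T @ y)            # primal gradient step
        y = prox_gstar(y + sigma * (A @ (2 * x_new - x)))  # dual proximal step
        x = x_new
    return x, y

# Toy instance: min_x 0.5*||x - b||^2 + lam*||A x||_1 with A = I,
# so g = lam*|.|_1 and the prox of g* is clipping to [-lam, lam].
b = np.array([2.0, -0.3, 1.5])
lam = 0.5
A = np.eye(3)
L = 1.0        # Lipschitz constant of grad f here
norm_A = 1.0   # spectral norm of A = I
sigma = 1.0
tau = 0.9 / (L / 2 + sigma * norm_A**2)  # a standard stepsize condition

x_star, _ = condat_vu(
    grad_f=lambda x: x - b,
    prox_gstar=lambda y: np.clip(y, -lam, lam),
    A=A, x0=np.zeros(3), y0=np.zeros(3),
    tau=tau, sigma=sigma, iters=3000,
)
# With A = I the closed-form solution is soft-thresholding of b.
```

With A equal to the identity, the iterates converge to the soft-thresholded vector [1.5, 0.0, 1.0], which illustrates the linear-rate regime mentioned above (f strongly convex, A full row rank).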
Funding Agencies: European Research Council (ERC) under the European Union [725594]; Wallenberg AI, Autonomous Systems and Software Program (WASP), Knut and Alice Wallenberg Foundation [305286]; Department of the Navy, Office of Naval Research (ONR) [N62909-17-1-2111]; Army Research Office [W911NF-19-1-0404]; Hasler Foundation Program: Cyber Human Systems [16066]; Swiss National Science Foundation (SNSF) [200021_178865/1]