Abstract

In this paper, we focus on the relaxed proximal point algorithm (RPPA) for solving convex (possibly nonsmooth) optimization problems. We conduct a comprehensive study on three types of relaxation schedules: (i) the constant schedule with relaxation parameter γ ∈ (0, 2), (ii) a dynamic schedule put forward by Teboulle and Vaisbourd, and (iii) the silver step-size schedule proposed by Altschuler and Parrilo. The latter two schedules were initially investigated for the gradient descent (GD) method and are extended to the RPPA in this paper. For type (i), we establish tight nonergodic O(1/N) convergence rate results measured by function value residual and subgradient norm, where N denotes the iteration counter. For type (ii), we establish a convergence rate that is tight and approximately √2 times better than that of the constant schedule of type (i). For type (iii), in addition to the original silver step-size schedule, we propose two new modified silver step-size schedules, and for all three silver step-size schedules we establish accelerated O(1/N^1.2716) convergence rate results with respect to three different performance metrics. Furthermore, our research affirms a previous conjecture by Luner and Grimmer on the GD method with the original silver step-size schedule. Funding: B. Wang, J. Yang, and D. Zhou were supported by the National Natural Science Foundation of China [Grants 12431011 and 12371301] and the Key Laboratory of Numerical Simulation of Large Scale Complex Systems of the Ministry of Education of China. S. Ma was supported in part by the National Science Foundation [Grants CCF-2311275 and ECCS-2326591].
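The constant-relaxation RPPA iteration described above can be sketched as follows. This is a minimal illustration, not the authors' code: the test problem f(x) = |x| (whose proximal operator is soft-thresholding), the step size lam, and the function names are assumptions chosen for demonstration.

```python
def soft_threshold(x, lam):
    """Proximal operator of lam * |x| (illustrative test problem)."""
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

def rppa(x0, prox, lam=1.0, gamma=1.5, num_iters=50):
    """Relaxed proximal point algorithm with a constant relaxation
    schedule gamma in (0, 2):

        x_{k+1} = (1 - gamma) * x_k + gamma * prox_{lam * f}(x_k)

    With gamma = 1 this reduces to the classical (unrelaxed) PPA."""
    x = x0
    for _ in range(num_iters):
        y = prox(x, lam)                      # proximal step
        x = (1.0 - gamma) * x + gamma * y     # relaxation step
    return x

# Minimizing f(x) = |x| from x0 = 10; the iterates converge to 0.
x_star = rppa(10.0, soft_threshold)
```

The dynamic and silver schedules studied in the paper replace the fixed gamma above with an iteration-dependent sequence gamma_k, which is what yields the improved rates.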

Publication Info

Year: 2025
Type: article
Citations: 0
Access: Closed

Cite This

Bofan Wang, Shiqian Ma, Junfeng Yang et al. (2025). Relaxed Proximal Point Algorithm: Tight Complexity Bounds and Acceleration Without Momentum. INFORMS Journal on Optimization. https://doi.org/10.1287/ijoo.2025.0075

Identifiers

DOI
10.1287/ijoo.2025.0075