
Seminar 2025-2

Organized by Glaydston Bento
------------------------------------------------------------------------------------------------------------------------------------------

The seminars will be held in the Lecture Room of IME/UFG. Everyone interested is very welcome to attend.

------------------------------------------------------------------------------------------------------------------------------------------

Date: August 21

Speaker: Claudemir Rodrigues Santiago

Title: On the Convergence of Proximal Gradient Methods with Explicit Linesearch for Composite Optimization Problems

Abstract: In this talk, we will present results based on the work of Bello Cruz, J. Y., and Nghia, T. T. A., On the convergence of the forward–backward splitting method with linesearches, Optimization Methods and Software, vol. 31, no. 6, 2016. DOI: 10.1080/10556788.2016.1214959. Specifically, we will discuss convergence results for one of the methods proposed in the paper, designed to solve composite optimization problems where both component functions $f$ and $g$ are convex. The method incorporates explicit linesearch procedures to remove the commonly imposed Lipschitz continuity assumption on the gradient of $f$, thereby ensuring weak convergence of the generated sequences to optimal solutions.
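
As a concrete illustration of the scheme discussed in the talk, the following Python sketch performs forward–backward steps with an explicit backtracking linesearch: the step size is shrunk until a gradient-continuity test along the step holds, so no Lipschitz constant for $\nabla f$ is needed in advance. The acceptance test shown and the toy lasso problem are illustrative choices, not necessarily the paper's exact procedure.

import numpy as np

def prox_l1(z, t):
    # Proximal map of t*||.||_1 (soft-thresholding); stands in for the prox of g.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def fb_step(x, grad_f, prox_g, t, theta=0.5, beta=0.5):
    # One forward-backward step: backtrack on t until the trial point passes
    # t * ||grad_f(x_plus) - grad_f(x)|| <= theta * ||x_plus - x||.
    gx = grad_f(x)
    while True:
        x_plus = prox_g(x - t * gx, t)
        step = np.linalg.norm(x_plus - x)
        if step == 0.0:            # x is a fixed point, hence a solution
            return x
        if t * np.linalg.norm(grad_f(x_plus) - gx) <= theta * step:
            return x_plus
        t *= beta                  # explicit backtracking

# Toy composite problem: f(x) = 0.5*||Ax - b||^2, g(x) = lam*||x||_1.
rng = np.random.default_rng(0)
A, b, lam = rng.standard_normal((20, 50)), rng.standard_normal(20), 0.1
grad_f = lambda x: A.T @ (A @ x - b)
prox_g = lambda z, t: prox_l1(z, lam * t)

x = np.zeros(50)
for _ in range(300):
    x = fb_step(x, grad_f, prox_g, t=1.0)
print(0.5 * np.linalg.norm(A @ x - b)**2 + lam * np.abs(x).sum())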

------------------------------------------------------------------------------------------------------------------------------------------

Date: August 28

Speaker: Jurandir Lopes

Title: A Refined Proximal Algorithm for Nonconvex Multiobjective Optimization in Hilbert Spaces

Abstract: This talk is devoted to general nonconvex problems of multiobjective optimization in Hilbert spaces. Based on Mordukhovich's limiting subgradients, we define a new notion of Pareto critical points for such problems, establish necessary optimality conditions for them, and then employ these conditions to develop a refined version of the vectorial proximal point algorithm, together with a detailed convergence analysis. The obtained results largely extend those initiated by Bonnel, Iusem and Svaiter [SIAM J Optim, 15 (2005), pp. 953–970] for convex vector optimization problems, specifically in the case where the codomain is an m-dimensional space, and by Bento et al. [SIAM J Optim, 28 (2018), pp. 1104–1120] for nonconvex finite-dimensional problems in terms of Clarke's generalized gradients.
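
To convey the shape of the iteration, here is a Python sketch of a vectorial proximal point scheme in $\mathbb{R}^2$. The talk's algorithm works in Hilbert spaces with limiting subgradients; the sketch instead replaces the vector subproblem by a fixed positive-weight scalarization, one implementable surrogate. The weights, objectives, and inner solver below are all illustrative assumptions.

import numpy as np
from scipy.optimize import minimize

def proximal_point_mo(F_list, w, x0, alpha=1.0, iters=50):
    # x_{k+1} = argmin_x  sum_i w_i F_i(x) + (alpha/2) ||x - x_k||^2
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        obj = lambda y, xk=x: sum(wi * Fi(y) for wi, Fi in zip(w, F_list)) \
                              + 0.5 * alpha * np.dot(y - xk, y - xk)
        x = minimize(obj, x, method="BFGS").x
    return x

# Two nonconvex toy objectives on R^2.
F1 = lambda x: (x[0] - 1.0)**2 + 0.1 * np.sin(3 * x[1])
F2 = lambda x: (x[1] + 1.0)**2 + 0.1 * np.sin(3 * x[0])
print(proximal_point_mo([F1, F2], w=[0.5, 0.5], x0=[2.0, 2.0]))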

------------------------------------------------------------------------------------------------------------------------------------------

Date: September 04

Speaker: Orizon Ferreira

Title: On the Frank–Wolfe method for DC optimization with piecewise star-convex objectives

Abstract: We present a projection-free Frank–Wolfe method for difference-of-convex optimization with piecewise-star-convex objectives. The algorithm calls a linear minimization oracle, uses adaptive backtracking, preserves feasibility, and provides a standard Frank–Wolfe dual-gap certificate. Under mild assumptions, it matches the iteration complexity of the convex setting.
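
The Python sketch below shows the three ingredients named above on a toy problem over the probability simplex: a linear minimization oracle (LMO), Armijo-style adaptive backtracking driven by the dual gap, and the standard dual-gap certificate $\langle \nabla f(x), x - s \rangle$. The convex quadratic objective is a stand-in; the DC, piecewise star-convex setting of the talk requires the refinements discussed there.

import numpy as np

def lmo_simplex(grad):
    # LMO over the simplex: the vertex (coordinate) with the smallest gradient entry.
    s = np.zeros_like(grad)
    s[np.argmin(grad)] = 1.0
    return s

def frank_wolfe(f, grad_f, x, iters=100, tol=1e-8):
    gap = np.inf
    for _ in range(iters):
        g = grad_f(x)
        s = lmo_simplex(g)
        d = s - x
        gap = -g @ d                      # Frank-Wolfe dual-gap certificate
        if gap <= tol:
            break
        gamma, fx = 1.0, f(x)
        while gamma > 1e-12 and f(x + gamma * d) > fx - 0.5 * gamma * gap:
            gamma *= 0.5                  # adaptive backtracking
        x = x + gamma * d                 # convex combination: stays feasible
    return x, gap

A = np.diag([1.0, 4.0, 9.0])
f = lambda x: 0.5 * x @ A @ x
grad_f = lambda x: A @ x
x, gap = frank_wolfe(f, grad_f, np.ones(3) / 3)
print("x =", x, "gap =", gap)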

------------------------------------------------------------------------------------------------------------------------------------------

Date: September 25

Speaker: Max Leandro

Title: Sub-sampled Trust-Region Methods with Deterministic Worst-Case Complexity

Abstract: In this talk, we discuss and analyze sub-sampled trust-region methods for solving finite-sum optimization problems. These methods employ subsampling strategies to approximate the gradient and Hessian of the objective function, significantly reducing the overall computational cost. We propose a novel adaptive procedure for deterministically adjusting the sample size used for gradient (or gradient and Hessian) approximations. Furthermore, we establish worst-case iteration complexity bounds for obtaining approximate stationary points. More specifically, for given $\varepsilon_g, \varepsilon_H \in (0,1)$, it is shown that an $\varepsilon_g$-approximate first-order stationary point is reached in at most $\mathcal{O}(\varepsilon_g^{-2})$ iterations, whereas an $(\varepsilon_g,\varepsilon_H)$-approximate second-order stationary point is reached in at most $\mathcal{O}(\max\{\varepsilon_g^{-2}\varepsilon_H^{-1},\varepsilon_H^{-3}\})$ iterations. Finally, numerical experiments illustrate the effectiveness of our new subsampling technique.
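
As a rough illustration of the structure of such methods (not the paper's algorithm), the Python sketch below runs a trust-region loop on a finite sum using deterministic cyclic mini-batches for the gradient, a Cauchy step for the subproblem, and a toy rule that enlarges the sample when the model disagrees with the sampled objective. The paper's adaptive sampling rule and the tests behind its complexity bounds are more refined than this.

import numpy as np

def subsampled_tr(f_i, grad_i, x, N, iters=60, radius=1.0, m=8):
    for k in range(iters):
        idx = [(k * m + j) % N for j in range(m)]    # deterministic cyclic sample
        g = sum(grad_i(i, x) for i in idx) / m       # sub-sampled gradient
        ng = np.linalg.norm(g)
        if ng < 1e-8:
            break
        s = -(min(radius, ng) / ng) * g              # Cauchy step, ||s|| <= radius
        pred = -(g @ s)                              # predicted decrease (linear model)
        act = sum(f_i(i, x) - f_i(i, x + s) for i in idx) / m
        rho = act / pred                             # agreement ratio
        if rho >= 0.1:                               # accept step, expand radius
            x = x + s
            radius = min(2.0 * radius, 10.0)
        else:                                        # reject: shrink radius, enlarge sample
            radius *= 0.5
            m = min(2 * m, N)
    return x

# Toy finite sum: f_i(x) = 0.5*||x - a_i||^2, whose minimizer is mean(a_i).
rng = np.random.default_rng(1)
a = rng.standard_normal((100, 5))
f_i = lambda i, x: 0.5 * np.dot(x - a[i], x - a[i])
grad_i = lambda i, x: x - a[i]
print(subsampled_tr(f_i, grad_i, np.zeros(5), N=100))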

------------------------------------------------------------------------------------------------------------------------------------------

Date: October 02

Speaker: Maurício Louzeiro

Title: 

Abstract: 

------------------------------------------------------------------------------------------------------------------------------------------

Date: October 16

Speaker: Jurandir Lopes/Glaydston Bento

Title: 

Abstract:  

------------------------------------------------------------------------------------------------------------------------------------------

Date: October 23

Speaker: Alejandra Muñoz González

Title: 

Abstract:

------------------------------------------------------------------------------------------------------------------------------------------

Date: October 30

Speaker: Layane Rodrigues

Title: 

Abstract: 

------------------------------------------------------------------------------------------------------------------------------------------

Date: November 13

Speaker: Jose Roberto Ribeiro Junior

Title: 

Abstract: 

------------------------------------------------------------------------------------------------------------------------------------------

Date: November 27

Speaker: Vilmar Gehlen Filho

Title: 

Abstract: 

------------------------------------------------------------------------------------------------------------------------------------------

Date: December 04

Speaker: Iago Victor Pires de Souza Nunes

Title: 

Abstract: