ProfOptimization2016

Seminar 2025-1

Organized by Leandro Prudente
------------------------------------------------------------------------------------------------------------------------------------------

The seminars will be held in the Lecture Room of IME/UFG. All those interested are very welcome to attend.

------------------------------------------------------------------------------------------------------------------------------------------

Date: March 20

Speaker: Layane Rodrigues de Souza Queiroz

Title: Two heuristics for the two-dimensional knapsack problem

Abstract: In this work, two heuristics are proposed for the two-dimensional knapsack problem. The first is based on the biased random-key genetic algorithm, while the second implements a variable neighborhood search. The heuristics conduct the optimization process in different ways: the first over multiple trajectories (that is, a population of individuals) and the second over a single trajectory. In the implementation of the first heuristic, it is proposed that the biased probability of inheriting information from the best individuals be part of the individual itself, rather than a parameter held constant throughout the optimization. In both heuristics, a solution to the problem is encoded as a vector of integers. Instead of the vector containing only the numbers identifying the items, it is proposed to include additional numbers to guide the placement of the items. In this way, an item is not necessarily placed at the first valid position in the container, which helps escape possible local optima. Items are placed in the container following the sequence obtained from the vector representing the solution, using three placement rules: the bottom-left rule and two others proposed on its basis. The two heuristics are compared with the general heuristic of Mundim et al. (2018), until then the state of the art for the knapsack problem, obtaining equal or better results on all instances. On average, the first heuristic obtains solutions whose container occupancy is almost 7% higher, while the second improves container occupancy by almost 6% on average, compared with Mundim et al. (2018). For one instance, the occupancy can be improved by almost 18%.
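As a toy illustration of the random-key encoding idea mentioned above (a sketch only, not the heuristics of the talk): in biased random-key genetic algorithms, a solution is a vector of keys that is decoded into an item ordering. The item names and key values below are invented for the example.

```python
def decode(keys, items):
    # Sort items by their random keys; the resulting order is the
    # sequence in which items are tried for placement in the container.
    order = sorted(range(len(items)), key=lambda i: keys[i])
    return [items[i] for i in order]

items = ["A", "B", "C", "D"]
keys = [0.71, 0.12, 0.55, 0.30]     # one random key per item
print(decode(keys, items))          # items sorted by ascending key
```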

------------------------------------------------------------------------------------------------------------------------------------------

Date: March 27

Speaker: Max Leandro Nobre Gonçalves

Title: A fully subsampled cubic regularization method for finite-sum minimization

Abstract: In this talk, we propose and analyse a fully subsampled Cubic Regularization Method (F-CRM) for solving finite-sum optimization problems. The proposed method uses general random subsampling techniques to approximate the functions, gradients, and Hessians in order to reduce the overall computational cost of the CRM. Under suitable hypotheses, first- and second-order iteration-complexity bounds and global convergence analyses are presented.
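As a minimal illustration of the subsampling idea in a finite-sum setting (a sketch only, not the F-CRM method): the full gradient (1/n) Σᵢ ∇fᵢ(x) is replaced by an average over a random subset of the components. The quadratic components fᵢ(x) = (x − aᵢ)² below are an invented example.

```python
import random

def subsampled_gradient(x, a, batch, rng):
    # Average the component gradients f_i'(x) = 2*(x - a_i) over a
    # random sample S of the indices instead of over all n of them.
    S = rng.sample(range(len(a)), batch)
    return sum(2.0 * (x - a[i]) for i in S) / batch

a = [float(i) for i in range(100)]                  # data defining the f_i
full = sum(2.0 * (0.0 - ai) for ai in a) / len(a)   # exact gradient at x = 0
est = subsampled_gradient(0.0, a, 30, random.Random(0))
print(full, est)                                    # est approximates full
```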

------------------------------------------------------------------------------------------------------------------------------------------

Date: April 3

Speaker: Tiago Sousa Mota

Title: Proximal Alternating Minimization for Nonconvex Problems: An Approach Based on the Kurdyka-Łojasiewicz Inequality

Abstract: In this talk, we will present the convergence properties of an alternating proximal minimization algorithm for non-convex structured functions of the type L(x, y) = f(x) + Q(x, y) + g(y), where f and g are proper lower semicontinuous functions defined on Euclidean spaces and Q is a smooth function coupling the variables x and y. The results are in a non-convex setting, assuming only that the function L satisfies the Kurdyka-Łojasiewicz inequality. The main result can be stated as follows: If L has the Kurdyka-Łojasiewicz property, then every bounded sequence generated by the algorithm converges to a critical point of L. This result is completed by the study of the convergence rate of the algorithm, which depends on the geometric properties of the function L around its critical points.
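As a scalar toy instance of such a scheme (an illustrative sketch, not the talk's analysis): take f(x) = λ|x|, Q(x, y) = ½(x − y)², and g(y) = ½(y − b)², and minimize exactly in each variable with a quadratic proximal term weighted by θ. All constants below are invented.

```python
def soft(z, t):
    # proximal operator of t*|.| (soft-thresholding)
    return (abs(z) - t) * (1.0 if z > 0 else -1.0) if abs(z) > t else 0.0

def alternating_step(xk, yk, b, lam=0.1, theta=1.0):
    # x-step: argmin_x lam*|x| + 0.5*(x - yk)^2 + (theta/2)*(x - xk)^2
    x = soft((yk + theta * xk) / (1 + theta), lam / (1 + theta))
    # y-step: argmin_y 0.5*(x - y)^2 + 0.5*(y - b)^2 + (theta/2)*(y - yk)^2
    y = (x + b + theta * yk) / (2 + theta)
    return x, y

x, y = 0.0, 0.0
for _ in range(200):
    x, y = alternating_step(x, y, b=1.0)
print(x, y)   # converges to the critical point (0.8, 0.9) for these data
```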

------------------------------------------------------------------------------------------------------------------------------------------

Date: April 10

Speaker: Kelvin Rodrigues Couto

Title: Global convergence of an augmented Lagrangian method from a Riemannian perspective

Abstract: Considering a standard nonlinear programming problem, it is possible to interpret a subset of equality constraints as an embedded Riemannian manifold. This presentation examines the distinctions between the Euclidean and Riemannian approaches to this class of problems. It is well-established that the linear independence constraint qualification is equivalent under both methodologies. However, when examining the recently proposed constant rank constraint qualifications, the Riemannian framework yields a weaker condition, as it requires that the gradients' rank remains constant only within the manifold itself. Conversely, the Euclidean method mandates constant rank properties throughout a full-dimensional neighborhood of the ambient space. Consequently, by applying a Riemannian augmented Lagrangian method to a standard nonlinear programming problem, standard global convergence to a Karush-Kuhn-Tucker point is achieved under a newly introduced weaker constant rank condition that considers only lower-dimensional neighborhoods. Thus, the Riemannian viewpoint demonstrates the potential to generate novel and stronger outcomes for classical optimization problems traditionally solved through Euclidean methods. Additionally, two distinct augmented Lagrangian algorithms are evaluated via an extensive computational analysis, revealing several classes of problems for which the Riemannian approach significantly outperforms the Euclidean method, producing solutions of superior quality.

------------------------------------------------------------------------------------------------------------------------------------------

Date: April 17

Speaker: Claudemir Rodrigues Santiago

Title: Convergence of Inexact Proximal Gradient Methods for Composite Optimization Problems

Abstract: In this talk, we present results based on the work of Huang et al., Inexact Proximal Gradient Methods for Non-Convex and Non-Smooth Optimization, Proceedings of the AAAI Conference on Artificial Intelligence, 32(1), 2018. We will discuss recent advances in the development and theoretical analysis of inexact proximal gradient methods applied to composite optimization problems, both convex and non-convex. These methods are widely used in settings involving non-smooth regularization, particularly in machine learning. We introduce three variants of inexact proximal gradient algorithms (including a basic version and an accelerated version inspired by Nesterov) and present a convergence analysis of the accelerated algorithm in the convex case.
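As a one-dimensional sketch of the basic (exact-prox) variant, using the standard soft-thresholding prox of the ℓ1 norm; inexact versions of the method would solve the proximal subproblem only approximately. The problem data below are invented.

```python
def prox_l1(z, t):
    # exact proximal operator of t*|.|; inexact methods would
    # approximate this step within a controlled error
    return max(abs(z) - t, 0.0) * (1.0 if z >= 0 else -1.0)

def prox_gradient(a, b, lam, steps=50):
    # minimize 0.5*(a*x - b)^2 + lam*|x| by proximal gradient
    x, L = 0.0, a * a        # L: Lipschitz constant of the smooth part
    for _ in range(steps):
        grad = a * (a * x - b)
        x = prox_l1(x - grad / L, lam / L)
    return x

print(prox_gradient(2.0, 3.0, 0.5))   # minimizer of 0.5*(2x-3)^2 + 0.5*|x|
```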

------------------------------------------------------------------------------------------------------------------------------------------

Date: April 24

Speaker: Leandro da Fonseca Prudente

Title: A Proximal Gradient method with an explicit line search for multiobjective optimization

Abstract: We present a proximal gradient method for solving convex multiobjective optimization problems, where each objective function is the sum of two convex functions, with one assumed to be continuously differentiable. The algorithm incorporates a backtracking line search procedure that requires solving only one proximal subproblem per iteration, and is exclusively applied to the differentiable part of the objective functions. Under mild assumptions, we show that the sequence generated by the method converges to a weakly Pareto optimal point of the problem. Additionally, we establish an iteration-complexity bound by showing that the method finds an ε-approximate weakly Pareto point in at most O(1/ε) iterations.

------------------------------------------------------------------------------------------------------------------------------------------

Date: May 15

Speaker: Mauricio Silva Louzeiro

Title: Busemann function on the set of symmetric positive definite matrices

Abstract: In this seminar, we will discuss the role of the Busemann function in optimization on manifolds. Specifically, we will present a new explicit expression for the Busemann function and its gradient in the case where the manifold is the set of symmetric positive definite matrices. We will also outline the main steps involved in deriving these expressions.
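For reference (this is the standard definition, not the new expression of the talk), the Busemann function associated with a unit-speed geodesic ray $\gamma$ is

```latex
b_\gamma(x) = \lim_{t \to \infty} \bigl( d(x, \gamma(t)) - t \bigr),
```

where $d$ denotes the Riemannian distance; the limit exists because, by the triangle inequality, the expression is nonincreasing in $t$ and bounded below by $-d(x, \gamma(0))$.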

------------------------------------------------------------------------------------------------------------------------------------------

Date: May 22

Speaker: Tiago Sousa Mota

Title: Convergence Analysis of Descent Optimization Algorithms under Polyak-Łojasiewicz-Kurdyka Conditions

Abstract: In this talk, we present a comprehensive convergence analysis of generic classes of descent algorithms in nonconvex optimization under various Polyak-Łojasiewicz-Kurdyka (PLK) conditions. In particular, we revisit and extend the results on convergence rates presented by Khanh, Mordukhovich, and Tran (J. Optim. Theory Appl., 2023), refining the understanding of the zero exponent in smooth KL functions and extending the discussion on the inconsistency between the lower exponent KL property and the continuity of Lipschitz gradients to more general settings. Among other contributions, we establish finite termination of generic algorithms under lower exponent PLK conditions and superlinear convergence when a modified relative error condition can be ensured. Furthermore, we derive new convergence rates for inexact reduced gradient methods and certain variants of the boosted algorithm in DC programming. Notably, we reveal that for a broad class of difference programs, the lower-exponent PLK conditions are inherently incompatible with Lipschitz continuity of the gradient of the DC function near a local minimizer. However, we demonstrate that this inconsistency may not hold if Lipschitz continuity is replaced by gradient continuity alone.
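As background (stated here in its standard exponent form, with generic symbols), the KL property with exponent $\theta$ at a critical point $\bar{x}$ of $f$ requires that, for all $x$ near $\bar{x}$ with $f(\bar{x}) < f(x) < f(\bar{x}) + \eta$,

```latex
\operatorname{dist}\bigl(0, \partial f(x)\bigr) \ge c \,\bigl(f(x) - f(\bar{x})\bigr)^{\theta},
\qquad c > 0, \ \theta \in [0, 1).
```

Since $f(x) - f(\bar{x})$ is small near $\bar{x}$, smaller exponents $\theta$ impose a stronger growth condition on the subgradients.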

------------------------------------------------------------------------------------------------------------------------------------------

Date: May 29

Speaker: Jefferson D. Gonçalves de Melo

Title: Improved convergence rates for the multiobjective Frank-Wolfe method

Abstract: In this talk, we analyze the convergence rates of the Frank-Wolfe method for solving convex constrained multiobjective optimization problems. We establish improved convergence rates under different assumptions on the objective function, the feasible set, and the localization of the limit point of the sequence generated by the method. In particular, linear convergence is obtained under some special assumptions.
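As a single-objective illustration of the Frank-Wolfe iteration (a sketch only; the multiobjective subproblem analyzed in the talk is more involved): minimize a quadratic over the probability simplex, where the linear minimization oracle simply picks the vertex with the smallest gradient coordinate. The data vector p and the classical step size rule are illustrative choices.

```python
def frank_wolfe_simplex(p, iters=500):
    # minimize f(x) = sum_i (x_i - p_i)^2 over the probability simplex
    n = len(p)
    x = [1.0 / n] * n
    for k in range(iters):
        grad = [2.0 * (x[i] - p[i]) for i in range(n)]
        j = min(range(n), key=lambda i: grad[i])   # linear minimization oracle
        gamma = 2.0 / (k + 2)                      # classical step size rule
        x = [(1 - gamma) * x[i] + (gamma if i == j else 0.0) for i in range(n)]
    return x

print(frank_wolfe_simplex([0.7, 0.2, 0.1]))   # approaches p, the minimizer
```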

------------------------------------------------------------------------------------------------------------------------------------------

Date: June 5

Speaker: Glaydston de Carvalho Bento

Title: Busemann Functions and Some Recent Advances in Optimization on Riemannian Manifolds

Abstract: In this lecture, I intend to present some recent advances in optimization on Riemannian manifolds. Taking into account the absence of non-constant affine functions in complete and connected $n$-dimensional Riemannian manifolds of negative Ricci curvature on some open set, we explore theoretical aspects related to the applicability of the proximal point method to equilibrium problems, as well as the Fenchel-type conjugate defined as the supremum of convex functions, both via Busemann functions. In particular, we discuss the existence of non-trivial affine functions on Riemannian manifolds.

------------------------------------------------------------------------------------------------------------------------------------------

Date: June 12

Speaker: Alejandra Muñoz González

Title: Método do Gradiente Projetado em variedades Hadamard

Abstract: In this seminar, we address the problem of minimizing differentiable functions over geodesically convex, closed subsets of Hadamard manifolds. To solve this problem, we propose the Riemannian Projected Gradient Method (MGPR), an adaptation of the projected gradient method to the Riemannian setting. We analyze the convergence properties of the method, extending classical results from constrained optimization in Euclidean space to Hadamard manifolds, and prove its convergence under suitable assumptions.
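In symbols (a generic form of such an update, stated with standard notation rather than the talk's exact scheme), each iteration steps along the negative Riemannian gradient via the exponential map and projects back onto the constraint set $C$:

```latex
x_{k+1} = P_C\Bigl( \exp_{x_k}\bigl( -t_k \, \operatorname{grad} f(x_k) \bigr) \Bigr),
```

where $P_C$ is the metric projection onto the closed geodesically convex set $C$ (single-valued on Hadamard manifolds), $\exp_{x_k}$ is the exponential map at $x_k$, and $t_k > 0$ is a step size. In Euclidean space this reduces to the classical step $x_{k+1} = P_C(x_k - t_k \nabla f(x_k))$.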

------------------------------------------------------------------------------------------------------------------------------------------

Date: June 26

Speaker: Jose Roberto Ribeiro Junior

Title: On FISTA methods with inexact relative rules

Abstract: In this talk, we discuss some recent versions of the FISTA method under different inexact relative rules.
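For orientation, a sketch of the classical (exact-prox) FISTA iteration on a one-dimensional lasso-type problem; the inexact variants of the talk would allow the proximal step to be computed approximately under a relative error rule. The problem data below are invented.

```python
def prox_l1(z, t):
    # proximal operator of t*|.| (soft-thresholding)
    return max(abs(z) - t, 0.0) * (1.0 if z >= 0 else -1.0)

def fista(a, b, lam, steps=100):
    # classical FISTA for min 0.5*(a*x - b)^2 + lam*|x|
    L = a * a                    # Lipschitz constant of the smooth part
    x, y, t = 0.0, 0.0, 1.0
    for _ in range(steps):
        x_new = prox_l1(y - a * (a * y - b) / L, lam / L)
        t_new = (1.0 + (1.0 + 4.0 * t * t) ** 0.5) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum step
        x, t = x_new, t_new
    return x

print(fista(1.0, 2.0, 0.5))   # minimizer of 0.5*(x-2)^2 + 0.5*|x|
```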