Seminar 2023-2


Organized by Jefferson G. Melo

The seminars this semester will be held in the Lecture Room of IME/UFG, unless otherwise stated. Everyone interested is very welcome to attend.

Date:  January 18

Speaker: Claudemir Rodrigues Santiago

Title: Convergence Rate of a Proximal Point Algorithm for Non-Convex Minimization Problems
Abstract: In this talk we analyze the convergence results of the paper by M. Fukushima and H. Mine, “A Generalized Proximal Point Algorithm for Certain Non-Convex Minimization Problems”. The method was proposed to minimize a non-convex objective function that can be written as the sum of a continuously differentiable function and a convex, not necessarily differentiable, function. We present the algorithm and some properties needed for the convergence-rate analysis. We prove that, under certain conditions, the sequence generated by the algorithm converges linearly to a solution of the problem. For the convergence of the algorithm, the sequence of proximal parameters {c_k} is only assumed to be bounded; for the rate analysis, however, we consider the special case in which {c_k} is constant. Finally, we discuss some possible modifications of the functions that compose the objective, as well as directions for future research.
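The linearized proximal step the abstract refers to can be sketched as follows. This is an illustrative Python sketch, not the authors' code: the smooth term f, the convex term λ‖·‖₁, and the constant parameter c below are assumed choices. For g = λ‖·‖₁ the proximal subproblem has the closed-form soft-thresholding solution.

```python
import numpy as np

def soft_threshold(v, t):
    # prox of t * ||.||_1: componentwise shrinkage toward zero
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def generalized_prox_point(grad_f, x0, c=1.0, lam=0.1, iters=200):
    """x_{k+1} = argmin_x <grad f(x_k), x> + lam*||x||_1 + (1/(2c))||x - x_k||^2,
    i.e. a proximal step on the linearized smooth term (constant c_k = c)."""
    x = x0.copy()
    for _ in range(iters):
        x = soft_threshold(x - c * grad_f(x), c * lam)
    return x

# illustrative smooth term (assumed, not from the paper): f(x) = 0.5*||A x - b||^2
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
grad_f = lambda x: A.T @ (A @ x - b)
x_star = generalized_prox_point(grad_f, np.zeros(2), c=0.05, lam=0.1)
```

With a constant parameter c small enough relative to the Lipschitz constant of grad f, the iteration is exactly the linearized proximal step the paper analyzes in the special case {c_k} constant.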
Date: January 11

Speaker: Paulo César

Title: Local Convergence of Quasi-Newton Methods under Metric Regularity
Abstract: The talk considers quasi-Newton methods for generalized equations in Euclidean spaces under metric regularity and provides a sufficient condition for q-linear convergence. Additionally, we demonstrate that the Broyden update satisfies this sufficient condition.
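The Broyden update mentioned above can be illustrated on a plain nonlinear system. This is a minimal sketch for F(x) = 0, not the generalized-equation setting of the talk; the test system and the initial Jacobian below are assumed choices.

```python
import numpy as np

def broyden(F, x0, B0, iters=50, tol=1e-10):
    """Broyden's quasi-Newton method for F(x) = 0.  B_k approximates the
    Jacobian; the rank-one update keeps the secant equation B_{k+1} s = y,
    with s = x_{k+1} - x_k and y = F(x_{k+1}) - F(x_k)."""
    x = np.asarray(x0, dtype=float)
    B = np.asarray(B0, dtype=float)
    Fx = F(x)
    for _ in range(iters):
        s = np.linalg.solve(B, -Fx)                 # quasi-Newton step
        x = x + s
        F_new = F(x)
        y = F_new - Fx
        B = B + np.outer(y - B @ s, s) / (s @ s)    # Broyden update
        Fx = F_new
        if np.linalg.norm(Fx) < tol:
            break
    return x

# illustrative smooth system (assumed): a circle intersected with a line
F = lambda x: np.array([x[0] ** 2 + x[1] ** 2 - 1.0, x[0] - x[1]])
J0 = np.array([[2.0, 1.0], [1.0, -1.0]])  # Jacobian at the starting point
root = broyden(F, np.array([1.0, 0.5]), J0)  # ≈ (1/√2, 1/√2)
```

Starting B_0 from the true Jacobian at x_0, the iterates converge q-superlinearly on this example, which is the kind of behavior the sufficient condition in the talk guarantees.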
Date: December 14

Speaker: Di Liu - IMPA

Title: A successive centralized circumcentered-reflection method for the convex feasibility problem
Abstract: In this talk, we present a successive centralization process for the circumcentered-reflection method with several control sequences for solving the convex feasibility problem in Euclidean space. Assuming that a standard error bound holds, we prove linear convergence of the method with the most violated constraint control sequence. Moreover, under additional smoothness assumptions on the target sets, we establish superlinear convergence. Numerical experiments confirm the efficiency of our method.
Date: December 07

Speaker: Claudemir Rodrigues Santiago

Title: A Proximal Point Algorithm for Non-Convex Minimization Problems
Abstract: In this talk we present the algorithm and some results from the paper by M. Fukushima and H. Mine, “A Generalized Proximal Point Algorithm for Certain Non-Convex Minimization Problems”. In this work the objective function is the sum of a continuously differentiable function and a convex function; this class of problems contains, as a special case, the minimization of a continuously differentiable function over a closed convex set. The method can be viewed as a generalization of the proximal point algorithm that handles the non-convexity of the objective by linearizing the differentiable term at each iteration.
Date:  November 30 

Speaker: Thiago Motta

Title: The steepest descent method for multicriteria optimization: towards an add and drop products theory of the multiproduct firm (Qualifying Exam)
Date:  November 23 

Speaker: Prof. Orizon P.  Ferreira

Title: On projection mappings and the gradient projection method on hyperbolic space forms

Abstract: In this talk we examine the gradient projection method as a solution approach for constrained optimization problems in $\kappa$-hyperbolic space forms, particularly for potentially non-convex objective functions. We consider both constant and backtracking step sizes in our analysis. Our studies are based on the hyperboloid model, commonly referred to as the Lorentz model. In our investigation, we present several innovative properties of the intrinsic $\kappa$-projection onto convex sets of $\kappa$-hyperbolic space forms. These properties are crucial for analyzing the method and also hold independent significance. We discuss the relationship between the intrinsic $\kappa$-projection and the Euclidean orthogonal projection as well as the Lorentz projection. Moreover, we provide formulas for the intrinsic $\kappa$-projection into specific convex sets using the Euclidean orthogonal projection and the Lorentz projection. Regarding the convergence results of the gradient projection method, we establish two main findings. Firstly, we demonstrate that every accumulation point of the sequence generated by the method with backtracking step sizes is a stationary point for the given problem. Secondly, assuming the Lipschitz continuity of the gradient of the objective function, we show that each accumulation point of the sequence generated by the gradient projection method with a constant step size is also a stationary point. Additionally, we provide an iteration complexity bound that characterizes the number of iterations needed to achieve a suitable measure of stationarity for both step sizes. Finally, we explore the properties of the constrained Fermat-Weber problem, demonstrating that the sequence generated by the gradient projection method converges to its unique solution.
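In the Euclidean setting the scheme reduces to the classical gradient projection method with Armijo backtracking, sketched below. The intrinsic $\kappa$-projection on the hyperboloid model discussed in the talk is not implemented here; the ball constraint and the quadratic objective are illustrative assumptions.

```python
import numpy as np

def proj_ball(x, r=1.0):
    # Euclidean projection onto the ball of radius r
    n = np.linalg.norm(x)
    return x if n <= r else (r / n) * x

def gradient_projection(f, grad, x0, iters=100, s=1.0, beta=0.5, sigma=1e-4):
    """Gradient projection with Armijo backtracking: try
    x(t) = P(x - t * grad f(x)) for t = s, s*beta, s*beta^2, ... until
    f(x(t)) <= f(x) + sigma * <grad f(x), x(t) - x>."""
    x = x0
    for _ in range(iters):
        g = grad(x)
        t = s
        while True:
            x_new = proj_ball(x - t * g)
            if f(x_new) <= f(x) + sigma * (g @ (x_new - x)) or t < 1e-12:
                break
            t *= beta
        x = x_new
    return x

# illustrative problem (assumed): minimize ||x - p||^2 over the unit ball
p = np.array([2.0, 0.0])
f = lambda x: np.sum((x - p) ** 2)
grad = lambda x: 2.0 * (x - p)
x_star = gradient_projection(f, grad, np.zeros(2))  # ≈ (1, 0)
```

Every accumulation point produced by this scheme is stationary under the assumptions stated in the abstract; here the limit is simply the projection of p onto the ball.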
Date:  November 16

Speaker: Max Leandro Nobre Gonçalves

Title: On the convergence rate of the conditional gradient method 

Abstract: In this talk, we discuss convergence rate results related to the scalar conditional gradient method as well as its multiobjective version.
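As a reminder of the scalar method under discussion, here is a minimal conditional gradient (Frank-Wolfe) sketch over the probability simplex. The objective, the constraint set, and the classical step size 2/(k+2) are illustrative assumptions, not taken from the talk.

```python
import numpy as np

def conditional_gradient(grad, lmo, x0, iters=500):
    """Frank-Wolfe: s_k = argmin_{s in C} <grad f(x_k), s> (linear
    minimization oracle), then x_{k+1} = x_k + t_k (s_k - x_k), t_k = 2/(k+2)."""
    x = x0
    for k in range(iters):
        s = lmo(grad(x))
        x = x + (2.0 / (k + 2)) * (s - x)
    return x

# illustrative problem (assumed): minimize ||x - p||^2 over the simplex
p = np.array([0.2, 0.5, 0.3])
grad = lambda x: 2.0 * (x - p)
lmo = lambda g: np.eye(len(g))[np.argmin(g)]  # vertex minimizing <g, .>
x_star = conditional_gradient(grad, lmo, np.ones(3) / 3)
```

Each iterate is a convex combination of simplex vertices, so feasibility is maintained for free; the function values decrease at the sublinear O(1/k) rate typical of the method.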
Date:  November 09
Speaker: Prof. Maurício Silva Louzeiro
Title:  A projected subgradient method for the computation of adapted metrics for dynamical systems
Abstract: In this talk, we extend a recently established subgradient method
for the computation of Riemannian metrics that optimizes certain singular value
functions associated with dynamical systems. This extension is threefold.
First, we introduce a projected subgradient method which results in Riemannian
metrics whose parameters are confined to a compact convex set and we can thus prove
that a minimizer exists; second, we allow inexact subgradients and study the effect of the
errors on the computed metrics; and third, we analyze the subgradient algorithm for three
different choices of step sizes: constant, exogenous and Polyak. 
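A Euclidean caricature of the projected subgradient iteration with the Polyak step size, one of the three rules mentioned above, is sketched below. The box constraint set and the $\ell_1$ objective are assumed for illustration; the talk's setting of Riemannian metrics for dynamical systems is not reproduced here.

```python
import numpy as np

def proj_box(x, lo=0.0, hi=1.0):
    # projection onto the box [lo, hi]^n
    return np.clip(x, lo, hi)

def projected_subgradient(f, subgrad, x0, f_star=None, t=1e-2, iters=2000):
    """x_{k+1} = P(x_k - t_k g_k), g_k a subgradient of f at x_k.
    Uses the Polyak step t_k = (f(x_k) - f*) / ||g_k||^2 when f* is known,
    otherwise a constant step t."""
    x, best = x0, x0
    for _ in range(iters):
        g = subgrad(x)
        if np.all(g == 0):
            return x
        tk = (f(x) - f_star) / (g @ g) if f_star is not None else t
        x = proj_box(x - tk * g)
        if f(x) < f(best):
            best = x
    return best

# illustrative nonsmooth problem (assumed): minimize ||x - p||_1 over [0,1]^2
p = np.array([1.5, 0.25])           # optimum is the clamped point (1.0, 0.25)
f = lambda x: np.sum(np.abs(x - p))
subgrad = lambda x: np.sign(x - p)  # a valid subgradient of the l1 distance
x_star = projected_subgradient(f, subgrad, np.zeros(2), f_star=0.5)
```

Tracking the best iterate is the standard device for subgradient methods, whose function values need not decrease monotonically; with the constant rule one only reaches a neighborhood of the optimum, while the Polyak rule drives the values to f*.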

Date: October 26

Speaker: Erik Papa

Title: Iteration Complexity of the Proximal Point Method for Quasiconvex Functions on Hadamard Manifolds

Abstract: The papers of Baygorrea et al. (2016, 2017) proved the global convergence and the convergence rate of an inexact proximal point algorithm for finding critical points of minimization problems with quasiconvex objective functions on Hadamard manifolds (in particular, in Euclidean space). In this talk, we present some examples of quasiconvex problems on these manifolds and prove an iteration complexity of $O(1/\epsilon)$ for that algorithm to obtain an $\epsilon$-solution with respect to function values. Finally, we particularize all our results to the exact proximal point algorithm for quasiconvex functions, obtaining new results in the literature.