Seminar 2020.2

The seminars will be held remotely on Thursdays at 8:00 am via Google Meet, unless otherwise stated. All who are interested are very welcome. To attend, please contact Prof. Jefferson D. G. Melo by e-mail.


Date: 10/06

Title: On the inexact scaled gradient projection method

Speaker: Prof. Max Valério Lemes (IME-UFG)

Abstract: In this talk, we will present an inexact version of the scaled gradient projection method on a convex set, which is inexact in two senses. First, an inexact projection onto the feasible set is computed, allowing an appropriate relative error tolerance. Second, an inexact non-monotone line search scheme is employed to compute the step size that defines the next iterate. It is shown that the proposed method has asymptotic convergence properties and iteration-complexity bounds similar to those of the usual scaled gradient projection method employing monotone line searches.
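For illustration, the following minimal sketch implements the *exact* special case of the scaled gradient projection iteration on a box constraint. The function names and the example problem are my own; the talk's method additionally allows an inexact projection and a non-monotone line search, both omitted here.

```python
import numpy as np

def scaled_gradient_projection(grad, project, x0, D, alpha=0.4, tol=1e-8, max_iter=500):
    """Exact scaled gradient projection sketch:
    x+ = x + d with d = P(x - alpha * D * grad(x)) - x.
    The projection P and the scaling D are exact here; the seminar's
    method allows both to be computed inexactly."""
    x = x0.astype(float)
    for _ in range(max_iter):
        d = project(x - alpha * (D @ grad(x))) - x  # feasible direction
        if np.linalg.norm(d) < tol:                 # stationarity measure
            break
        x = x + d  # unit step; a line search would shrink this if needed
    return x

# Minimize f(x) = ||x - c||^2 over the box [0, 1]^2 with c outside the box.
c = np.array([2.0, -0.5])
grad = lambda x: 2.0 * (x - c)
project = lambda y: np.clip(y, 0.0, 1.0)  # exact projection onto the box
x_star = scaled_gradient_projection(grad, project, np.zeros(2), np.eye(2))
```

The iterates stop at the projection of c onto the box, the unique constrained minimizer.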


Date: 27/05

Title: A boosted DC algorithm for non-smooth DC components with non-monotone line search.

Speaker: Elianderson Meneses Santos (PhD student, IME-UFG)

Abstract: A boosted difference-of-convex-functions algorithm (BDCA) was recently proposed by Aragón Artacho and Vuong [SIAM J. Optim., 30.1 (2020), pp. 980-1006] for minimizing the difference of two convex functions (DC functions), where the first DC component is differentiable and the second one is possibly non-smooth. However, if the first DC component is non-smooth, the search direction generated by the BDCA subproblem may fail to be a descent direction, and a monotone line search cannot be performed. This drawback can be overcome by using a non-monotone line search strategy. In this sense, we propose a new boosted DC algorithm with non-monotone line search (nmBDCA) to solve DC problems in which both DC components may be non-smooth.
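A rough sketch of the boosted DCA idea: take the ordinary DCA step, then try to move *beyond* it along the same direction with a backtracking line search. All names here are mine, the line search below is a plain monotone one, and the example uses a simple one-dimensional DC decomposition; the nmBDCA of the talk uses a non-monotone rule precisely so that it works when both components are non-smooth.

```python
import numpy as np

def bdca_sketch(g_argmin, h_subgrad, f, x0, lam0=1.0, rho=0.1, iters=20):
    """Boosted DCA sketch for f = g - h with g, h convex (illustrative
    only; monotone backtracking instead of the talk's non-monotone rule)."""
    x = float(x0)
    for _ in range(iters):
        y = g_argmin(h_subgrad(x))  # DCA step: argmin_z g(z) - s*z
        d = y - x                   # boosting direction
        if abs(d) < 1e-12:
            break
        lam = lam0                  # try to go beyond the DCA point y
        while lam > 1e-8 and f(y + lam * d) > f(y) - rho * (lam * d) ** 2:
            lam *= 0.5              # backtrack until sufficient decrease
        x = y + lam * d if lam > 1e-8 else y
    return x

# f(x) = x^2 - |x| = g - h with g(x) = x^2, h(x) = |x|; minimizers at +-1/2.
f = lambda x: x * x - abs(x)
x_star = bdca_sketch(g_argmin=lambda s: s / 2.0,     # argmin_z z^2 - s*z
                     h_subgrad=lambda x: np.sign(x) if x != 0 else 1.0,
                     f=f, x0=2.0)
```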





Date: 20/05

Title: A conjugate directions-type procedure for quadratic multiobjective optimization

Speaker: Ademir Alves Aguiar

Abstract: In this seminar we will present the article "Fukuda, E. H.; Drummond, L. M. G.; Masuda, A. M.: A conjugate directions-type procedure for quadratic multiobjective optimization. Optimization, 2021". Our objective is to present an extension of the real-valued conjugate directions method to unconstrained quadratic multiobjective problems. We will show that the multicriteria version computes the step length by means of the unconstrained minimization of a single-variable strongly convex function at each iteration.
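In the classical real-valued case that the article extends, the conjugate gradient method for a strictly convex quadratic picks each step length as the exact minimizer of a single-variable convex quadratic along the current direction. A minimal sketch of that scalar case (function names mine):

```python
import numpy as np

def conjugate_gradient(A, b, x0, tol=1e-10):
    """Classical conjugate gradient for min 0.5 x'Ax - b'x with A spd.
    The step alpha_k exactly minimizes the one-variable quadratic
    phi(t) = f(x_k + t d_k); the multiobjective version in the article
    instead minimizes a single-variable strongly convex scalarization."""
    x, r = x0.astype(float), b - A @ x0   # residual r = -grad f(x)
    d = r.copy()
    for _ in range(len(b)):
        if np.linalg.norm(r) < tol:
            break
        Ad = A @ d
        alpha = (r @ r) / (d @ Ad)        # exact minimizer of phi(t)
        x += alpha * d
        r_new = r - alpha * Ad
        beta = (r_new @ r_new) / (r @ r)  # keeps directions A-conjugate
        d = r_new + beta * d
        r = r_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x_star = conjugate_gradient(A, b, np.zeros(2))  # solves A x = b in <= 2 steps
```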





Date: 13/05

Title: Conditional gradient method for split multiobjective problems

Speaker: Pedro Bonfim de Assunção Filho
Abstract: In this talk we analyze the conditional gradient method, also known as the Frank-Wolfe method, for the sum of two multiobjective functions. The constraint set is assumed to be convex and compact; one of the objective functions is assumed to be continuously differentiable, while the other is not necessarily differentiable. The method is considered with different strategies for obtaining the step sizes. Asymptotic convergence properties and iteration-complexity bounds, with and without convexity assumptions on the objective functions, are established.
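For context, the classical single-objective Frank-Wolfe iteration replaces projections by a linear minimization oracle over the compact constraint set. The sketch below shows that scalar special case on the probability simplex, with the standard open-loop step size; names and the example are mine, and the multiobjective, sum-of-two-functions setting of the talk is not reproduced here.

```python
import numpy as np

def frank_wolfe(grad, lmo, x0, iters=200):
    """Classical scalar Frank-Wolfe: linearize, call the linear
    minimization oracle over the compact convex set, take a convex step."""
    x = x0.astype(float)
    for k in range(iters):
        s = lmo(grad(x))         # argmin_{s in C} <grad f(x), s>
        gamma = 2.0 / (k + 2.0)  # standard open-loop step size
        x = (1 - gamma) * x + gamma * s  # stays feasible by convexity
    return x

# Minimize f(x) = ||x - c||^2 over the probability simplex in R^3;
# over the simplex the LMO is attained at a vertex.
c = np.array([0.1, 0.2, 0.9])
grad = lambda x: 2.0 * (x - c)
lmo = lambda g: np.eye(3)[np.argmin(g)]
x_star = frank_wolfe(grad, lmo, np.array([1.0, 0.0, 0.0]))
```

Note the iterate is always a convex combination of vertices, so feasibility never needs a projection.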


Date: 06/05

Title: Inexact projected gradient method for vector optimization
Speaker: Fernando Santana Lima
Abstract: In this seminar we will study an inexact projected-gradient-type method for solving smooth vector optimization problems, proposed by Ellen H. Fukuda and Graña Drummond. In the constrained setting, the proposed method extends the exact method proposed by Graña Drummond and Iusem. The convergence results for this method extend those obtained by Fukuda and Graña Drummond for the exact version.


Date: 29/04
Title: A quasi-Newton method with Wolfe line search for multiobjective optimization.
Speaker: Danilo Rodrigues de Souza.
Abstract: In this seminar we will present an extension, to unconstrained multiobjective optimization, of the quasi-Newton method proposed in Li, Dong-Hui; Fukushima, Masao: A modified BFGS method and its global convergence in nonconvex minimization. Journal of Computational and Applied Mathematics, v. 129, n. 1-2, pp. 15-35, 2001. The authors modified the classical BFGS method so as to obtain convergence for nonconvex problems. As in the scalar case, if the step lengths satisfy the multiobjective Wolfe conditions, then the proposed MBFGS update remains positive definite. The assumptions used in the convergence analysis are direct extensions of those considered in the scalar setting. Without assuming convexity, we prove that the proposed method finds Pareto critical points. Moreover, under suitable conditions, we show that the convergence rate is superlinear.
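The positive-definiteness claim rests on a classical fact: the standard BFGS update preserves positive definiteness whenever the curvature condition s'y > 0 holds, which a Wolfe line search guarantees. The sketch below (names mine) shows the plain scalar update and checks this on a small example; Li-Fukushima's MBFGS modifies the vector y so the condition holds even on nonconvex problems, a modification not reproduced here.

```python
import numpy as np

def bfgs_update(B, s, y):
    """Standard BFGS update of the Hessian approximation B.
    If s'y > 0 (Wolfe curvature condition), positive definiteness
    is preserved and the secant equation B_new s = y holds."""
    Bs = B @ s
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (s @ y)

# One update on a 2x2 example with s'y = 1.25 > 0.
B = np.eye(2)
s = np.array([1.0, 0.5])
y = np.array([0.8, 0.9])
B_new = bfgs_update(B, s, y)
eigs = np.linalg.eigvalsh(B_new)  # both eigenvalues stay positive
```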


Date: April 22 

Title: Numerical experiments with a new line search for vector optimization

Speaker:  Flávio P. Vieira

Abstract: Yunda Dong (2010 and 2012) introduced a new line search procedure for conjugate gradient methods that uses only gradient information, that is, without working with functional values. We extended this line search to vector optimization with a gradient-type algorithm. In this talk, some theoretical results will be shown, including convergence results under convexity and under Lipschitz continuity of the Jacobian. We will consider two sets of test problems for which we have interesting numerical results, comparing computational time, number of iterations, and the purity metric, a metric introduced by Custódio et al. in 2010.
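To give a flavor of a line search that never evaluates the objective, here is a generic bisection on the directional derivative phi'(t) = grad f(x + t d) . d: expand the bracket until the derivative changes sign, then bisect. This is only a generic sketch in the spirit of a gradient-only procedure, not Dong's actual rule; all names are mine.

```python
def gradient_only_linesearch(dphi, lo=0.0, hi=1.0, iters=50):
    """Bisection line search using only directional-derivative values
    dphi(t) = grad f(x + t*d) . d -- no function values needed."""
    while dphi(hi) < 0:      # expand until the derivative changes sign
        hi *= 2.0
    for _ in range(iters):   # bisect toward dphi(t) = 0
        mid = 0.5 * (lo + hi)
        if dphi(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# phi(t) = f(x + t*d) for f(x) = x^2, x = -3, d = 1: minimizer at t = 3.
dphi = lambda t: 2.0 * (-3.0 + t)
t_star = gradient_only_linesearch(dphi)
```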


Date: April 15

Title:  Inexact methods for constrained optimization problems and for constrained monotone nonlinear equations.

Speaker:  Tiago da Costa Menezes

Abstract: In this talk, we will present some methods to solve constrained optimization problems and constrained monotone nonlinear systems of equations. Our first algorithm is an inexact variable metric method for solving convex-constrained optimization problems. At each iteration of the method, the search direction is obtained by inexactly minimizing a strictly convex quadratic function over the closed convex feasible set. Here, we propose a new inexactness criterion for the search direction subproblems. Our second method consists of a Gauss-Newton algorithm with approximate projections for solving constrained nonlinear least squares problems. The local convergence of the method including results on its rate is discussed. By combining the latter method and a nonmonotone line search strategy, we also propose a global version of this algorithm. Our third approach corresponds to a framework, which is obtained by combining a safeguard strategy on the search directions with a notion of approximate projections, to solve constrained monotone nonlinear systems of equations. Some examples of methods which fall into this framework are presented. Numerical experiments illustrating the practical behaviors of the methods are discussed and comparisons with existing algorithms are presented.
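As background for the second method, a plain *unconstrained* Gauss-Newton iteration for nonlinear least squares solves the linearized subproblem J d = -r at each step. The sketch below (names and the exponential-fit example are mine) omits exactly what the talk adds: the approximate projections handling constraints and the nonmonotone line search for globalization.

```python
import numpy as np

def gauss_newton(residual, jac, x0, iters=50):
    """Plain unconstrained Gauss-Newton for min 0.5*||r(x)||^2.
    Each step solves the normal equations J'J d = -J'r via lstsq."""
    x = x0.astype(float)
    for _ in range(iters):
        J, r = jac(x), residual(x)
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        x = x + step
    return x

# Fit y = exp(a*t) to data generated with a = 0.7, starting from a = 0.5.
t = np.array([0.0, 1.0, 2.0, 3.0])
y = np.exp(0.7 * t)
residual = lambda x: np.exp(x[0] * t) - y
jac = lambda x: (t * np.exp(x[0] * t)).reshape(-1, 1)  # Jacobian of r
x_star = gauss_newton(residual, jac, np.array([0.5]))
```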


Date: April 08

Title: First-order methods for vector optimization

Speaker: Ray Victor Guimarães Serra

Abstract: In this talk, we will present some first-order iterative methods for solving convex vector optimization problems. The presentation is divided into two parts. The first one considers problems whose objective function can be written as the sum of two convex vector functions. To solve this kind of problem, we study a proximal gradient method under three line-search procedures. In the second one, we consider convex vector optimization problems in Hilbert spaces. Our interest is to develop methods that converge in the strong topology to a weakly efficient solution. The main idea is to extend to the vector setting a technique proposed by Svaiter and Solodov for forcing strong convergence of proximal point-type methods for finding zeros of maximal monotone operators.
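The scalar ancestor of the first part is the proximal gradient (forward-backward) iteration for f + g: a gradient step on the smooth part followed by the proximal operator of the nonsmooth part. A minimal sketch of that scalar case with a fixed step size (names mine; the talk studies the vector-valued analogue under several line-search rules):

```python
import numpy as np

def proximal_gradient(grad_f, prox_g, x0, step=1.0, iters=100):
    """Scalar proximal gradient: x+ = prox_{step*g}(x - step*grad f(x))."""
    x = x0.astype(float)
    for _ in range(iters):
        x = prox_g(x - step * grad_f(x), step)
    return x

# LASSO-type example: min 0.5*||x - a||^2 + mu*||x||_1, solved by
# soft-thresholding a (the prox of mu*|.| is the shrinkage operator).
a = np.array([3.0, 0.05, -2.0])
mu = 0.1
grad_f = lambda x: x - a
soft = lambda z, t: np.sign(z) * np.maximum(np.abs(z) - mu * t, 0.0)
x_star = proximal_gradient(grad_f, soft, np.zeros(3))
```

Components of a smaller than mu are set exactly to zero, the hallmark of the l1 prox.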



Date:  March 25

Title: A modified BFGS method and its global convergence in nonconvex minimization.

Speaker: Danilo Rodrigues de Souza (PhD student-IME/UFG)

Abstract: In this presentation, we will discuss the paper Li, Dong-Hui; Fukushima, Masao: A modified BFGS method and its global convergence in nonconvex minimization. Journal of Computational and Applied Mathematics, v. 129, n. 1-2, pp. 15-35, 2001. The authors propose a modification of the BFGS method for unconstrained optimization so as to obtain global convergence even without assuming convexity of the objective function. Moreover, under certain conditions, the proposed method is shown to have a superlinear convergence rate.


Date: March 18 

Title: The proximal point method for locally Lipschitz functions in multiobjective optimization with application to the compromise problem

Speaker: Elianderson Meneses Santos - (PhD student-IME/UFG)

Abstract: In this seminar we will present the paper by Bento et al. entitled "The proximal point method for locally Lipschitz functions in multiobjective optimization with application to the compromise problem" [SIAM J. Optim., 28.2 (2018), pp. 1104-1120]. In this paper, the authors extend the proximal point method considered by Bonnel, Iusem, and Svaiter [SIAM J. Optim., 15 (2005), pp. 953-970] to locally Lipschitz functions in the finite-dimensional multiobjective setting.
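The scalar prototype of the method is the proximal point iteration x_{k+1} = argmin_z f(z) + (1/2)|z - x_k|^2. A tiny sketch (names mine) on f(x) = |x|, which is locally Lipschitz but nonsmooth as in the paper's setting; its proximal map is soft-thresholding, so the iterates reach the minimizer in finitely many steps here. The multiobjective machinery of the paper is of course not captured.

```python
import numpy as np

def proximal_point(prox, x0, iters=10):
    """Scalar proximal point iteration x_{k+1} = prox_f(x_k)."""
    x = float(x0)
    for _ in range(iters):
        x = prox(x)
    return x

# f(x) = |x|: prox with unit parameter is soft-thresholding by 1.
prox_abs = lambda x: np.sign(x) * max(abs(x) - 1.0, 0.0)
x_star = proximal_point(prox_abs, 3.5)  # 3.5 -> 2.5 -> 1.5 -> 0.5 -> 0
```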


Date:  March 11

Title: A study of nonlinear conjugate gradient methods for vector optimization

Speaker: Fernando Santana Lima (PhD student-IME/UFG)

Abstract: In this talk, we will consider nonlinear conjugate gradient methods for finding critical points of vector-valued functions. We extend to the vector case some methods known in the scalar literature. We will show that the parameters studied partially recover the classical ones in the case of scalar minimization. We will prove that the sequences generated by the proposed methods converge to Pareto critical points.


Date: March 4

Title: Alternating conditional gradient method for convex feasibility problems

Speaker: Prof. Orizon Pereira Ferreira, UFG

Abstract: The classical convex feasibility problem in a finite-dimensional Euclidean space consists of finding a point in the intersection of two convex sets. In this talk, we are interested in two particular instances of this problem. First, we assume that we know how to compute an exact projection onto one of the sets involved, and that the other set is compact, so that the conditional gradient (CondG) method can be used to efficiently compute an inexact projection onto it. Second, we assume that both sets involved are compact, so that the CondG method can be used to efficiently compute inexact projections onto them. We combine the alternating projection method with the CondG method to design a new method, which can be seen as an inexact feasible version of the alternating projection method. The proposed method generates two sequences, one in each of the sets involved, which converge to a point in their intersection whenever it is nonempty. If the intersection is empty, the sequences converge to points in the respective sets whose distance equals the distance between the sets under consideration. Numerical experiments are provided to illustrate the practical behavior of the method.
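The backbone of the new method is the classical alternating projection scheme, sketched below with *exact* projections onto a ball and a half-plane (names and example mine). The talk's contribution, replacing these exact projections by inexact ones computed with the CondG method, is not reproduced here.

```python
import numpy as np

def alternating_projections(proj_A, proj_B, x0, iters=100):
    """Classical alternating projection between two convex sets:
    alternately project onto B and then onto A."""
    x = x0.astype(float)
    for _ in range(iters):
        y = proj_B(x)
        x = proj_A(y)
    return x, y  # one point per set; equal when the intersection is hit

# A = closed unit ball, B = half-plane {x1 >= 0.5}; intersection nonempty.
proj_A = lambda x: x / max(1.0, np.linalg.norm(x))
proj_B = lambda x: np.array([max(x[0], 0.5), x[1]])
x, y = alternating_projections(proj_A, proj_B, np.array([-2.0, 0.0]))
```

Since the intersection is nonempty, both sequences settle on a common point feasible for both sets.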