
Seminar 2020.1

The seminars will be held remotely at 8:00 am via Google Meet, unless otherwise stated. All those interested are very welcome. To attend, please contact Prof. Glaydston de Carvalho Bento by e-mail at glaydston@ufg.br

---------------------------------------------------------------------------------------------------------------------------------------

Date: September 03

Speaker: Prof. Max Leandro Nobre Gonçalves, UFG

Title: Projection-free accelerated method for convex optimization

Abstract: In this talk, we discuss a projection-free accelerated method for solving convex optimization problems with an unbounded feasible set. The method is an accelerated gradient scheme in which each projection subproblem is approximately solved by means of a conditional gradient scheme. Under reasonable assumptions, it is shown that an $\varepsilon$-approximate solution (a concept related to the optimal value of the problem) is obtained in at most $\mathcal{O}(1/\sqrt{\varepsilon})$ gradient evaluations and $\mathcal{O}(1/\varepsilon)$ linear oracle calls. We also discuss a notion of approximate solution based on the first-order optimality condition of the problem and present iteration-complexity results for the proposed method to obtain an approximate solution in this sense. Finally, numerical experiments illustrating the practical behavior of the proposed scheme are discussed.
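
For intuition, here is a minimal sketch (not the method of the talk itself) of how a projection subproblem can be solved approximately by a conditional gradient loop, assuming a hypothetical linear minimization oracle lmo and, for simplicity, a compact feasible set:

```python
import numpy as np

def approx_projection_fw(y, lmo, tol=1e-6, max_iter=1000):
    """Approximately project y onto the feasible set X via conditional
    gradient: minimize f(z) = 0.5*||z - y||^2 using only a linear
    minimization oracle lmo(c) = argmin_{x in X} <c, x> (X assumed
    compact in this sketch; the talk treats the unbounded case)."""
    z = lmo(np.zeros_like(y))          # any feasible starting point
    for k in range(max_iter):
        grad = z - y                   # gradient of 0.5*||z - y||^2
        s = lmo(grad)                  # conditional gradient (FW) vertex
        gap = grad @ (z - s)           # duality gap, natural stopping test
        if gap <= tol:
            break
        z = z + (2.0 / (k + 2)) * (s - z)   # classical open-loop step
    return z
```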

---------------------------------------------------------------------------------------------------------------------------------------

Date: September 10

Speaker: Ray Victor Guimarães Serra (PhD student-IME/UFG)

Title: A strongly convergent proximal gradient method for solving convex composite vector optimization problems in Hilbert spaces

Abstract: In this talk, we will present a variant of the proximal gradient method for solving convex composite vector optimization problems in real Hilbert spaces. We will show that under some mild conditions, the proposed scheme converges strongly to a weakly efficient solution.
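
As background, here is a minimal sketch of the scalar proximal gradient template that the talk generalizes to vector-valued objectives; soft_threshold and the fixed step size are illustrative assumptions:

```python
import numpy as np

def soft_threshold(v, t):
    """Closed-form prox of t*||.||_1, used here as an illustrative g."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(grad_f, prox_g, x0, step, n_iter=500):
    """Scalar composite template min_x f(x) + g(x):
    x_{k+1} = prox_{step*g}(x_k - step*grad_f(x_k)).
    The talk extends this template to vector-valued objectives in real
    Hilbert spaces, with modifications that yield strong convergence."""
    x = x0
    for _ in range(n_iter):
        x = prox_g(x - step * grad_f(x), step)
    return x

# Example: lasso-type usage with prox_g=soft_threshold and
# grad_f(x) = A.T @ (A @ x - b) for given A, b.
```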

---------------------------------------------------------------------------------------------------------------------------------------

Date: September 17

Speaker: Tiago da Costa Menezes (PhD student-IME/UFG)

Title: Inexact Variable Metric Method for Convex-Constrained Optimization Problems

Abstract: In this talk, we will discuss the inexact variable metric method for solving convex-constrained optimization problems. At each iteration of this method, the search direction is obtained by inexactly minimizing a strictly convex quadratic function over the closed convex feasible set. We will present a new inexactness criterion for the search direction subproblems. Under mild assumptions, we prove that any accumulation point of the sequence generated by the new method is a stationary point of the problem under consideration. Finally, we will discuss some numerical experiments and an application where our concept of inexact solutions is quite appealing.
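
To illustrate the kind of subproblem involved, the following sketch inexactly minimizes the quadratic model over the feasible set with an inner projected gradient loop; the fixed stopping tolerance is illustrative only, not the inexactness criterion proposed in the talk:

```python
import numpy as np

def inexact_direction(x, grad, H, project, inner_tol=1e-4, max_inner=200):
    """Approximately minimize the quadratic model
    q(z) = <grad, z - x> + 0.5*(z - x)^T H (z - x) over the feasible set C,
    given an exact projection oracle onto C, and return d = z - x."""
    z = project(x)                            # feasible starting point
    step = 1.0 / np.linalg.norm(H, 2)         # 1/L for the quadratic q
    for _ in range(max_inner):
        grad_q = grad + H @ (z - x)           # gradient of q at z
        z_new = project(z - step * grad_q)    # projected gradient step
        if np.linalg.norm(z_new - z) <= inner_tol:
            z = z_new
            break
        z = z_new
    return z - x                              # inexact search direction
```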

---------------------------------------------------------------------------------------------------------------------------------------

Date: September 24

Speaker: Prof. Glaydston de Carvalho Bento, UFG

Title: Some Recent Advances in Optimization in Riemannian Manifolds

Abstract: In this lecture, I intend to present some recent advances in optimization on Riemannian manifolds associated with two popular descent methods, namely the gradient method and the proximal point method.

----------------------------------------------------------------------------------------------------------------------------------------

Date: October 01

Speaker: Ademir Alves Aguiar (PhD student-IME/UFG)

Title: Inexact Projections for Constrained Convex Optimization Problems

Abstract: In this talk, we will present a new inexact version of the projected subgradient method for solving nondifferentiable constrained convex optimization problems. The method combines the $\epsilon$-subgradient method with a procedure for obtaining a feasible inexact projection onto the constraint set. A gradient projection method with a feasible inexact projection is also proposed in this seminar. To perform the proposed inexact projection in both algorithms, a relative error tolerance is introduced. Asymptotic analysis and iteration-complexity bounds for both methods are established.
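
A minimal sketch of the overall scheme, assuming hypothetical oracles eps_subgrad and inexact_project whose error criteria follow the talk rather than this sketch:

```python
def epsilon_subgradient_method(x0, eps_subgrad, inexact_project, steps):
    """Sketch: x_{k+1} = P_k(x_k - alpha_k * g_k), where g_k is an
    epsilon-subgradient at x_k and P_k is a feasible inexact projection
    onto the constraint set, computed within a relative error tolerance."""
    x = x0
    for alpha in steps:
        g = eps_subgrad(x)                    # some eps-subgradient at x
        x = inexact_project(x - alpha * g)    # feasible, inexact projection
    return x
```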

----------------------------------------------------------------------------------------------------------------------------------------

Date: October 08

Speaker: Danilo Rodrigues de Souza (PhD student-IME/UFG)

Title: A quasi-Newton method with Wolfe line search for multiobjective optimization

Abstract: In this work, we propose a quasi-Newton method with a Wolfe line search for unconstrained multiobjective optimization. The Hessian of each objective function is approximated by a matrix inspired by the classical BFGS update. As in the scalar case, if the step lengths satisfy the multiobjective Wolfe conditions, then the proposed BFGS update remains positive definite. The method is well defined even when the objective functions are nonconvex. Under convexity assumptions, we establish superlinear convergence to Pareto optimal points. The assumptions considered are direct extensions of those used in the scalar BFGS method.
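
For reference, here is the scalar BFGS update that the proposed multiobjective update is modeled on; the curvature safeguard mirrors the role of the Wolfe conditions:

```python
import numpy as np

def bfgs_update(B, s, y):
    """Classical scalar BFGS update of the Hessian approximation B, with
    s = x_new - x and y = grad_new - grad.  When the curvature condition
    s^T y > 0 holds (guaranteed by the Wolfe conditions), the update
    preserves positive definiteness."""
    sty = s @ y
    if sty <= 1e-12 * np.linalg.norm(s) * np.linalg.norm(y):
        return B                         # skip update if curvature fails
    Bs = B @ s
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / sty
```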

---------------------------------------------------------------------------------------------------------------------------------------

Date: October 15

Speaker: Prof. Pedro Bonfim de Assunção Filho (PhD student-IME/UFG)

Title: Conditional gradient method for multiobjective optimization

Abstract: In this talk, we analyze the conditional gradient method, also known as the Frank-Wolfe method, for constrained multiobjective optimization. The constraint set is assumed to be convex and compact, and the objective functions are assumed to be continuously differentiable. The method is considered with different strategies for obtaining the step size. Asymptotic convergence properties and iteration-complexity bounds, with and without convexity assumptions on the objective functions, are established. Numerical experiments are provided to illustrate the effectiveness of the method and to certify the obtained theoretical results.
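
As a scalar template, here is a sketch of the conditional gradient method with an Armijo backtracking step, one of several step-size strategies one can consider; lmo is an assumed linear oracle over the compact constraint set:

```python
import numpy as np

def frank_wolfe_armijo(f, grad, lmo, x0, n_iter=100, beta=0.5,
                       sigma=1e-4, tol=1e-8):
    """Conditional gradient (Frank-Wolfe) method with Armijo backtracking:
    lmo(c) = argmin_{u in C} <c, u> over the compact convex set C."""
    x = x0
    for _ in range(n_iter):
        g = grad(x)
        s = lmo(g)
        d = s - x                             # feasible descent direction
        if -(g @ d) <= tol:                   # FW duality gap stopping test
            break
        t = 1.0
        while f(x + t * d) > f(x) + sigma * t * (g @ d):
            t *= beta                         # backtrack until Armijo holds
        x = x + t * d
    return x
```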

---------------------------------------------------------------------------------------------------------------------------------------

Date: October 22

Speaker: Prof. Gilson do Nascimento Silva, UFOB

Title: On the Inexact Quasi-Newton Methods for Solving Nonsmooth Generalized Equations: Broyden's Update and Dennis-Moré Theorem

Abstract: This talk is about an inexact quasi-Newton method for solving nonsmooth generalized equations. First, using the Coincidence Point Theorem and the theory of metric regularity, we prove the q-linear convergence of the sequence generated by the inexact quasi-Newton method. In a specific case, we use the well-known Broyden update to obtain a convergence result. Secondly, we assume that the generalized equation is strongly metrically r-subregular, and we obtain a higher-order convergence result for the inexact quasi-Newton method with the Broyden update. We finish by showing that the Broyden update applied to a nonsmooth generalized equation in Hilbert spaces satisfies the Dennis-Moré condition for q-superlinear convergence.
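
For reference, a sketch of the classical (good) Broyden update for the smooth single-valued part; the coupling with the set-valued term and the inexactness criterion follow the talk, not this sketch:

```python
import numpy as np

def broyden_update(B, s, y):
    """Good Broyden update: the least-change secant update satisfying
    B_new @ s = y, with s = x_new - x and y = F(x_new) - F(x)."""
    return B + np.outer(y - B @ s, s) / (s @ s)
```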

---------------------------------------------------------------------------------------------------------------------------------------

Date: October 29

Speaker: Prof. Flávio Pinto Vieira (PhD student-IME/UFG)

Title: Steepest descent methods for constrained vector optimization problems with a new non-monotone line search

Abstract: In this talk, we will present a new non-monotone line search that can be used in procedures for finding Pareto optima of constrained vector optimization problems. Our work was inspired by Yunda Dong's work on the development of nonlinear conjugate gradient algorithms for scalar optimization problems. In our opinion, the novelty of our procedure is that the line search is performed without comparing function values, overcoming an important issue in general vector optimization. Our convergence analysis covers the general case (functions with a continuous Jacobian), the Lipschitz case (when a Lipschitz constant of the Jacobian is known), and the convex case. As an application, we will present our results on solving a bi-criteria model for the design of a cross-current multistage extraction process, proposed by Kitagawa and others.

---------------------------------------------------------------------------------------------------------------------------------------

Date: November 05 

Speaker: Fernando Santana Lima (PhD student-IME/UFG)

Title: Globally convergent Newton-type methods for multiobjective optimization

Abstract: We propose two Newton-type methods for solving (possibly) nonconvex unconstrained multiobjective optimization problems. The first is directly inspired by the Newton method designed to solve convex problems, whereas the second uses second-order information of the objective functions with ingredients of the steepest descent method. One of the key points of our approaches is to impose safeguard strategies on the search directions. These strategies are associated with conditions that prevent, at each iteration, the search direction from being too close to orthogonal to the multiobjective steepest descent direction, and that require proportionality between the lengths of these directions. In order to fulfill the demanded safeguard conditions on the search directions of the Newton-type methods, we adopt the technique in which the Hessians are modified, if necessary, by adding multiples of the identity. For our first Newton-type method, it is also shown that, under convexity assumptions, the local superlinear rate of convergence (or quadratic, in the case where the Hessians of the objectives are Lipschitz continuous) to a local efficient point of the given problem is recovered. The global convergence of the aforementioned methods is established by first presenting a general algorithm and proving its global convergence, and then showing that the new methods are particular instances of this general algorithm.
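
A minimal sketch of the Hessian-modification safeguard mentioned above, assuming a symmetric Hessian and using a Cholesky attempt to detect positive definiteness:

```python
import numpy as np

def regularize_hessian(H, tau0=1e-3, factor=10.0):
    """Add multiples of the identity to a symmetric matrix H until a
    Cholesky factorization succeeds, i.e., until H + tau*I is positive
    definite.  A standard safeguard, sketched here for illustration."""
    n = H.shape[0]
    tau = 0.0
    while True:
        try:
            np.linalg.cholesky(H + tau * np.eye(n))
            return H + tau * np.eye(n)
        except np.linalg.LinAlgError:
            tau = max(tau0, factor * tau)     # increase the shift and retry
```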

---------------------------------------------------------------------------------------------------------------------------------------

Date: November 12

Speaker: Elianderson Meneses Santos (PhD student-IME/UFG)

Title: An algorithm for minimization of a certain class of nonconvex functions
 
Abstract: In this talk, we present an algorithm for minimizing a special class of nonconvex functions. In addition, we will show some partial convergence results and complexity bounds for this algorithm.

---------------------------------------------------------------------------------------------------------------------------------------

Date: November 19 - The seminar on this date was suspended on account of WebJME - Webinário de Jovens Pesquisadores em Matemática Pura, Aplicada e Estatística (Webinar of Young Researchers in Pure and Applied Mathematics and Statistics). Participants duly enrolled in this activity were advised to attend the following lectures:

Lecture: November 19

On the Complexity of an Augmented Lagrangian Method for Nonconvex Optimization

Geovani Nunes Grapiglia (UFPR) 

Lecture: November 20

Iteration-Complexity of an Inexact Proximal Augmented Lagrangian Method for Solving Constrained Composite Optimization problems

Jefferson Divino Gonçalves de Melo (UFG)

---------------------------------------------------------------------------------------------------------------------------------------

Date: November 26

Speaker: Prof. Pedro Bonfim de Assunção Filho (PhD student-IME/UFG)

Title: Model Function Based Conditional Gradient Method with Armijo-like Line Search

Abstract: In this seminar will be presented the paper "Model Function Based Conditional Gradient Method with Armijo-like Line Search (https://arxiv.org/pdf/1901.08087.pdf)"

---------------------------------------------------------------------------------------------------------------------------------------

Date: December 03

Speaker: Ademir Alves Aguiar (PhD student-IME/UFG)

Title: Convergence analysis of a nonmonotone projected gradient method for multiobjective optimization problems

Abstract: In this seminar, the paper "Fazzio, N. S.; Schuverdt, M. L.: Convergence analysis of a nonmonotone projected gradient method for multiobjective optimization problems. Optimization Letters (2019) 13:1365-1379" will be presented. Our objective is to present an extension of the projected gradient method that includes a nonmonotone line search based on the average of the successive previous function values instead of the traditional Armijo-like rules.
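
For intuition, here is a sketch of one standard average-based nonmonotone reference value, in the spirit of Zhang and Hager; whether this matches the precise averaging rule of the paper is an assumption of the sketch:

```python
def nonmonotone_reference(f_new, C, Q, eta=0.85):
    """Update the weighted average C_k of all previous function values
    (Zhang-Hager style); the Armijo-like test then compares f at the
    trial point with C_k instead of f(x_k).  Initialize with
    C = f(x0), Q = 1.0."""
    Q_new = eta * Q + 1.0
    C_new = (eta * Q * C + f_new) / Q_new
    return C_new, Q_new
```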

---------------------------------------------------------------------------------------------------------------------------------------

Date: December 10

Speaker: Prof. Flávio Pinto Vieira (PhD student-IME/UFG)

Title: A new line search for vector optimization

Abstract: We introduce a new line search for vector optimization. This procedure is non-monotone, i.e., the function value at the new iterate may not be less than or equal to the current one in the considered order. A convergence analysis of the steepest descent algorithm using the proposed line search procedure is presented. Extensive numerical experiments were performed to test its effectiveness in building Pareto fronts.

---------------------------------------------------------------------------------------------------------------------------------------

Date: December 17

Speaker: Prof. Reinier Diaz Millan, Deakin University, Australia

Title: An algorithm for best generalized rational approximation of continuous functions

Abstract: The motivation for this work is the development of an optimization method for solving problems appearing in Chebyshev rational and generalized rational approximation, where the approximations are constructed as ratios of linear forms (linear combinations of basis functions). The coefficients of the linear forms are subject to optimization and the basis functions are continuous functions. It is known that the objective functions in generalized rational approximation problems are quasi-convex. In this paper, we also prove a stronger result: the objective function is pseudo-convex. We then develop numerical methods that are efficient for a wide range of pseudo-convex functions and test them on generalized rational approximation problems.
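
To fix ideas, here is a sketch of the objective being minimized: the Chebyshev (sup-norm) deviation of a ratio of linear forms from f on a discretization grid; all names here are illustrative:

```python
import numpy as np

def chebyshev_error(f, basis_p, basis_q, coeffs_p, coeffs_q, grid):
    """Sup-norm deviation of the generalized rational approximation
    P/Q from f on a grid, where P and Q are linear combinations of
    continuous basis functions.  Assumes Q > 0 on the grid."""
    P = sum(c * g(grid) for c, g in zip(coeffs_p, basis_p))
    Q = sum(c * g(grid) for c, g in zip(coeffs_q, basis_q))
    return np.max(np.abs(f(grid) - P / Q))
```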

---------------------------------------------------------------------------------------------------------------------------------------