
Seminar 2019.2

This semester's seminars will be held in the Lecture Room of IME/UFG, unless otherwise stated. All who are interested are very welcome to attend.

---------------------------------------------------------------------------------------------------------------------------------------

Date: August 22

Speaker: Prof. Jefferson Melo, UFG

Title: Quadratic Penalty Method with Inexact Proximal Subproblems for Solving Linearly Constrained Nonconvex Problems

Abstract: In this talk, we will present a quadratic penalty (QP) method and its iteration complexity for solving linearly constrained nonconvex optimization problems. More specifically, the objective function is of the form f+h, where f is a differentiable function whose gradient is Lipschitz continuous and h is a closed convex function with possibly unbounded domain. The method consists, essentially, of applying an inexact proximal point method to approximately solve a sequence of quadratic penalized subproblems associated with the linearly constrained problem. Each subproblem of the proximal point method is in turn approximately solved by an accelerated composite gradient (ACG) method. It is shown that this scheme generates a ρ-approximate stationary point in at most O(ρ^{-3}) ACG iterations. Finally, numerical results showing the performance of the proposed QP method will be presented.
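
For illustration, here is a minimal Python sketch of a quadratic penalty loop of the kind described above. It is a schematic, not the method analyzed in the talk: the inner solver is a plain proximal gradient iteration standing in for the inexact proximal point/ACG scheme, and all function names, step sizes and tolerances are illustrative.

    # Schematic quadratic penalty loop for min f(x) + h(x) s.t. Ax = b.
    # The inner loop is a plain proximal gradient method standing in for
    # the inexact proximal point + ACG scheme of the talk (illustrative only).
    import numpy as np

    def quadratic_penalty(grad_f, prox_h, A, b, x0, c0=1.0, rho=1e-4,
                          step=1e-2, inner_iters=500, outer_iters=50):
        x, c = x0.copy(), c0
        for _ in range(outer_iters):
            # penalized subproblem: min f(x) + h(x) + (c/2)||Ax - b||^2
            for _ in range(inner_iters):
                g = grad_f(x) + c * A.T @ (A @ x - b)
                x = prox_h(x - step * g, step)    # user-supplied prox of h
            if np.linalg.norm(A @ x - b) <= rho:  # feasibility tolerance reached
                break
            c *= 10.0                             # tighten the penalty
        return x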

---------------------------------------------------------------------------------------------------------------------------------------

Date: August 29

Speaker: Flavio Pinto Vieira (PhD student-IME/UFG)

Title: A New Nonlinear Conjugate Gradient Algorithm

Abstract: In this talk, we present a variant of the conjugate gradient method for vector optimization proposed by Lucambio Perez and Prudente. We will replace the Wolfe-type line search by a new Armijo-type line search that uses only the Jacobian of the objective.
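
The sketch below shows one way a Jacobian-only backtracking search of this flavor might look in Python. The acceptance test is hypothetical (it only checks that the worst-case directional derivative remains sufficiently negative at the trial point) and is not the condition proposed in the talk; J, sigma and tau are illustrative.

    # Hypothetical Armijo-type backtracking using only Jacobian information.
    # J(x) returns the m-by-n Jacobian of the vector objective F at x.
    import numpy as np

    def backtrack_jacobian_only(J, x, d, sigma=1e-4, tau=0.5, t0=1.0,
                                max_backtracks=50):
        phi0 = np.max(J(x) @ d)       # worst-case directional derivative at x
        assert phi0 < 0, "d must be a descent direction for every component"
        t = t0
        for _ in range(max_backtracks):
            # accept t if the worst-case slope at the trial point is still
            # sufficiently negative (hypothetical Jacobian-only test)
            if np.max(J(x + t * d) @ d) <= sigma * phi0:
                return t
            t *= tau                  # backtrack
        return t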

---------------------------------------------------------------------------------------------------------------------------------------

Date: September 05

Speaker: Ademir Alves Aguiar (PhD student-IME/UFG)

Title: Accelerating the DC algorithm for smooth functions

Abstract: In this seminar, we will present the paper "Aragón Artacho, F. J.; Fleming, R. M. T.; Vuong, P. T.: Accelerating the DC algorithm for smooth functions. Math. Program. 169 (2018), 95-118". Our objective is to present an algorithm for the unconstrained minimization of the difference of two convex functions, called the Boosted Difference of Convex functions Algorithm (BDCA). In the traditional DC algorithm (DCA), each iteration consists in linearizing, at the current point, the convex function which is to be subtracted. BDCA accelerates the DCA by incorporating a line search, based on an Armijo-type rule, in the direction of the point generated by the DCA.
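
A minimal sketch of one BDCA iteration, following the description above: the DCA subproblem solver is left abstract (here a user-supplied map returning y with grad g(y) = grad h(x)), and the Armijo rule shown is a generic one, not necessarily the exact rule of the paper.

    # One BDCA-style iteration for f(x) = g(x) - h(x), with g, h convex.
    # grad_g_inv_solver(v) returns y solving grad g(y) = v, i.e. the DCA
    # subproblem; the subsequent line search moves beyond the DCA point.
    import numpy as np

    def bdca_step(x, grad_g_inv_solver, grad_h, f, alpha=1e-4, tau=0.5):
        y = grad_g_inv_solver(grad_h(x))  # classical DCA step
        d = y - x                         # BDCA search direction
        nd2 = np.linalg.norm(d) ** 2
        if nd2 == 0.0:
            return y                      # DCA fixed point: nothing to boost
        t, fy = 1.0, f(y)
        for _ in range(50):               # generic Armijo-type backtracking
            if f(y + t * d) <= fy - alpha * t * nd2:
                break
            t *= tau
        return y + t * d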

---------------------------------------------------------------------------------------------------------------------------------------

Date: September 19

Speaker: Fernando Lima (PhD student-IME/UFG)

Title: Variations of Newton's Method in Multiobjective Optimization

Abstract: In this seminar, we will present a variation of Newton's method in the multiobjective setting, in which we introduce a new way of computing the descent direction for the objective function. We follow the line of reasoning presented by Martínez in the scalar case, imposing an angle condition and a proportionality condition on the descent direction. We will show, in the multiobjective setting, that if a descent direction satisfies an angle-type condition and a proportionality condition, then the algorithm is globally convergent.
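
As a scalar-case illustration of the two conditions mentioned above (the multiobjective versions replace the gradient by Jacobian-based quantities), a candidate direction d could be screened as follows; theta and beta are illustrative constants, not the ones used in the talk.

    # Scalar-case analogue of the angle-type and proportionality conditions.
    import numpy as np

    def acceptable_direction(d, g, theta=1e-4, beta=1e-6):
        # angle-type condition: d stays uniformly away from orthogonality
        # with -g, i.e. <g, d> <= -theta ||g|| ||d||
        angle_ok = g @ d <= -theta * np.linalg.norm(g) * np.linalg.norm(d)
        # proportionality condition: ||d|| at least proportional to ||g||
        prop_ok = np.linalg.norm(d) >= beta * np.linalg.norm(g)
        return angle_ok and prop_ok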

---------------------------------------------------------------------------------------------------------------------------------------

Date: October 03

Speaker: Pedro Bonfim de Asunção Filho (PhD student-IME/UFG)

Title: Faster Rates for the Frank-Wolfe Method over Strongly-Convex Sets

Abstract: In this talk, we will present the paper "Dan Garber, Elad Hazan: Faster rates for the Frank-Wolfe method over strongly-convex sets. In International Conference on Machine Learning, pages 541-549, 2015, http://proceedings.mlr.press/v37/garbera15.pdf". The paper considers the special case of optimization over strongly convex sets, for which it is proved that the vanilla FW method converges at a rate of 1/t². This is a quadratic improvement over the general case, in which the convergence rate is of order 1/t and known to be tight. It is shown that various balls induced by ℓ_p norms, Schatten norms and group norms are strongly convex, while at the same time linear optimization over these sets is straightforward and admits a closed-form solution. It is further shown how several previous fast-rate results for the FW method follow easily from the presented analysis.
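
Below is a minimal sketch of the vanilla FW method over a Euclidean ball of radius r, one of the strongly convex sets for which the linear optimization step has a closed form; the 2/(t+2) step-size schedule is the standard one.

    # Vanilla Frank-Wolfe over the l2 ball {x : ||x|| <= r}, a strongly
    # convex set whose linear minimization oracle has a closed form.
    import numpy as np

    def frank_wolfe_l2_ball(grad_f, x0, r=1.0, iters=200):
        x = x0.copy()
        for t in range(iters):
            g = grad_f(x)
            gnorm = np.linalg.norm(g)
            if gnorm == 0.0:
                return x                    # stationary point
            s = -r * g / gnorm              # argmin_{||s|| <= r} <g, s>
            gamma = 2.0 / (t + 2)           # standard step-size schedule
            x = (1 - gamma) * x + gamma * s
        return x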

---------------------------------------------------------------------------------------------------------------------------------------

Date: October 10

Speaker: Ray Victor Guimarães Serra (PhD student-IME/UFG)

Title: A Variant of the Proximal Point Method for Vector Optimization

Abstract: In this seminar, we will present a variant of the proximal point method for vector optimization proposed in "Bonnel, H.; Iusem, A. N.; Svaiter, B. F.: Proximal methods in vector optimization. SIAM J. Optim. 15(4), 953-970 (2005)". We will show that the proposed method obtains weakly efficient points (with respect to a cone) of vector-valued maps defined on a Hilbert space.
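
For orientation, here is the scalar proximal point iteration in Python; the vector-valued method of the talk replaces the subproblem below by a weakly efficient solution of a cone-regularized problem, so this is only a rough analogue with illustrative names.

    # Scalar proximal point iteration:
    # x_{k+1} = argmin_y f(y) + (lam/2)||y - x_k||^2.
    import numpy as np
    from scipy.optimize import minimize

    def proximal_point(f, x0, lam=1.0, iters=50):
        x = np.asarray(x0, dtype=float)
        for _ in range(iters):
            sub = lambda y, xk=x: f(y) + 0.5 * lam * np.linalg.norm(y - xk) ** 2
            x = minimize(sub, x).x        # solve the regularized subproblem
        return x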

---------------------------------------------------------------------------------------------------------------------------------------

Date: October 17

Speaker: Tiago da Costa Menezes (PhD student-IME/UFG)

Title: Inexact Spectral Projected Gradient Methods on Convex Sets

Abstract: In this talk, we will present the paper "Birgin, E. G.; Martínez, J. M.; Raydan, M.: Inexact spectral projected gradient methods on convex sets. IMA Journal of Numerical Analysis 23(4) (2003), 539-559". Our main objective will be to present a generalization of the spectral projected gradient (SPG) method. The general model algorithm involves, at each iteration, the approximate minimization of a convex quadratic on the feasible set of the original problem, and global convergence is obtained by means of nonmonotone line searches.
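
A compact sketch of the (exact-projection) SPG iteration with a nonmonotone line search is given below; the inexact variants of the paper relax the projection oracle. The memory length M, sufficient-decrease constant sigma and safeguards are illustrative.

    # Spectral projected gradient (SPG) with a nonmonotone Armijo search.
    # proj is an exact projection onto the feasible convex set; the paper's
    # inexact variants only require it to be computed approximately.
    import numpy as np

    def spg(f, grad_f, proj, x0, iters=200, M=10, sigma=1e-4):
        x = proj(x0)
        g = grad_f(x)
        lam = 1.0                          # spectral step length
        hist = [f(x)]                      # recent values for nonmonotone test
        for _ in range(iters):
            d = proj(x - lam * g) - x      # projected spectral direction
            fmax, t = max(hist[-M:]), 1.0
            for _ in range(50):            # nonmonotone Armijo backtracking
                if f(x + t * d) <= fmax + sigma * t * (g @ d):
                    break
                t *= 0.5
            x_new = x + t * d
            g_new = grad_f(x_new)
            s, y = x_new - x, g_new - g
            lam = (s @ s) / (s @ y) if s @ y > 0 else 1.0  # Barzilai-Borwein step
            x, g = x_new, g_new
            hist.append(f(x))
        return x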

---------------------------------------------------------------------------------------------------------------------------------------

Date: October 31

Speaker: Ademir Alves Aguiar (PhD student-IME/UFG)

Title: Accelerated and Inexact Forward-Backward Algorithms

Abstract: In this seminar, we will present the paper "Villa, S.; Salzo, S.; Baldassarre, L.; Verri, A.: Accelerated and inexact forward-backward algorithms. SIAM J. Optim. 23(3) (2013), 1607-1633". Our objective is to present a convergence analysis of accelerated forward-backward splitting methods for composite function minimization when the proximity operator is not available in closed form. The analysis is based on the machinery of estimate sequences, first introduced by Nesterov for the study of accelerated gradient descent algorithms.
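
The sketch below shows the standard (FISTA-style) accelerated forward-backward iteration for min f(x) + g(x), with f smooth and grad f L-Lipschitz. Here prox_g is treated as a black box; in the setting of the paper it is only computed approximately by an inner solver.

    # FISTA-style accelerated forward-backward iteration (sketch).
    import numpy as np

    def accelerated_fb(grad_f, prox_g, L, x0, iters=200):
        x, y, t = x0.copy(), x0.copy(), 1.0
        for _ in range(iters):
            x_new = prox_g(y - grad_f(y) / L, 1.0 / L)     # forward-backward step
            t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
            y = x_new + ((t - 1.0) / t_new) * (x_new - x)  # Nesterov extrapolation
            x, t = x_new, t_new
        return x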

---------------------------------------------------------------------------------------------------------------------------------------

Date: November 07

Speaker: Pedro Bonfim de Asunção Filho (PhD student-IME/UFG)

Title: Gradient Projection and Conditional Gradient Methods for Constrained Nonconvex Minimization

Abstract: In this seminar, we will present the paper "Balashov, M.; Polyak, B.; Tremba, A.: Gradient projection and conditional gradient methods for constrained nonconvex minimization. https://arxiv.org/pdf/1906.11580.pdf".

---------------------------------------------------------------------------------------------------------------------------------------

Date: November 21

Speaker: Fernando Lima (PhD student-IME/UFG)

Title: Extended Newton methods for multiobjective optimization: majorizing function technique and convergence analysis

Abstract: We consider the extended Newton method for approaching a Pareto optimum of a multiobjective optimization problem, establish quadratic convergence criteria, and estimate the radius of the convergence ball under the assumption that the Hessians of the objective functions satisfy an L-average Lipschitz condition.

---------------------------------------------------------------------------------------------------------------------------------------

Date: November 28

Speaker: Flavio Pinto Vieira (PhD student-IME/UFG)

Title: A New Nonlinear Conjugate Gradient Algorithm for Vector Optimization

Abstract: We will present a nonlinear conjugate gradient algorithm with a line search that uses only Jacobian information, and not objective function values, i.e., it does not use a Wolfe-type search. We will show convergence results for different choices of beta: when infinitely many betas are zero; for the Fletcher-Reeves, Conjugate Descent (CD) and Dai-Yuan betas; and when the Polak-Ribière-Polyak and Hestenes-Stiefel betas are used. We will discuss preliminary numerical results. Our work is based on the paper by Yunda Dong, and we will compare our results with those of that work.
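
For reference, the classical scalar-case formulas for the CG parameter beta mentioned above are collected below; the vector optimization versions discussed in the talk replace gradients by Jacobian-based quantities.

    # Classical choices of the conjugate gradient parameter beta.
    # g_new, g_old: gradients at the new and previous iterates;
    # d: previous search direction.
    import numpy as np

    def beta_FR(g_new, g_old, d):   # Fletcher-Reeves
        return (g_new @ g_new) / (g_old @ g_old)

    def beta_CD(g_new, g_old, d):   # Conjugate Descent
        return (g_new @ g_new) / (-(d @ g_old))

    def beta_DY(g_new, g_old, d):   # Dai-Yuan
        return (g_new @ g_new) / (d @ (g_new - g_old))

    def beta_PRP(g_new, g_old, d):  # Polak-Ribiere-Polyak
        return (g_new @ (g_new - g_old)) / (g_old @ g_old)

    def beta_HS(g_new, g_old, d):   # Hestenes-Stiefel
        return (g_new @ (g_new - g_old)) / (d @ (g_new - g_old))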

---------------------------------------------------------------------------------------------------------------------------------------

Date: December 05

Speaker: Ray Victor Guimarães Serra (PhD student-IME/UFG)

Title: A Proximal Gradient Splitting Method for Solving Convex Vector Optimization Problems

Abstract: In this talk, we will present a proximal gradient splitting method for solving nondifferentiable vector optimization problems. The convergence analysis is carried out when the objective function is the sum of two convex functions, one of which is assumed to be continuously differentiable.

---------------------------------------------------------------------------------------------------------------------------------------

Date: December 12

Speaker: Tiago da Costa Menezes (PhD student-IME/UFG)

Title: Gauss-Newton methods with approximate projections for solving constrained nonlinear least squares problems

Abstract: This paper is concerned with algorithms for solving constrained nonlinear least squares problems. We first propose a local Gauss-Newton method with approximate projections for solving the aforementioned problems and study, by using a general majorant condition, its convergence results, including results on its rate. By combining the latter method and a nonmonotone line search strategy, we then propose a global algorithm and analyze its convergence results. Finally, some preliminary numerical experiments are reported in order to illustrate the advantages of the new schemes.
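
A schematic Gauss-Newton iteration with projections for min ||R(x)||^2 subject to x in C, in the spirit of the talk: here the projection is treated as exact, while the methods presented only require approximate projections; all names are illustrative.

    # Projected Gauss-Newton sketch for constrained nonlinear least squares.
    # R(x): residual vector; JR(x): its Jacobian; proj: projection onto C.
    import numpy as np

    def gauss_newton_proj(R, JR, proj, x0, iters=50):
        x = proj(np.asarray(x0, dtype=float))
        for _ in range(iters):
            r, J = R(x), JR(x)
            # Gauss-Newton step: least-squares solution of J d = -r
            d = np.linalg.lstsq(J, -r, rcond=None)[0]
            x = proj(x + d)               # map the trial point back to C
        return x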