Seminar 2017.1
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Organized by Jefferson D. G. Melo
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------
The seminars this semester will be held in the Lecture Room of IME/UFG, unless otherwise stated. All those interested are very welcome to attend.
A Modified Forward-Backward Splitting Method for Maximal Monotone Operators (Paul Tseng, 2000)
Speaker: Paulo Cesar (Ph.D. student-IME/UFG)
Abstract:
We will consider the forward-backward splitting method for finding a zero of the sum of two maximal monotone operators. It is known that this method converges when the inverse of the forward operator is strongly monotone.
We will propose a modification of this method, in the spirit of the extragradient method for monotone variational inequalities, which makes the method converge assuming only that the forward operator is (Lipschitz) continuous on some closed and convex subset of its domain.
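As a rough, self-contained illustration (not part of the talk), the modified step can be sketched in a few lines. The choice of A as the gradient of a quadratic, the l1 resolvent for B, and all constants below are illustrative assumptions, not taken from Tseng's paper:

```python
import numpy as np

def prox_l1(v, t):
    # resolvent of B = subdifferential of the l1-norm (soft-thresholding)
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

b = np.array([0.3, -2.0])

def A(x):
    # forward operator: gradient of 0.5*||x - b||^2, monotone and 1-Lipschitz
    return x - b

lam = 0.5                  # step size below 1/L, L the Lipschitz constant of A
x = np.array([5.0, -5.0])
for _ in range(200):
    y = prox_l1(x - lam * A(x), lam)   # classical forward-backward step
    x = y - lam * (A(y) - A(x))        # Tseng's extragradient-style correction
# x approximates the zero of A + B
```

For this choice, the zero of A + B is the minimizer of 0.5‖x − b‖² + ‖x‖₁, namely (0, −1).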
Date: 02/06/2017
Time: 14:00.
--------------------------------------------------------------------------------------------------
Iteration-complexity analysis of a generalized alternating direction method of multipliers
Speaker: Vando Antonio Adona (Ph.D. student-IME/UFG)
Abstract:
In this talk, we will analyze the iteration-complexity of a generalized alternating direction method of multipliers (G-ADMM) for solving linearly constrained convex problems. This ADMM variant, which was first proposed by Bertsekas and Eckstein, introduces a relaxation parameter into the second ADMM subproblem. We show that the G-ADMM is an instance of a hybrid proximal extragradient framework with some special properties.
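As a quick illustration of where the relaxation parameter enters (a toy consensus problem with made-up data, not an example from the talk), the G-ADMM iteration can be sketched as:

```python
import numpy as np

# toy problem: min 0.5*||x - a||^2 + 0.5*||z - b||^2  subject to  x = z
a, b = np.array([1.0, 3.0]), np.array([5.0, -1.0])
rho, alpha = 1.0, 1.5          # penalty and relaxation parameter, alpha in (0, 2)
x = z = u = np.zeros(2)
for _ in range(100):
    x = (a + rho * (z - u)) / (1 + rho)      # first ADMM subproblem
    x_hat = alpha * x + (1 - alpha) * z      # Eckstein-Bertsekas relaxation step
    z = (b + rho * (x_hat + u)) / (1 + rho)  # second ADMM subproblem
    u = u + x_hat - z                        # scaled dual update
# x and z agree at the consensus solution (a + b) / 2
```

Setting alpha = 1 recovers the standard ADMM; values in (1, 2) over-relax the first block's output before it enters the second subproblem.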
Date: 26/05/2017
Time: 14:00.
--------------------------------------------------------------------------------------------------
Subgradient Algorithm on Riemannian manifolds with lower bounded curvature
Speaker: Mauricio Louzeiro (Ph.D. student-IME/UFG)
Abstract: In this seminar, we will present the proof of a convergence result for the subgradient method on complete Riemannian manifolds with sectional curvature bounded from below. This result generalizes the one in the reference
FERREIRA, O. P.; OLIVEIRA, P. R. Subgradient Algorithm on Riemannian Manifolds. Journal of Optimization Theory and Applications, Vol. 97, pp. 93–104 (1998).
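As an informal illustration of the method's ingredients (not from the talk), consider the unit sphere, whose exponential map is explicit. The objective below is smooth, so its subgradient is just the gradient; the function, step size, and starting point are illustrative choices:

```python
import numpy as np

def exp_sphere(x, v):
    # exponential map of the unit sphere: geodesic from x with initial velocity v
    n = np.linalg.norm(v)
    if n < 1e-15:
        return x
    return np.cos(n) * x + np.sin(n) * (v / n)

p = np.array([0.0, 0.0, 1.0])   # f(x) = -<p, x>, minimized over the sphere at x = p
x = np.array([1.0, 0.0, 0.0])
for _ in range(300):
    egrad = -p                              # Euclidean gradient of f
    rgrad = egrad - np.dot(egrad, x) * x    # project onto the tangent space at x
    x = exp_sphere(x, -0.5 * rgrad)         # geodesic (sub)gradient step
# x approaches the minimizer p while staying on the sphere
```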
Date: 12/05/2017
Time: 14:00.
--------------------------------------------------------------------------------------------------
A Trust Region Algorithm with Trust Region Radius Converging to Zero
Speaker: Yuri Rafael (Ph.D. student-IME/UFG)
Abstract: In this talk, based on a work of Jin-Yan Fan and Ya-Xiang Yuan, we consider a trust region algorithm whose trust region radius converges to zero. We show that the new algorithm preserves the global convergence of traditional trust region algorithms, and that superlinear convergence is also obtained under certain conditions.
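A rough sketch of the idea (not the authors' exact algorithm): below, the radius is taken proportional to the gradient norm, so it shrinks to zero as the iterates approach a stationary point. The quadratic test function, the update rules for the scaling factor c, and all thresholds are illustrative assumptions:

```python
import numpy as np

def f(x):                 # illustrative quadratic test function
    return 0.5 * (x[0] - 1.0) ** 2 + 2.0 * (x[1] + 0.5) ** 2

def grad(x):
    return np.array([x[0] - 1.0, 4.0 * (x[1] + 0.5)])

H = np.array([[1.0, 0.0], [0.0, 4.0]])   # Hessian of f

x, c = np.array([3.0, 2.0]), 1.0
for _ in range(50):
    g = grad(x)
    if np.linalg.norm(g) < 1e-10:
        break                             # stationary point reached
    radius = c * np.linalg.norm(g)        # radius ~ ||g||: converges to zero
    p = np.linalg.solve(H, -g)            # minimizer of the quadratic model
    if np.linalg.norm(p) > radius:
        p = radius * p / np.linalg.norm(p)   # enforce the trust region constraint
    pred = -(g @ p + 0.5 * p @ H @ p)     # predicted reduction
    ared = f(x) - f(x + p)                # actual reduction
    if ared >= 0.25 * pred:
        x = x + p                         # accept the step, enlarge c
        c = min(2.0 * c, 4.0)
    else:
        c *= 0.25                         # reject the step, shrink c
```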
Date: 05/05/2017
Time: 14:00.
------------------------------------------------------------------------------------------------------
Two algorithms for solving systems of inclusion problems
Speaker: Reinier Diaz Millan (Professor, Instituto Federal de Educação, Ciência e Tecnologia de Goiás - IFG)
Abstract: The goal of this work is to present two algorithms for solving systems of inclusion problems, in which each component of the system is the sum of two maximal monotone operators. The algorithms are variants of the forward-backward splitting method, one of them a hybrid with the alternating projection method. They approximate the solution sets involved in the problem by separating halfspaces, a well-studied strategy. Each scheme consists of two parts: the first is an explicit Armijo-type search in the spirit of extragradient-like methods for variational inequalities; the second is the projection step, which is the main difference between the algorithms. While the first algorithm projects onto the intersection of the separating halfspaces, the second chooses one component of the system and projects onto the corresponding separating halfspace. In the iterative process, the forward-backward operator is computed once per inclusion problem, a relevant computational saving compared with similar algorithms in the literature. The convergence analysis of the proposed methods assumes monotonicity of all operators, without any Lipschitz continuity assumption. We also present some numerical experiments.
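The projection onto a separating halfspace, the key building block of both schemes, has a closed form. A minimal sketch with made-up data:

```python
import numpy as np

def proj_halfspace(x, a, beta):
    # projection onto the halfspace H = {z : <a, z> <= beta}
    viol = np.dot(a, x) - beta
    if viol <= 0:
        return x.copy()                    # x already lies in H
    return x - (viol / np.dot(a, a)) * a   # move along the normal a

x = np.array([2.0, 3.0])
p = proj_halfspace(x, np.array([1.0, 0.0]), 1.0)   # project onto {z : z1 <= 1}
```

Points inside the halfspace are left unchanged, so the cost per projection is a single inner product; this is what makes halfspace-based schemes cheap compared with projecting onto the original solution sets.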
Date: 07/04/2017
Time: 14:00.
--------------------------------------------------------------------------------------------------------
Proximal Alternating Direction Method of Multipliers with over-relaxation parameter
Speaker: Vando Antonio Adona (Ph.D. student-IME/UFG)
Abstract: In this talk, we will discuss how an over-relaxation parameter in the proximal alternating direction method of multipliers depends on the proximal term added to the second subproblem. Ergodic convergence rate results for this method will be analyzed.
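To make the interplay concrete (a toy scalar example, not from the talk), the sketch below adds a proximal term with weight tau to the second subproblem and over-relaxes with parameter alpha; both values are illustrative choices, not the admissible ranges established in the references:

```python
# toy problem: min 0.5*(x - a)**2 + 0.5*(z - b)**2  subject to  x = z
a, b = 2.0, 4.0
rho, tau, alpha = 1.0, 0.5, 1.2   # penalty, proximal weight, over-relaxation
x = z = u = 0.0
for _ in range(200):
    x = (a + rho * (z - u)) / (1 + rho)      # first subproblem
    x_hat = alpha * x + (1 - alpha) * z      # over-relaxation step
    # second subproblem with the proximal term (tau/2)*(z - z_old)**2
    z = (b + rho * (x_hat + u) + tau * z) / (1 + rho + tau)
    u = u + x_hat - z                        # scaled dual update
# x = z = (a + b) / 2 at the solution
```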
Date: 31/03/2017
Time: 14:00.
References:
He, B.; Ma, F.
Convergence Study on the Proximal Alternating Direction Method with Larger Step Size.
arXiv preprint, Feb. 2017.
Gonçalves, M.L.N.; Melo, J.G.; Monteiro, R.D.C.
Extending the ergodic convergence rate of the proximal ADMM.
arXiv preprint, Nov. 2016.
---------------------------------------------------------------------------------------------------------
Speaker: Prof. Dr. Luis R. Lucambio Perez (IME/UFG)
Joint work with Leandro F. Prudente
Abstract:
In this work, we propose nonlinear conjugate gradient methods for finding critical points of vector-valued functions with respect to the partial order induced by a closed, convex and pointed cone with nonempty interior. The objective functions are not assumed to be convex. The Wolfe and Zoutendijk conditions are extended to vector-valued optimization. In particular, we show that there exist intervals of step sizes satisfying Wolfe-type conditions. The convergence analysis covers the vector extensions of the Fletcher-Reeves, Conjugate Descent, Dai-Yuan, Polak-Ribière-Polyak and Hestenes-Stiefel parameters, which recover the classical ones in the scalar minimization case. Under inexact line search and without regular restarts, we prove that the sequences generated by the proposed methods are globally convergent.
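For orientation, the scalar specialization that the talk generalizes can be sketched as follows. For brevity, the Wolfe line search is replaced by an exact line search, which is available in closed form for a quadratic; the matrix Q and vector b are illustrative data:

```python
import numpy as np

# scalar (single-objective) case on the quadratic f(x) = 0.5*x'Qx - b'x
Q = np.array([[3.0, 1.0], [1.0, 2.0]])   # symmetric positive definite
b = np.array([1.0, 1.0])
grad = lambda x: Q @ x - b

x = np.zeros(2)
g = grad(x)
d = -g                                    # first direction: steepest descent
for _ in range(10):
    alpha = -(g @ d) / (d @ Q @ d)        # exact line search along d
    x = x + alpha * d
    g_new = grad(x)
    if np.linalg.norm(g_new) < 1e-12:
        break
    beta = (g_new @ g_new) / (g @ g)      # Fletcher-Reeves parameter
    d = -g_new + beta * d                 # conjugate direction update
    g = g_new
```

On a quadratic with exact line search, this reduces to linear CG and terminates in at most n iterations; the talk's vector-valued setting replaces the scalar inner products defining alpha and beta with their cone-ordered counterparts.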
Date: 24/03/2017
-------------------------------------------------------------------------------------------------------------
An inexact Newton-like conditional gradient method for constrained nonlinear systems
Speaker: Fabrícia Rodrigues de Oliveira (Ph.D. student-IME/UFG)
Joint Work with Max L.N. Gonçalves
Abstract: In this talk, we will present an algorithm based on a combination of inexact Newton-like and conditional gradient methods, and discuss its local convergence analysis.
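As background only (this is not the authors' combined scheme), the conditional gradient building block solves a linear subproblem over the constraint set instead of computing a projection. A minimal sketch on a toy box-constrained quadratic, with all data made up:

```python
import numpy as np

def lmo_box(g, lo, hi):
    # linear minimization oracle over a box: argmin of <g, v> for v in [lo, hi]
    return np.where(g > 0, lo, hi)

# minimize phi(z) = 0.5*||z - c||^2 over the box [0, 1]^2
c = np.array([2.0, 0.3])
lo, hi = np.zeros(2), np.ones(2)
z = np.zeros(2)
for k in range(2000):
    g = z - c                      # gradient of phi
    v = lmo_box(g, lo, hi)         # vertex returned by the linear subproblem
    gamma = 2.0 / (k + 2)          # standard diminishing step size
    z = (1 - gamma) * z + gamma * v
# z approaches the projection of c onto the box, here (1, 0.3)
```

The appeal for constrained nonlinear systems is that each inner iteration needs only this cheap linear oracle, never a projection onto the constraint set.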