ProfOptimization2016

Seminar 2015.2

------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Organized by  Leandro da Fonseca Prudente
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

14/08/2015, 16:00-17:00
José Yunier Bello Cruz (Professor, IME/UFG)
On the complexity of the proximal gradient method for convex minimization
Abstract: In this talk we present the convergence and complexity analysis of the iterates of the proximal gradient method with a linesearch. When the stepsizes generated by the linesearch are bounded below by a positive number, our analysis shows that the error from the cost value at the $k$-th iteration to the optimal value is $\mathcal{O}(k^{-1})$ in Hilbert spaces and $o(k^{-1})$ in finite dimensions, which improves the complexity of the first-order algorithms presented in the literature. It is worth emphasizing that global Lipschitz continuity of the gradient of $f$ is sufficient but not necessary for the stepsizes to be bounded below as above. Moreover, we show that if the gradient of $f$ is locally Lipschitz, then the stepsizes generated by the linesearch are bounded below by a positive number. Furthermore, we answer the main question here, "Can we have the complexity $o(k^{-1})$ when $\liminf_{k\to \infty}\alpha_k=0$?", with an example.
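For orientation, the forward-backward iteration discussed in the talk can be sketched as follows for $\min f(x)+g(x)$, with $f$ smooth and $g$ prox-friendly. This is a minimal illustrative sketch: the sufficient-decrease rule and the constants below are generic assumptions, not the exact linesearch analyzed in the talk.

```python
import numpy as np

def prox_grad(f, grad_f, prox_g, x0, alpha0=1.0, beta=0.5,
              max_iter=500, tol=1e-8):
    """Proximal gradient method with a backtracking linesearch for
    min f(x) + g(x).  Sketch only: generic sufficient-decrease test."""
    x = x0.astype(float)
    for _ in range(max_iter):
        g = grad_f(x)
        alpha = alpha0
        while True:
            z = prox_g(x - alpha * g, alpha)   # forward-backward step
            d = z - x
            # backtrack until the quadratic upper model of f holds at z
            if f(z) <= f(x) + g @ d + (d @ d) / (2 * alpha):
                break
            alpha *= beta
        if np.linalg.norm(d) < tol:
            return z
        x = z
    return x

# Example (hypothetical data): min 0.5*||Ax - b||^2 + lam*||x||_1,
# with the soft-thresholding operator as the prox of the l1 term
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
lam = 0.1
f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
grad_f = lambda x: A.T @ (A @ x - b)
prox_g = lambda v, a: np.sign(v) * np.maximum(np.abs(v) - a * lam, 0.0)
x = prox_grad(f, grad_f, prox_g, np.zeros(2))
```

Here the stepsizes accepted by the backtracking are bounded below because the quadratic $f$ has a (globally) Lipschitz gradient, matching the sufficient condition the abstract mentions.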

21/08/2015, 16:00-17:00
Gilson do Nascimento Silva  (PhD Student, IME/UFG)
Inexact Newton's method for nonlinear functions with values in a cone
Abstract: The problem of finding a solution of nonlinear inclusion problems in Banach spaces is considered. Using convex optimization techniques introduced by Robinson (Numer. Math., Vol. 19, 1972, pp. 341-347), a robust convergence theorem for inexact Newton's method is proved. As an application, affine invariant versions of Kantorovich's theorem and Smale's $\alpha$-theorem for inexact Newton's method are obtained.

28/08/2015, 10:00-11:00 (canceled)
Chong Li (Professor, Zhejiang University, China)
The linearized proximal algorithm for convex composite optimization with applications
Abstract: We propose a linearized proximal algorithm (LPA) to solve a convex composite optimization problem. Each iteration of the LPA is a proximal minimization on the composition of the outer function and the linearization of the inner function at the current iterate. The LPA has the attractive computational advantage that the solution of each subproblem is a singleton, which avoids the difficulty of finding the whole solution set of the subproblem, as in the Gauss-Newton method (GNM), while it still maintains the same local convergence rate as the GNM. Under the assumptions of local weak sharp minima of order $p$ ($p \in [1,2]$) and the quasi-regularity condition, we establish the local superlinear convergence rate for the LPA. We also propose a globalization strategy for the LPA based on backtracking line-search, and an inexact LPA, together with their superlinear convergence results. We further apply the LPA to solve a feasibility problem, as well as a sensor network localization problem. Our numerical results illustrate that the LPA meets the demand for a robust and efficient algorithm for the sensor network localization problem.

04/09/2015, 16:00-17:00
Yuri Rafael Leite Pereira  (PhD Student, IME/UFG)
Proximal Point Method for Vector Optimization on Hadamard Manifolds
Abstract: In this talk, we extend the proximal point algorithm in vector optimization to the context of Riemannian manifolds, using an iterative process based on a variable nonlinear scalarization function. We then show that any sequence generated by this new algorithm reaches a weakly efficient point after a finite number of iterations, under the assumption that the weakly efficient point set is weak sharp for the vector problem.

11/09/2015, 16:00-17:00
Edvaldo E. A. Batista  (PhD Student, IME/UFG)
Korpelevich's method for variational inequality problems on Hadamard manifolds - Part 1
Abstract: A variant of Korpelevich's method for solving the variational inequality problem is extended from Euclidean spaces to constant curvature Hadamard manifolds. Under a pseudomonotone assumption on the underlying vector field, we prove that the sequence generated by the method converges to a solution of the variational inequality, whenever one exists.
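For reference, the Euclidean prototype of Korpelevich's method (a prediction step followed by a correction step, both projected onto the feasible set) can be sketched as below; the manifold version of the talk replaces straight-line steps by geodesics. The operator, feasible set, and fixed stepsize in the example are illustrative assumptions.

```python
import numpy as np

def extragradient(F_op, proj_C, x0, step=0.1, max_iter=5000, tol=1e-8):
    """Korpelevich's extragradient method for VI(F, C) in R^n:
    find x* in C with <F(x*), y - x*> >= 0 for all y in C.
    Euclidean sketch with a fixed (assumed small enough) stepsize."""
    x = x0.astype(float)
    for _ in range(max_iter):
        y = proj_C(x - step * F_op(x))       # prediction step
        x_new = proj_C(x - step * F_op(y))   # correction step
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example (hypothetical): F is a 90-degree rotation field, which is monotone
# with unique solution x* = 0, and C is the box [-1, 1]^2
F_op = lambda x: np.array([x[1], -x[0]])
proj_C = lambda x: np.clip(x, -1.0, 1.0)
x_star = extragradient(F_op, proj_C, np.array([1.0, 0.5]))
```

This rotation field is a standard case where the plain projected-gradient step diverges but the extragradient correction restores convergence, which is why the method uses two operator evaluations per iteration.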

18/09/2015, 16:00-17:00
Maurício Silva Louzeiro  (Master's student, IME/UFG)
Newton's method for finding zeros of a special class of semismooth functions
Abstract: In this seminar I will use Newton's method to solve some special semismooth equations. First, we consider the problem of minimizing a convex function over a simplicial cone, a problem which is equivalent to solving a certain semismooth equation; in particular, we examine the case in which this function is quadratic. Finally, some piecewise linear equations are studied.
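As a rough illustration of the quadratic case, here is a semismooth Newton sketch for the nonnegative orthant (the talk treats general simplicial cones). Minimizing $\frac12 x^TQx + c^Tx$ over $x \ge 0$ is equivalent to the semismooth equation $F(x)=\min(x, Qx+c)=0$, solved by Newton steps using an element of the generalized Jacobian; the reformulation via the componentwise min-function is one standard choice, assumed here for concreteness.

```python
import numpy as np

def semismooth_newton(Q, c, x0, max_iter=50, tol=1e-10):
    """Semismooth Newton for F(x) = min(x, Qx + c) = 0, equivalent to
    minimizing 0.5 x'Qx + c'x over x >= 0 (Q symmetric positive definite).
    Sketch for the nonnegative orthant only."""
    x = x0.astype(float)
    n = len(x)
    for _ in range(max_iter):
        w = Q @ x + c
        F = np.minimum(x, w)
        if np.linalg.norm(F) < tol:
            return x
        # one element of the generalized Jacobian of F at x:
        # the identity row where x_i <= w_i, the i-th row of Q otherwise
        J = np.where((x <= w)[:, None], np.eye(n), Q)
        x = x - np.linalg.solve(J, F)
    return x

# Example (hypothetical data): minimizer of 0.5 x'Qx + c'x over x >= 0
# is (1, 0): the first constraint is inactive, the second is active
Q = np.array([[2.0, 0.0], [0.0, 2.0]])
c = np.array([-2.0, 1.0])
x_sol = semismooth_newton(Q, c, np.zeros(2))
```

Since $F$ is piecewise linear here, the method identifies the correct active pieces and terminates in finitely many steps on this example.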

25/09/2015, 16:00-17:00
Valdinês Leite de Sousa Júnior  (PhD Student, IME/UFG)
Dual Descent Methods and Inexact Proximal Algorithms
Abstract: In this seminar we present an abstract method in quasi-metric spaces. We analyze its convergence and establish the convergence rate of the method.

02/10/2015, 16:00-17:00
Paulo César da Silva Júnior  (PhD Student, IME/UFG)
A fast multistep proximal forward-backward method with Linesearch 1
Abstract: We present Method 2, a faster version of the proximal forward-backward method with Linesearch 1, improving the convergence result for Method 1. Here we modify the method by adding a linesearch and an extra projection step.

23/10/2015, 16:00-17:00
Edvaldo E. A. Batista  (PhD Student, IME/UFG)
An extragradient-type algorithm for non-smooth variational inequalities
Abstract: We study extragradient-type methods to solve variational inequality problems involving maximal monotone point-to-set operators. We will show, using an example, that in this case, to achieve convergence of the sequences generated by extragradient methods to the solution set, it is necessary, in the search for the auxiliary point and its suitable image under the involved operator, to enlarge the set of possible search directions. Such an enlargement is known, and we use it to overcome this failure of the extragradient methods.

06/11/2015, 16:00-17:00
Yuri Rafael Leite Pereira  (PhD Student, IME/UFG)
Trust-Region Method on Riemannian Manifolds
Abstract: In this talk, we give an introduction to the trust-region method, extend this concept to the Riemannian manifold setting, and present the convergence analysis.

13/11/2015, 16:00-17:00
Aymee Marrero Severo (Professor, Universidad de La Habana, Cuba)
A semifuzzy genetic algorithm, with an application to an epidemiological model in Cuba
Abstract: We present an alternative for handling parameter uncertainty in the parameter estimation problem for a dynamical system defined by ordinary differential equations (ODEs), as a variation of the fuzzy genetic algorithm. We propose a fuzzy representation of the individuals in the population, together with fuzzy crossover and mutation operators, improving the population diversity of the GA so that premature convergence problems are overcome, with a less complicated fuzzy inference system to handle. The approach is applied to an epidemiological model.

20/11/2015, 16:00-17:00
Ábssan Matuzinhos de Moura  (Master's student, IME/UFG)
The Barzilai-Borwein variant of the gradient method
Abstract: For a strictly convex quadratic function f: Rn --> R with unique minimizer x*, we show that the gradient method with the stepsize given by the Barzilai-Borwein variant converges to x*. Recall that in the two-dimensional case convergence has been proved and, moreover, with an R-superlinear rate.
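The iteration in question can be sketched as follows for the quadratic case $f(x)=\frac12 x^TAx - b^Tx$ with $A$ symmetric positive definite; the rule below is one standard BB stepsize ($\alpha_k = s^Ts / s^Ty$), and the choice of first stepsize is a hypothetical safeguard, not part of the talk.

```python
import numpy as np

def bb_gradient(A, b, x0, max_iter=500, tol=1e-10):
    """Gradient method with the Barzilai-Borwein stepsize for the strictly
    convex quadratic f(x) = 0.5 x'Ax - b'x (A symmetric positive definite).
    Sketch of the classical BB1 rule."""
    x = x0.astype(float)
    g = A @ x - b                       # gradient of f at x
    alpha = 1.0 / np.linalg.norm(g)     # first stepsize: assumed safeguard
    for _ in range(max_iter):
        x_new = x - alpha * g
        g_new = A @ x_new - b
        if np.linalg.norm(g_new) < tol:
            return x_new
        s, y = x_new - x, g_new - g     # step and gradient differences
        alpha = (s @ s) / (s @ y)       # BB1 stepsize
        x, g = x_new, g_new
    return x

# Example (hypothetical data): minimizer of f is A^{-1} b = (1, 0.1)
A = np.diag([1.0, 10.0])
b = np.ones(2)
x_bb = bb_gradient(A, b, np.zeros(2))
```

Note that the BB stepsize makes the function values non-monotone along the iterates, which is precisely what complicates the convergence analysis beyond the two-dimensional case.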

27/11/2015, 16:00-17:00
Gilson do Nascimento Silva  (PhD Student, IME/UFG)
Local Newton's method for generalized equations under a majorant condition
Abstract: Newton's method under the majorant condition for solving generalized equations will be presented. Examples are given to emphasize the application of this method to generalized equations representing the nonlinear programming problem and the nonlinear complementarity problem.