Seminar 2013.1
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Organized by Max Leandro Nobre Gonçalves
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
08/04/2013, 15:45 - 16:45
Leandro F. Prudente IME/UFG
On convergence to infeasible points in Augmented Lagrangian methods
Abstract: Practical Nonlinear Programming algorithms may converge to infeasible points, even when feasible points exist. It is sensible to detect this situation as quickly as possible, in order to have time to change initial approximations and parameters, with the aim of obtaining convergence to acceptable solutions in further runs. Sometimes, on the other hand, the feasible set of an optimization problem that one aims to solve is empty. In this case, two properties of the algorithm are desirable: it should converge to a minimizer of some infeasibility measure, and it should find, among the points with minimal infeasibility, one at which some optimality condition with respect to the objective function holds. Ideally, the algorithm should converge to a minimizer of the objective function subject to minimal infeasibility. We present an Augmented Lagrangian algorithm that, with minor modifications, may achieve the goals of both cases. In the first one, the probability of quick detection of asymptotic infeasibility is enhanced and, in the second one, the method satisfies, as much as possible, the two desirable properties.
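For orientation, the Powell-Hestenes-Rockafellar (PHR) augmented Lagrangian that underlies this class of methods (the general form, not necessarily the speaker's exact variant) is, for constraints $h(x)=0$ and $g(x)\leq 0$,
$$L_{\rho}(x,\lambda,\mu)=f(x)+\frac{\rho}{2}\left[\sum_{i}\Bigl(h_i(x)+\frac{\lambda_i}{\rho}\Bigr)^{2}+\sum_{j}\max\Bigl\{0,\;g_j(x)+\frac{\mu_j}{\rho}\Bigr\}^{2}\right],$$
with multiplier updates $\lambda_i\leftarrow\lambda_i+\rho\,h_i(x)$ and $\mu_j\leftarrow\max\{0,\mu_j+\rho\,g_j(x)\}$; the infeasibility measure relevant to the discussion above is $\|h(x)\|^{2}+\|\max\{0,g(x)\}\|^{2}$.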
17/04/2013, 15:45 - 16:45
Leandro F. Prudente IME/UFG
An augmented Lagrangian method with finite termination for global optimization
Abstract: We will present a new algorithm based on the Powell-Hestenes-Rockafellar Augmented Lagrangian approach for constrained global optimization. Possible infeasibility is detected in finite time. Furthermore, we will introduce a practical stopping criterion that guarantees that, at the approximate solution provided by the algorithm, feasibility holds up to some prescribed tolerance and the objective function value is optimal up to tolerance. In a first version of the algorithm, each subproblem is solved with a precision $\varepsilon_k$ that tends to zero. An adaptive modification, in which the optimality tolerances of the subproblems depend on the current feasibility and complementarity, will also be given. The adaptive algorithm allows the augmented Lagrangian subproblems to be solved without requiring unnecessarily high precision in the intermediate steps of the method, which improves the overall efficiency.
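As an illustrative sketch (the precise rule is the speaker's), the basic scheme asks each subproblem solution $x^k$ to satisfy
$$\bigl\|\nabla_x L_{\rho_k}(x^k,\lambda^k,\mu^k)\bigr\|\le\varepsilon_k,\qquad \varepsilon_k\to 0,$$
whereas an adaptive rule of the kind described lets $\varepsilon_k$ be proportional to measures of infeasibility and complementarity at the previous iterate, so that high accuracy is only demanded once the iterates are nearly feasible and nearly complementary.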
24/04/2013, 15:45 - 16:45
Ole Peter Smith, IME/UFG
Combinatorial Optimization Applied to Class Assignment
Abstract: This seminar is an RFC (Request For Comments) of the optimization group. In practical terms, we need to apply an optimization algorithm to the assignment of (for example) classes, first of all to the class assignment at IME itself. Given a set of courses with time slots, and professors with workload limits and affinities for the given courses and time slots, find an optimal assignment. We will discuss the formulation of the model and possible paths for its implementation; a sketch of one possible formulation follows.
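One possible formulation, consistent with the description above but not necessarily the model the speaker intends, uses binary variables $x_{pd}=1$ when professor $p$ is assigned course $d$:
$$\max\ \sum_{p,d} a_{pd}\,x_{pd}\quad\text{s.t.}\quad \sum_{p}x_{pd}=1\ \ \forall d,\qquad \sum_{d}c_{d}\,x_{pd}\le C_{p}\ \ \forall p,\qquad x_{pd}\in\{0,1\},$$
where $a_{pd}$ is the affinity of professor $p$ for course $d$, $c_{d}$ is the weekly load of course $d$, and $C_{p}$ is the workload limit of professor $p$; additional constraints of the form $x_{pd}+x_{pd'}\le 1$ forbid assigning the same professor two courses $d,d'$ with conflicting time slots.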
08/05/2013, 15:45 - 16:45
Reinier Diaz Millan, PhD student, IME/UFG
A variant of Forward-Backward splitting method for the sum of two monotone operators with a new search strategy
Abstract: We propose variants of the Forward-Backward splitting method for finding a zero of the sum of two operators. A classical modification of the Forward-Backward method was proposed by Tseng, which is known to converge when the forward and the backward operators are monotone and under Lipschitz continuity of the backward operator. The conceptual algorithm proposed here improves Tseng's method in some instances. The first and main part of our approach contains an explicit Armijo-type search in the same spirit as the extragradient-like methods for variational inequalities. During the iteration process the search performs only one evaluation of the forward-backward operator per trial step, which yields a considerable saving when the forward-backward operator is computationally expensive. The second part of the scheme consists of special projection steps. The convergence analysis of the proposed scheme is given assuming monotonicity of both operators, without a Lipschitz continuity assumption on the backward operator.
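For orientation, the classical forward-backward step for finding $x$ with $0\in A(x)+B(x)$, where the single-valued operator is evaluated explicitly (forward step) and the set-valued operator enters through its resolvent (backward step), is
$$x_{k+1}=J_{\alpha_k B}\bigl(x_k-\alpha_k A(x_k)\bigr),\qquad J_{\alpha B}:=(I+\alpha B)^{-1};$$
in the variant described above, the step size $\alpha_k$ is not tied to a Lipschitz constant but is found by an Armijo-type backtracking, each trial requiring a single new evaluation of the forward-backward operator.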
15/05/2013, 15:45 - 16:45
Luis Roman Lucambio Pérez (Professor, IME/UFG)
Subgradient-like algorithms for variable inequalities
Abstract: This is joint work by J. Y. Bello Cruz, G. Bouza Allende, and L. R. Lucambio Pérez. We define the problem of finding points that satisfy $f(x)\leq_{K}0$, where $f\colon\mathbb{R}^m\rightarrow\mathbb{R}^n$ and $K$ is a point-to-set map from $\mathbb{R}^n$ into subsets of $\mathbb{R}^n$ such that each $K(x)$ is a cone. We study the concepts of convexity and subgradient in this setting and propose a family of three algorithms to solve the problem, showing under which conditions these algorithms converge to solutions.
22/05/2013, 15:45 - 16:45
Jorge Barrios Ginart (Professor at the Universidad de La Habana, Cuba)
A Differential Inclusion Approach for Modeling and Analysis of Dynamical Systems under Uncertainty
Abstract: In this work we deal with the application of differential inclusions to the modeling of nonlinear dynamical systems under parameter uncertainty. In this setting, differential inclusions seem better suited to modeling practical situations involving uncertainty and imprecision than formulations by means of fuzzy differential equations. We develop a practical algorithm to approximate the reachable sets of a class of nonlinear differential inclusions, which avoids the computational problems of a previous set-valued version of Heun's method. Our algorithm is based on a complete discretization (in time and state space) of the differential inclusion and is suited to hardware limitations, keeping the memory used by the method under control throughout the iterations. As a case study, two epidemiological models are analyzed.
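As a generic sketch (not the specific scheme of the talk), a parametric ODE with uncertain parameters $\theta\in\Theta$ gives rise to the differential inclusion and reachable sets
$$\dot{x}(t)\in F(t,x(t)):=\{f(t,x(t),\theta):\theta\in\Theta\},\qquad R(t):=\{x(t):\ x(\cdot)\ \text{solves the inclusion},\ x(0)\in X_0\},$$
and a first-order set-valued discretization approximates $R(t_{j+1})$ by $\bigcup_{x\in R_j}\{x+h\,v:\ v\in F(t_j,x)\}$, projected onto a grid of the state space; the Heun-type scheme mentioned above is a second-order analogue of this update.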
29/05/2013, 15:45 - 16:45
Aymee Marrero Severo (Professor at the Universidad de La Habana, Cuba)
Alternative solutions to the parameter estimation problem in models described by ordinary differential equations
Motivation: Dissatisfaction with the deterministic treatment of parameter estimation when parameters are defined over intervals and carry certain levels of uncertainty and/or imprecision, and the need for reliable results and better simulations in the application of epidemiological and, more generally, population-dynamics models, especially when the available information is possibilistic, inexact, or incomplete.
Abstract: This work presents some alternative approaches to the parameter estimation problem in models defined by ODEs, including:
optimization techniques based on interval analysis and improvements to the proposed prototype algorithm;
a preliminary approach to modeling the biomathematical problem using fuzzy logic with the Mamdani method. A sketch of the baseline deterministic formulation that these approaches relax is given below.
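As context (a standard formulation, not taken from the talk), the baseline deterministic estimation problem is the least-squares fit
$$\min_{\theta\in\Theta}\ \sum_{i=1}^{N}\bigl\|x(t_i;\theta)-y_i\bigr\|^{2}\quad\text{subject to}\quad \dot{x}(t)=f(t,x(t),\theta),\ \ x(t_0)=x_0,$$
where the $y_i$ are observations at times $t_i$; in the interval and fuzzy approaches the data $y_i$ and/or the parameters $\theta$ are treated as intervals or fuzzy numbers rather than crisp values.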
05/06/2013, 15:45 - 16:45
José Yunier Bello Cruz (Professor IME/UFG)
On weak convergence of the projected gradient method for convex optimization in Hilbert spaces
Abstract: We prove that the sequence generated by the projected gradient method converges weakly to a solution of the convex minimization problem in Hilbert spaces. Using the classical Armijo search, weak convergence is established assuming only that the objective function is convex and Gâteaux differentiable, without any Lipschitz continuity assumption.
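For reference, the projected gradient iteration for minimizing $f$ over a closed convex set $C$ is
$$x_{k+1}=P_C\bigl(x_k-\alpha_k\nabla f(x_k)\bigr),$$
where $P_C$ denotes the metric projection onto $C$; in one common form of the Armijo search (the talk may use a slightly different variant), $\alpha_k$ is found by backtracking, starting from a fixed $\alpha>0$ and halving it until $f(x_{k+1})\le f(x_k)+\sigma\,\langle\nabla f(x_k),\,x_{k+1}-x_k\rangle$ for a fixed $\sigma\in(0,1)$.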
12/06/2013, 15:45 - 16:45
Orizon Pereira Ferreira (Professor IME/UFG)
A robust Kantorovich's theorem on inexact Newton method with relative residual error tolerance
Abstract: We prove that, under semi-local assumptions, the inexact Newton method with a fixed relative residual error tolerance converges Q-linearly to a zero of the nonlinear operator under consideration. Using this result, we show that the Newton method for minimizing a self-concordant function, or for finding a zero of an analytic function, can be implemented with a fixed relative residual error tolerance. In the absence of errors, our analysis retrieves the classical Kantorovich theorem on Newton's method.
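For reference, the standard inexact Newton setting behind this result is: to solve $F(x)=0$, compute a step $s_k$ with
$$\bigl\|F(x_k)+F'(x_k)\,s_k\bigr\|\le\theta\,\bigl\|F(x_k)\bigr\|,\qquad x_{k+1}=x_k+s_k,$$
where $\theta\in[0,1)$ is the fixed relative residual tolerance; $\theta=0$ recovers the exact Newton method.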
19/06/2013, 15:45 - 16:45
Edvaldo E. A. Batista (PhD student, IME/UFG)
Finding best approximation pairs relative to two closed convex sets in Hilbert spaces, part I
Abstract: (This work was published by H. Bauschke et al. in 2004 in the Journal of Approximation Theory.) We consider the problem of finding a best approximation pair, i.e., two points which achieve the minimum distance between two closed convex sets in a Hilbert space. We systematically investigate the asymptotic behavior of the averaged alternating reflections (AAR) method in the general case when the sets do not necessarily intersect, and show that the method produces best approximation pairs provided they exist.
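A brief sketch of the AAR iteration (standard form, with $P_A,P_B$ the projections onto the two sets and $R_A=2P_A-I$, $R_B=2P_B-I$ the corresponding reflections):
$$x_{k+1}=\tfrac{1}{2}\bigl(I+R_B R_A\bigr)x_k;$$
when the sets do not intersect, the governing sequence $(x_k)$ itself need not converge, and the best approximation pair, when it exists, is recovered from projected (shadow) iterates such as $P_A x_k$.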
26/06/2013, 15:45 - 16:45
Edvaldo E. A. Batista (PhD student, IME/UFG)
Finding best approximation pairs relative to two closed convex sets in Hilbert spaces, part II
Abstract: (This work was published by H. Bauschke et al. in 2004 in the Journal of Approximation Theory.) We consider the problem of finding a best approximation pair, i.e., two points which achieve the minimum distance between two closed convex sets in a Hilbert space. We systematically investigate the asymptotic behavior of the averaged alternating reflections (AAR) method in the general case when the sets do not necessarily intersect, and show that the method produces best approximation pairs provided they exist.
10/07/2013, 15:45 - 16:45
Tibério Bittencourt de Oliveira Martins (PhD student, IME/UFG)
Local convergence analysis of the inexact Newton method under a majorant condition on Riemannian manifolds
Abstract: The inexact Newton method is used to find singularities of vector fields defined on Riemannian manifolds. The covariant derivative of the vector field is assumed to satisfy a majorant condition.
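As a generic sketch (the usual setting for this kind of result, not necessarily the exact formulation of the talk), an inexact Newton step for a vector field $X$ on a Riemannian manifold $M$ computes $s_k\in T_{x_k}M$ with
$$\bigl\|X(x_k)+\nabla X(x_k)\,s_k\bigr\|\le\theta_k\,\bigl\|X(x_k)\bigr\|,\qquad x_{k+1}=\exp_{x_k}(s_k),$$
where $\nabla X(x_k)$ is the covariant derivative of $X$ at $x_k$ and $\exp$ is the exponential map; the majorant condition controls how $\nabla X$ varies along geodesics issuing from the initial point.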
17/07/2013, 15:45 - 16:45
Jefferson D. G. de Melo (Professor IME/UFG)
Three unexpected properties of a subgradient method
Abstract: In this talk we consider a subgradient method applied to a dual problem generated via an Augmented Lagrangian function. We shall discuss three particular properties of this method which are not shared, in general, by other subgradient methods. Specifically, subgradient methods in general have only weak convergence properties (in infinite-dimensional spaces the sequence converges weakly to a solution), the sequence of functional values is not decreasing, and the convergence proof assumes boundedness of the subgradients generated by the method. Our algorithm has strong convergence properties (the generated sequence converges strongly to a dual solution), it produces a sequence of decreasing functional values (increasing in the case of maximization), and the subgradients it generates are bounded.
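As a generic illustration (not the specific scheme of the talk), for a primal problem $\min\{f(x):\ g(x)\le 0\}$ the ordinary Lagrangian dual function $q(\lambda)=\inf_x\{f(x)+\langle\lambda,g(x)\rangle\}$ is concave and typically nonsmooth, and a dual subgradient step takes the form
$$x_k\in\arg\min_x\ \{f(x)+\langle\lambda_k,g(x)\rangle\},\qquad \lambda_{k+1}=\max\bigl\{0,\ \lambda_k+\alpha_k\,g(x_k)\bigr\},$$
since $g(x_k)$ is a supergradient of $q$ at $\lambda_k$; the talk considers the analogous scheme built on an augmented Lagrangian dual, which is the setting in which the three stronger properties above are obtained.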