CCOR Workshop – Optimum est optimum

The Corvinus Centre for Operations Research (CCOR), the Corvinus Institute for Advanced Studies (CIAS) and the Institute of Operations and Decision Sciences invite you to the

CCOR Workshop titled Optimum est optimum.

Venue: Corvinus University of Budapest, Building E, Faculty Club
Date: February 7, 2024 (Wednesday), 13:00 – 17:00

Programme

  • 13:00 – 13:05: Opening
  • 13:05 – 13:25: Petra Renáta Rigó: Scientific activity of Yurii Nesterov
  • 13:30 – 14:30: Yurii Nesterov: Optimization, the philosophical background of Artificial Intelligence
  • 14:30 – 15:00: Coffee break
  • 15:00 – 15:30: Marianna E.-Nagy: Should we worry about the handicap of the matrix?
  • 15:30 – 16:00: Goran Lešaja: Simplified analysis of Kernel-based Interior-Point Methods for 𝒫*(𝜅) LCPs
  • 16:00 – 16:30: Tamás Solymosi: On computing the disruption nucleolus in balanced games
  • 16:30 – 17:00: Informal discussion

The workshop will be broadcast online; the Teams link will be available on the Webpage of the CUB Research Week. The link can also be requested from Anita Varga (anita dot varga at uni-corvinus dot hu).

Abstracts

Petra Renáta Rigó: Scientific activity of Yurii Nesterov

Yurii Nesterov is one of the world’s most influential researchers in the theory of convex optimization. In this short talk we present only a few of his seminal contributions. We will mention his results on fast gradient methods for smooth convex minimization, the self-concordance-based theory of polynomial-time interior point algorithms in convex programming and for conic problems on self-scaled cones, the Newton method with cubic regularization, etc. The selection of topics is necessarily subjective.

Yurii Nesterov: Optimization, the philosophical background of Artificial Intelligence

Short bio: Yurii Nesterov is a professor at the Center for Operations Research and Econometrics (CORE) at the Catholic University of Louvain (UCL), Belgium, where he has worked since 1993. He received a Ph.D. in Applied Mathematics in 1984 at the Institute of Control Sciences, Moscow.

His research interests concern complexity issues and efficient methods for solving various optimization problems. His main results are in convex optimization (optimal methods for smooth problems, polynomial-time interior-point methods, the smoothing technique for structural optimization, complexity theory for second-order methods, optimization methods for huge-scale problems). He is the author of 6 monographs and more than 150 refereed papers in the leading optimization journals. He has received several international prizes and recognitions, among them:

  • Dantzig Prize from SIAM and the Mathematical Programming Society (2000),
  • von Neumann Theory Prize from INFORMS (2009),
  • SIAM Outstanding Paper Award (2014),
  • EURO Gold Medal from the Association of European Operational Research Societies (2016),
  • membership of Academia Europaea (2021) and the National Academy of Sciences (USA, 2022),
  • Lanchester Prize from INFORMS (2022).

Abstract: We discuss new challenges in modern science created by Artificial Intelligence (AI). Indeed, AI requires a system of new sciences, mainly based on computational models, whose development has already started with the progress in Computational Mathematics. In this new reality, Optimization plays an important role, helping the other fields to find tractable models and efficient methods and significantly increasing their predictive power. We support our conclusions with several examples of efficient optimization schemes related to human activity.

Marianna E.-Nagy: Should we worry about the handicap of the matrix?

The handicap is a matrix parameter that was defined in connection with the linear complementarity problem (LCP). A linear complementarity problem can be solved efficiently with an interior point algorithm if the handicap of its coefficient matrix is finite. However, the complexity of the algorithm also depends on the handicap, so it is a natural and important question how big the handicap can be. We show that it cannot be arbitrarily big once the dimension of the matrix is fixed. In the talk, we will investigate some further aspects of the question in the title.
Most of the results mentioned in the talk are joint work with Tibor Illés and László Végh.
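For reference (a standard definition from the literature, not taken from the talk abstract), the 𝒫*(𝜅) property and the handicap can be stated as:

```latex
% A matrix M in R^{n x n} is P_*(kappa), for kappa >= 0, if for all x in R^n:
(1+4\kappa)\sum_{i\in I_+(x)} x_i (Mx)_i \;+\; \sum_{i\in I_-(x)} x_i (Mx)_i \;\ge\; 0,
\quad\text{where } I_\pm(x) = \{\, i : x_i (Mx)_i \gtrless 0 \,\}.
% The handicap of M is the smallest such kappa (infinite if none exists):
\hat{\kappa}(M) \;=\; \min\{\, \kappa \ge 0 \;:\; M \text{ is } \mathcal{P}_*(\kappa) \,\}.
```

The talk's question can then be read as: how large can 𝜅̂(M) be as a function of the dimension n?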

Goran Lešaja: Simplified analysis of Kernel-based Interior-Point Methods for 𝒫*(𝜅) LCPs

Linear Complementarity Problems (LCPs) form an important class of problems closely related to many optimization problems. Thus, efficient algorithms for solving LCPs are of interest for both theoretical and practical purposes. It is important to mention that the LCP has a solution if the strict interior of the feasible region of the problem is nonempty; this condition is called the Interior-Point Condition (IPC).
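In standard form (a common formulation, included here for context), the LCP and the IPC read:

```latex
% Given M in R^{n x n} and q in R^n, find a pair (x, s) such that
s = Mx + q, \qquad x \ge 0,\; s \ge 0, \qquad x^{\mathsf T} s = 0.
% Interior-Point Condition (IPC): a strictly feasible point exists,
\exists\, x > 0 \ \text{such that}\ Mx + q > 0.
```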

Interior-Point Methods (IPMs) have revolutionized the field of Optimization in the past several decades and have offered new, efficient approaches to solving many important and difficult optimization problems, including LCPs.

The idea of using kernel functions in the design and analysis of IPMs is quite powerful and rich. We present a feasible Interior-Point Method (IPM) based on the class of eligible kernel functions (EKFs). The EKF is used to determine the search direction, proximity measure, step size, and other important elements of the method, which are then used to prove the global convergence of the method and to calculate its iteration bounds.
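As a standard illustration (the classical logarithmic kernel, not specific to this talk), a kernel function ψ induces a barrier/proximity measure Ψ on the scaled variable v:

```latex
\psi(t) \;=\; \frac{t^2 - 1}{2} \;-\; \ln t \quad (t > 0),
\qquad
\Psi(v) \;=\; \sum_{i=1}^{n} \psi(v_i),
\qquad
v_i \;=\; \sqrt{\frac{x_i s_i}{\mu}}.
% Psi vanishes exactly on the central path (v = e, the all-ones vector)
% and serves as the proximity measure from which the search direction
% and the step size of the method are derived.
```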

This approach was developed as an effort to resolve the “Irony of IPMs”, a term coined by J. Renegar, which describes the fact that the iteration bound of short-step IPMs is an order of magnitude better than the iteration bound of classical long-step IPMs, while in practice long-step methods perform much better than short-step methods.

The iteration bound of the short-step IPM to obtain an ε-approximate solution matches the best known iteration bounds for these types of methods, namely O((1+2𝜅) √n log(n/ε)). One of the main achievements of kernel-based IPMs is the improved complexity of long-step methods: for certain types of EKFs, an O((1+2𝜅) √n log n log(n/ε)) iteration complexity is achieved.

A streamlined Scheme for calculating iteration bounds is introduced and used to derive the iteration bounds of IPMs for several specific EKFs.

Although the Scheme is helpful, the derivation of the iteration bounds for a particular EKF is still quite involved. We introduce additional conditions on EKFs that are mild and not very restrictive, and which lead to a significant simplification of the Scheme and, consequently, of the derivation of the iteration bounds. We illustrate the new approach on a number of particular EKFs, matching the results obtained with the classical approach.

Tamás Solymosi: On computing the disruption nucleolus in balanced games

In transferable utility cooperative games, the satisfaction of a coalition with a payoff allocation is measured by the difference of the payoff and the value of the coalition. It is positive for payoff allocations in the relative interior of the core. At such core allocations, the ratio of the satisfactions of the complementary coalition and the coalition is used to measure the propensity of the coalition to disrupt full cooperation by rejecting the payoff allocation. The maximum of these ratios over all coalitions indicates a kind of “vulnerability’’ of the core allocation, thus it is desired to be minimized. The disruption nucleolus lexicographically minimizes, over the interior of the core, the nonincreasingly ordered vector of the disruption propensities of the coalitions. In the talk, we show how the disruption nucleolus of a balanced game can be computed as a (linearized) weigthed nucleolus. We also explore simplification possibilities in the standard lexicographic center procedure for specific well-known classes of balanced games.