Fourth BIGMATH PhD Course

Large-scale and distributed optimization

Schedule: 27-28 and 30-31 January 2020. The course runs over 4 days, with 6 hours of lectures on each day.

Venue: University of Novi Sad, Main Rectorate Building, room 1/9, Dr. Zorana Djindjica Street No. 1, 21000 Novi Sad, Serbia

Content and goals

The goal of the training course is to provide the BIGMATH ESRs, and a more general audience of participants, with an overview of tools and algorithms in the area of large-scale and distributed optimization. The course will provide illustrative application examples that help in understanding how optimization-based modelling can be applied to a given problem. In addition, selected algorithms and tools will be studied in more detail.

Outline of the programme

Machine learning and optimization; optimality conditions for unconstrained problems; convexity; line search methods; gradient methods; second-order methods; optimality conditions for constrained problems; augmented Lagrangian methods. Parallel methods: duality theory and the dual subgradient method; primal decomposition; dual decomposition; augmented Lagrangian; alternating direction method of multipliers (ADMM). Distributed methods: distributed gradient descent; stochastic distributed methods. Part of the course will be devoted to computer labs with a software/implementation tutorial on the CVX package for disciplined convex programming.
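To give a flavour of two of the topics listed above (gradient methods and line search methods), the following is a minimal sketch, not taken from the course materials: gradient descent with an Armijo backtracking line search on an illustrative convex quadratic. The objective f and all parameter values are assumptions chosen for the example.

```python
# Illustrative sketch (not from the course): gradient descent with
# Armijo backtracking line search on a simple convex quadratic.

def f(x):
    # Example objective: f(x) = (x0 - 3)^2 + 2*(x1 + 1)^2, minimized at (3, -1)
    return (x[0] - 3.0) ** 2 + 2.0 * (x[1] + 1.0) ** 2

def grad_f(x):
    # Gradient of the quadratic above
    return [2.0 * (x[0] - 3.0), 4.0 * (x[1] + 1.0)]

def gradient_descent(x, alpha0=1.0, beta=0.5, c=1e-4, tol=1e-8, max_iter=1000):
    for _ in range(max_iter):
        g = grad_f(x)
        g_norm_sq = sum(gi * gi for gi in g)
        if g_norm_sq < tol:  # stop when the gradient is (nearly) zero
            break
        # Backtracking: shrink the step until the Armijo sufficient-decrease
        # condition f(x - alpha*g) <= f(x) - c*alpha*||g||^2 holds
        alpha = alpha0
        while f([xi - alpha * gi for xi, gi in zip(x, g)]) > f(x) - c * alpha * g_norm_sq:
            alpha *= beta
        x = [xi - alpha * gi for xi, gi in zip(x, g)]
    return x

x_star = gradient_descent([0.0, 0.0])
print(x_star)  # converges to approximately [3, -1]
```

The same descent loop, with the gradient replaced by a stochastic or locally computed one, is the starting point for the stochastic and distributed variants covered later in the course.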

Target audience

Besides the BIGMATH ESRs, the target audience is primarily PhD students in applied mathematics. The course is also of interest to PhD students in computer science and electrical engineering, as well as to a broader academic and industry audience interested in optimization tools and their applications.

Expected results

Students will obtain a solid overview of large-scale and distributed optimization, and a solid understanding of how and where these methods can be useful for a given research problem. They will also become familiar with a number of large-scale and distributed algorithms that they may use in their own work. Finally, they will gain an understanding of, and experience with, the steps usually needed to re-design or analyze a given algorithm.

For more information and registration, please contact Prof. Natasa Krejic, natasak@uns.ac.rs

https://ecmiindmath.org/2020/01/16/bigmath-advanced-course-4-large-scale-and-distributed-optimization/