Mathematical Tools and Efficient Algorithms (Richard Peng)

Brief Introduction

1. Multi-armed Bandit Framework

2. The Upper Confidence Bound (UCB) Method (an illustrative sketch follows this list)

3. Elementary Improvements for Both Regret Minimization and Pure Exploration Settings

4. The Law-of-the-Iterated-Logarithm UCB Algorithm

5. Linear Contextual Bandits

6. Generalized Linear Models and Generalized Linear Contextual Bandits

7. Dynamic Assortment Optimization and the Multinomial Logit (MNL) Bandit Problem

8. (If Time Permits) Zeroth Order Convex Optimization
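
A minimal sketch of the classical UCB1 index policy referenced in topic 2, assuming Bernoulli reward arms; the arm means, horizon, and the function name ucb1 below are illustrative choices, not taken from the course materials.

import math
import random


def ucb1(arm_means, horizon, seed=0):
    """Pull each arm once, then always pull the arm with the largest UCB index."""
    rng = random.Random(seed)
    k = len(arm_means)
    pulls = [0] * k          # number of times each arm has been pulled
    totals = [0.0] * k       # cumulative reward collected from each arm

    def pull(i):
        # Bernoulli reward with (hypothetical) mean arm_means[i]
        reward = 1.0 if rng.random() < arm_means[i] else 0.0
        pulls[i] += 1
        totals[i] += reward
        return reward

    reward_sum = 0.0
    for i in range(k):                     # initialization: one pull per arm
        reward_sum += pull(i)
    for t in range(k + 1, horizon + 1):
        # UCB index: empirical mean + sqrt(2 ln t / n_i)
        index = [totals[i] / pulls[i] + math.sqrt(2 * math.log(t) / pulls[i])
                 for i in range(k)]
        reward_sum += pull(max(range(k), key=lambda i: index[i]))

    # empirical regret against always playing the best arm
    return horizon * max(arm_means) - reward_sum


if __name__ == "__main__":
    # Hypothetical instance: three Bernoulli arms, 10,000 rounds.
    print("empirical regret:", ucb1([0.5, 0.6, 0.7], horizon=10_000))

The sublinear growth of this empirical regret as the horizon increases is the behavior the UCB analysis in topic 2 quantifies.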

Time

2018-07-11 ~ 2018-08-05   

Lecturer

Yuan Zhou, Indiana University Bloomington

Venue

Room 104, School of Information Management & Engineering, Shanghai University of Finance & Economics
