Jun-Kun Wang

 

I am a postdoc at Yale University working with Professor Andre Wibisono. I received my CS PhD from Georgia Tech, where I was very fortunate to be advised by Professor Jacob Abernethy. My PhD research focused on the theoretical understanding of modern algorithms and techniques in optimization and deep learning (e.g., Polyak's momentum, Nesterov's momentum, and accelerated gradient methods), as well as on designing new algorithms with provable guarantees.
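For readers new to these methods, here is a minimal sketch of Polyak's heavy-ball (momentum) update on a toy quadratic. The objective, step size eta, and momentum weight beta are illustrative choices for this sketch only, not values taken from any of the papers below.

    import numpy as np

    # Toy objective f(x) = 0.5 * x^T A x with an ill-conditioned A.
    # A, eta, and beta are illustrative choices, not from any paper here.
    A = np.diag([1.0, 10.0])
    grad = lambda x: A @ x

    eta, beta = 0.05, 0.9                 # step size and momentum weight
    x = x_prev = np.array([5.0, 5.0])

    for _ in range(200):
        # Heavy-ball update: a gradient step plus a momentum term
        # beta * (x - x_prev) that reuses the previous displacement.
        x, x_prev = x - eta * grad(x) + beta * (x - x_prev), x

    print(x)  # approaches the minimizer at the origin

Plain gradient descent with the same step size would also converge here, but the momentum term typically shortens the transient on ill-conditioned problems, which is the kind of acceleration phenomenon studied in the papers below.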

I hold an M.S. in Communication Engineering and a B.S. in Electrical Engineering, both from National Taiwan University.

Fun fact: I enjoy long-distance running. I run a lot!

Reviewer for NeurIPS (2016–2022), ICML (2017–2022), COLT (2017–2022), ALT (2017–2020, 2022), and ICLR (2021–2022)

Email: jun-kun.wang [at] yale [dot] edu

Publications (* denotes corresponding or presenting author):

Accelerating Hamiltonian Monte Carlo via Chebyshev Integration Time
Jun-Kun Wang and Andre Wibisono.
In ICLR (International Conference on Learning Representations), 2023.

Continuized Acceleration for Quasar Convex Functions in Non-Convex Optimization
Jun-Kun Wang and Andre Wibisono.
In ICLR (International Conference on Learning Representations), 2023.

Towards Understanding GD with Hard and Conjugate Pseudo-labels for Test-Time Adaptation
Jun-Kun Wang and Andre Wibisono.
In ICLR (International Conference on Learning Representations), 2023.

Provable Acceleration of Heavy Ball beyond Quadratics for a Class of Polyak-Lojasiewicz Functions when the Non-Convexity is Averaged-Out
Jun-Kun Wang, Chi-Heng Lin, Andre Wibisono, and Bin Hu.
In ICML (International Conference on Machine Learning), 2022.

No-Regret Dynamics in the Fenchel Game: A Unified Framework for Algorithmic Convex Optimization
Jun-Kun Wang, Jacob Abernethy, and Kfir Y. Levy.
Under submission.

Understanding Modern Techniques in Optimization: Frank-Wolfe, Nesterov's Momentum, and Polyak's Momentum
PhD Dissertation, Georgia Tech, 2021.

A Modular Analysis of Provable Acceleration via Polyak's momentum: Training a Wide ReLU Network and a Deep Linear Network
Jun-Kun Wang, Chi-Heng Lin, and Jacob Abernethy.
In ICML (International Conference on Machine Learning), 2021.

Understanding How Over-Parametrization Leads to Acceleration: A Case of Learning a Single Teacher Neuron
Jun-Kun Wang and Jacob Abernethy.
In ACML (Asian Conference on Machine Learning), 2021.

Escaping Saddle Points Faster with Stochastic Momentum
Jun-Kun Wang, Chi-Heng Lin, and Jacob Abernethy.
In ICLR (International Conference on Learning Representations), 2020.

Online Linear Optimization with Sparsity Constraints
*Jun-Kun Wang, Chi-Jen Lu, and Shou-De Lin.
In ALT (International Conference on Algorithmic Learning Theory), 2019.

Revisiting Projection-Free Optimization For Strongly Convex Constraint Sets
Jarrid Rector-Brooks, Jun-Kun Wang, and Barzan Mozafari.
In AAAI (AAAI Conference on Artificial Intelligence), 2019.

Acceleration through Optimistic No-Regret Dynamics
*Jun-Kun Wang and Jacob Abernethy.
In NeurIPS (Annual Conference on Neural Information Processing Systems), 2018. (Spotlight)

Faster Rates for Convex-Concave Games
(Alphabetical order) Jacob Abernethy, Kevin Lai, Kfir Levy, and *Jun-Kun Wang.
In COLT (Conference on Learning Theory), 2018.

On Frank-Wolfe and Equilibrium Computation
Jacob Abernethy and *Jun-Kun Wang.
In NeurIPS (Annual Conference on Neural Information Processing Systems), 2017. (Spotlight)

Efficient Sampling-based ADMM for Distributed Data
*Jun-Kun Wang and Shou-De Lin.
In DSAA (IEEE International Conference on Data Science and Advanced Analytics), 2016.

Parallel Least-Squares Policy Iteration
*Jun-Kun Wang and Shou-De Lin.
In DSAA (IEEE International Conference on Data Science and Advanced Analytics), 2016.

Robust Inverse Covariance Estimation under Noisy Measurements
*Jun-Kun Wang and Shou-De Lin.
In ICML (International Conference on Machine Learning), 2014.

Technical Reports:

Quickly Finding a Benign Region via Heavy Ball Momentum in Non-Convex Optimization
Jun-Kun Wang and Jacob Abernethy.
arXiv preprint, 2020.