Jun-Kun Wang
I am looking for highly motivated postdoctoral researchers and PhD and MS students interested in the algorithmic foundations of optimization and machine learning. I am also seeking students with strong coding skills who are eager to make optimization and machine learning more efficient and more reliable. Feel free to drop me an email with your CV.

Publications (*: Corresponding Author / Presenting Author):

No-Regret Dynamics in the Fenchel Game: A Unified Framework for Algorithmic Convex Optimization
Accelerating Hamiltonian Monte Carlo via Chebyshev Integration Time
Continuized Acceleration for Quasar Convex Functions in Non-Convex Optimization
Towards Understanding GD with Hard and Conjugate Pseudo-labels for Test-Time Adaptation
Provable Acceleration of Heavy Ball beyond Quadratics for a Class of Polyak-Lojasiewicz Functions when the Non-Convexity is Averaged-Out
Understanding Modern Techniques in Optimization: Frank-Wolfe, Nesterov's Momentum, and Polyak's Momentum
A Modular Analysis of Provable Acceleration via Polyak's Momentum: Training a Wide ReLU Network and a Deep Linear Network
Understanding How Over-Parametrization Leads to Acceleration: A Case of Learning a Single Teacher Neuron
Escape Saddle Points Faster with Stochastic Momentum
Online Linear Optimization with Sparsity Constraints
Revisiting Projection-Free Optimization for Strongly Convex Constraint Sets
Acceleration through Optimistic No-Regret Dynamics
Faster Rates for Convex-Concave Games
On Frank-Wolfe and Equilibrium Computation
Efficient Sampling-based ADMM for Distributed Data
Parallel Least-Squares Policy Iteration
Robust Inverse Covariance Estimation under Noisy Measurements

Technical Reports:

1. Quickly Finding a Benign Region via Heavy Ball Momentum in Non-Convex Optimization

Reviewer: NeurIPS (2016-2022), ICML (2017-2022), COLT (2017-2022), ALT (2017-2020, 2022), ICLR (2021-2022)