Preprints:
Lions and Muons: Optimization via Stochastic Frank-Wolfe
Maria-Eleni Sfyraki and Jun-Kun Wang.
arXiv:2506.04192, 2025.
Optimistic Interior Point Methods for Sequential Hypothesis Testing by Betting
Can Chen and Jun-Kun Wang.
arXiv:2502.07774, 2025.
Frictionless Hamiltonian Descent and Coordinate Hamiltonian Descent for Strongly Convex Quadratic Problems and Beyond
Jun-Kun Wang.
arXiv:2402.13988, 2024.
Publications: (* Corresponding Author / Presenting Author)
Online Detection of LLM-Generated Texts via Sequential Hypothesis Testing by Betting
Can Chen and Jun-Kun Wang.
In ICML (International Conference on Machine Learning), 2025.
No-Regret Dynamics in the Fenchel Game: A Unified Framework for Algorithmic Convex Optimization
Jun-Kun Wang, Jacob Abernethy, and Kfir Y. Levy.
Mathematical Programming, 2024.
Accelerating Hamiltonian Monte Carlo via Chebyshev Integration Time
Jun-Kun Wang and Andre Wibisono.
In ICLR (International Conference on Learning Representations), 2023.
Continuized Acceleration for Quasar Convex Functions in Non-Convex Optimization
Jun-Kun Wang and Andre Wibisono.
In ICLR (International Conference on Learning Representations), 2023.
Towards Understanding GD with Hard and Conjugate Pseudo-labels for Test-Time Adaptation
Jun-Kun Wang and Andre Wibisono.
In ICLR (International Conference on Learning Representations), 2023.
Provable Acceleration of Heavy Ball beyond Quadratics for a class of Polyak-Lojasiewicz Functions when the Non-Convexity is Averaged-Out
Jun-Kun Wang, Chi-Heng Lin, Andre Wibisono, and Bin Hu.
In ICML (International Conference on Machine Learning), 2022.
Understanding Modern Techniques in Optimization: Frank-Wolfe, Nesterov's Momentum, and Polyak's Momentum
Jun-Kun Wang.
PhD Dissertation, Georgia Tech, 2021.
A Modular Analysis of Provable Acceleration via Polyak's momentum: Training a Wide ReLU Network and a Deep Linear Network
Jun-Kun Wang, Chi-Heng Lin, and Jacob Abernethy.
In ICML (International Conference on Machine Learning), 2021.
Understanding How Over-Parametrization Leads to Acceleration: A case of learning a single teacher neuron
Jun-Kun Wang and Jacob Abernethy.
In ACML (Asian Conference on Machine Learning), 2021.
Escape Saddle Points Faster with Stochastic Momentum
Jun-Kun Wang, Chi-Heng Lin, and Jacob Abernethy.
In ICLR (International Conference on Learning Representations), 2020.
Online Linear Optimization with Sparsity Constraints
*Jun-Kun Wang, Chi-Jen Lu, and Shou-De Lin.
In ALT (International Conference on Algorithmic Learning Theory), 2019.
Revisiting Projection-Free Optimization For Strongly Convex Constraint Sets
Jarrid Rector-Brooks, Jun-Kun Wang, and Barzan Mozafari.
In AAAI (AAAI Conference on Artificial Intelligence), 2019.
Acceleration through Optimistic No-Regret Dynamics
*Jun-Kun Wang and Jacob Abernethy.
In NeurIPS (Annual Conference on Neural Information Processing Systems), 2018.
(Spotlight)
Faster Rates for Convex-Concave Games
(Alphabetical order) Jacob Abernethy, Kevin Lai, Kfir Levy, and *Jun-Kun Wang.
In COLT (Conference on Learning Theory), 2018.
On Frank-Wolfe and Equilibrium Computation
Jacob Abernethy and *Jun-Kun Wang.
In NeurIPS (Annual Conference on Neural Information Processing Systems), 2017.
(Spotlight)
Efficient Sampling-based ADMM for Distributed Data
*Jun-Kun Wang and Shou-De Lin.
In DSAA (IEEE International Conference on Data Science and Advanced Analytics), 2016.
Parallel Least-Squares Policy Iteration
*Jun-Kun Wang and Shou-De Lin.
In DSAA (IEEE International Conference on Data Science and Advanced Analytics), 2016.
Robust Inverse Covariance Estimation under Noisy Measurements
*Jun-Kun Wang and Shou-De Lin.
In ICML (International Conference on Machine Learning), 2014.