Smoothing proximal gradient method for general structured sparse regression X Chen, Q Lin, S Kim, JG Carbonell, EP Xing Annals of Applied Statistics 6 (2), 719-752, 2012 | 304 | 2012 |
Weakly-convex–concave min–max optimization: provable algorithms and applications in machine learning H Rafique, M Liu, Q Lin, T Yang Optimization Methods and Software, 1-35, 2021 | 269 | 2021 |
Smoothing proximal gradient method for general structured sparse learning X Chen, Q Lin, S Kim, JG Carbonell, EP Xing Proceedings of the Twenty-Seventh Conference on Uncertainty in Artificial Intelligence, 2011 | 251* | 2011 |
A Unified Analysis of Stochastic Momentum Methods for Deep Learning Y Yan, T Yang, Z Li, Q Lin, Y Yang IJCAI, 2955-2961, 2018 | 234* | 2018 |
An Accelerated Randomized Proximal Coordinate Gradient Method and its Application to Regularized Empirical Risk Minimization Q Lin, Z Lu, L Xiao SIAM Journal on Optimization 25 (4), 2244-2273, 2015 | 153 | 2015 |
An accelerated proximal coordinate gradient method Q Lin, Z Lu, L Xiao Advances in Neural Information Processing Systems, 3059-3067, 2014 | 153 | 2014 |
Optimistic knowledge gradient policy for optimal budget allocation in crowdsourcing X Chen, Q Lin, D Zhou International Conference on Machine Learning, 64-72, 2013 | 153 | 2013 |
Distributed stochastic variance reduced gradient methods by sampling extra data with replacement JD Lee, Q Lin, T Ma, T Yang Journal of Machine Learning Research 18 (122), 1-43, 2017 | 129* | 2017 |
First-order convergence theory for weakly-convex-weakly-concave min-max problems M Liu, H Rafique, Q Lin, T Yang Journal of Machine Learning Research 22 (169), 1-34, 2021 | 117* | 2021 |
An adaptive accelerated proximal gradient method and its homotopy continuation for sparse optimization Q Lin, L Xiao Computational Optimization and Applications 60 (3), 633-674, 2015 | 114 | 2015 |
RSG: Beating subgradient method without smoothness and strong convexity T Yang, Q Lin Journal of Machine Learning Research 19 (6), 1-33, 2018 | 101 | 2018 |
Generalized inverse classification MT Lash, Q Lin, N Street, JG Robinson, J Ohlmann Proceedings of the 2017 SIAM International Conference on Data Mining, 162-170, 2017 | 82 | 2017 |
Optimal epoch stochastic gradient descent ascent methods for min-max optimization Y Yan, Y Xu, Q Lin, W Liu, T Yang Advances in Neural Information Processing Systems 33, 5789-5800, 2020 | 74* | 2020 |
Stochastic convex optimization: Faster local growth implies faster global convergence Y Xu, Q Lin, T Yang International Conference on Machine Learning, 3821-3830, 2017 | 68* | 2017 |
Optimal regularized dual averaging methods for stochastic optimization X Chen, Q Lin, J Pena Advances in Neural Information Processing Systems 25, 2012 | 68 | 2012 |
Complexity of an inexact proximal-point penalty method for constrained smooth non-convex optimization Q Lin, R Ma, Y Xu Computational Optimization and Applications 82 (1), 175-224, 2022 | 62* | 2022 |
ADMM without a fixed penalty parameter: Faster convergence with new adaptive penalization Y Xu, M Liu, Q Lin, T Yang Advances in Neural Information Processing Systems 30, 2017 | 61 | 2017 |
Sparse latent semantic analysis X Chen, Y Qi, B Bai, Q Lin, JG Carbonell Proceedings of the 2011 SIAM International Conference on Data Mining, 474-485, 2011 | 58 | 2011 |
DSCOVR: Randomized primal-dual block coordinate algorithms for asynchronous distributed optimization L Xiao, AW Yu, Q Lin, W Chen Journal of Machine Learning Research 20 (43), 1-58, 2019 | 53 | 2019 |
Hybrid predictive models: When an interpretable model collaborates with a black-box model T Wang, Q Lin Journal of Machine Learning Research 22 (137), 1-38, 2021 | 47 | 2021 |