I received my Ph.D. on Faster, Incentivized, and Efficient Federated Learning: Theory and Applications from Carnegie Mellon University in Aug. 2024, and joined Google to conduct applied research on on-device machine learning for large language models. My research interests are broadly in distributed machine learning, on-device machine learning, and federated learning.
News
- Sep. 2024: Our work in collaboration with Google Research on Heterogeneous LoRA for Federated Fine-tuning of LLMs has been accepted to the EMNLP 2024 main conference! See you in Miami!
- Aug. 2024: Defended my thesis on Faster, Incentivized, and Efficient Federated Learning: Theory and Applications!
- Jul. 2024: Joined Google to work on on-device machine learning for LLMs!
- Dec. 2023: Our work from my summer at Google Research on parameter-efficient federated fine-tuning with heterogeneous low-rank adaptation (LoRA) has been accepted to FL@FM-NeurIPS’23! See you in New Orleans!
- Jul. 2023: Our work from my summer at Microsoft Research on federated learning with limited labels was accepted to ICCV 2023! See you in Paris!
- May. 2023: I will be interning at Google Research, Seattle working on the intersection of LLMs and FL this summer!
- Apr. 2023: Our work on cyclic client participation in federated learning was accepted to ICML 2023 (28% acceptance rate)! See you in Honolulu!
- Nov. 2022: I was invited to the Women in Research Lean In 2022 event hosted by Meta! See you in Menlo Park!
- Oct. 2022: Our work on client incentives in federated learning was accepted to FL-NeurIPS’22 for oral presentation (12% acceptance rate)! See you in New Orleans!
- Aug. 2022: I finished my second summer research internship at Microsoft Research’s Privacy in AI Team working on semi-supervised federated learning!
- Apr. 2022: Our team was selected as a finalist for the 2022 Qualcomm Innovation Fellowship for research on Incentivized Federated Learning for Data-Heterogeneous and Resource-Constrained Clients!
- Mar. 2022: I gave a talk at the MLOPT Research Group’s Idea Seminar at UW-Madison on Leveraging Biased Client Selection in Federated Learning – work accepted to AISTATS 2022 (29% acceptance rate)!
- Aug. 2021: I finished my summer research internship at Microsoft Research’s Privacy in AI Team working on Heterogeneous Ensemble Knowledge Transfer for Training Large Models in Federated Learning – work accepted to IJCAI 2022 (15% acceptance rate)
Selected Publications
- Y. J. Cho, L. Liu, Z. Xu, A. Fahrezi, and G. Joshi, “Heterogeneous LoRA for Federated Fine-tuning of On-Device Foundation Models”, EMNLP 2024 [pdf]
- Y. J. Cho, D. Jhunjhunwala, T. Li, V. Smith, and G. Joshi, “To Federate or Not To Federate: Incentivizing Client Participation in Federated Learning”, TMLR 2024, Shorter version at FL-NeurIPS’22 (oral) [pdf]
- Y. J. Cho, L. Liu, Z. Xu, A. Fahrezi, M. Barnes, and G. Joshi, “Heterogeneous LoRA for Federated Fine-tuning of On-device Foundation Models”, FL@FM-NeurIPS’23 [pdf]
- Y. J. Cho, G. Joshi, and D. Dimitriadis, “Local or Global: Selective Knowledge Assimilation for Federated Learning with Limited Labels”, ICCV 2023 [pdf]
- Y. J. Cho, P. Sharma, G. Joshi, Z. Xu, S. Kale, and T. Zhang, “On the Convergence of Federated Averaging with Cyclic Client Participation”, ICML 2023 [pdf]
- Y. J. Cho, J. Wang, T. Chiruvolu, and G. Joshi, “Personalized Federated Learning for Heterogeneous Devices with Clustered Knowledge Transfer”, IEEE Journal of Selected Topics in Signal Processing (IEEE JSTSP), Dec 2022 [pdf]
- Y. J. Cho, A. Manoel, G. Joshi, R. Sim, and D. Dimitriadis, “Heterogeneous Ensemble Knowledge Transfer for Training Large Models in Federated Learning”, IJCAI 2022 [pdf]
- Y. J. Cho, J. Wang, and G. Joshi, “Client Selection in Federated Learning: Convergence Analysis and Power-of-Choice Selection Strategies”, AISTATS 2022 [pdf]
- Y. J. Cho, S. Gupta, G. Joshi, and O. Yagan, “Bandit-based Communication-Efficient Client Selection Strategies for Federated Learning”, Asilomar Conference on Signals, Systems and Computers 2020 (Invited Paper) [pdf]
Miscellaneous
In my free time, I enjoy playing the piano, squash, swimming, and spending time with PanitheCorgi.
PanitheCorgi loves ❄
Last updated on Sep. 2024.