About Me
I’m a fifth-year PhD student at UT Austin co-advised by Aryan Mokhtari and Sanjay Shakkottai. I’m broadly interested in improving the learning abilities of machine learning models, especially in low-data and low-compute scenarios. This has led to work in a variety of areas, including federated learning, meta-learning, multi-task learning, contrastive learning, and most recently, in-context learning and parameter-efficient fine-tuning of large language models. Before UT, I completed my undergrad at Princeton, where I worked with Yuxin Chen.
I am graduating in May 2024 and am currently on the job market.
My email is liamc at utexas dot edu.
News
February 2024: New paper on in-context learning with transformers that use softmax-activated self-attention.
December 2023: Our paper was selected as a Best Paper at FL@FM-NeurIPS’23!
October 2023: Our paper on federated prompt tuning was selected for Oral Presentation at FL@FM-NeurIPS’23.
Summer 2023: I interned at Google Research, working with Shanshan Wu, Sewoong Oh, and Khe Chai Sim on federated prompt tuning of large language models.
June 2023: New paper on multi-task learning with two-layer ReLU networks.
May 2023: Our paper InfoNCE Loss Provably Learns Cluster-Preserving Representations was accepted at COLT 2023.
October 2022: I gave a talk on representation learning in federated learning at the Federated Learning One World (FLOW) Seminar.
Summer 2022: I interned at Amazon Alexa under the supervision of Jie Ding and Tanya Roosta. My project studied personalized federated learning with side information. Our paper was accepted at FL-NeurIPS’22.
Papers
For the most updated list of papers, please see my Google Scholar profile.
Profit: Benchmarking Personalization and Robustness Trade-off in Federated Prompt Tuning
LC, Shanshan Wu, Sewoong Oh, Khe Chai Sim
FL@FM-NeurIPS’23 Best Paper [PDF]
Provable Multi-Task Representation Learning by Two-Layer ReLU Neural Networks
LC, Hamed Hassani, Mahdi Soltanolkotabi, Aryan Mokhtari, Sanjay Shakkottai
arXiv preprint
[PDF]
InfoNCE Loss Provably Learns Cluster-Preserving Representations
Advait Parulekar, LC, Karthikeyan Shanmugam, Aryan Mokhtari, Sanjay Shakkottai
COLT 2023
[PDF]
FedAvg with Fine-Tuning: Local Updates Lead to Representation Learning
LC, Hamed Hassani, Aryan Mokhtari, Sanjay Shakkottai
NeurIPS 2022
[PDF]
MAML and ANIL Provably Learn Representations
LC, Aryan Mokhtari, Sewoong Oh, Sanjay Shakkottai
ICML 2022
[PDF]
How Does the Task Landscape Affect MAML Performance?
LC, Aryan Mokhtari, Sanjay Shakkottai
CoLLAs 2022 Oral Presentation
[PDF]
Exploiting Shared Representations for Personalized Federated Learning
LC, Hamed Hassani, Aryan Mokhtari, Sanjay Shakkottai
ICML 2021
[PDF] [Code]
Task-Robust Model-Agnostic Meta-Learning
LC, Aryan Mokhtari, Sanjay Shakkottai
NeurIPS 2020
[PDF] [Code]