About Me

I’m a fifth-year PhD student at UT Austin, co-advised by Aryan Mokhtari and Sanjay Shakkottai. I’m broadly interested in improving the learning abilities of machine learning models, especially in low-data and low-compute scenarios. This has led to work in a variety of areas, including federated learning, meta-learning, multi-task learning, contrastive learning, and, most recently, in-context learning with, and parameter-efficient fine-tuning of, large language models. Before UT, I completed my undergrad at Princeton, where I worked with Yuxin Chen.

I am graduating in May 2024 and am currently on the job market.

My email is liamc at utexas dot edu.

Papers

For the most up-to-date list of papers, please see my Google Scholar profile.

Profit: Benchmarking Personalization and Robustness Trade-off in Federated Prompt Tuning
LC, Shanshan Wu, Sewoong Oh, Khe Chai Sim
FL@FM-NeurIPS’23 (Best Paper)
[PDF]

Provable Multi-Task Representation Learning by Two-Layer ReLU Neural Networks
LC, Hamed Hassani, Mahdi Soltanolkotabi, Aryan Mokhtari, Sanjay Shakkottai
arXiv preprint
[PDF]

InfoNCE Provably Learns Cluster-Preserving Representations
Advait Parulekar, LC, Karthikeyan Shanmugam, Aryan Mokhtari, Sanjay Shakkottai
COLT 2023
[PDF]

FedAvg with Fine-Tuning: Local Updates Lead to Representation Learning
LC, Hamed Hassani, Aryan Mokhtari, Sanjay Shakkottai
NeurIPS 2022
[PDF]

MAML and ANIL Provably Learn Representations
LC, Aryan Mokhtari, Sewoong Oh, Sanjay Shakkottai
ICML 2022
[PDF]

How does the Task Landscape Affect MAML Performance?
LC, Aryan Mokhtari, Sanjay Shakkottai
CoLLAs 2022 (Oral Presentation)
[PDF]

Exploiting Shared Representations for Personalized Federated Learning
LC, Hamed Hassani, Aryan Mokhtari, Sanjay Shakkottai
ICML 2021
[PDF] [Code]

Task-Robust Model-Agnostic Meta-Learning
LC, Aryan Mokhtari, Sanjay Shakkottai
NeurIPS 2020
[PDF] [Code]