About Me
I’m a Research Scientist at Snap Research on the User Modeling and Personalization (UMaP) team led by Neil Shah. My research focuses on user representation learning from sequential and multi-modal interaction data.
I completed my PhD at UT Austin, where I was advised by Aryan Mokhtari and Sanjay Shakkottai and studied in-context learning, multi-task learning, and feature learning theory, among other ML theory topics. Prior to this, I earned a B.S.E. from Princeton, where I worked under Yuxin Chen.
My email is lcollins2 at snap dot com.
We are currently recruiting interns for 2026 to work on a variety of projects in recommendation, user modeling, and graph learning. Start dates are flexible. If interested, please send me an email; a job link will appear soon.
[Last update: November 2025]
News
October 2025: Our paper “Generative Recommendation with Semantic IDs: A Practitioner’s Handbook” won the Best Paper Award at CIKM 2025! Please take a look at our public repo, and see you in Seoul!
October 2025: Our paper studying data augmentation for generative recommendation has been accepted to WSDM 2026. Congrats to Geon! See you in Boise.
October 2025: We recently released a pre-print studying the scaling laws of generative recommendation! Please take a look; suggestions are welcome.
September 2025: Our paper on meta-learning for LoRA was accepted to NeurIPS 2025. Congrats to Jacob and Sundar! See you in sunny San Diego.
May 2025: Two of our papers were accepted to the research track at KDD 2025. One studies the popularity bias of recommender systems and the other studies cross-domain sequential recommendation.
April 2025: Our paper studying universal user representation learning via cross-domain user signals has been accepted to the industry track at SIGIR 2025.
November 2024: Selected as a top reviewer for NeurIPS 2024.
September 2024: Started working at Snap!
September 2024: Our in-context learning paper was selected for Spotlight Presentation at NeurIPS 2024.
June 2024: Our multi-task learning paper was selected for Oral Presentation at ICML 2024.
May 2024: Our paper on multi-task learning with two-layer ReLU networks was accepted at ICML 2024.
April 2024: Defended my thesis!
February 2024: New paper on in-context learning in transformers with softmax self-attention.
December 2023: Our paper was selected as a Best Paper at FL@FM-NeurIPS’23.
October 2023: Our paper on federated prompt tuning was selected for Oral Presentation at FL@FM-NeurIPS’23.
Summer 2023: I interned at Google Research, working with Shanshan Wu, Sewoong Oh, and Khe Chai Sim on federated prompt tuning of large language models.
June 2023: New paper on multi-task learning with two-layer ReLU networks.
May 2023: Our paper InfoNCE Loss Provably Learns Cluster-Preserving Representations was accepted at COLT 2023.
October 2022: I gave a talk on representation learning in federated learning at the Federated Learning One World (FLOW) Seminar.
Summer 2022: I interned at Amazon Alexa under the supervision of Jie Ding and Tanya Roosta. My project studied personalized federated learning with side information. Our paper was accepted at FL-NeurIPS’22.
Papers
Please see my Google Scholar profile for the most up-to-date list of my papers.
In-Context Learning with Transformers: Softmax Attention Adapts to Function Lipschitzness
LC*, Advait Parulekar*, Aryan Mokhtari, Sujay Sanghavi, Sanjay Shakkottai
* co-first authors
NeurIPS 2024, Spotlight [PDF]
Provable Multi-Task Representation Learning by Two-Layer ReLU Neural Networks
LC, Hamed Hassani, Mahdi Soltanolkotabi, Aryan Mokhtari, Sanjay Shakkottai
ICML 2024, Oral Presentation [PDF]
Profit: Benchmarking Personalization and Robustness Trade-off in Federated Prompt Tuning
LC, Shanshan Wu, Sewoong Oh, Khe Chai Sim
Workshop on Federated Learning in the Age of Foundation Models in Conjunction with NeurIPS 2023, Best Paper
[PDF]
InfoNCE Loss Provably Learns Cluster-Preserving Representations
Advait Parulekar, LC, Karthikeyan Shanmugam, Aryan Mokhtari, Sanjay Shakkottai
COLT 2023
[PDF]
FedAvg with Fine-Tuning: Local Updates Lead to Representation Learning
LC, Hamed Hassani, Aryan Mokhtari, Sanjay Shakkottai
NeurIPS 2022
[PDF]
PerFedSI: A Framework for Personalized Federated Learning with Side Information
LC, Enmao Diao, Tanya Roosta, Jie Ding, Tao Zhang
Workshop on Federated Learning: Recent Advances and New Challenges in Conjunction with NeurIPS 2022
[PDF]
MAML and ANIL Provably Learn Representations
LC, Aryan Mokhtari, Sewoong Oh, Sanjay Shakkottai
ICML 2022
[PDF]
How Does the Task Landscape Affect MAML Performance?
LC, Aryan Mokhtari, Sanjay Shakkottai
CoLLAs 2022, Oral Presentation
[PDF]
Exploiting Shared Representations for Personalized Federated Learning
LC, Hamed Hassani, Aryan Mokhtari, Sanjay Shakkottai
ICML 2021
[PDF] [Code]
Task-Robust Model-Agnostic Meta-Learning
LC, Aryan Mokhtari, Sanjay Shakkottai
NeurIPS 2020
[PDF] [Code]