About Me

I am currently a Research Scientist at Snap Research on the User Modeling and Personalization (UMaP) team led by Neil Shah. My work focuses on learning user representations from sequential and multi-modal interaction data for downstream recommendation tasks.

Before Snap, I completed my PhD at UT Austin, where I studied a range of topics including in-context learning, multi-task learning, and feature learning theory under the excellent supervision of Aryan Mokhtari and Sanjay Shakkottai. I was also fortunate to hold internships at Google Research and Amazon during my PhD. I completed my undergrad at Princeton University, where I had the pleasure of working with Yuxin Chen.

My email is lcollins2 at snap dot com.

We are currently recruiting interns for 2025 to work on a variety of projects in recommendation, graph learning, and user modeling. Start dates are flexible. If interested, please apply online and send me an email.

News

  • September 2024: Started working at Snap!

  • September 2024: Our in-context learning paper was selected for Spotlight Presentation at NeurIPS 2024.

  • June 2024: Our multi-task learning paper was selected for Oral Presentation at ICML 2024.

  • May 2024: Our paper on multi-task learning with two-layer ReLU networks was accepted at ICML 2024.

  • April 2024: Defended my thesis!

  • February 2024: New paper on in-context learning with transformers with softmax-activated self-attention.

  • December 2023: Our paper was selected as a Best Paper at FL@FM-NeurIPS’23.

  • October 2023: Our paper on federated prompt tuning was selected for Oral Presentation at FL@FM-NeurIPS’23.

  • Summer 2023: I interned at Google Research, working with Shanshan Wu, Sewoong Oh, and Khe Chai Sim on federated prompt tuning of large language models.

  • June 2023: New paper on multi-task learning with two-layer ReLU networks.

  • May 2023: Our paper InfoNCE Loss Provably Learns Cluster-Preserving Representations was accepted at COLT 2023.

  • October 2022: I gave a talk on representation learning in federated learning at the Federated Learning One World (FLOW) Seminar.

  • Summer 2022: I interned at Amazon Alexa under the supervision of Jie Ding and Tanya Roosta. My project studied personalized federated learning with side information. Our paper was accepted at FL-NeurIPS’22.

Papers

Please see my Google Scholar profile for the most up-to-date list of papers.

In-Context Learning with Transformers: Softmax Attention Adapts to Function Lipschitzness
LC*, Advait Parulekar*, Aryan Mokhtari, Sujay Sanghavi, Sanjay Shakkottai
* co-first authors
NeurIPS 2024, Spotlight [PDF]

Provable Multi-Task Representation Learning by Two-Layer ReLU Neural Networks
LC, Hamed Hassani, Mahdi Soltanolkotabi, Aryan Mokhtari, Sanjay Shakkottai
ICML 2024, Oral Presentation [PDF]

Profit: Benchmarking Personalization and Robustness Trade-off in Federated Prompt Tuning
LC, Shanshan Wu, Sewoong Oh, Khe Chai Sim
Workshop on Federated Learning in the Age of Foundation Models in Conjunction with NeurIPS 2023 (FL@FM-NeurIPS’23), Best Paper
[PDF]

InfoNCE Loss Provably Learns Cluster-Preserving Representations
Advait Parulekar, LC, Karthikeyan Shanmugam, Aryan Mokhtari, Sanjay Shakkottai
COLT 2023
[PDF]

FedAvg with Fine-Tuning: Local Updates Lead to Representation Learning
LC, Hamed Hassani, Aryan Mokhtari, Sanjay Shakkottai
NeurIPS 2022
[PDF]

PerFedSI: A Framework for Personalized Federated Learning with Side Information
LC, Enmao Diao, Tanya Roosta, Jie Ding, Tao Zhang
Workshop on Federated Learning: Recent Advances and New Challenges in Conjunction with NeurIPS 2022 (FL-NeurIPS’22)
[PDF]

MAML and ANIL Provably Learn Representations
LC, Aryan Mokhtari, Sewoong Oh, Sanjay Shakkottai
ICML 2022
[PDF]

How does the Task Landscape Affect MAML Performance?
LC, Aryan Mokhtari, Sanjay Shakkottai
CoLLAs 2022, Oral Presentation
[PDF]

Exploiting Shared Representations for Personalized Federated Learning
LC, Hamed Hassani, Aryan Mokhtari, Sanjay Shakkottai
ICML 2021
[PDF] [Code]

Task-Robust Model-Agnostic Meta-Learning
LC, Aryan Mokhtari, Sanjay Shakkottai
NeurIPS 2020
[PDF] [Code]