Bhairav Mehta

Co-Founder, CEO at CharacterQuilt. AI/ML, Robotics, Entrepreneur

San Francisco, California, United States
Joined February 2025

Network

3.1K connections
πŸ’‘ Startup Founders Investors
πŸ“ˆ Marketing Executives
🎯 GTM Leaders
πŸ€– AI ML Researchers
🧠 Mila Montreal AI
🧡 Quilt Team
πŸŽ“ Waterloo AI SWE

Summary

Bhairav Mehta is a serial entrepreneur in the AI space. He co-founded buzzle.ai, a Y Combinator S21 company that applied NLP to voice-of-customer analytics, and currently leads CharacterQuilt, an early-stage startup focused on B2B AI creative solutions.
With a strong foundation in deep learning and robotics, Bhairav has conducted research at prominent institutions including NVIDIA and the NASA Jet Propulsion Laboratory. His work includes contributions to robotics-simulator calibration and active domain randomization, reflecting expertise in applying advanced AI techniques to real-world problems.
Bhairav has a strong academic background, holding a Master's degree in Machine Learning from Mila - Quebec Artificial Intelligence Institute and a Bachelor's degree in Computer Science and Mathematics, Magna Cum Laude, from the University of Michigan. He also began a PhD in Computer Science at MIT before leaving to pursue his entrepreneurial ventures.
He is also an active researcher and educator in machine learning, with numerous publications on Google Scholar in areas such as reinforcement learning, robotics, and meta-learning, and involvement in mentoring initiatives.

Work

Education

Writing

Three Ways to Fix The Negative Pretraining Effect

2021

Investigates the effect of pretraining on the plasticity of neural networks, finding that different pretraining trajectories induce invariances that can either help or hinder plasticity in multi-task learning scenarios.

Source: scholar.google.com

A User's Guide to Calibrating Robotics Simulators

2020

Surveys current methods in machine-learning system identification, presents a user's guide on when and where to use each, and introduces the SIPE benchmark for testing and comparing algorithms.

Source: scholar.google.com

Bisimulation-Inducing Graph Neural Networks

2020

Demonstrates that bisimulation relations and metrics can be induced by graph neural networks, establishing an equivalence between the original formulation of bisimulation on MDPs and the L2 distance induced by a particular type of GNN embedding.

Source: bhairavmehta95.github.io

Generating Automatic Curricula via Self-Supervised Active Domain Randomization

2020

Demonstrates that agents trained through self-play in the ADR framework significantly outperform uniform domain randomization in both simulated and real-world transfer, even without explicit rewards.

Source: scholar.google.com

Curriculum in gradient-based meta-reinforcement learning

2020

Source: scholar.google.com

Active Domain Randomization

2019

Introduces Active Domain Randomization (ADR), which learns a sampling strategy over simulation-parameter ranges rather than sampling them uniformly, seeking out the most informative environment variations to train more robust policies.

Source: scholar.google.com

Symbolic Regression for Interpretable Offline Reinforcement Learning

Describes ISRL, a new paradigm that uses symbolic regression to extract interpretable symbolic reward functions from noisy data, giving human experimenters reward functions they can inspect and understand.

Source: bhairavmehta95.github.io

Active Domain Randomization and Safety-Critical Few-Shot Learning

A follow-up to ADR, this research shows that adaptive simulators can be learned within the maximum-entropy RL framework, allowing ADR's learned randomization distributions to serve as a strong, meaningful prior for domain randomization.

Source: bhairavmehta95.github.io