
Harshil Vejendla
Rutgers CS student, ML Engineer, and researcher with a focus on AI
Writing
Learning to Predict Chaos: Curriculum-Driven Training for Robust Forecasting of Chaotic Dynamics
October 1, 2025
This paper proposes Curriculum Chaos Forecasting (CCF), a training paradigm that organizes training data based on dynamical systems theory. CCF significantly enhances performance on unseen, real-world benchmarks by progressively introducing more chaotic dynamics.
Drift-Adapter: A Practical Approach to Near Zero-Downtime Embedding Model Upgrades in Vector Databases
September 1, 2025
This paper presents Drift-Adapter, a lightweight, learnable transformation layer that bridges embedding spaces across model versions. It lets existing ANN indexes remain in use, deferring the cost of fully re-embedding the corpus.
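As a rough illustration of the idea (my sketch, not the paper's actual method or API): the simplest such adapter is a linear map fit by least squares on a small sample of items embedded by both the old and new model versions.

```python
import numpy as np

# Illustrative only: a linear adapter mapping old-model embeddings into the
# new model's space, fit on a small paired sample. All names, shapes, and the
# linear form are assumptions for this sketch.

rng = np.random.default_rng(0)
d_old, d_new, n_pairs = 64, 64, 512

# Unknown "drift" between model versions, simulated here for the demo.
true_map = rng.normal(size=(d_old, d_new)) / np.sqrt(d_old)
old_emb = rng.normal(size=(n_pairs, d_old))
new_emb = old_emb @ true_map + 0.01 * rng.normal(size=(n_pairs, d_new))

# Fit the adapter: argmin_W || old_emb @ W - new_emb ||_F^2
W, *_ = np.linalg.lstsq(old_emb, new_emb, rcond=None)

# A query embedded by the old model can now be adapted toward the new space
# (or vice versa), so the existing ANN index keeps serving traffic.
query_old = rng.normal(size=(1, d_old))
adapted = query_old @ W
```

The appeal is that fitting and applying a single d×d map is orders of magnitude cheaper than re-embedding and re-indexing an entire corpus.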
Wave-PDE Nets: Trainable Wave-Equation Layers as an Alternative to Attention
January 1, 2025
Introduces Wave-PDE Nets, a neural architecture that simulates the second-order wave equation. Each layer propagates its hidden state as a continuous field, offering an alternative to traditional attention mechanisms.
SliceMoE: Routing Embedding Slices Instead of Tokens for Fine-Grained and Balanced Transformer Scaling
January 1, 2025
Introduces SliceMoE, an architecture that routes contiguous slices of the embedding vector to Mixture-of-Experts (MoE) layers, aiming to address capacity bottlenecks and load-balancing issues in transformer scaling.
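A toy sketch of slice-level routing, under my own assumption of the mechanics rather than the paper's implementation: the token embedding is split into contiguous slices, and a router assigns each slice independently to a top-1 expert.

```python
import numpy as np

def slice_moe(x, experts, router_w, n_slices):
    """Route each contiguous slice of x to its own top-1 expert.

    x: (d,) token embedding, d divisible by n_slices
    experts: list of (d_s, d_s) expert weight matrices
    router_w: (n_experts, d_s) routing matrix shared across slices
    (All shapes and the top-1 rule are assumptions for this sketch.)
    """
    outputs = []
    for s in np.split(x, n_slices):
        expert_id = int(np.argmax(router_w @ s))  # score experts on this slice
        outputs.append(experts[expert_id] @ s)    # apply the chosen expert
    return np.concatenate(outputs)

rng = np.random.default_rng(0)
d, n_slices, n_experts = 64, 4, 8
d_s = d // n_slices
experts = [rng.normal(size=(d_s, d_s)) for _ in range(n_experts)]
router_w = rng.normal(size=(n_experts, d_s))
y = slice_moe(rng.normal(size=(d,)), experts, router_w, n_slices)
```

Because routing decisions are made per slice rather than per token, each token generates several independent routing events, which is one intuition for why load can balance more smoothly than with token-level routing.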
Efficient Uncertainty Estimation via Distillation of Bayesian Large Language Models
January 1, 2025
Addresses the efficiency issues of existing Bayesian methods for uncertainty estimation in LLMs by proposing a distillation approach that avoids multiple sampling iterations during inference.
H1B-KV: Hybrid One-Bit Caches for Memory-Efficient Large Language Model Inference
January 1, 2025
Explores a memory-efficient approach for large language model inference using hybrid one-bit caches to manage key-value pairs, addressing memory-bound problems in long-context inference.
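As a hedged sketch of the general direction (one-bit quantization of cached vectors with a per-vector scale; the paper's actual hybrid scheme is surely more involved):

```python
import numpy as np

# Illustrative 1-bit cache compression: store only sign bits plus one scale
# per vector. This is my simplification, not the paper's H1B-KV scheme.

def quantize_1bit(x):
    scale = np.abs(x).mean(axis=-1, keepdims=True)  # per-vector scale
    bits = np.signbit(x)                            # 1 bit per element
    return bits, scale

def dequantize_1bit(bits, scale):
    return np.where(bits, -scale, scale)

rng = np.random.default_rng(0)
keys = rng.normal(size=(8, 128))    # 8 cached key vectors, fp64 here
bits, scale = quantize_1bit(keys)   # ~1 bit/element instead of 64
approx = dequantize_1bit(bits, scale)
```

Sign-plus-scale reconstruction keeps attention logits roughly aligned with the full-precision cache while shrinking the memory footprint that dominates long-context inference.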
LATTA: Langevin-Anchored Test-Time Adaptation for Enhanced Robustness and Stability
January 1, 2025
Presents LATTA, a test-time adaptation method designed to improve the robustness and stability of pretrained models against distribution shifts, overcoming limitations of existing methods like Tent.