I build AI infrastructure from data pipelines to GPU kernels.
Data engineer progressing through LLMOps, GPU engineering, and RL infrastructure. I document every layer of the journey — the systems, the tradeoffs, and the things I wish someone had told me.
Each layer builds on the last — from reliable data systems to the frontier of RL infrastructure.
01
Data Engineering for AI
ETL pipelines, feature stores, and the scalable data infrastructure that powers everything above it.
02
LLMOps & Production AI
RAG pipelines, vector databases, LLM observability, and serving models reliably in production.
03
GPU Engineering & RL Infrastructure
CUDA kernels, distributed training, performance optimization, and building the systems behind reinforcement learning.
CAREER ROADMAP
Data Engineering → LLMOps → GPU Engineering → RL Infrastructure
I'm building toward RL infrastructure expertise — one layer at a time. Follow along as I document the progression, share what I'm learning, and build in public.
Every LLM is built on five foundational pillars: Basics, Systems, Scaling Laws, Data, and Alignment. This post maps out what each pillar covers and why mastering them is the path to building real AI systems.
Test-time training lets models update their own weights during inference. Learn how TTT layers work, what they demand from GPUs, and why this shift matters for AI infrastructure.
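The core mechanism behind test-time training can be sketched in a few lines. The toy example below is my own illustration, not code from the linked post: the linear layer, the reconstruction loss, and the learning rate are all assumptions chosen for clarity. It shows the essential move of a TTT-style layer: taking a gradient step on a self-supervised objective for each token during inference, so the layer's weights adapt to the sequence as it is processed.

```python
import numpy as np

def ttt_layer(tokens, W, lr=0.1):
    """Toy test-time-training layer (illustrative sketch only).

    For each incoming token, take one gradient step on a self-supervised
    reconstruction loss 0.5 * ||W x - x||^2, then produce the output with
    the freshly updated weights.
    """
    outputs = []
    for x in tokens:
        pred = W @ x
        grad = np.outer(pred - x, x)  # d/dW of 0.5 * ||W x - x||^2
        W = W - lr * grad             # inner-loop update at inference time
        outputs.append(W @ x)         # output uses the updated weights
    return np.array(outputs), W

# Usage: feed a stream of random tokens; reconstruction improves over the
# sequence because W adapts at inference time, with no offline training.
rng = np.random.default_rng(0)
tokens = [rng.standard_normal(4) for _ in range(50)]
outs, W = ttt_layer(tokens, np.zeros((4, 4)))
```

The infrastructure angle follows directly from this sketch: because every forward pass also runs a gradient computation and a weight write, inference is no longer read-only, which changes memory traffic and batching assumptions on the GPU.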