Where This Started
In 2022, I was a Senior Engineer at Eventbrite working on the SEO and Growth platform. We had a cost problem: external API calls were generating $15,000 per day in charges, over $450K per month, with no visibility into which code paths were responsible. I built a cross-service caching and deduplication layer that cut that spend by more than 99.9%, from $15K/day to roughly $40/month.
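The core of a layer like that can be sketched in a few dozen lines: cache results for a TTL, and when several callers ask for the same key at once, let one "leader" make the real external call while the others wait and reuse its result. This is an illustrative sketch, not Eventbrite's actual implementation; `fetch_fn`, `ttl_seconds`, and the class name are hypothetical.

```python
import threading
import time

class CachedDeduper:
    """Caches results and collapses concurrent identical calls.

    Illustrative sketch only: fetch_fn, key, and ttl_seconds are
    hypothetical names, not a production implementation.
    """

    def __init__(self, fetch_fn, ttl_seconds=300):
        self.fetch_fn = fetch_fn
        self.ttl = ttl_seconds
        self._cache = {}      # key -> (expires_at, value)
        self._inflight = {}   # key -> Event for a request in progress
        self._lock = threading.Lock()

    def get(self, key):
        with self._lock:
            hit = self._cache.get(key)
            if hit and hit[0] > time.monotonic():
                return hit[1]                 # fresh cache hit: no API call
            event = self._inflight.get(key)
            if event is None:
                event = threading.Event()     # we become the leader
                self._inflight[key] = event
                leader = True
            else:
                leader = False
        if leader:
            try:
                value = self.fetch_fn(key)    # the one real external call
                with self._lock:
                    self._cache[key] = (time.monotonic() + self.ttl, value)
            finally:
                with self._lock:
                    del self._inflight[key]
                event.set()
            return value
        event.wait()                          # follower: reuse leader's result
        with self._lock:
            hit = self._cache.get(key)
            if hit is None:
                raise RuntimeError("leader fetch failed")  # error handling elided
            return hit[1]
```

With a layer like this in front of a metered API, N identical concurrent requests become one billable call, and repeat requests within the TTL become zero.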
The more interesting thing happened afterward. When engineers could see the cost of their own code paths in real time, they started designing differently, without being asked. The same visibility-driven shift repeated on the Ads platform, where I built real-time budget observability that enabled a decision to sunset a $60K/month ML ranking system that the data showed wasn't earning its keep.
That is when it became clear to me that production AI is not primarily a capability problem. It is an economics and measurement problem. Most AI systems fail not because the model is wrong, but because nobody built the layer that answers "is this earning its cost?" I have spent the years since building that layer, and teaching others to do the same.
The Engineering Foundation
I have spent 8+ years shipping software at scale, a world where uptime is non-negotiable and "it works on my machine" is not a deployment strategy. That experience shaped how I approach AI today. I treat LLMs the same way I treat any powerful but unpredictable component: give it structure, measure what it costs, and make sure your team can maintain it after I leave. Every system I build is production-ready, maintainable, and profitable, not a science experiment.
Shipping at Scale
I brought that discipline to Eventbrite as a Senior Engineer, learning that code is only as good as its uptime. When you ship at that scale, you learn that boring, reliable systems are the most exciting thing you can build. From there, I led infrastructure teams at FlowWest and ESLWorks, where I learned that the best systems are the ones your team isn't afraid to deploy on a Friday.
The AI Reality
Today, I'm building Arepa.AI, an AI agent platform that helps Spanish-speaking SMBs automate customer interactions, from qualifying leads to scheduling appointments in natural Spanish. I don't just advise on AI strategy; I ship production code every day. I also share my playbooks and lessons learned at Celestino.ai, an interactive documentary where you can ask my AI about how I build these systems.
My Philosophy
"Unit Economics is the only Feature."
I believe AI is fundamentally a supply chain problem. The most impressive model is useless if it bankrupts you to run it. The question I ask on every project: does this system pay for itself, and can the team maintain it without me?
- Measure before you build: I replace "vibe checks" with automated evaluation harnesses because you cannot improve what you do not measure. When I introduced these standards at scale, user trust and impressions increased by 482%.
- Make it pay for itself: I re-architect retrieval pipelines to cut costs by up to 99%, turning unprofitable AI into a viable product. I implement vendor off-ramps to keep long-term opex low, saving roughly $60K/mo in one engagement.
- Leave it better than you found it: I ship runbooks, decision records, and safety valves so your team is not terrified to deploy on Fridays. A profitable system means nothing if it falls apart the day I leave.
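The "measure before you build" point can be made concrete with a minimal evaluation harness: a table of prompts paired with graders, run against whichever model function you supply, producing a repeatable pass rate instead of a vibe check. This is a hedged sketch under assumed names; `run_evals`, `model_fn`, and the graders are illustrative, not a specific framework.

```python
def run_evals(model_fn, cases):
    """Run a model function over (prompt, grader) cases; return pass rate.

    Minimal sketch of an automated eval harness: model_fn is any
    callable prompt -> str, and each grader is a predicate over the
    output. All names here are illustrative assumptions.
    """
    results = []
    for case in cases:
        output = model_fn(case["prompt"])
        passed = bool(case["grader"](output))
        results.append({"prompt": case["prompt"], "passed": passed})
    pass_rate = sum(r["passed"] for r in results) / len(results)
    return pass_rate, results

# Example cases: explicit, repeatable assertions about model output.
cases = [
    {"prompt": "Refund policy?", "grader": lambda out: "refund" in out.lower()},
    {"prompt": "Horario?",       "grader": lambda out: len(out) > 0},
]
```

Run against a baseline before any change, and you can state whether a prompt tweak or model swap actually improved the system, and by how much.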
Let's Connect
I write about the intersection of AI engineering, economics, and reliability.
Explore my work to see what I've built, contact me to discuss roles, projects, or collaborations, or talk to my AI to see the system in action.