Vision

What I'm building toward

Updated March 2026

From robotic tooling to defense ML to production AI to agent systems, the through-line is a single motion: take a messy, high-dimensional problem space, find the structural decomposition, and build the interface that lets the high-entropy parts iterate without breaking the low-entropy foundation.

The Agent Era

The bitter lesson says general methods leveraging computation consistently outperform approaches based on handcrafted human knowledge. Most agent harnesses today fight this—shifting complexity away from the part that scales (the model) into the part that doesn't (bespoke scaffolding).

Agent harnesses should be thin interfaces to scalable computation, not the place you stash the intelligence. Structure should emerge from learning rather than be imposed through design. Don't freeze your guess of the right specialists into the architecture.

Dynamic subagent spawning over fixed role hierarchies. Metaprompting expands intent—three minutes of prompting buys twenty minutes of execution. The for-loop is a mechanism, not a scaling strategy.
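The contrast can be sketched in a few lines. This is a toy, not any real harness: `plan_roles` is a hypothetical stand-in for a model call that decomposes a task, and the point is that the set of specialists is chosen at runtime rather than frozen into the architecture.

```python
from dataclasses import dataclass, field

# Toy sketch of dynamic subagent spawning. No role hierarchy is hardcoded;
# a planner (here a stub, in practice a model call) names whatever
# specialists the task needs, and the harness spawns them on demand.
@dataclass
class Subagent:
    role: str
    task: str
    children: list = field(default_factory=list)

def plan_roles(task: str) -> list[str]:
    # Hypothetical stand-in for the model deciding the decomposition.
    if "research" in task:
        return ["searcher", "summarizer"]
    return ["executor"]

def spawn(task: str, depth: int = 0, max_depth: int = 2) -> Subagent:
    root = Subagent(role="orchestrator" if depth == 0 else "worker", task=task)
    if depth < max_depth:
        for role in plan_roles(task):
            root.children.append(Subagent(role=role, task=f"{role}: {task}"))
    return root

tree = spawn("research the topic")
print([c.role for c in tree.children])  # roles chosen at runtime, not in code
```

If the planner gets smarter, the decomposition gets better with no refactor: the harness stays a thin interface to the computation.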

If model capability doubles next year, does your system get dramatically simpler without major refactors? That's the test.

What Survives

The value of a 10K-line Python library is approaching $1 in 2026. When agents can synthesize wrappers on demand, the wrappers will disappear. What survives?

I've been building this way since defense: efficiency under constraint, interpretability for trust, graceful degradation. Now it's the whole industry's problem.

Technical Taste

From early on I invested in, and saw the way forward on, continued-pretraining conditioning, reinforcement learning with verifiable rewards, and prompt optimization coupled with context distillation. Much of the discussion treats prompt optimization as last-mile refinement for frozen models, but it is also a cheap way to explore strategy space with more direct steerability than RL; coupling it with context distillation lets you bake the winning strategy in.
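A minimal sketch of that framing, with everything hypothetical: `reward` stands in for scoring a candidate prompt against tasks with checkable answers, and the selected prompt is what context distillation would then fine-tune into the model so it no longer needs to be in context.

```python
# Toy sketch: prompt optimization as cheap, steerable strategy search.
# reward() is a fake verifiable scorer, not any real optimizer's API.
CANDIDATES = [
    "Answer directly.",
    "Think step by step, then answer.",
    "List your assumptions, then answer.",
]

def reward(prompt: str) -> float:
    # Pretend verifiable reward: prefer prompts that elicit explicit reasoning.
    keywords = {"step", "assumptions", "check"}
    words = {w.strip(".,").lower() for w in prompt.split()}
    return len(words & keywords) / len(keywords)

# The search surfaces a strategy; distillation would bake it into weights.
best = max(CANDIDATES, key=reward)
print(best)
```

The search over candidates is the cheap exploration step; swapping the candidate set or the scorer steers behavior far more directly than shaping an RL reward.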

Current interests: late interaction, sparse encoding, modular manifolds, Matryoshka embeddings. Interpretability arbitrage: sparse autoencoders as first-class citizens. Specialized small language models. Computational data markets and associative memory. Agent DX: interfaces that work the way agents have been trained. RLMs: recursive delegation, symbolic access, context isolation. Configurancy: keeping systems intelligible while agents write all the code.
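Of these, late interaction is the easiest to show concretely. A ColBERT-style MaxSim keeps one vector per token instead of one per document, and scores each query token against its best-matching document token. The sketch below is a from-scratch toy with made-up 2-d vectors, not any retrieval library's API:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    n = math.sqrt(dot(v, v))
    return [x / n for x in v]

def maxsim(query_vecs, doc_vecs):
    # Late interaction: each query token contributes its maximum cosine
    # similarity over all document tokens, and the contributions sum.
    q = [normalize(v) for v in query_vecs]
    d = [normalize(v) for v in doc_vecs]
    return sum(max(dot(qv, dv) for dv in d) for qv in q)

query = [[1.0, 0.0], [0.0, 1.0]]  # two toy query-token embeddings
doc = [[1.0, 0.0], [0.7, 0.7]]    # two toy document-token embeddings
print(round(maxsim(query, doc), 3))  # → 1.707
```

The interaction happens late, after encoding, so document token vectors can be precomputed and indexed while query-document matching stays fine-grained.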

The Manufacturing Bridge

There is a through-line I haven't told well enough yet. I started designing robotic tools and manipulators at sixteen—pneumatic fixtures for medical device manufacturing, concept to production. Five years of that before pivoting to ML. The design instinct from that era never left: separate what changes from what must be stable. Compress complexity into reliability or people get hurt.

Instant-quote manufacturing eliminates soft costs. Physical technology cycles go much faster once these new business models permeate the economy. The digitized manufacturing thesis—AI as accelerant for low-volume custom parts—connects directly to my origin.

The path runs from Applied Medical to robotaxi design to sensor platforms to defense ML to agent infrastructure for physical processes. The number of people who have designed pneumatic manipulators and built GNN architectures and shipped production AI platforms is genuinely small.

Convergence

The timeline tells the story: the through-line from epistemology to neural architectures to production systems to agent infrastructure is coherent, and it's accelerating.

Knowledge and technical moats are being blown apart. Companies are being forced to become legible to machinic agents. New commodity paradigms are emerging. The work that never inspired me is becoming less relevant, and my patterns and interests point at the most productive, most creative way to ride the wave.

Do not labor to compete with the machine; own the machine.