Research Thesis Map

Where representation, calibration, and operations meet

I treat AI systems as epistemic infrastructure: they are only as good as their update process under uncertainty. The recurring question across my work is structural: what should remain explicit, what should be learned, and which validation boundaries must stay hard.

1) Cognitive security and calibrated action

In contested environments, anomaly detection without confidence and context is operationally brittle. The target is calibrated inference with analyst-legible evidence paths, not raw alert volume.

Cognitive Security brief · RAM Labs dossier
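A minimal sketch of what "calibrated inference with evidence paths" can mean in practice: raw anomaly scores are mapped to probabilities, and every alert carries the evidence that produced it. Everything here is illustrative; the names, the Platt-style coefficients, and the example evidence strings are assumptions, not a description of any deployed system.

```python
from dataclasses import dataclass, field
from math import exp

@dataclass
class Finding:
    """An alert that carries its confidence and its evidence path."""
    probability: float                            # calibrated P(true positive)
    evidence: list = field(default_factory=list)  # analyst-legible trail

def platt_calibrate(raw_score: float, a: float = -4.0, b: float = 6.0) -> float:
    """Map a raw anomaly score in [0, 1] to a calibrated probability.
    Coefficients a, b are placeholders; in practice they are fit on
    held-out labeled alerts (Platt scaling)."""
    return 1.0 / (1.0 + exp(-(a + b * raw_score)))

def emit(raw_score: float, evidence: list) -> Finding:
    """Attach calibrated confidence and its supporting evidence to one alert."""
    return Finding(probability=platt_calibrate(raw_score), evidence=evidence)

# A single finding: the analyst sees a probability, not a bare threshold hit,
# plus the observations that justify it (hypothetical examples).
alert = emit(0.9, ["beacon interval 60s ± 1s", "rare JA3 hash", "new ASN"])
```

The point of the sketch is the shape of the output, not the scoring function: confidence and context travel together, so triage can be ranked by calibrated probability instead of alert count.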

2) Graph modeling and representation boundaries

Graph methods are strongest when paired with explicit constraints and program-like structure. In cyber this means modeling temporally heterogeneous event graphs; in spatial systems it means object identity, typed relations, and deterministic validators.

Graph Modeling · Augrade framing
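The "typed relations plus deterministic validator" pairing above can be sketched as a schema of allowed relation triples and a pure checking function. The schema contents and node types here are invented spatial-modeling examples, not a real ontology; the point is that validity is decided by explicit rules, not learned scoring.

```python
# Hypothetical schema: allowed (source type, relation, target type) triples.
SCHEMA = {
    ("Wall", "supports", "Beam"),
    ("Room", "contains", "Fixture"),
    ("Beam", "connects_to", "Column"),
}

def validate(nodes: dict, edges: list) -> list:
    """Deterministic validator: same graph in, same violation list out.

    nodes maps node id -> type name; edges are (src, relation, dst) tuples.
    """
    violations = []
    for src, rel, dst in edges:
        triple = (nodes[src], rel, nodes[dst])
        if triple not in SCHEMA:
            violations.append(f"{src} -{rel}-> {dst}: {triple} not allowed")
    return violations

nodes = {"w1": "Wall", "b1": "Beam", "r1": "Room"}
edges = [("w1", "supports", "b1"),   # allowed by the schema
         ("b1", "supports", "r1")]   # a Beam cannot support a Room
errors = validate(nodes, edges)
```

Because the validator is a pure function over explicit types, it forms a hard boundary: learned components can propose edges, but only schema-conformant structure passes through.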

3) Thin control planes for stronger models

As model capability scales, orchestration should simplify. Good harnesses preserve observability, provenance, and operator control while minimizing bespoke glue code.

Current position · Compressed forms
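A thin control plane of the kind described above can be sketched as a single wrapper around a model call that records provenance and exposes an operator gate, with no other glue. The function and field names are illustrative assumptions; any real harness would differ in detail.

```python
import time
import uuid

def run_step(model_call, prompt: str, audit_log: list, require_approval=None):
    """Thin harness: one model call, full provenance, optional operator gate.

    model_call: any callable str -> str (the model does the heavy lifting).
    require_approval: optional callable record -> bool, the operator's veto.
    """
    record = {
        "id": str(uuid.uuid4()),   # stable handle for later audit
        "ts": time.time(),         # when the call happened
        "prompt": prompt,          # exactly what the model was asked
    }
    record["output"] = model_call(prompt)
    if require_approval is not None and not require_approval(record):
        record["status"] = "rejected_by_operator"
    else:
        record["status"] = "accepted"
    audit_log.append(record)       # provenance survives regardless of outcome
    return record

# Toy usage: a stand-in "model" that upper-cases, gated on its own output.
log = []
result = run_step(lambda p: p.upper(), "deploy plan", log,
                  require_approval=lambda r: "DEPLOY" in r["output"])
```

The design choice the sketch illustrates: as the model grows more capable, the harness stays this small, because observability and control live in the record and the gate, not in bespoke orchestration logic.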