
I study how AI systems fail under adversarial pressure, and how model behavior turns into real-world attack paths.
My work explores the new attack surfaces that probabilistic systems introduce, and how those surfaces can be chained into meaningful impact.
Currently focused on LLM behavior, agent security, and evaluation frameworks.
Attack paths > isolated bugs
