Applied Scientist building reliable AI for public policy and social science.
Where political science meets machine learning
I'm a Ph.D.-trained applied scientist specializing in LLM evaluation, hallucination detection, and fine-tuning for policy and regulatory domains. With a background combining political science (Ph.D., Claremont Graduate University) and data science (M.S., Georgetown University), I bring over 3 years of ML/NLP experience to high-stakes AI applications.
As an O-1A Visa Holder (Extraordinary Ability), I've built LLM pipelines for policy document analysis, developed hallucination mitigation systems, and fine-tuned models for automated content generation. I'm driven by using AI to advance policy and political research — delivering real productivity gains where it matters most.
📍 Based in Northern Virginia / DC Metro • 🎓 Georgetown, Claremont, Peking University • 🔬 Researcher in computational social science
Building AI systems for policy and legislation
Designed context engineering solutions for long-document processing in the policy domain. Developed hallucination mitigation and citation grounding systems for long-context LLMs. Fine-tuned Gemini models for automated content generation. Built and optimized RAG pipelines for large-scale document analysis.
Authored a policy report on social media regulation and youth protection, comparing approaches across multiple states and analyzing the trade-offs of each policy framework. The report was submitted to Virginia's Joint Commission on Autonomous Systems (J-CAS) to inform legislative decision-making.
Led the Policy Change Index – North Korea (PCI-NKO) project. Optimized RoBERTa models for policy change prediction. Developed ML/LLM algorithms for analyzing North Korean state media and predicting policy shifts.
Conducted exploratory research at the intersection of computational social science and large language models, applying NLP/LLM techniques to political ideology analysis. Rapidly adopted emerging techniques such as RAG — which appeared midway through the fellowship — and integrated them into ongoing research workflows.
Tools and technologies I use to build AI for policy and research
Academic foundation in data science and political science
Applying ML/AI to policy analysis
An open-source machine learning project that predicts North Korean policy changes by analyzing state propaganda from Rodong Sinmun (Workers' Newspaper). Used AI to label and classify newspaper articles, identifying policy-relevant domains and events that signal shifts in regime priorities. Built on the Policy Change Index framework originally created for China, this extension uses RoBERTa and custom NLP pipelines to detect subtle changes in regime messaging. The research was conducted at the Mercatus Center, George Mason University, and the findings were published as a policy brief.
I write about AI, political science research, education, and the philosophy of technology.
An essay on why AI governance may require a governance-grade “normative world model”: a structured reasoning layer that connects facts, norms, justifications, actions, and auditable logs.
Read on Medium

On being outpaced by the cognition you outsourced. From the car wash riddle to a two-week research sprint with AI agent teams — a reflection on speed, ownership, and what happens when the tools you build start running faster than you can follow.
Read on Medium

When the real "jack of all trades" arrives, perhaps the most human response is to become a Renaissance person. A reflection on how AI might reunite the divided "Two Cultures" and what that means for how we learn and work.
Read on Medium

A year ago, I landed my dream job. Last week, that chapter came to an abrupt end. Instead of panicking, I built an AI workflow to manage the job search process — and ended up learning something unexpected about the gap between "vague idea" and "deployed product."
Read on Medium

Interested in collaborating on AI for policy or high-stakes domains? Get in touch.