Applied Scientist building reliable AI for public policy and social science.
Where political science meets machine learning
I'm a PhD-trained applied scientist specializing in LLM evaluation, hallucination detection, and fine-tuning for policy and regulatory domains. My background combines political science (PhD, Claremont Graduate University) with data science (MS, Georgetown University), and I bring more than three years of ML/NLP experience to high-stakes AI applications.
As an O-1A Visa Holder (Extraordinary Ability), I've built LLM pipelines for policy document analysis, developed hallucination mitigation systems, and fine-tuned models for automated content generation. I'm driven by using AI to advance policy and political research — delivering real productivity gains where it matters most.
📍 Based in Northern Virginia / DC Metro • 🎓 Georgetown, Claremont, Peking University • 🔬 Researcher in computational social science
Building AI systems for policy and legislation
Designed context-engineering solutions for long-document processing in the policy domain. Developed hallucination mitigation and citation-grounding systems for long-context LLMs. Fine-tuned Gemini models for automated content generation. Built and optimized RAG pipelines for large-scale document analysis.
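To make the RAG work concrete, here is a minimal, illustrative sketch of the retrieval step: score document chunks against a query with TF-IDF cosine similarity, then assemble the top matches into a grounded prompt. This is a toy stand-in, not the production pipeline; the sample texts, the `retrieve` helper, and the TF-IDF scoring are all hypothetical simplifications of what a real vector store would do.

```python
# Toy retrieval step for a RAG pipeline over policy text (illustrative only).
# All chunk texts and function names here are invented for the example.
import math
import re
from collections import Counter

def tfidf_vectors(docs):
    """Build TF-IDF weight dicts for a small corpus of text chunks."""
    tokenized = [re.findall(r"[a-z]+", doc.lower()) for doc in docs]
    df = Counter(term for toks in tokenized for term in set(toks))
    n = len(docs)
    vectors = []
    for toks in tokenized:
        tf = Counter(toks)
        vectors.append({t: (c / len(toks)) * math.log((1 + n) / (1 + df[t]))
                        for t, c in tf.items()})
    return vectors

def cosine(a, b):
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=2):
    """Return the k chunks most similar to the query."""
    vecs = tfidf_vectors(chunks + [query])  # query shares the corpus stats
    qvec = vecs[-1]
    scored = sorted(zip(chunks, vecs[:-1]),
                    key=lambda pair: cosine(qvec, pair[1]), reverse=True)
    return [chunk for chunk, _ in scored[:k]]

chunks = [
    "Section 12 restricts targeted advertising to minors on social media platforms.",
    "The appropriations act funds rural broadband infrastructure grants.",
    "Section 14 requires parental consent and age verification for minors.",
]
top = retrieve("age verification requirements for minors", chunks)
# Only the retrieved excerpts go into the prompt, which is what keeps the
# model's answer grounded and citable.
prompt = "Answer ONLY from these excerpts:\n" + "\n".join(top)
```

In a production system the TF-IDF scorer would be replaced by dense embeddings and an index, but the grounding pattern is the same: retrieve first, then constrain generation to the retrieved evidence.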
Authored a policy report on social media regulation and youth protection, comparing approaches across multiple states and analyzing the trade-offs of each policy framework. The report was submitted to Virginia's Joint Commission on Autonomous Systems (J-CAS) to inform legislative decision-making.
Led the Policy Change Index – North Korea (PCI-NKO) project. Optimized RoBERTa models for policy change prediction. Developed ML/LLM algorithms for analyzing North Korean state media and predicting policy shifts.
Conducted exploratory research at the intersection of computational social science and large language models, applying NLP/LLM techniques to political ideology analysis. Rapidly adopted emerging techniques such as retrieval-augmented generation (RAG), which appeared midway through the fellowship, and integrated them into ongoing research workflows.
Tools and technologies I use to build AI for policy and research
Academic foundation in data science and political science
Applying ML/AI to policy analysis
An open-source machine learning project that predicts North Korean policy changes by analyzing state propaganda from Rodong Sinmun (Workers' Newspaper). AI-assisted labeling classifies newspaper articles into policy-relevant domains and flags events that signal shifts in regime priorities. Built on the Policy Change Index framework originally created for China, this extension uses RoBERTa and custom NLP pipelines to detect subtle changes in how the regime frames its priorities. The research was conducted at the Mercatus Center, George Mason University, and the findings were published as a policy brief.
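The detection logic behind this family of projects can be sketched in miniature: train a classifier to predict an article's policy domain from its text in one period, then watch how well it holds up on later coverage. A sharp accuracy drop suggests the framing of those domains has shifted. The sketch below is a toy word-overlap classifier, not the actual PCI-NKO pipeline (which fine-tunes RoBERTa), and every article and label in it is an invented stand-in for real Rodong Sinmun data.

```python
# Toy illustration of change detection via classifier degradation.
# Articles, labels, and the scoring scheme are all hypothetical.
from collections import Counter, defaultdict

def train(articles):
    """Accumulate per-domain word counts from (text, domain) pairs."""
    counts = defaultdict(Counter)
    for text, domain in articles:
        counts[domain].update(text.lower().split())
    return counts

def predict(counts, text):
    """Assign the domain whose training vocabulary overlaps the text most."""
    words = text.lower().split()
    def score(domain):
        total = sum(counts[domain].values())
        return sum(counts[domain][w] / total for w in words)
    return max(counts, key=score)

def accuracy(counts, articles):
    hits = sum(predict(counts, text) == domain for text, domain in articles)
    return hits / len(articles)

period_a = [
    ("military parade missile deterrent strength", "defense"),
    ("missile test deterrent army readiness", "defense"),
    ("harvest grain production farm output", "economy"),
    ("factory production output quota grain", "economy"),
]
period_b = [
    ("missile strength army parade", "defense"),
    # Later coverage reframes the economy in defense language, so the
    # period-A model misreads it: the degradation is the signal.
    ("grain output serves missile deterrent strength", "economy"),
]
model = train(period_a)
acc_a = accuracy(model, period_a)  # in-period accuracy
acc_b = accuracy(model, period_b)  # later-period accuracy
```

The real pipeline replaces the word-overlap scorer with a fine-tuned RoBERTa classifier, but the diagnostic is the same: when a model trained on yesterday's propaganda stops explaining today's, something in the regime's priorities has moved.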
Interested in collaborating on AI for policy or high-stakes domains? Get in touch.