Responsible AI at real stakes
AskLisa AI
Constitutional AI with human oversight for Digital Citizen Academy.
- Model: Constitutional AI
- Oversight: Human-in-the-loop
- Sector: Education
Problem
An education partner wanted an AI system that could answer student questions, guide learning, and flag concerns — without the usual LLM failure modes. Hallucination, inappropriate content, and overconfident responses weren't acceptable. Neither was hiding the AI behind so much friction that nobody used it.
Approach
01
Constitutional AI principles: explicit rules governing what the system can and can't say, with the constitution itself as a reviewable artifact.
02
Escalation paths that kick in when the system is uncertain, when a conversation touches flagged topics, or when a human should be in the loop — by design, not by accident.
03
Review tooling for educators: inspectable transcripts, override capability, and feedback loops that improved the system over time.
04
Transparent trust model: students and educators knew what the AI was, what it would and wouldn't do, and who was responsible when it mattered.
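The escalation design in steps 01 and 02 can be sketched in a few lines. This is a minimal illustration, not the production system: the topic names, confidence threshold, and route labels are all hypothetical, chosen only to show how a reviewable constitution-as-data can drive routing between the AI and a human.

```python
from dataclasses import dataclass

# Illustrative only: flagged topics and the confidence floor stand in for
# the actual constitution, which lives as a reviewable artifact.
FLAGGED_TOPICS = {"self_harm", "bullying", "medical"}
CONFIDENCE_FLOOR = 0.75  # below this, defer rather than risk overconfidence

@dataclass
class Turn:
    topics: set        # topics detected in the student's message
    confidence: float  # model's self-reported answer confidence

def route(turn: Turn) -> str:
    """Decide whether the AI answers or a human steps in."""
    if turn.topics & FLAGGED_TOPICS:
        return "escalate_to_educator"  # flagged topic: human-in-the-loop by design
    if turn.confidence < CONFIDENCE_FLOOR:
        return "defer_with_resources"  # uncertain: don't answer overconfidently
    return "answer"                    # safe and confident: respond directly
```

The point of keeping the rules as plain data is that educators can inspect and change them without touching the routing logic.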
Outcome
A deployable AI system that met the partner's responsibility bar and didn't need a disclaimer banner to justify its existence. The rules were the feature, not the apology.
Stack
- Constitutional AI
- Human-in-the-loop review
- Escalation workflows
- Audit transcripts
