Kevin Lewis
AI Systems · Trust Boundaries · Interaction Design

ParaAIS

An experimental AI interaction framework built around a strict principle: the assistant should never sound more confident than its evidence allows.

Core idea

Many AI “failures” are presentation failures: the model guesses, and the UX presents the guess as fact. ParaAIS explores how to build assistants that earn trust by being explicit about their boundaries, sources, and uncertainty.

Design rule

When evidence is missing: redirect to verified material, offer the next-best form of help, and never invent specifics.
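
A minimal sketch of what this rule could look like in code. The Evidence type and respond function are hypothetical illustrations, not part of ParaAIS:

```python
from dataclasses import dataclass

@dataclass
class Evidence:
    """One piece of verified source material (hypothetical type)."""
    source: str   # e.g. a document ID or URL
    excerpt: str  # the supporting passage

def respond(question: str, evidence: list[Evidence]) -> str:
    """The design rule: with no evidence, redirect instead of inventing."""
    if not evidence:
        # No verified material: redirect and offer next-best help.
        return (
            "I don't have verified material on that. You may want to "
            "check the source documents directly; I can help with "
            "related topics I do have evidence for."
        )
    # Evidence exists: answer from the excerpts and cite their sources.
    cited = "; ".join(f'"{e.excerpt}" [{e.source}]' for e in evidence)
    return f"Based on the available sources: {cited}"
```

Called with an empty evidence list, respond always takes the redirect path; it never produces specifics it cannot attribute.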

System design principles
  • Tiered claim confidence (confirmed / inferred / unsupported; sketched after this list)
  • Evidence-aware responses and safe redirection
  • Refusals that remain helpful (no dead ends)
  • Clear separation between retrieved knowledge and model reasoning
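
A rough sketch of how the first and last principles might be modeled together; every name here is illustrative rather than ParaAIS’s actual API:

```python
from dataclasses import dataclass
from enum import Enum

class Confidence(Enum):
    CONFIRMED = "confirmed"      # stated directly in retrieved evidence
    INFERRED = "inferred"        # model reasoning over evidence
    UNSUPPORTED = "unsupported"  # no evidence; refuse or redirect

@dataclass
class Claim:
    text: str
    confidence: Confidence
    sources: list[str]  # retrieved knowledge, kept apart from reasoning

def render(claim: Claim) -> str:
    """Frame each claim so the UI never overstates its backing."""
    if claim.confidence is Confidence.CONFIRMED:
        return f"{claim.text} (sources: {', '.join(claim.sources)})"
    if claim.confidence is Confidence.INFERRED:
        return f"Likely, given {', '.join(claim.sources)}: {claim.text}"
    # Unsupported claims surface as explicit gaps, not assertions.
    return f"I can't confirm this: {claim.text}"
```

Keeping sources as structured data on the claim, rather than folding them into the text, is what makes the retrieved/reasoned separation enforceable at render time.
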
Where it applies
  • Recruiter-facing portfolio assistants
  • Internal knowledge base chat UX
  • High-trust professional tools
  • Governance-aware GenAI demos
Why it matters

ParaAIS is an applied exploration of “trust by limitation.” It prioritizes safe interaction patterns over persuasion, and frames uncertainty as a feature — not an embarrassment.

The win condition isn’t “sounds smart.” It’s “remains reliable under pressure.” (Basically: no confident improv in a job interview. We leave that to LinkedIn.)