AI safety

Narrative SDK assistant behavior is constrained by safety, grounding, and responsibility boundaries.
Assistant outputs are intended for decision support; they do not replace advice from legal, tax, or other regulated professionals.

Core principles

- Use non-alarming, actionable language focused on user next steps.
- Ground responses in authorized data and active product context.
- When data is missing or conflicting, state uncertainty clearly and avoid fabrication.
- Respect tenant and user access boundaries for every generated response.
- Do not provide regulated legal, tax, or financial advice; provide guidance and escalation paths instead.

Safety controls in practice

Prompt + routing guardrails

Detect high-risk or out-of-scope requests and route to restricted handling paths.
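A routing guardrail like this can be sketched as a classifier in front of the normal answer path. This is a minimal illustration, not the SDK's actual implementation: the category names, keywords, and handler labels below are all assumptions.

```python
# Toy routing guardrail: classify a request, then send high-risk categories
# to a restricted handling path. Categories and keywords are illustrative.

HIGH_RISK_CATEGORIES = {"legal_advice", "tax_advice", "financial_advice"}

def classify_request(text: str) -> str:
    """Map a request to a risk category via simple keyword matching."""
    lowered = text.lower()
    if any(phrase in lowered for phrase in ("lawsuit", "contract dispute")):
        return "legal_advice"
    if any(phrase in lowered for phrase in ("tax deduction", "tax filing")):
        return "tax_advice"
    if any(phrase in lowered for phrase in ("invest", "portfolio")):
        return "financial_advice"
    return "general"

def route(text: str) -> str:
    """Route high-risk or out-of-scope requests to a restricted path."""
    if classify_request(text) in HIGH_RISK_CATEGORIES:
        return "restricted"   # guarded path: hedged guidance + escalation
    return "standard"         # normal grounded-answer path
```

A production guardrail would use a trained classifier rather than keywords, but the routing shape is the same: classify first, answer second.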

Runtime disclaimers

Show explicit user-facing disclaimers for known uncertainty or processing states.

Refusal behavior

Refuse disallowed requests with clear rationale and safe alternatives.
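A refusal that carries rationale and alternatives can be modeled as a structured object rather than a bare string, as in this sketch; the field names and wording are assumptions, not the SDK's schema.

```python
# Structured refusal: a clear rationale plus safe alternatives, instead of
# a bare "I can't help with that". Field names are illustrative.

from dataclasses import dataclass, field

@dataclass
class Refusal:
    rationale: str                                        # why the request is disallowed
    alternatives: list[str] = field(default_factory=list)  # safe next steps

def refuse_regulated_advice(topic: str) -> Refusal:
    """Build a refusal for a regulated-advice topic, e.g. 'tax' or 'legal'."""
    return Refusal(
        rationale=f"I can't provide {topic} advice; that requires a licensed professional.",
        alternatives=[
            "I can summarize the relevant data in your account.",
            f"I can route you to support to find a qualified {topic} professional.",
        ],
    )
```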

Validation suites

Test tone, scope, and trigger behavior to catch regressions before release.
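Such a validation suite boils down to assertion-style checks run against assistant replies before release. The sketch below is self-contained: the stub assistant, word lists, and check names are all assumptions for illustration.

```python
# Sketch of tone/scope regression checks. assistant_reply() is a local stub
# standing in for the real assistant; word lists are illustrative.

ALARMING_PHRASES = {"catastrophic", "act immediately or else", "urgent!!!"}

def assistant_reply(prompt: str) -> str:
    """Stub assistant used only to exercise the checks below."""
    return "Your report is ready. You can review the summary at any time."

def check_tone(reply: str) -> bool:
    """Non-alarming language: no alarmist phrasing in the reply."""
    lowered = reply.lower()
    return not any(phrase in lowered for phrase in ALARMING_PHRASES)

def check_scope(reply: str) -> bool:
    """Scope: the assistant never presents itself as a licensed professional."""
    return "as your lawyer" not in reply.lower()
```

In practice these checks would run in CI over a fixed prompt set, so a tone or trigger regression fails the build rather than reaching users.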

Escalation model

1. State limitation: Explain confidence gaps or missing context explicitly.
2. Constrain recommendation: Avoid prescriptive advice in regulated or high-risk scenarios.
3. Escalate to human channel: Route users to approved human support workflows when needed.
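The three steps above can be sketched as a single response builder; the channel description and wording are illustrative, not the SDK's actual strings.

```python
# Sketch of the three-step escalation model: state limitation, constrain
# recommendation, escalate to a human channel. Wording is illustrative.

def escalated_response(missing_context: str, topic: str) -> str:
    steps = [
        # 1. State limitation
        f"I don't have enough information about {missing_context} to be confident here.",
        # 2. Constrain recommendation
        f"Because {topic} decisions are regulated, I can't recommend a specific action.",
        # 3. Escalate to human channel
        "You can reach our support team for a review with a qualified specialist.",
    ]
    return " ".join(steps)
```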