What's in this release:
Some people love to let AI cook. But we know "not in our kitchen" is your legal team's response. So here's the newest workspace feature to prevent a fire: Guardrails. Screen inputs and outputs no matter what you've got going on under the hood.
Guardrails are always-on safety and compliance rules that run at runtime (outside your prompts and flows) on every exchange in a conversation. They work across any AI engine, LLM, or channel...basically, if it talks, Guardrails can block it.
Three ways to catch:
- Regex: Laser-precision pattern checks (think credit cards, SSNs, phone numbers, profanity)
- Keyword: Say the magic word: flag exact terms or phrases from a list you define
- LLM judge: “Is this unsafe or policy-breaking?” Let a model decide with your criteria
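To make the three detection modes concrete, here's a minimal sketch in Python. The function names, patterns, and keyword list are all illustrative assumptions, not the product's actual API, and the LLM judge is stubbed rather than calling a real model:

```python
import re

# Hypothetical guardrail checks -- illustrative only, not the product's API.

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")   # e.g. matches 123-45-6789
BLOCKED_KEYWORDS = {"wire transfer", "password"}      # example keyword list

def regex_check(text: str) -> bool:
    """Regex: laser-precision pattern match (SSNs in this sketch)."""
    return bool(SSN_PATTERN.search(text))

def keyword_check(text: str) -> bool:
    """Keyword: flag exact terms or phrases from a configured list."""
    lowered = text.lower()
    return any(kw in lowered for kw in BLOCKED_KEYWORDS)

def llm_judge_check(text: str) -> bool:
    """LLM judge: ask a model 'is this unsafe or policy-breaking?'
    Stubbed here; a real implementation would call a model with your criteria."""
    return False

def tripped(text: str) -> bool:
    """A message trips the guardrail if any detection mode fires."""
    return regex_check(text) or keyword_check(text) or llm_judge_check(text)
```

Each check runs independently, so one guardrail can mix precise patterns with fuzzier model-based judgment.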
Four ways to smack down:
- Override: Replace with a safe response (“Let’s keep it classy”)
- Mask: Redact the spicy bits (■■■■)
- Redirect: Send to a safe flow (escalation, error handler, knowledge base)
- Flag: Let it through but log it
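And a rough sketch of what the four actions do to a tripped message. Again, these helpers are hypothetical stand-ins, not the real API; the redirect target is represented here as just a flow name:

```python
import re

# Hypothetical action helpers -- illustrative only, not the product's API.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def mask(text: str) -> str:
    """Mask: redact just the spicy bits, leave the rest intact."""
    return SSN_PATTERN.sub("■■■■", text)

def override(text: str) -> str:
    """Override: throw away the whole message, return a safe response."""
    return "Let's keep it classy."

def redirect(text: str) -> str:
    """Redirect: hand the conversation to a safe flow (name is made up)."""
    return "flow:escalation"

def flag(text: str, log: list) -> str:
    """Flag: let the message through untouched, but record that it tripped."""
    log.append({"action": "flag", "text": text})
    return text
```

The key difference: mask and override change what the user sees, redirect changes where the conversation goes, and flag changes nothing except the audit trail.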
And speaking of logs: every Guardrail keeps receipts, so you can see what tripped, why it tripped, and what it was replaced with.
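As an illustration of what those receipts might carry (the actual field names aren't documented here, so this record shape is an assumption):

```python
# Hypothetical guardrail receipt -- field names are illustrative, not the
# product's actual log schema.
receipt = {
    "guardrail": "ssn-regex",     # which rule fired
    "trigger": "regex",           # what tripped (detection mode)
    "matched": "123-45-6789",     # why it tripped (the offending span)
    "action": "mask",             # which smack-down was applied
    "replacement": "■■■■",        # what it was replaced with
}
```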