Requesty Guardrails are enterprise-grade security filters that scan every request and every response for sensitive patterns — masking them before the model sees them and before the response reaches the caller. Three categories ship today: personal data, credentials, and financial data. Admins toggle each one in the dashboard, changes apply org-wide immediately, and there is no bypass mechanism. This is the control that satisfies the compliance reviewer in the 11th-hour sprint.
Full feature docs: requesty.ai/features/guardrails.
What it protects against
| Category | Detects | Why it matters |
|---|---|---|
| Personal data | SSNs, emails, phone numbers, full names | GDPR, CCPA, internal PII policy |
| Credentials | API keys, DB passwords, OAuth tokens, auth secrets | Accidental prompt leaks, insider threats |
| Financial data | Credit card numbers, bank accounts, PCI-scoped info | PCI-DSS, financial-services regulation |
Each category toggles on or off independently — you're not forced to turn on all three at once.
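As a rough picture of what pattern-based masking looks like, here is a toy sketch — these regexes are simplified stand-ins for illustration, not Requesty's actual detection rules:

```python
import re

# Simplified stand-in patterns -- Requesty's real detectors are broader.
PATTERNS = {
    "REDACTED_SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "REDACTED_EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[A-Za-z]{2,}\b"),
    "REDACTED_CARD": re.compile(r"\b(?:\d[ -]?){12}\d{1,4}\b"),
    "REDACTED_KEY": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),
}

def mask(text: str) -> str:
    """Replace every matched pattern with its placeholder."""
    for name, pattern in PATTERNS.items():
        text = pattern.sub(f"[{name}]", text)
    return text
```

The point of the sketch is the shape of the operation — match, replace with a placeholder, pass the rest through untouched — not the specific patterns.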
The scan runs in both directions
A lot of teams scan inbound requests and call it a day. That's a partial solution. Two classes of leak:
- Your user accidentally pastes a secret into a prompt. Caught on inbound scan, masked before the model processes it.
- The model generates or echoes a sensitive pattern in its output. Caught on outbound scan, masked before the response reaches your application.
The second one is the one most teams miss. An LLM asked to "rewrite this support email" will happily repeat a credit card number verbatim. Output scanning is what prevents that number from ending up in your support-agent UI, your logs, or your analytics warehouse.
Requesty scans both directions on the gateway itself. Flow:
Caller → [Inbound scan → mask] → Model → [Outbound scan → mask] → Caller
              │                                  │
              └───── Audit log (patterns only) ──┘

Configuration: one toggle, no restart
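The bidirectional flow can be sketched in a few lines, using a toy masking helper and a stand-in model call (both hypothetical, for illustration only):

```python
import re

EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[A-Za-z]{2,}\b")

def mask(text: str) -> str:
    return EMAIL.sub("[REDACTED_EMAIL]", text)

def call_model(prompt: str) -> str:
    # Stand-in upstream model that echoes its input -- exactly how a
    # "rewrite this support email" task leaks a sensitive value verbatim.
    return f"Rewritten: {prompt}"

def gateway(prompt: str) -> str:
    masked = mask(prompt)        # inbound scan: model never sees the raw value
    response = call_model(masked)
    return mask(response)        # outbound scan: caller never sees an echo
```

Even if the inbound pass missed a value and the model echoed it back, the outbound pass would still catch the pattern before it reaches the caller — which is why scanning one direction only is half a control.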
From the Admin Panel → Guardrails tab:
- PII Protection — on/off
- Secret Keys Protection — on/off
- Financial Data Protection — on/off (with sub-categories: PCI, banking, general financial)
Changes apply organisation-wide immediately, no downtime. Every key, every approved model, every endpoint inherits the new policy on the next request. No application code change. No deploy.
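The no-deploy behaviour can be pictured with a toy in-memory toggle — hypothetical names; in Requesty the real toggle lives in the Admin Panel, not in your code:

```python
# Hypothetical stand-in for the org-level Guardrails state.
ORG_TOGGLES = {"pii": False}

def handle_request(text: str) -> str:
    # The toggle is read per request, so flipping it changes behaviour
    # on the very next call -- no restart, no redeploy of the caller.
    if ORG_TOGGLES["pii"]:
        return text.replace("jane@corp.com", "[REDACTED_EMAIL]")  # toy mask
    return text
```

Flip `ORG_TOGGLES["pii"] = True` and the very next `handle_request` call returns masked output — the application code never changes, which is the property the gateway gives you.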
Why org-wide and not per-key
This is a deliberate design choice that surprises some teams at first. The rule is simple: a security control that individual teams can disable is a security control that will be disabled. Usually not out of malice — someone's on deadline, a guardrail blocks a legitimate output, they flip the switch "just for this release" and forget.
Guardrails sit at the organisation level because the compliance audit runs at the organisation level. If your PII scanning has a per-team bypass, your compliance report has a per-team asterisk. That's a bad trade.
For the rare workload that genuinely needs raw access (security research, internal red-team tooling), the right answer is a separate organisation — isolated keys, isolated logs, isolated scope — not a backdoor.
Pairs with approved-model lists
Guardrails are one of three governance surfaces on the gateway:
- Content filtering (this post) — scan and mask sensitive patterns
- Approved model lists — via policies, restrict which models any key can actually route to. See Routing policies 101.
- Spend controls — via API key labels + alerts.
All three compose: a support team's key can only route through an approved `support-eu` policy that stays in-region, with PII masking on, spend capped at $2k/mo, and an alert firing at 80%. Four controls, zero application code.
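How the surfaces compose at request time can be sketched with hypothetical names (Requesty's real policy engine is configured in the dashboard, not in code):

```python
from dataclasses import dataclass

@dataclass
class KeyPolicy:
    approved_models: set     # approved-model list for this key
    monthly_cap_usd: float   # spend control
    spent_usd: float = 0.0

def authorize(policy: KeyPolicy, model: str, est_cost_usd: float) -> str:
    """Gate a request; guardrail masking then runs on whatever passes."""
    if model not in policy.approved_models:
        return "deny: model not on approved list"
    if policy.spent_usd + est_cost_usd > policy.monthly_cap_usd:
        return "deny: monthly spend cap reached"
    return "allow"

# Hypothetical in-region support policy from the example above.
support_eu = KeyPolicy(approved_models={"mistral-large-eu"}, monthly_cap_usd=2000.0)
```

The ordering matters in the sketch: routing and spend decisions gate the request, and content masking applies to everything that gets through — no path reaches a model without passing all three.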
What this replaces
Without a gateway-layer guardrail, teams end up writing regexes in application code:
- One regex library per service
- Different coverage on each service
- No audit trail of matches
- Bypass-by-default if the team forgets to call it
The gateway replacement is one org-wide toggle, full audit of matches, and no way to bypass. That's the upgrade.
TL;DR
- Three categories — PII, credentials, financial data — toggled independently
- Both directions — inbound requests and outbound responses scanned and masked
- Org-wide, no bypass — deliberate, not an oversight
- No application code — turn on in Admin Panel, applies immediately
- Pairs with API key labels for attribution and routing policies for approved-model restriction
- Docs: requesty.ai/features/guardrails
Frequently asked questions
- What do Requesty Guardrails protect against?
- Three categories. Personal data (SSNs, emails, phone numbers, names), credentials (API keys, database passwords, auth secrets), and financial data (credit card numbers, bank account details, PCI-scoped info). Each category toggles on or off independently in the Admin Panel.
- Are guardrails applied to requests, responses, or both?
- Both. Inbound requests are scanned and sensitive patterns are masked before the model ever sees them. Outbound responses are scanned and masked before reaching the caller. That's the point — the model provider never handles raw PII, and your application never accidentally surfaces it downstream.
- Can a team bypass guardrails for a specific workload?
- No. Guardrails are org-wide and apply to every API key, every approved model, every endpoint. The design choice is deliberate — a security control that can be turned off by individual teams is a security control that will be turned off. For workloads that genuinely need raw access, use a separate organisation.
- Do guardrails add latency?
- Yes, but it's small — the scan runs in-line on the gateway, not as an extra LLM call. Typical overhead is single-digit milliseconds for requests and responses of normal size. For context: Requesty adds ~16ms of total gateway overhead, and the guardrail scan is a fraction of that.
- What happens to masked data — can I see the original?
- The original is never sent to the model or stored in the response log. Masking replaces matched patterns with placeholders (e.g. [REDACTED_EMAIL]) before the request leaves the gateway. The matched-pattern metadata is recorded for audit, but not the raw value.
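The audit behaviour described in that answer can be sketched like so — the log records which pattern matched and how many times, never the raw value (illustrative only, with a toy email-only detector):

```python
import re

EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[A-Za-z]{2,}\b")
audit_log = []

def mask_and_audit(text: str) -> str:
    masked, n = EMAIL.subn("[REDACTED_EMAIL]", text)
    if n:
        # Pattern name and match count only -- the matched value is never stored.
        audit_log.append({"pattern": "REDACTED_EMAIL", "count": n})
    return masked
```

Because only metadata is retained, the audit trail can prove that scanning happened — and how often it fired — without itself becoming a store of sensitive data.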