Replace emails, names, and phone numbers with reversible placeholders before sending text to ChatGPT or other LLMs. Restore the original values after the AI responds.
5-minute setup. 500 requests free. No credit card.
# Redact PII before sending to LLM
curl -X POST https://api.scrubprompt.com/api/redact \
-H "Content-Type: application/json" \
-H "X-API-Key: your-api-key" \
-d '{"text": "John lives at john@email.com"}'
# Response: {"text": "[[SP_P_abc123]] lives at [[SP_E_xyz789]]"}
# Restore after LLM response
curl -X POST https://api.scrubprompt.com/api/restore \
-H "Content-Type: application/json" \
-H "X-API-Key: your-api-key" \
-d '{"text": "Hello [[SP_P_abc123]]"}'
# Response: {"text": "Hello John"}

The only solution that combines LLM data privacy with reversible data masking.
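The curl calls above can be mirrored in a few lines of local Python. This is a toy, in-memory illustration of the redact → restore round trip, not the service implementation: the `[[SP_E_...]]` placeholder format matches the example responses, but the email regex and mapping logic here are stand-ins.

```python
import re
import secrets

# Toy stand-in for the /api/redact and /api/restore endpoints:
# swap emails for [[SP_E_...]] tokens, then swap them back.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def redact(text, mapping):
    def replace(match):
        token = f"[[SP_E_{secrets.token_hex(3)}]]"
        mapping[token] = match.group(0)  # remember token -> original value
        return token
    return EMAIL_RE.sub(replace, text)

def restore(text, mapping):
    for token, original in mapping.items():
        text = text.replace(token, original)
    return text

mapping = {}
redacted = redact("Contact john@email.com for details", mapping)
restored = restore(redacted, mapping)  # round-trips back to the original
```

The real API stores the mapping server-side (encrypted), so the restore call works even on a different machine than the redact call.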
Permanently removes PII — business logic breaks downstream
Custom compliance pipelines require weeks of engineering
Enterprise solutions start at $50k — prohibitive for startups
Preserve business logic while achieving GDPR compliance
Integrate in minutes with Python, Node.js, or Go SDKs
Starts at $19.90/mo — affordable for teams of any size
Everything you need to secure your AI workflows
Automatically detect and redact 20+ types of PII including names, emails, phone numbers, credit cards, and more.
Replace PII with secure placeholders and restore them perfectly after LLM processing. Preserve business logic.
TLS 1.2+ encryption, SOC 2 compliance, and zero-knowledge architecture. GDPR, HIPAA, and PCI-DSS ready.
Three simple steps to protect your data in any AI workflow
Send your text with PII to our API. We replace sensitive data with secure placeholders.
Send the redacted text to ChatGPT or any LLM. The AI never sees real PII.
Pass the LLM response through our restore endpoint. Placeholders are swapped back perfectly.
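The three steps above can be sketched as a single wrapper function. The `fake_redact`, `fake_restore`, and `llm` pieces are hypothetical stubs for illustration; in real code they would call the ScrubPrompt API and your LLM provider.

```python
def safe_llm_call(text, redact, restore, llm):
    """Step 1: redact. Step 2: send to the LLM. Step 3: restore."""
    mapping = {}
    clean = redact(text, mapping)
    reply = llm(clean)  # the model only ever sees placeholders
    return restore(reply, mapping)

# Hypothetical stubs standing in for the API and an LLM client.
def fake_redact(text, mapping):
    mapping["[[SP_P_abc123]]"] = "John"
    return text.replace("John", "[[SP_P_abc123]]")

def fake_restore(text, mapping):
    for token, value in mapping.items():
        text = text.replace(token, value)
    return text

result = safe_llm_call(
    "Say hi to John",
    fake_redact,
    fake_restore,
    llm=lambda t: f"Hi {t.split()[-1]}!",
)
print(result)  # Hi John!
```

Because the mapping is scoped to one call, a placeholder leaked into an unrelated conversation cannot be resolved back to the original value.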
Start free. Scale as you grow. No hidden fees.
For growing teams
All plans include TLS 1.2+ encryption, 99.5% uptime SLA, and GDPR compliance.
Practical data masking solutions for developers shipping real products
"I want to test my app with real user data without exposing sensitive information."
Replace all PII before using in staging environments
"I want to use ChatGPT for customer support without exposing customer data."
Redact personal info from tickets before sending to AI
"I need to share logs with my team without leaking sensitive data."
Remove sensitive data from logs before sharing
"I'm building AI products and need to protect user data."
Clean user data before feeding to ChatGPT, Claude, Gemini, or any LLM
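For the log-sharing and staging use cases above, even a simple local scrubber conveys the idea. The patterns below are illustrative only — a real service detects 20+ PII types, while this sketch handles emails and US-style phone numbers.

```python
import re

# Toy log scrubber: replace emails and US-style phone numbers with
# labeled markers before sharing logs. Patterns are illustrative only.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def scrub_log_line(line):
    for label, pattern in PATTERNS.items():
        line = pattern.sub(f"[{label}]", line)
    return line

scrubbed = scrub_log_line("user=jane@corp.com called 555-123-4567")
print(scrubbed)  # user=[EMAIL] called [PHONE]
```

Unlike the reversible placeholders in the API examples, this one-way scrub is enough when nobody downstream needs the original values back.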
Everything you need to know about LLM data privacy and PII redaction
ScrubPrompt replaces PII with unique, deterministic placeholders before sending data to LLMs. These mappings are stored securely and can be restored exactly after processing, maintaining your business logic while achieving GDPR compliance.
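One way to get deterministic placeholders is to HMAC the value with a per-account secret, so the same input always yields the same token. ScrubPrompt's actual scheme is not documented here; this is a sketch of the general technique.

```python
import hashlib
import hmac

# Sketch: deterministic placeholder derivation via HMAC-SHA256.
# SECRET is a hypothetical per-account key; the real scheme may differ.
SECRET = b"per-account-secret"

def placeholder(value, kind="E"):
    digest = hmac.new(SECRET, value.encode(), hashlib.sha256).hexdigest()[:6]
    return f"[[SP_{kind}_{digest}]]"

a = placeholder("john@email.com")
b = placeholder("john@email.com")
print(a == b)  # True: identical values get identical placeholders
```

Determinism matters for business logic: if the same email appears three times in a document, all three occurrences collapse to one token, so the LLM can still tell they refer to the same entity.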
Yes. ScrubPrompt is designed to help you achieve GDPR compliance when processing personal data with AI/ML models. Our zero-knowledge architecture ensures we never see your original data — only encrypted placeholder mappings. We provide documentation for compliance audits.
Our fault-tolerant restoration handles variations. For best results with ChatGPT and other platforms, we recommend including a system prompt instructing the LLM to preserve placeholders.
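A system prompt along these lines helps the model keep placeholders intact. The wording below is only an example, not official guidance, and the message structure is the generic role/content shape used by most chat APIs.

```python
# Example system prompt (wording is an assumption) asking the model
# to leave ScrubPrompt-style placeholders untouched.
system_prompt = (
    "Some words in the input are replaced by placeholders like "
    "[[SP_P_abc123]]. Copy any placeholder into your answer exactly "
    "as written; never alter, translate, or expand it."
)

messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "Write a greeting for [[SP_P_abc123]]."},
]
```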
Absolutely. We use zero-knowledge architecture — we never store your original text, only encrypted placeholder mappings. All data is processed with TLS 1.2+ encryption. Our architecture is designed for HIPAA compliant AI and enterprise security requirements.
We offer a generous free tier for development. Professional plans include 10,000 API calls/month with priority support. Enterprise plans include custom integrations and dedicated support for PCI-DSS AI processing and other compliance requirements.