Installation
Add prompt injection protection to any Node.js app in minutes. Use the official middleware package or call the HTTP API directly from any language.
npm Package
The safeprompt-middleware package wraps the SafePrompt API with typed Express and Next.js helpers.
npm install safeprompt-middleware
Requires Node.js ≥ 18. Works with any ai-security-gateway-spec compliant provider.
Express
Use createGuard() as route-level middleware. It validates req.body.prompt before your handler runs and short-circuits with a 400 if injection is detected.
import express from 'express';
import { createGuard } from 'safeprompt-middleware';
const app = express();
app.use(express.json());
// Protect a single route
app.use('/api/chat', createGuard({
apiKey: process.env.GUARD_API_KEY,
}));
app.post('/api/chat', (req, res) => {
// req.body.prompt is safe — pass to your LLM here
res.json({ reply: 'response from LLM' });
});
Full configuration
app.use('/api/chat', createGuard({
apiKey: process.env.GUARD_API_KEY,
mode: 'balanced', // 'fast' | 'balanced' | 'strict'
fieldName: 'message', // which req.body field to validate (default: 'prompt')
failOpen: true, // allow through if SafePrompt is unreachable (default: false)
onBlock: (req, res, result) => {
res.status(400).json({ error: 'Unsafe prompt', threats: result.threats });
},
onError: (req, res, err) => {
console.error('SafePrompt error:', err);
res.status(500).json({ error: 'Validation failed' });
},
}));
What gets attached
On a safe request, req.guardResult is set:
{
"safe": true,
"threats": [],
"confidence": 0.99,
"processingTimeMs": 4,
"passesUsed": 1,
"request_id": "uuid",
"timestamp": "2026-03-19T..."
}
Next.js — App Router
Wrap your route handler with withGuardRoute(). The validated prompt is available via the original request.
// app/api/chat/route.ts
import { withGuardRoute } from 'safeprompt-middleware/next';
async function handler(req: Request) {
const { message } = await req.json();
// message has passed injection check
return Response.json({ reply: 'response from LLM' });
}
export const POST = withGuardRoute(handler, {
apiKey: process.env.GUARD_API_KEY!,
fieldName: 'message',
});
Next.js — Pages Router
// pages/api/chat.ts
import { withGuard } from 'safeprompt-middleware/next';
import type { NextApiRequest, NextApiResponse } from 'next';
async function handler(req: NextApiRequest, res: NextApiResponse) {
// req.body.prompt is safe
res.json({ reply: 'response from LLM' });
}
export default withGuard(handler, {
apiKey: process.env.GUARD_API_KEY!,
fieldName: 'message',
});
HTTP API — Any Language
No package required. POST to https://api.safeprompt.dev/api/v1/validate from any language or framework.
curl -X POST https://api.safeprompt.dev/api/v1/validate \
-H "X-API-Key: YOUR_API_KEY" \
-H "X-User-IP: CLIENT_IP" \
-H "Content-Type: application/json" \
-d '{"prompt": "Hello, how can you help me?"}'
import requests
result = requests.post(
'https://api.safeprompt.dev/api/v1/validate',
headers={
'X-API-Key': 'YOUR_API_KEY',
'X-User-IP': client_ip,
'Content-Type': 'application/json',
},
json={'prompt': user_input}
).json()
if not result['safe']:
raise ValueError(f"Blocked: {result['threats'][0]}")
See the API Reference for all request fields, response schema, and rate limits.
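For Node.js or TypeScript callers that skip the middleware, the same check can be sketched with a small typed helper. The interface below mirrors the response shape documented under "What gets attached"; the helper name assertSafe is ours, not part of the package.

```typescript
// Response shape as documented under "What gets attached".
interface GuardResult {
  safe: boolean;
  threats: string[];
  confidence: number;
  processingTimeMs: number;
  passesUsed: number;
  request_id: string;
  timestamp: string;
}

// Throw on an unsafe result, mirroring the Python example above;
// return the result unchanged when it is safe.
function assertSafe(result: GuardResult): GuardResult {
  if (!result.safe) {
    throw new Error(`Blocked: ${result.threats[0] ?? 'unknown threat'}`);
  }
  return result;
}

// Usage with fetch (built into Node.js >= 18):
// const result = await fetch('https://api.safeprompt.dev/api/v1/validate', {
//   method: 'POST',
//   headers: {
//     'X-API-Key': apiKey,
//     'X-User-IP': clientIp,
//     'Content-Type': 'application/json',
//   },
//   body: JSON.stringify({ prompt: userInput }),
// }).then((r) => r.json() as Promise<GuardResult>);
// assertSafe(result);
```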
Using a Different Provider
The middleware is provider-agnostic. To use any ai-security-gateway-spec compliant provider, pass a provider URL:
createGuard({
provider: 'https://your-provider.com',
apiKey: process.env.YOUR_PROVIDER_KEY,
})
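To illustrate what a provider swap changes, here is a rough sketch of the request the middleware would build, assuming the provider exposes the same /api/v1/validate path and headers as the curl example above. The helper and its name are ours, for illustration only, not an API of safeprompt-middleware.

```typescript
// Hypothetical helper showing the request shape sent to a provider;
// the endpoint path and header names follow the curl example above.
interface ValidateRequest {
  url: string;
  headers: Record<string, string>;
  body: string;
}

function buildValidateRequest(
  providerBaseUrl: string,
  apiKey: string,
  prompt: string,
  userIp?: string,
): ValidateRequest {
  const headers: Record<string, string> = {
    'X-API-Key': apiKey,
    'Content-Type': 'application/json',
  };
  if (userIp) headers['X-User-IP'] = userIp;
  return {
    // Strip a trailing slash so the path is not doubled.
    url: `${providerBaseUrl.replace(/\/$/, '')}/api/v1/validate`,
    headers,
    body: JSON.stringify({ prompt }),
  };
}
```

Pointing the middleware at a different provider only changes the base URL and key; the request and response contract stays the same under the spec.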