AI Content Moderation API
Use AI to evaluate content with clear, deterministic actions.
Deploy policies once and apply them across every content type.
What it detects
- Text, images, and video
- Hate & harassment
- Sexual content
- Violence & self-harm
- Spam & scams
- Custom rules
Why developers choose Vettly
- Consistent action schema
- Policy thresholds you control
- Audit-friendly decision logs
- Developer plan to start and scale
Example request
```bash
curl -X POST https://api.vettly.dev/v1/check \
  -H "Authorization: Bearer YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{"content": "You are terrible.", "contentType": "text"}'
```
Example response
```json
{
  "flagged": true,
  "action": "block",
  "categories": {
    "harassment": 0.93,
    "hate": 0.02
  },
  "policy": "default",
  "latency_ms": 142
}
```
Compared to LLM-only checks
Vettly is purpose-built for decision workflows with clear actions and policy control.
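Because every response carries a single `action` field, application code can dispatch on it directly instead of re-interpreting category scores. A minimal sketch of that pattern (the `"review"` action and the decision labels here are illustrative assumptions, not part of the documented schema):

```python
import json

# Example response from the /v1/check endpoint (values from the docs above).
raw = """{
  "flagged": true,
  "action": "block",
  "categories": {"harassment": 0.93, "hate": 0.02},
  "policy": "default",
  "latency_ms": 142
}"""

def decide(result: dict) -> str:
    # Act on the deterministic action field rather than re-scoring
    # categories client-side; thresholds live in the policy, not here.
    action = result["action"]
    if action == "block":
        return "rejected"
    if action == "review":  # hypothetical queue-for-review action
        return "queued"
    return "published"

result = json.loads(raw)
print(decide(result))  # -> rejected
```

Keeping the threshold logic server-side in the policy means a moderation change is a policy update, not a client redeploy.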
Keep exploring
Content Moderation API
One endpoint for text, image, and video moderation.
OpenAI Moderation Alternative
Keep OpenAI speed while adding workflows, images, and video.
UGC Moderation API
Moderate posts, comments, profiles, and media at scale.
React Native Moderation API
Add moderation to React Native apps without custom backend glue.
Get an API key
Start making decisions in minutes with a Developer plan and clear upgrade paths.