App Store Compliance

App Store Guideline 1.2 Compliance

Pass App Review on your first submission. Vettly covers every Guideline 1.2 requirement: content filtering, user reporting, user blocking, and audit trails, all with a single API integration.

What Apple Requires

1.2 User-Generated Content

Apps with user-generated content or services that end up being used primarily for pornographic content, Chatroulette-type experiences, objectification of real people, making physical threats, or bullying do not belong on the App Store and may be removed without notice. If your app includes user-generated content from a web-based service, it may display incidental mature "NSFW" content, provided that the content is hidden by default and only displayed when the user turns it on via your website.

1.2.1 Content Filtering

Apps with user-generated content must include a mechanism for filtering objectionable material from being posted to the app.

1.2.2 User Reporting

Apps with user-generated content must include a mechanism for users to flag objectionable content.

1.2.3 User Blocking

Apps with user-generated content must include the ability to block abusive users from the service.

Compliance Checklist

| Requirement | What Apple Checks | Vettly Feature |
| --- | --- | --- |
| Content filtering | UGC is screened before it is displayed to other users | POST /v1/check |
| Reporting mechanism | Users can flag content they find objectionable | POST /v1/reports |
| User blocking | Users can block abusive accounts from contacting them | POST /v1/blocks |
| Audit trail | Evidence of moderation decisions for appeals and review | decisionId on every response |

App Review Notes Template

Paste this into the “App Review Notes” field in App Store Connect. Replace the [bracketed] placeholders with your app-specific details.

App Review Notes
Content Moderation: This app uses Vettly (https://vettly.dev) for content moderation.
- Filtering: All UGC is screened via the Vettly API before display (Guideline 1.2.1)
- Reporting: Users can flag content via [describe your report button/flow] (Guideline 1.2.2)
- Blocking: Users can block other users via [describe your block flow] (Guideline 1.2.3)
- Contact: [your-email@example.com]

Code Examples

ContentCheck.swift (Swift)
import Foundation

struct CheckResult: Decodable {
    let action: String
    let decisionId: String  // returned on every response; store it for your audit trail
}

func checkContent(_ text: String) async throws -> Bool {
    let url = URL(string: "https://api.vettly.dev/v1/check")!
    var request = URLRequest(url: url)
    request.httpMethod = "POST"
    // In production, proxy this call through your server; don't ship a live key in the app binary.
    request.setValue("Bearer vettly_live_...", forHTTPHeaderField: "Authorization")
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = try JSONEncoder().encode(["content": text, "policy": "default"])

    let (data, _) = try await URLSession.shared.data(for: request)
    let result = try JSONDecoder().decode(CheckResult.self, from: data)
    return result.action != "block"
}
moderation.ts (React Native)
import { Alert } from 'react-native';
import { Vettly } from '@vettly/sdk';

const client = new Vettly('vettly_live_...');

// Check content before posting (Guideline 1.2.1)
export async function submitPost(userInput: string): Promise<boolean> {
  const result = await client.check({
    content: userInput,
    policy: 'community-safe',
  });
  if (result.action === 'block') {
    Alert.alert('Content blocked', 'Please review our community guidelines.');
    return false;
  }
  return true;
}

// Report objectionable content (Guideline 1.2.2)
export async function reportContent(contentId: string, reporterId: string) {
  await client.reports.create({
    contentId,
    reason: 'offensive',
    reportedBy: reporterId,
  });
}

// Block an abusive user (Guideline 1.2.3)
export async function blockUser(userId: string, blockerId: string) {
  await client.blocks.create({
    userId,
    blockedBy: blockerId,
  });
}
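The compliance checklist's audit-trail row relies on the decisionId that Vettly returns with every response. One way to make use of it is to persist each decision alongside the content it applies to, so you can cite a specific decisionId if you need to appeal an App Review outcome. A minimal sketch, assuming an in-memory store (the AuditEntry shape and AuditLog class here are illustrative, not part of the Vettly SDK):

```typescript
type AuditEntry = {
  decisionId: string; // from the Vettly response
  contentId: string;  // your app's ID for the checked content
  action: 'allow' | 'flag' | 'block';
  checkedAt: string;  // ISO-8601 timestamp
};

class AuditLog {
  private entries: AuditEntry[] = [];

  // Record a moderation decision as soon as the Vettly response arrives.
  record(entry: Omit<AuditEntry, 'checkedAt'>): AuditEntry {
    const stored: AuditEntry = { ...entry, checkedAt: new Date().toISOString() };
    this.entries.push(stored);
    return stored;
  }

  // Look up every decision made about a piece of content,
  // e.g. to quote its decisionId in an App Review appeal.
  forContent(contentId: string): AuditEntry[] {
    return this.entries.filter((e) => e.contentId === contentId);
  }
}
```

In a real app you would back this with your database rather than an array; the key point is storing the decisionId at the moment each check, report, or block resolves.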

Ship compliance before your next App Review

Get started with Vettly's free tier: 15,000 text checks, 1,000 image checks, and 250 video checks per month. No credit card required.

Get started free