How to Pass Apple's App Store Guideline 1.2 for UGC Apps
What is Guideline 1.2?
If your iOS or macOS app lets users post text, images, or video, Apple's App Review team will check it against Guideline 1.2, the "User Generated Content" section of the App Store Review Guidelines. Apps that fail these checks are rejected, sometimes repeatedly, costing developers days or weeks of launch delays.
Guideline 1.2 exists because Apple wants to keep the App Store safe. If your app lets strangers share content with each other, Apple needs to see that you have systems in place to prevent abuse. The good news: the requirements are specific and predictable. Meet all four and you pass.
The Four Requirements
Guideline 1.2 breaks down into one parent rule and three specific sub-requirements:
User Generated Content
The parent rule: apps whose user-generated content ends up being used primarily for objectionable material may be removed from the App Store without notice.
Content Filtering
A mechanism that filters objectionable material before it is posted to the app.
User Reporting
A mechanism for users to flag objectionable content.
User Blocking
The ability to block abusive users from the service.
Requirement 1: Content Filtering
Apple wants proof that user-generated content is screened before other users see it. This means server-side moderation, not just client-side word lists. The reviewer will typically try posting something obviously objectionable and check whether it appears in the feed.
With Vettly, you call POST /v1/check before saving any UGC. The response tells you whether to allow, flag, or block the content, and includes a decisionId you can reference later. If you're new to Vettly, the Getting Started guide walks through setup in under five minutes.
```typescript
import { Vettly } from '@vettly/sdk';

const client = new Vettly('vettly_live_...');

const result = await client.check({
  content: userPost.text,
  policy: 'community-safe',
});

if (result.action === 'block') {
  // Reject the post before it's visible to others
  return res.status(422).json({ error: 'Content violates guidelines' });
}
```
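The snippet above only handles `block`, but the response can also say `flag`, which usually means "publish now, but queue for human review." A minimal sketch of a three-way handler; the `queueForReview` callback is a hypothetical helper of your own, not part of the Vettly SDK:

```typescript
type ModerationAction = 'allow' | 'flag' | 'block';

// Decide whether a post becomes visible immediately, given the
// action returned by the moderation check. `queueForReview` is a
// stand-in for however you notify your human moderation queue.
function handleModeration(
  action: ModerationAction,
  queueForReview: (reason: string) => void
): boolean {
  switch (action) {
    case 'block':
      return false; // reject outright, never shown to other users
    case 'flag':
      queueForReview('auto-flagged'); // publish, but have a human look
      return true;
    case 'allow':
      return true;
  }
}
```

Treating `flag` as publish-plus-review keeps borderline posts flowing while still giving moderators a second look.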
Requirement 2: Reporting Mechanism
Even with automated filtering, some content will slip through. Apple requires a way for users to flag content they find objectionable. The reviewer will look for a "Report" button or menu item on every piece of user-generated content.
Vettly's POST /v1/reports endpoint stores the report, notifies your moderation queue, and creates an audit trail, all in one call.
```typescript
// When a user taps "Report" on a post
await client.reports.create({
  contentId: post.id,
  reason: selectedReason, // e.g. "harassment", "spam", "nudity"
  reportedBy: currentUser.id,
});
```
Requirement 3: Blocking Users
Users must be able to block other users so they no longer see that person's content or receive messages from them. Apple checks that the block is effective, not just cosmetic.
Vettly's POST /v1/blocks endpoint records the block relationship. You query it when fetching feeds or messages to filter out blocked users.
```typescript
// When a user taps "Block" on another user's profile
await client.blocks.create({
  userId: blockedUser.id,
  blockedBy: currentUser.id,
});
```
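Recording the block is only half the job: your read paths must actually exclude blocked authors. A minimal sketch of the feed-side filter, assuming you have already loaded the viewer's blocked user IDs into a set (from your own blocks store, or from a blocks-listing endpoint if your Vettly plan exposes one):

```typescript
interface Post {
  id: string;
  authorId: string;
  text: string;
}

// Drop any post whose author the viewer has blocked.
// `blockedIds` is assumed to be fetched ahead of time.
function excludeBlocked(posts: Post[], blockedIds: Set<string>): Post[] {
  return posts.filter((post) => !blockedIds.has(post.authorId));
}
```

Apply the same filter to search results and message threads, since Apple checks that a block is effective everywhere, not just in the main feed.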
Setting Up Audit Trails
Every Vettly API response includes a decisionId. This is your paper trail. If a user appeals a moderation decision, or if Apple asks for evidence during review, you can look up exactly what happened: what content was checked, which policy rules fired, what scores were returned, and when.
Audit trails also matter for legal compliance in some jurisdictions (e.g., the EU Digital Services Act requires record-keeping of content moderation decisions). Vettly handles the storage and retention so you just reference the decisionId. For more detail on retention policies and compliance exports, see the Audit Trails documentation.
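A low-effort way to keep that paper trail usable is to persist the decisionId alongside the content it judged. A sketch, assuming a post record shape of your own design (the moderationDecisionId column is an illustration, not a Vettly requirement):

```typescript
interface ModerationResult {
  action: 'allow' | 'flag' | 'block';
  decisionId: string;
}

interface PostRecord {
  id: string;
  text: string;
  moderationDecisionId: string; // look this up on appeal or App Review questions
}

// Attach the Vettly decisionId to the stored post so you can
// answer "what happened to this post?" from your own database.
function toPostRecord(id: string, text: string, result: ModerationResult): PostRecord {
  return { id, text, moderationDecisionId: result.decisionId };
}
```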
What to Write in App Review Notes
App Store Connect has a "Notes" field visible only to the review team. Use it to clearly explain how your app meets each sub-requirement. Reviewers read hundreds of submissions, so make it easy for them to find what they need.
```
Content Moderation: This app uses Vettly (https://vettly.dev) for content moderation.
- Filtering: All UGC is screened via the Vettly API before display (Guideline 1.2.1)
- Reporting: Users can flag content via [describe your report button/flow] (Guideline 1.2.2)
- Blocking: Users can block other users via [describe your block flow] (Guideline 1.2.3)
- Contact: [your-email@example.com]
```
For a more detailed template and compliance checklist, see the Guideline 1.2 Compliance page or the App Store compliance docs.
Common Rejection Reasons
Even developers who implement moderation sometimes get rejected. Here are the most common pitfalls:
- Client-side-only filtering. Word lists in your app binary are not enough. Apple wants server-side moderation that works even if the client is bypassed.
- Missing report UI on some content types. If your app has posts, comments, and profiles, every one of those needs a report option, not just posts.
- Block doesn't actually block. Blocking a user must prevent them from appearing in feeds, messages, and search. A cosmetic block that only hides the button will be caught.
- No explanation in App Review Notes. If the reviewer can't find your moderation system, they may reject first and ask questions later. Always explain your setup in the notes field.
- Images and video unchecked. Text filtering alone isn't enough if your app accepts photos or video. Use multi-modal moderation (Vettly supports text, image, and video in one API).
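To avoid that last pitfall, send every modality through moderation, not just the caption text. A hedged sketch of building one check request per attachment; the imageUrl field is an assumption about the request shape, so confirm the actual multi-modal parameters against Vettly's API reference:

```typescript
interface CheckRequest {
  content?: string;
  imageUrl?: string; // illustrative field name, not the confirmed Vettly schema
  policy: string;
}

// Build one check request for the caption plus one per image,
// so photos are never skipped when only the text is screened.
function buildCheckRequests(text: string, imageUrls: string[]): CheckRequest[] {
  const requests: CheckRequest[] = [{ content: text, policy: 'community-safe' }];
  for (const url of imageUrls) {
    requests.push({ imageUrl: url, policy: 'community-safe' });
  }
  return requests;
}
```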
Related Implementation Guides
Guideline 1.2 checklist
Review the full compliance checklist for filtering, reporting, blocking, and audit trails.
React Native moderation
Add moderation flows to a cross-platform mobile stack without custom backend glue.
Swift moderation
Ship native iOS moderation that stands up to App Review and production scale.
Ready to pass Guideline 1.2?
Vettly covers all four requirements with a single integration. Start with the free tier and ship compliance before your next App Review.