Overview
We use a two-layer moderation system for Seedance video tasks to balance content safety with API availability:

- Layer 1 (Pre-submission): We screen prompts and input images before sending the task upstream. Clearly prohibited content such as explicit sexual material or graphic violence is blocked immediately. Blocked requests are fully refunded.
- Layer 2 (Post-submission): Some tasks may still be rejected by ByteDance’s review system after submission. In that case, the task cost is refunded minus a small violation surcharge, which increases with repeated daily violations on the same account.
This policy exists to reduce repeated borderline submissions that can hurt upstream account health and degrade service availability for everyone.
Layer 1: API-Level Content Moderation
This moderation happens before a task is submitted to Seedance.

- Trigger: The prompt or input image is flagged by our moderation system
- Covered categories: sexual, violence, harassment, hate, self-harm, illicit
- Penalty: None. The task is fully refunded
- Purpose: Prevent obviously prohibited content from consuming upstream resources
When a request is blocked at this layer, the submit endpoint returns a response like this:
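An illustrative sketch of such a response follows. The field names and structure here are assumptions for illustration, not the authoritative schema; consult the API reference for the exact payload.

```json
{
  "error": {
    "code": "content_moderation_blocked",
    "message": "The prompt or input image was flagged by pre-submission moderation.",
    "categories": ["sexual"],
    "refunded": true
  }
}
```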
In this case, the task is rejected before upstream submission, so no post-submission surcharge applies.
Layer 2: Upstream Content Review
This moderation happens after the task has already been submitted to Seedance.

- Trigger: ByteDance rejects the task during its own content review
- Penalty: A tiered credit surcharge is deducted from the refunded credits
- Reset rule: The daily violation counter resets at midnight UTC
| Daily Violations | Credit Surcharge |
|---|---|
| < 20 | 1% of task cost |
| 20 – 100 | 5% of task cost |
| > 100 | 10% of task cost |
For example, in the < 20 daily violation tier, you receive 99% of the task cost back.
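The tier table above can be sketched as a small refund calculation. This is a client-side illustration of the published rates, not our billing implementation; the rounding behavior and function name are assumptions.

```python
def refund_for_violation(task_cost: float, daily_violations: int) -> float:
    """Credits refunded after a Layer 2 (upstream) rejection.

    Surcharge tiers mirror the table above: <20 violations -> 1%,
    20-100 -> 5%, >100 -> 10% of the task cost.
    """
    if daily_violations < 20:
        surcharge_rate = 0.01
    elif daily_violations <= 100:
        surcharge_rate = 0.05
    else:
        surcharge_rate = 0.10
    return task_cost * (1 - surcharge_rate)
```

For a 100-credit task with 5 violations so far today, this returns 99 credits, matching the example above.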
Notes
- The moderation service follows a fail-open strategy. If our moderation service is temporarily unavailable, the task proceeds normally.
- For content violations, the task’s `failure_reason` is `CONTENT_MODERATION`.
- Violation details, including triggered categories when available, are included in the task logs.
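Based on the notes above, a client can branch on `failure_reason` when a task completes. This is a minimal sketch assuming a JSON task payload with `status` and `failure_reason` fields; the overall payload shape and the function name are assumptions.

```python
def classify_failure(task: dict) -> str:
    """Classify a finished task using the documented failure_reason value."""
    if task.get("status") != "failed":
        return "ok"
    if task.get("failure_reason") == "CONTENT_MODERATION":
        # Refunded minus the tiered violation surcharge (Layer 2).
        return "content_violation"
    # Other failures, including opaque generation errors, incur no surcharge.
    return "other_failure"
```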
Content Moderation Tips
Understanding what each layer catches can help reduce avoidable rejections.

Layer 1: What Our Filter Catches
Layer 1 uses banned-word matching plus the OpenAI Moderation API. It is designed to catch standard unsafe content such as explicit sexual material, graphic violence, and similar categories. If a prompt would be flagged by a typical moderation tool, it will usually be blocked here and fully refunded.

Layer 2: What Frequently Triggers ByteDance Review
ByteDance’s review system is noticeably stricter and less predictable. Based on observed outcomes, the following categories frequently cause rejection:

- Real person depictions
  Content featuring real people is currently not supported. In some cases, using AI-generated fictional faces instead of real-person references may reduce rejection risk.
- Copyrighted or branded content
  References to brands, franchises, trademarked properties, movie studios, or game titles are likely to be rejected. Generic descriptions work better than explicit brand names.
- Political content
  Politically sensitive material is likely to be rejected.
- Visual-content review during generation
  You may sometimes receive `generation_failed` or `RUN_ERROR` without a detailed reason. Based on our observations, this likely means Seedance reviewed the generated frames or source materials and rejected the output visually, even though the prompt itself passed. Because the exact cause is never disclosed, this case does not incur any surcharge. If it happens repeatedly, revise the prompt or input materials.
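Since these opaque failures carry no surcharge but the cause is never disclosed, one reasonable client strategy is a bounded retry before revising the prompt or input materials. This is a sketch of that policy, not an official recommendation; the function name and retry limit are assumptions.

```python
def should_retry(failure_reason: str, attempts: int, max_attempts: int = 2) -> bool:
    """Retry opaque visual-review failures a limited number of times.

    Only generation_failed / RUN_ERROR are retried as-is; content
    violations should be fixed by changing the prompt, not retried.
    """
    opaque_failures = {"generation_failed", "RUN_ERROR"}
    return failure_reason in opaque_failures and attempts < max_attempts
```

After `max_attempts` opaque failures, revise the prompt or input materials rather than resubmitting unchanged.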