Rethinking AIGC Disclosure Across Platforms
A reflection on labels, context and audience perception
Context: I ran an experiment to test how different AI video formats (avatar-led vs. faceless styles) affect media performance across platforms, using the same motivational content. During the first few days of creating and publishing, I was somewhat concerned about AI-generated content (AIGC), since I wasn’t sure how the audience or the platforms would respond to it.
Platforms’ Cautious and Exploratory Positions
AIGC Label
Both TikTok and YouTube Shorts include a labeling option in the publishing flow. For Instagram, however, I only found a brief guideline in the Help Center and nothing within the app or on the web interface.
Flow: If I remember correctly, TikTok automatically displayed an AIGC label notice the first time I tried to publish, while YouTube did not. In both apps, labeling remains largely voluntary for creators, as the option is tucked away under the “Show more” section.
Disclosure Responsibility: In most cases, the disclosure is self-reported by the creators themselves.
Naming: TikTok uses the label “Creator labeled as AI-generated,” which is very direct and might raise concerns for some creators. In contrast, YouTube opts for the term “Altered or synthetic content,” which feels more accurate and neutral, though less explicit.
What to Label: TikTok requires a label for content that is significantly edited or generated by AI, including content using TikTok’s own effects or external tools like CapCut. This casts a fairly wide net, likely aiming for both transparency and responsibility. YouTube, on the other hand, provides clearer labeling guidelines. Creators are required to check “Yes” if the content meets one of three specific conditions, making the process more comprehensible and actionable.
Label Placement: YouTube usually displays the label in the video description, which is relatively low-key and a good balance for both creators and viewers. TikTok, by comparison, doesn’t offer a description page, so the label is more visible by default.
AIGC Creation Tool
TikTok offers a wide range of AI-powered creation tools, from in-app filters and effects to external apps like CapCut and Pippit (focused on e-commerce). CapCut AI supports nearly every stage of content production, from brainstorming to ad monetization: audio generation, captions, video editing, format optimization, avatars, special effects, and more. YouTube is still developing YouTube Create, with features powered by Veo. Many of the improvements showcased at the recent Google I/O will likely further empower YouTube’s creation tools.
All of these platforms have begun adding identifiers for AI-generated content created using their tools, and are accelerating efforts in content recognition. Cross-platform metadata sharing could play a crucial role in identifying potentially harmful AI-generated content and preventing it from being re-uploaded or distributed across multiple platforms.
Why Label AIGC at All?
Typically, content that is entirely generated by AI or significantly edited using AI tools is classified as AIGC. That sounds reasonable in theory, but who gets to define what counts as “significant”? For instance, if a podcast includes AI-generated background music, does the podcast qualify as AIGC? What if I use a cloned version of my own voice or an AI-generated avatar? If I write and refine the prompts myself, going back and forth to shape the output, should the authorship belong to the AI, to me, or should we be considered co-creators?
AI tools and applications are evolving at an exponential rate, and their involvement in content creation is only increasing. It’s incredibly difficult to define or categorize AIGC solely by the percentage, duration, or intensity of AI involvement. Moreover, these factors are closely interrelated, and their importance can vary greatly depending on the media format, distribution channel, creative purpose, audience interpretation, and other contextual elements.
Not all AIGC needs to be explicitly labeled, especially content that is neither misleading nor difficult to distinguish from reality. And while the definition of AIGC remains under debate, over-labeling can burden audiences with unnecessary information. Worse, it could lead to desensitization, diluting the impact of warnings when they truly matter. But if AI-generated content has the potential to mislead audiences or blur the line between reality and falsehood, then yes, it absolutely should be labeled.
AIGC labels are mostly seen on social media platforms and rarely appear elsewhere, such as in AI-generated code, PRDs, or short plays. This makes sense to some extent, since social media is open to everyone with a broad and diverse user base. Because anyone can share and access content freely, reminders or labels serve as a form of protection during distribution. Thus, media platforms have gradually taken the first step in labeling AI content. However, the conversation around AIGC should not be limited to media. Even when there’s no clear misuse, as in AI storytelling, I’ve still seen negative comments about AI. So what else are people really complaining about?
Concern that the author isn’t putting in enough effort? That the content quality is mass-produced or shallow?
Dislike for the “machine-like” feel of content? I actually see this as a positive sign: people still value authenticity and warmth. But if you really want something genuine, maybe it’s time to put down your phone instead of endlessly scrolling social media…
Fear, resistance, or rejection stemming from how rapidly AI is developing?
Discomfort, disdain, or exclusion toward AI as something human-like yet not human, different from ourselves? Unfamiliarity or unease with this “otherness”?
Effective AIGC Label for Warning
If it’s necessary to use an AIGC label to prevent misunderstandings or the spread of uncertain information, a simple “AI-Generated Content” tag is not enough. From a conceptual and principled perspective, what else do we need?
Dynamic Leveling: Just like content moderation or quality control, AIGC alerts should have varying levels based on factors such as the degree of AI involvement, topic sensitivity, content type, and timing. For example, AI-generated impersonations of presidents are widespread but must be tightly restricted during election periods.
Specific Warnings: The purpose of these labels is to alert people to potentially misleading information, not merely because it’s AI-generated. For instance, if a post contains medical advice, the label might say: “AI-generated content for reference only; please consult a doctor if you feel unwell.” For posts about extreme sports, it could read: “AI-generated content: this activity carries risks; perform only under professional supervision.” So the warning becomes not just a technical disclosure, but a context-aware prompt that encourages critical thinking and informed decision-making based on the content’s nature.
Detailed Disclosure: For example, consider a post about a war between countries A and B. While the war itself is real, some images or music in the post might be exaggerated or fabricated by AI. In such cases, it’s more helpful for creators to specify exactly which parts were AI-edited, allowing viewers to better understand and interpret the information.
Additional Verification: Returning to the war video example, if only the music and some images are fake but the news itself is accurate, audiences may still worry about the facts. Providing links to official sources or trusted accounts, or offering AI assistants, can help users verify information and access relevant, reliable data.
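The ideas above (dynamic leveling plus topic-specific warnings) can be sketched as a toy policy function. Everything here is hypothetical: the thresholds, topic names, and warning strings are illustrative assumptions, not any platform's actual rules.

```python
# Illustrative sketch only: a toy policy mapping content attributes to a
# disclosure level and a context-aware warning. All thresholds, topic
# names, and messages are hypothetical assumptions for this example.
from dataclasses import dataclass

SENSITIVE_TOPICS = {"medical", "elections", "extreme_sports"}  # hypothetical


@dataclass
class Content:
    ai_involvement: float   # 0.0 (none) .. 1.0 (fully AI-generated)
    topic: str
    election_period: bool = False


def disclosure_level(c: Content) -> int:
    """Return 0 (no label), 1 (standard label), or 2 (prominent warning)."""
    if c.ai_involvement < 0.2:        # minor AI assistance: no label needed
        return 0
    if c.topic in SENSITIVE_TOPICS:   # sensitive topics escalate the level
        return 2
    if c.topic == "elections" or c.election_period:
        return 2                      # timing also escalates (election periods)
    return 1


def warning_text(c: Content) -> str:
    """Pick a topic-specific warning instead of a generic 'AI-generated' tag."""
    level = disclosure_level(c)
    if level == 0:
        return ""
    if c.topic == "medical":
        return ("AI-generated content for reference only; "
                "please consult a doctor if you feel unwell.")
    if c.topic == "extreme_sports":
        return ("AI-generated content: this activity carries risks; "
                "perform only under professional supervision.")
    if c.topic == "elections":
        return "AI-altered political content; verify with official sources."
    return "Altered or synthetic content."
```

The point of the sketch is the shape of the decision, not the specific numbers: involvement sets a floor, topic and timing escalate it, and the final message is chosen per topic rather than being a one-size-fits-all tag.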
As AI becomes a more integral part of content creation, the way we label and interpret AIGC will shape not just viewer perception, but creator responsibility and platform accountability. Labels alone won’t solve trust issues or ethical concerns, but thoughtful and transparent communication just might. The future of AIGC disclosure isn’t just about tagging content; it’s about building a shared understanding of how we create, share, and consume in an AI-assisted world.