Designing Misinformation Interventions for X
| YEAR | 2024 |
| ORG | University of Washington, HCDE |
| ROLE | Researcher & Designer |
| TYPE | Research + Design |
| DURATION | 10 weeks (Spring 2024) |
| TEAM | Wilson Chen, Nina Lutz, Ben Yamron |
| METHODS | Literature review, ideation & concept scoring, medium-fidelity prototyping (Figma), think-aloud user testing |
| STATUS | Complete |
Overview
Every intervention strong enough to meaningfully reduce misinformation on a social platform is also strong enough to be called censorship. That tension — designing moderation features for an audience primed to see moderation as an attack — was the central problem this project was built around.
In HCDE 598: Designing for Trust, our three-person team designed, prototyped, and user-tested three platform interventions for misinformation on X (formerly Twitter). We used a values-centered iterative design process: establishing success criteria from the literature, brainstorming against them, building medium-fidelity Figma prototypes, and refining based on think-aloud testing with five users. The result was three intervention proposals — each targeting a different moment in the misinformation lifecycle, and each representing a different answer to the censorship tension.
Problem
Misinformation on X is structurally entrenched. Algorithmic feeds reward engagement, and inflammatory or false content reliably generates it. X's "free speech" brand positioning attracts audiences that are particularly critical of moderation — and likely to frame interventions as censorship. Meanwhile, platform trust among the public is low and declining, and legacy media trust is eroding in parallel.
Top-down removals (like the Trump account ban after January 6th) generate significant backlash. Community-led approaches like Community Notes exist but have limited reach. The core question: what moderation interventions can meaningfully reduce misinformation without triggering the censorship response that undermines them?
We scoped our work to high-profile national events — a space where misinformation is both high-stakes and fast-moving. For user testing, we used a specific false claim: that National Guard soldiers were called to the University of Texas at Austin during pro-Palestinian campus protests in spring 2024. This claim had circulated on X and was definitively debunked by the Associated Press, giving us a testable, verifiable, politically charged — but not partisan — scenario.
Process
We used a six-stage iterative design process: literature review → ideation and success criteria → medium-fidelity mockups → user testing → analysis and refinement → synthesis and final recommendations. I contributed to every stage: literature synthesis, ideation and concept scoring, Figma prototyping, running user test sessions, analyzing findings, and writing the final report.
Success criteria
The literature review surfaced three initial success criteria: User Experience and Affect, Trust and Facts, and Adaptability and Scalability. We used these to evaluate our initial concepts and to structure our user test questions, eventually translating them into five specific, discussable design values: Free Speech, Reducing Bias, Transparency, Societal Impact, and User Effort. These became the lens through which we evaluated every concept in user testing.
Ideation and concept selection
We brainstormed against the success criteria, producing a field of ideas that we scored and discussed as a team. Six were developed in greater detail through sketching:
- Extending Community Notes with notifications when a note changes
- More descriptive labels and options in Community Notes
- Interstitial cover warning users of potential manipulation
- Preventing sharing of posts under fact-checking review
- Algorithmically rewarding nuanced posts with badges
- Tracking developing stories with journalistic curation
After scoring and discussion, we combined ideas 1 and 2 and moved forward with three concepts for prototyping: Community Notes Notifications, a Quarantine Feature, and a Developing Story Tracker. Each targeted a different point in the misinformation lifecycle — correction after the fact, containment at the moment of spread, and context-setting before exposure.
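The scoring step can be illustrated with a minimal sketch: rate each idea against the three literature-derived criteria, weight, and rank. All weights and scores below are hypothetical placeholders, not our actual team ratings.

```python
# Hypothetical concept-scoring sketch: rate each idea 1-5 against the
# three literature-derived criteria, then rank by weighted total.
# All numbers are illustrative, not the team's actual scores.
CRITERIA = {"ux_affect": 0.4, "trust_facts": 0.4, "adaptability": 0.2}

scores = {
    "Community Notes notifications": {"ux_affect": 4, "trust_facts": 5, "adaptability": 4},
    "Quarantine feature":            {"ux_affect": 2, "trust_facts": 4, "adaptability": 4},
    "Developing story tracker":      {"ux_affect": 5, "trust_facts": 4, "adaptability": 3},
}

def weighted_total(ratings):
    """Sum of criterion weight x rating for one concept."""
    return sum(CRITERIA[c] * ratings[c] for c in CRITERIA)

# Rank concepts from highest to lowest weighted score
ranked = sorted(scores, key=lambda name: weighted_total(scores[name]), reverse=True)
```

A matrix like this is a discussion aid, not a decision rule — our actual selection combined scores with team discussion, which is how ideas 1 and 2 ended up merged.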
Three Concepts
Concept 1 — Community Notes Notifications
Community Notes is X's existing community-based correction system, but it has a critical gap: if a user reads a post before a Community Note is added, they never see the correction. This concept addresses that gap by notifying users when context is added to a post they previously viewed.
The standard user journey: a user views a post with unlabeled misinformation → a day later, they receive a notification that a post they interacted with has received additional context from community members → they read the note and update their interpretation of the original post. Secondary features included a category label on the contextualized post (e.g., "Misleading — Factual Error") and a history view for posts that receive multiple corrections.
Concept 2 — Quarantine Feature
Some content needs to be slowed before moderation teams can review it — particularly when it has the potential to cause immediate harm. The Quarantine Feature withholds potentially misleading posts from the algorithmic "For You" feed and search results while review is underway, though they remain visible in the "Following" feed. Users cannot like, share, or comment on quarantined posts, but they can subscribe to be notified when a decision is made.
Three review outcomes are possible: the post is removed for violating guidelines, context is added for being potentially misleading, or the post is cleared entirely. The feature was designed to be transparent about this process — users see that a post is under review and understand that the platform is making an active determination, not silently suppressing content.
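The quarantine lifecycle described above can be sketched as a small state model. The class and attribute names are illustrative assumptions for this write-up, not X's implementation.

```python
from enum import Enum, auto

class ReviewOutcome(Enum):
    REMOVED = auto()        # violated platform guidelines
    CONTEXT_ADDED = auto()  # potentially misleading; note attached
    CLEARED = auto()        # returned to full distribution

class QuarantinedPost:
    """Illustrative model of a post withheld from 'For You' and search
    while under review; it remains visible in the 'Following' feed."""

    def __init__(self, post_id):
        self.post_id = post_id
        self.outcome = None       # None while review is pending
        self.subscribers = set()  # users who asked to be notified

    @property
    def in_for_you_feed(self):
        # Hidden while pending or removed; restored only when cleared
        return self.outcome is ReviewOutcome.CLEARED

    @property
    def interactions_enabled(self):
        # Likes, shares, and comments stay disabled until cleared
        return self.outcome is ReviewOutcome.CLEARED

    def subscribe(self, user_id):
        self.subscribers.add(user_id)

    def resolve(self, outcome):
        """Record the moderation decision; return users to notify."""
        self.outcome = outcome
        return sorted(self.subscribers)
```

The transparency goal shows up in the model: the pending state is explicit and observable, and subscribers are notified of whichever outcome is reached rather than content silently disappearing.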
Concept 3 — Developing Story Tracker
Rather than addressing false content directly, this concept focuses on surfacing credible information and providing a frame for users to interpret events as they unfold. The Developing Story Tracker layers a journalist-curated "Analysis" tab on top of trending topics. The journalist curates credible posts and adds commentary; the "Latest" tab remains for users who want unfiltered real-time posts. The goal, drawing on Kate Starbird's framing, was not to create better facts but better frames through which to interpret events.
This approach is notably lower-risk for the platform: posts are not removed, the "free speech" objection is harder to invoke, and the curator burden falls on journalists rather than platform moderators. X benefits from keeping news-seeking behavior on-platform rather than losing users to CNN or other news sites for breaking updates.
User Testing
We recruited five participants with varied backgrounds, X usage patterns, and demographics to evaluate all three concepts through think-aloud sessions. Participants included a software engineer, a Latino man from Texas with medium-to-high X usage, an urban planner in Washington DC, a medical researcher in Seattle, and a PhD student in Seattle. Ages were in the 20s, up to 28; gender was varied across male, female, and nonbinary.
We evaluated each concept against all five design values through structured questions, followed by concept-specific questions about clarity and mental model accuracy. Each session followed the same protocol: baseline questions about X usage and misinformation attitudes → walkthroughs of each prototype → concluding questions about platform responsibility.
We acknowledge this sample skews toward educated, left-leaning, urbanite users — a significant limitation for research on politically charged content moderation. More diverse political representation would be essential for any future work.
Findings and Iterations
Community Notes Notifications
The notification concept was the most uniformly well-received. Users saw it as an augmentation of how they already use X — keeping up with evolving stories and not spreading things they later learn are wrong.
"I would appreciate being notified if something I read is factually incorrect or has other errors because oftentimes I bring this up in discussion and it would be embarrassing to be wrong."
The primary concern was notification overload: if the trigger was simply viewing a post, users feared being flooded. A secondary finding was that users had wildly varied mental models of how Community Notes actually works — users who understood it were more confident in its corrections; those who didn't were skeptical about contributor bias.
Iterations: We changed the notification threshold from "viewed" to "interacted with" (liked, retweeted, or commented on), significantly narrowing the trigger. For note updates, we set a higher standard — only evolving guidance from verifiable, authoritative sources would trigger an update, not evolving rumors. We also added a direct link to X's Community Notes overview, reducing the friction for users who want to understand the system before trusting it.
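The iteration on the trigger — from "viewed" to "interacted with" — can be sketched as a filter over engagement events. The event names and log shape here are assumptions for illustration, not X's API.

```python
# Sketch of the refined notification trigger: only users who actively
# engaged with a post (liked, reposted, or commented) are notified when
# a Community Note is added. Event names are illustrative assumptions.
ACTIVE_ENGAGEMENTS = {"like", "repost", "comment"}  # "view" deliberately excluded

def users_to_notify(engagement_log, noted_post_id):
    """engagement_log: iterable of (user_id, post_id, action) tuples."""
    return {
        user_id
        for user_id, post_id, action in engagement_log
        if post_id == noted_post_id and action in ACTIVE_ENGAGEMENTS
    }

log = [
    ("ana", "p1", "view"),     # viewed only: no notification (avoids overload)
    ("ben", "p1", "like"),
    ("cam", "p1", "repost"),
    ("dee", "p2", "comment"),  # engaged with a different post: not notified
]
```

Narrowing the trigger set is the whole design move: the same pipeline with `"view"` included reproduces the overload problem participants feared.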
Quarantine Feature
Users understood the appeal of quarantine — it targets the infrastructure of spread rather than requiring active user behavior — but had significant concerns about free speech, bias, and adversarial use.
"I think this could be good but yea…screenshots. I feel like it's almost a challenge for them like 'oh you don't want me to share this I'm gonna prove you wrong and share it'...they really think the radical Left is out to get them."
Users also raised concerns about harmful applications: quarantine-by-reporting could be weaponized against queer communities, artists, activists, and small businesses. The reporting mechanism itself was seen as a vector for bias.
Iterations: We restricted quarantine initiation to the platform's moderation team — users can still flag posts for review, but only moderators can actually quarantine. This addresses the weaponization concern. We also specified that quarantine should only be invoked when content has the potential to cause immediate harm, not as a general tool. For the post-removal flow, user testing helped us select the appropriate "Learn More" screen: users who followed an account wanted to still be able to see removed content in a dedicated moderation center view, supporting transparency without allowing the content back into regular feeds.
Developing Story Tracker
This concept fit most naturally into users' existing behaviors and received the warmest reception overall. Users already use X to follow breaking news; the Story Tracker streamlines this without forcing behavior change.
"I would use this as a reference when debating my family during stuff in group texts. My parents are smart but sometimes they share and believe crazy sh*t. This could automate it for me to send them here."
The main concern was the "Analysis" tab being shown by default, which felt like the platform asserting an editorial perspective — "narrating rather than having me research for myself." Users still said they would read it, but the default was contested.
Iterations: We clarified the entry-point language ("Follow Story" was confusing given the existing meaning of "follow" on the platform) and added a Share button to the developing story so users could send the journalist-curated view directly to others — turning the feature into a personal debunking tool. We also recommended that X implement policy around journalist selection, prioritizing outlets with minimal perceived political lean or facilitating cross-perspective collaborations.
Takeaways
Three design principles emerged from the process:
Agency distribution determines reception. The Quarantine Feature was the most technically aggressive intervention and generated the most resistance — not because users disagreed with the goal, but because it concentrated authority in the platform. Community Notes Notifications and the Story Tracker both distribute agency across communities, journalists, and users. Interventions that augment user agency are better received than those that replace it.
Notification design is misinformation design. Even for users who valued staying corrected, the prospect of notification overload was enough to undermine trust in the system. An effective correction that users learn to ignore is no better than no correction. Any notification-based intervention must budget carefully for users' attention.
Mental models shape trust as much as the intervention itself. Users who understood how Community Notes worked trusted the corrections it produced. Users who didn't were skeptical regardless of the content. Designing effective moderation features also means designing for comprehension of the moderation system — these can't be separated.
Reflection
The most significant limitation is sample diversity. Five participants, all educated and urbanite, all tested on a politically complex but not explicitly partisan topic — this leaves significant uncertainty about how any of these interventions would land with users who have strong partisan identities or an ideological commitment to "free speech" as a value above accuracy. That population is a meaningful proportion of X's user base and the one most likely to push back on moderation features. Future work should prioritize recruiting across political identity.
The medium-fidelity, non-interactive prototypes also limit the ecological validity of the findings. Encountering a Quarantine notification while mindlessly scrolling is meaningfully different from observing it attentively in a research session. Higher-fidelity interactive prototypes — ideally tested within a simulated real X session — would surface different reactions.
What we got right: the values-centered framework was genuinely useful for structuring both ideation and testing. Using a real, debunked claim as the test stimulus grounded the sessions and gave participants a concrete reference point. And finding that each concept maps to a distinct theory of change — correct retroactively, contain at spread, contextualize before exposure — gave us a useful vocabulary for recommending when to deploy which tool.