ELVIS FERNANDES

04 · CASE STUDY · 6 MIN READ

Signal — AI Trust Layer

Cross-feed provenance as infrastructure: real-time detection rendered as glanceable trust signals, contextual transparency when you pull for it, and one adaptive grammar from Reels to long-form.

ROLE
Product Designer (concept)
YEAR
2026
TEAM
Independent · vision study
SURFACE
Cross-platform · Trust Signals · Adaptive
STATUS
Concept



01 · THE PREMISE

WHAT IT IS

Provenance, as a UI primitive.

This project came from a steady concern I had while using social feeds: how easily AI-generated misinformation can blend into everything else, and how little shared language we have for knowing what to trust. I wanted to sketch a trust layer that could help someone quickly distinguish what reads as credible, what may be synthetic or assisted, and what simply needs more context — without disrupting the cadence of browsing.

Signal sketches how platforms could surface AI-generated and AI-assisted content inside the feed itself: a single, interoperable cue that travels with media instead of burying attribution in menus or moderation screens.

The framing is pragmatic. When misinformation and synthetic media move at feed speed, people need cues that compound as they skim — transparency without interrogation, voluntary depth when doubt appears. Signal treats provenance less like philosophy and more like a legible overlay on today's browsing behavior.

Early tile studies — where the mark lives before chrome, overlays, or headlines compete for attention.
02 · THE LENS

THE REAL QUESTION

What does trust look like at a glance?

Today's trust tooling is fragmented: community notes, verification marks, fact-check labels, breaker screens — each a different UX contract, vocabulary, and volume. Useful, but rarely composable across platforms and rarely calibrated for continuous scrolling.

Signal proposes the opposite stance for that layer: a quiet, ambient, peripheral mark that earns meaning through consistency, reveals depth only on intent, and never shouts louder than the content it annotates.

03 · THE THINKING

PROCESS

Designed across three feeds, then abstracted.

I built three feed-native mocks first — long-form text, dense image timelines, TikTok/Reels-length vertical stacks. Each pass forced placement, motion, and density rules that abstraction alone would miss. Only when all three survived real scroll speeds did the shared primitive stabilize.

Product definition spread — taxonomy, feed constraints, and detection scenarios traced in one working surface.

From there Claude became a blunt instrument for adversarial QA: synthesized voice over real footage, screenshot chains, illustrative AI sold as documentary. Each failure mode became either a discrete state or a deliberate gap in disclosure — the system exposes what detection can justify, nothing more.

PROMPT · ADVERSARIAL EDGE CASES
Claude · stress test
You are an adversarial reviewer of a proposed trust UI called Signal. It claims to mark content with: ORIGIN (human/AI/mixed), MODEL (named), REVIEW (none/automated/human), and HISTORY (1st/repost/derivative). Give me 10 content scenarios where this taxonomy collapses or misleads. Be specific. Include at least 2 cases where the *absence* of a label is more misleading than the wrong one.

Two answers rebuilt the UX: treat model absence as a readable state instead of emptiness, and keep expansion user-initiated so detection never preempts someone's reading tempo. What's on the timeline stays thin; disclosure depth only opens after an explicit gesture.

04 · THE SYSTEM

STRUCTURE

Four axes, one mark.

Content maps to four axes: Origin, Model lineage, Review posture, and History (original, repost, or derivative). Discrete states collapse into one glyph anchored like a credibility affordance; adaptive density hides secondary facets until hover, focus, or tap.
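A minimal TypeScript sketch of how those four axes might be typed and collapsed into a single glyph tier. Every name, state, and tier rule here is illustrative, not part of the concept's spec; the one deliberate echo of the design is that an unknown model reads as its own state rather than as emptiness.

```typescript
// Hypothetical taxonomy for Signal's four axes; all names are assumptions.
type Origin = "human" | "ai" | "mixed";
type Model = { name: string } | { state: "unknown" }; // absence is a readable state
type Review = "none" | "automated" | "human";
type History = "original" | "repost" | "derivative";

interface Provenance {
  origin: Origin;
  model: Model;
  review: Review;
  history: History;
}

// Collapse four discrete axes into one glanceable tier for the glyph.
// Secondary facets stay hidden until hover, focus, or tap reveals them.
function glyphTier(p: Provenance): "quiet" | "flag" | "caution" {
  if (p.origin === "human" && p.review === "human") return "quiet";
  if (p.origin === "ai" && p.review === "none") return "caution";
  return "flag";
}
```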

Mobile keeps the cue tight and tap-expandable; desktop allows hover choreography without autoplay banners. Density scales down on fast-vertical video, relaxes slightly on Instagram-style tiles, and lengthens subtly on long-form surfaces where thumbnails sit longer — same grammar, tuned to dwell time and gesture vocabulary.
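The per-surface tuning could be expressed as a small configuration table. The surfaces mirror the three feeds described above; the numeric scales and facet counts are invented placeholders, not measured values.

```typescript
// Illustrative density tuning keyed to dwell time; numbers are assumptions.
type Surface = "vertical-video" | "grid" | "long-form";

interface CueDensity {
  scale: number;         // relative glyph size against a baseline of 1.0
  facetsVisible: number; // secondary facets shown before any expansion
}

const DENSITY: Record<Surface, CueDensity> = {
  "vertical-video": { scale: 0.8, facetsVisible: 0 },  // fast skim: cue only
  "grid":           { scale: 1.0, facetsVisible: 1 },  // browse: one facet
  "long-form":      { scale: 1.15, facetsVisible: 2 }, // linger: slightly more depth
};
```

Keeping the grammar in one table is the point: the same four axes render everywhere, and only these two knobs move per surface.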

System anatomy — how four orthogonal claims compress into one legible footprint.
05 · THE PRODUCT

THE FEEDS

Three platforms, one grammar.

On TikTok-style stacks the mark nests near captions and survives rapid skip behavior — cue visible on the first beat, expandable only if someone pauses. Instagram-style grids pull the cue closer to thumbnails where intent forms before fullscreen. Long-form thumbnails (YouTube, essays, Substacks-in-feed) widen the substrate so disclosure can whisper alongside titles without hijacking skim reading.

Platform adaptation collage — placements tuned to pacing: vertical skim, dense grid thumbnails, lingering long-form previews.

Contextual disclosure depth follows interaction, not autoplay overlays: quick tap or hover peels into provenance lineage, reviewer notes when they exist, and human-readable explanations of classifier confidence ranges. Interaction-triggered transparency keeps feed-native pacing intact while signaling that richer truth is reachable.
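That interaction contract can be sketched as a tiny progressive-disclosure model: depth only advances on an explicit gesture, and an ignored cue stays at depth zero. The gesture names and depth levels are assumptions layered on the description above.

```typescript
// Hypothetical disclosure-depth model; gestures and levels are illustrative.
type Gesture = "hover" | "tap" | "expand";

const DEPTH_FOR: Record<Gesture, number> = {
  hover: 1,  // peel into provenance lineage
  tap: 2,    // reviewer notes, when they exist
  expand: 3, // classifier-confidence explanation in plain language
};

function nextDepth(current: number, gesture?: Gesture): number {
  if (!gesture) return current; // nothing animates toward the user unsolicited
  return Math.max(current, DEPTH_FOR[gesture]); // depth is summoned, never pushed
}
```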

Three native readings — skim, browse, linger — stitched by the same expandable affordance.

06 · THE CRAFT

CLOSE-UP

The expand interaction.

The expand gesture is annotation, not modal chrome: a recessed panel, set in mono like a marginal note, that names each claim and how it was derived. Hover on desktop lifts it gently; tap on phones pins it beside the thumbnail without jumping the viewport.

Motion stays quiet enough to disappear when ignored. Nothing animates toward the user unsolicited — initiation always reads as deliberate, transparency as something you summon.

07 · THE OUTCOME

RESULT

What the concept argues for.

Signal is a concept, not a shipped product. The artifact pair is deliberate: this page keeps the curated narrative tight while an interactive prototype and a design archive carry flow-level proof and iterative residue.

The value is in prototyping trust infrastructure ahead of consensus standards: proposing interaction patterns platforms could converge on before regulators or SDKs prescribe them. Designing that layer explicitly is how product craft steers eventual platform behavior.

08 · REFLECTION

WHAT I LEARNED

Small marks carry the future.

The invisible work is interoperability: glyphs that behave predictably regardless of TikTok choreography, IG density, or YouTube pacing. Investing there reframed AI interfaces more broadly — I now sketch detection and disclosure as choreography problems, not static badge systems.

Trust, in practice here, is interaction design inside hostile attention budgets. Keeping that small surface honest changed how I approach model-forward UI: prototype the infrastructure mark first, negotiate visual delight second.