Disregard the number?
Users don't.
Why "just ignore the price" microcopy silently fails — and what to design instead when a single number can hijack a user's judgment.
The number you saw is the number you'll use.
This study tested whether users can consciously ignore a biasing number once they've been exposed to it — a direct challenge to the "fine print fixes everything" assumption baked into countless product flows.
The findings reshape how teams should think about price displays, default values, form placeholders, and any numeric framing that shapes user judgment. Spoiler: a corrective disclaimer is not a fix.
The "just ignore it" assumption.
Product teams routinely assume that disclaimers, "starting from" labels, and corrective microcopy neutralize a misleading reference point.
But there was no clear answer to a foundational question: can users actually disregard a number once they've seen it — even when explicitly told to?
If the answer is no, a huge swath of accepted UX patterns is quietly steering user behavior in ways teams don't realize.
A clean 2×2 on numerical judgment.
A controlled between-subjects experiment with 100 participants — structured like a classic A/B/C/D test.
- SETUP Four user groups, randomly assigned, each shown the same numerical estimation task.
- VAR. 1 Anchor magnitude — a high reference point (65%) vs. a low reference point (10%).
- VAR. 2 Instruction — a "please disregard this number" prompt vs. no prompt (control).
- METRIC The estimate users provided, measured against the anchor they were exposed to.
- LOGIC How far did each group drift toward its assigned anchor, and did the disregard prompt actually change anything?
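The 2×2 logic above can be sketched as a tiny analysis script. Everything below is illustrative: the per-group estimates are made-up numbers chosen only to mirror the reported asymmetry, not the study's raw data, and the group sizes are shrunk for readability.

```python
from statistics import mean

# Hypothetical per-group estimates (in percentage points) -- NOT the study's
# data. Groups follow the 2x2 between-subjects design:
# (anchor magnitude: high=65 / low=10) x (instruction: disregard / control).
groups = {
    ("high", "control"):   [58, 61, 55, 60, 57],
    ("high", "disregard"): [57, 59, 56, 60, 55],
    ("low", "control"):    [14, 12, 16, 13, 15],
    ("low", "disregard"):  [25, 27, 23, 26, 24],
}

ANCHORS = {"high": 65, "low": 10}

def drift_from_anchor(condition: str, instruction: str) -> float:
    """Mean distance between estimates and the anchor; smaller = stickier anchor."""
    estimates = groups[(condition, instruction)]
    return abs(mean(estimates) - ANCHORS[condition])

# The key comparison: within each anchor condition, did the "please disregard
# this number" prompt actually push estimates away from the anchor?
for condition in ("high", "low"):
    shift = (drift_from_anchor(condition, "disregard")
             - drift_from_anchor(condition, "control"))
    print(f"{condition} anchor: disregard prompt moved estimates {shift:+.1f} pts away")
```

With these toy numbers the low-anchor groups show a large shift from the disregard prompt while the high-anchor groups barely move, which is the asymmetry pattern the study reports. A real analysis would also test whether each shift is statistically significant (e.g., a two-way ANOVA or per-condition t-test) rather than just comparing means.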
Four "aha" moments that change how you ship a number.
High anchors are sticky — even with explicit warnings.
Users told to ignore the high number produced estimates nearly identical to users who got no warning at all. The instruction did essentially nothing. Once the high number is in, it stays in.
Low anchors are suppressible.
When the anchor was low, the disregard instruction meaningfully shifted user estimates upward. Users could override the bias here — but only in this direction.
The effect is asymmetrical.
The most surprising pattern: a single mitigation strategy ("just tell users to ignore it") worked for one direction of bias but failed completely for the other. Disregard is not a symmetric tool.
Passive reading isn't enough — users need an active reason to push back.
Simply being told a value was wrong didn't trigger the deeper reasoning needed to override it. Effortful engagement is what unlocks correction. Designers have to earn the override.
Read this: The disregard instruction nudged low-anchor estimates by +10.8 points, but barely budged high-anchor estimates (a non-significant drift of just 4 points). The same mitigation, applied symmetrically, produced two completely different outcomes.
Six moves your team should make tomorrow.
Audit for high-ceiling anchors.
Inflated MSRPs, premium-tier prices listed first, and aggressive default selections will persist in user memory even when corrected later. Treat them as permanent influencers, not neutral context.
Don't rely on disclaimers alone.
Microcopy like "this is just an example" is not a reliable mitigation. If the framing matters, redesign the framing — don't paper over it with text.
Build in "consider-the-opposite" moments.
Prompts that ask users to actively reason against a default ("Why might this not be right for you?") consistently reduce bias in the literature.
Treat anchoring as an ethics question.
Onboarding flows, pricing tables, donation amounts, and tip selectors all carry anchoring power. Use it transparently — and flag exploitative patterns as dark patterns in design review.
Audit your own research instruments.
When running surveys or usability tests, watch for anchoring contamination in your own work. Rating-scale endpoints, example responses, and prior questions can quietly bias your data.
Default with the user's interest in mind.
If a default value will dominate the user's judgment regardless of warnings, the choice of default is the design decision. Pick the one that serves the user, not just the funnel.
So what?
Anchoring isn't an abstract cognitive curiosity — it's a direct revenue and trust lever.
It governs how users interpret pricing pages, evaluate plan tiers, set retirement contributions, choose insurance coverage, and tip on receipts. It is operating right now, in your product, whether or not you're tracking it.
That's a trust risk, an ethics risk, and, increasingly, a regulatory one. The asymmetry finding has a sharper edge: a quick disclaimer or corrective tooltip is not the safety net teams assume it is.
This work translates a foundational behavioral science finding into a practical design heuristic: every numeric value shown to a user is a commitment. Choose them deliberately, and don't assume you can take them back.
Video overview - Anchoring and Judgment Bias: Disregarding Under Uncertainty (Berg & Moss, 2022) [video created using NotebookLM, 2026]