Disregard the expert?
Users don't.
Why a "trusted" source silently quadruples cognitive bias — and what to design when the messenger is doing more work than the message.
The messenger changes the math.
This study tested whether the perceived credibility of a source meaningfully changes how strongly its number pulls a user's judgment off course — a direct challenge to the assumption that an "expert-attributed" value is a neutral reference point.
The findings reshape how teams should think about AI-generated suggestions, expert-branded defaults, premium-tier pricing, and any numeric value attached to a source cue. Spoiler: trust is a multiplier, not a safeguard.
The "credible source" assumption.
Product teams routinely treat expert-attributed numbers as informational — a way to help users decide. But helping and steering live very close together.
Prior research showed users can't override an anchor even when explicitly told to ignore it. That left a sharper, more practical question: does it matter who delivered the number?
If credibility silently amplifies bias rather than reducing it, an entire generation of AI-suggestion UX, premium pricing patterns, and "trusted source" labels is steering user behavior in ways teams aren't tracking.
A clean 2×2 on source & bias.
A controlled between-subjects experiment with 88 participants — four cells, randomly assigned, structured like an A/B/C/D test crossing the number with its messenger.
- SETUP Four user groups, randomly assigned, each shown the same numerical estimation task with a different combination of two variables.
- VAR. 1 Anchor magnitude — a high reference value (65%) vs. a low reference value (10%) attached to the same task.
- VAR. 2 Source credibility — the anchor was attributed to political science professors (high credibility) or middle-school students (low credibility).
- CHECK A 7-point credibility rating served as a manipulation check, confirming the source framing landed as intended before the primary analysis ran.
- METRIC The estimate users provided, measured against the anchor they were exposed to and the source they were told it came from.
- LOGIC How far did each group drift toward its assigned anchor, and did the credibility cue change the size of that drift?
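For teams who want to run the same read on their own data, here is a minimal analysis sketch in Python. The data frame, column names, and values are hypothetical placeholders, not the study's dataset; only the 43-point and 12-point spreads quoted in the comments come from the reported results.

```python
import pandas as pd

# Hypothetical per-participant results; column names and values are
# illustrative placeholders, not the study's actual data.
df = pd.DataFrame({
    "source":   ["credible", "credible", "non_credible", "non_credible"],
    "anchor":   ["high", "low", "high", "low"],   # 65% vs. 10% reference value
    "estimate": [68.0, 25.0, 40.0, 28.0],         # the user's final estimate
})

# Anchoring spread per credibility condition:
# mean(high-anchor estimates) minus mean(low-anchor estimates).
# A wider spread means the anchor pulled harder.
means = df.groupby(["source", "anchor"])["estimate"].mean().unstack("anchor")
spread = means["high"] - means["low"]
print(spread)

# The reported spreads: 43 points (credible) vs. 12 points (non-credible),
# i.e. the credibility cue alone amplified anchoring by 43 / 12 ≈ 3.6x.
print(43 / 12)
```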
Four "aha" moments that change how you ship a source.
Trusted sources nearly quadrupled the bias.
With a credible source, the spread between low- and high-anchor estimates was 43 points. With a non-credible source, the same spread shrank to just 12 points. Same numbers. Same task. The messenger did the rest.
Skepticism is a built-in self-correction.
When users distrusted the source, they spontaneously generated counter-evidence — without being prompted. The bias mitigation came from framing, not from instructing the user to think harder.
Implicit framing beats explicit warning.
"Ignore this number" consistently fails. Reframing the source as untrustworthy consistently works. Context delivered as narrative outperforms context delivered as directive.
Bias scales directly with belief.
The more credible users found the source, the closer their estimates moved toward the anchor. Trust isn't a switch — it's a dial. Every notch up amplifies the pull of the number (a check sketched in code below).
Read this: The credibility cue widened the anchoring spread from 12 points in the non-credible condition to a striking 43 points in the credible condition. Same numbers. Different messengers. 3.6× the bias.
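"Bias scales with belief" is itself a testable claim: across participants, the 7-point credibility rating should correlate positively with how far each estimate moved toward its anchor. A minimal sketch, again with hypothetical values standing in for the real data:

```python
import pandas as pd

# Hypothetical per-participant data. "credibility" is the 7-point rating;
# "pull" is a stand-in measure of how far the estimate moved toward the
# assigned anchor (0 = no movement, 1 = landed on the anchor).
df = pd.DataFrame({
    "credibility": [1, 2, 3, 4, 5, 6, 7],
    "pull":        [0.10, 0.15, 0.30, 0.35, 0.55, 0.60, 0.80],
})

# If bias scales with belief, this Pearson correlation is strongly positive:
# every notch of trust buys the anchor more influence.
print(df["credibility"].corr(df["pull"]))
```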
Six moves your team should make tomorrow.
Audit pre-filled defaults for credibility cues.
Premium-feeling UX inflates user estimates more than discount UX. A trusted brand presenting a high reference value pulls willingness-to-pay further than the same number from a generic source. The chrome is part of the anchor.
Reveal source credibility before the number, not after.
Late-arriving warnings decay faster than the anchor itself — a documented sleeper effect. If the source matters to the user's evaluation, it has to arrive in time to be encoded with the value.
Surface uncertainty signals on AI-suggested values.
Users counter-reason against unreliable sources automatically. Confidence bands, hedging language, and visible model limits are not just transparency — they're active debiasing tools.
Match friction to the stakes of changing a default.
Pre-filled fields become sludge when clearing them costs more attention than accepting them. If the default will dominate the user's judgment anyway, the default is the design decision.
Delay source identification in high-stakes review.
For pricing committees, hiring rubrics, and clinical second opinions, let evaluators score the raw value before the messenger is revealed. This protects judgment from credibility-based amplification at the institutional level.
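Here is a minimal sketch of what "score before the messenger is revealed" could look like inside a review tool. The class, fields, and example values are invented for illustration; the study describes the principle, not this implementation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class BlindReviewItem:
    value: str                    # the raw number or estimate under review
    source: str                   # the messenger, withheld until scored
    score: Optional[float] = None

    def record_score(self, score: float) -> None:
        self.score = score

    def reveal_source(self) -> str:
        # Enforce the ordering: judgment first, credibility cue second.
        if self.score is None:
            raise RuntimeError("Score the value before the source is revealed.")
        return self.source

item = BlindReviewItem(value="Projected uplift: 65%", source="Chief economist")
item.record_score(4.0)        # the evaluator commits to a judgment first
print(item.reveal_source())   # only then does the credibility cue arrive
```

The point isn't the class; it's the ordering constraint. The tool refuses to surface the source until a judgment exists, so the credibility cue can't be encoded alongside the number.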
Treat "expert-attributed" numbers as ethics decisions.
Attaching a credible source to a value isn't decoration — it's a multiplier on user behavior. Flag exploitative attribution patterns in design review the same way you'd flag any other dark pattern.
So what?
Source credibility isn't a neutral label — it's a force multiplier on every number a user sees.
It governs how users interpret AI assistant suggestions, evaluate "recommended by experts" pricing, weigh clinical decision support, and act on algorithmically generated forecasts. It is operating right now, in your product, whether or not you're tracking it.
That's a compliance risk, an ethics risk, and — as AI suggestion UX scales into pricing, hiring, and healthcare — a regulatory risk. The amplifier finding has a sharper edge than it first appears: a credibility badge or expert attribution is not the trust signal teams assume it is. It is also a bias signal.
This work translates a peer-reviewed behavioral science finding into a practical design heuristic: every numeric value attached to a credible source is a commitment scaled by trust. Choose the messenger as deliberately as the message.