Source Quality
Not all news sources are equal. We assess each source for reliability, so you know where the news comes from.
How does it work?
Every news source in our system is assigned a tier: a classification indicating how its reliability has been established. This happens automatically when a source is added, based on external databases and manual review.
The tiers
Verified
Reliability confirmed by independent databases or manually reviewed by our editorial team. These sources have a credibility score from 0 to 10.
Examples: Reuters, BBC, Nature, AP News, The Lancet, public broadcasters
Curated
Deliberately added to our source collection, but not externally verified. These sources were chosen because they fit our lenses, but don't have an independent credibility score.
Examples: specialized publications, regional media, non-profit news services
Unknown
The source is not in our database. This doesn't mean the source is unreliable; it only means we haven't been able to establish its reliability.
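The tier logic above can be sketched in a few lines. This is an illustrative sketch, not the production implementation (which is in our public repository); the function and parameter names are assumptions made for clarity.

```python
from enum import Enum

class Tier(Enum):
    VERIFIED = "verified"
    CURATED = "curated"
    UNKNOWN = "unknown"

def assign_tier(in_external_db: bool, editorially_reviewed: bool,
                in_collection: bool) -> Tier:
    """Classify a source by how its reliability has been established.

    Hypothetical sketch of the tier rules described above.
    """
    # Verified: reliability confirmed externally or by editorial review
    if in_external_db or editorially_reviewed:
        return Tier.VERIFIED
    # Curated: deliberately added to the collection, but not verified
    if in_collection:
        return Tier.CURATED
    # Unknown: not in our database at all
    return Tier.UNKNOWN
```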
Credibility score
Verified sources receive a score from 0 to 10. This score is based on independent assessments of media reliability.
| Score | Rating | Examples |
|---|---|---|
| 9.0 – 10.0 | Very high | Nature, The Lancet, NIH, EU institutions |
| 7.5 – 8.9 | High | Reuters, BBC, AP, arXiv, public broadcasters |
| 6.0 – 7.4 | Medium | Major newspapers, think tanks |
| 4.0 – 5.9 | Neutral | Mixed factual reporting |
| < 4.0 | Low | State media, tabloids. Rarely in our selection. |
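The rating bands in the table map directly onto score thresholds. A minimal sketch of that mapping (the function name is illustrative, not from our codebase):

```python
def score_rating(score: float) -> str:
    """Map a 0-10 credibility score to its rating band, per the table above."""
    if score >= 9.0:
        return "Very high"
    if score >= 7.5:
        return "High"
    if score >= 6.0:
        return "Medium"
    if score >= 4.0:
        return "Neutral"
    return "Low"
```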
Where do the scores come from?
Credibility scores are computed as a weighted average across three independent databases. When multiple databases cover the same source, their assessments reinforce each other:
- IDIAP Research Institute: Academic database with NewsGuard scores and reliability labels for ~5,300 domains
- Media Bias/Fact Check: Independent assessment of factual reporting and political bias for ~4,400 domains
- Wikipedia Perennial Sources: Community-consensus reliability ratings maintained by Wikipedia editors for ~420 domains
For sources not covered by these databases, our editorial team assigns scores manually. When a manually scored source later appears in an external database, automated checks flag significant disagreements between the two. This helps us catch both our own mistakes and cases where external databases may be outdated.
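The weighted average and the disagreement check can be sketched as follows. The weights, threshold, and function names here are assumptions for illustration; the actual values live in the public repository.

```python
# Hypothetical per-database weights (the real weighting may differ).
WEIGHTS = {"idiap": 0.4, "mbfc": 0.4, "wikipedia": 0.2}

def combined_score(scores: dict) -> float:
    """Weighted average over whichever databases cover the source.

    `scores` maps database name to a 0-10 score, or None if not covered.
    Weights are renormalized over the databases that do cover the source.
    """
    covered = {db: s for db, s in scores.items() if s is not None}
    total_weight = sum(WEIGHTS[db] for db in covered)
    return sum(WEIGHTS[db] * s for db, s in covered.items()) / total_weight

def flag_disagreement(manual: float, external: float,
                      threshold: float = 2.0) -> bool:
    """Flag when a manual score and an external score differ significantly."""
    return abs(manual - external) >= threshold
```

When only two of the three databases cover a source, their weights are renormalized so the result stays on the 0-10 scale.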
Current coverage
We currently track ~1,000 source domains. In the interest of transparency, here is how their reliability is established:
| Method | Domains | What it means |
|---|---|---|
| External databases | ~270 (27%) | Score backed by IDIAP, MBFC, and/or Wikipedia. Nearly all confirmed by 2+ independent sources |
| Editorial review | ~650 (65%) | Score assigned by our team. These are our judgment calls, not independently verified |
| Unscored | ~80 (8%) | In our collection but no credibility data available. Shown without a score |
We're working to increase external coverage. The majority of our editorial scores cover specialized, regional, and non-English sources that mainstream media databases don't track.
Source type
Beyond reliability, we also classify sources by type:
- Wire service: Reuters, AP, AFP
- Academic: Nature, The Lancet, arXiv
- Public broadcaster: BBC, NOS, NPO
- NGO / non-profit: Positive News, Solutions Journalism Network
- Newspaper: The Guardian, El País, de Volkskrant
- Government / institutional: EU, WHO, NIH
Our editorial stance
- We show all tiers. We don't hide articles from curated or unknown sources.
- No score doesn't mean unreliable. It means we couldn't verify.
- A good source can publish a bad article. We assess at the domain level, not per article.
The source code of our quality system is public on GitHub.
Last updated: March 2026