Ever plugged something into an AI search engine and gotten a neat, bullet-pointed summary—without a single mention of who actually wrote the story? The Columbia Journalism Review (CJR) tested eight different AI-powered search tools to see how they credit their news sources. The verdict? They mostly don’t. Below, we’ll break down how CJR figured that out and why it’s such a big deal.
How CJR Put AI Searches to the Test—In Simple Terms
- Pick Common News Questions: Researchers at CJR asked each AI engine about real news events, from politics and sports to local stories.
- Look for Links or Mentions: They paid attention to whether the AI search results included original article links, named the reporter or publication, or simply gave a vague "some website said."
- Compare Across Eight Tools: They ran multiple queries on eight popular AI search platforms, noted each platform's style (some offered a quick reference at the bottom, others none at all), and measured how consistent each one was.
- Spot Patterns: Finally, they compared how frequently each AI tool actually linked back to a credible source. Spoiler: almost all of them left out key citations or only offered partial references.
Basically, it was a straightforward test: “Ask about news, see if the AI says who wrote the news.”
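If you're curious what that test boils down to in code terms, here's a rough, purely illustrative sketch (not CJR's actual methodology or tooling, and every name in it is made up): given an AI tool's answer about a story and the article it actually came from, check whether the answer names the outlet, names the reporter, or links to the original.

```python
# Hypothetical sketch: does an AI-generated answer credit its source at all?
from dataclasses import dataclass

@dataclass
class Article:
    outlet: str     # e.g. "The Seattle Times"
    reporter: str   # e.g. "Jane Doe"
    url: str        # link to the original story

def attribution_score(ai_answer: str, source: Article) -> dict:
    """Report which attribution signals appear in the AI-generated answer."""
    text = ai_answer.lower()
    return {
        "names_outlet": source.outlet.lower() in text,
        "names_reporter": source.reporter.lower() in text,
        "links_article": source.url.lower() in text,
    }

# Example: an answer that summarizes the story but credits no one
answer = "City council approved the new transit budget on Tuesday."
original = Article("The Seattle Times", "Jane Doe", "https://example.com/transit-budget")
print(attribution_score(answer, original))
# -> {'names_outlet': False, 'names_reporter': False, 'links_article': False}
```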
Why Does This Matter?
1. You Can’t Double-Check the Story
Ever read something online and think, “I want more details”—like the in-depth interview or full data? If the AI won’t show you the original piece, you might never see that background.
2. Journalists Lose Visibility
Many outlets rely on page views and brand recognition. If AI tools yank out the best info but fail to credit the original story, it’s a raw deal for reporters. That can reduce ad revenue, audience trust, and future story budgets.
3. Trust and Transparency
If you can't see where the AI found its facts, you're basically taking its word for it. That's tricky in a world where misinformation is already a problem. Transparency helps us decide what's legit and what's not.
My Take: The Human-Centered Angle
I firmly believe technology should help, not overshadow, the real people behind our digital world. These AI search engines might be amazing at sifting through mountains of data, but ignoring who originally wrote a piece is the opposite of “human-centered innovation.”
- Respecting Authorship: It’s about giving journalists, bloggers, or local news reporters the credit they deserve.
- Integrity in Transformation: AI is powerful, but if we allow it to blur the lines about who did the work, we’re losing a key piece of authenticity.
What Can Be Done About It?
- Better Design: AI platforms could build in automatic citation features, like a "source trace" that lists each website and writer (a rough sketch of the idea follows this list). Yes, it's extra coding, but it's an upgrade that fosters trust.
- User Feedback: If we all keep asking "Where did you get this?" or complaining when sources are missing, companies may feel the pressure to respond with actual references.
- Support Tools That Link Properly: If one AI search engine consistently shows you the real authors, while another lumps them into “Various Outlets,” choose the more transparent one. Our usage patterns can encourage better behavior.
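To make that "source trace" idea a bit more concrete, here's a minimal, hypothetical sketch. It doesn't reflect any real platform's internals or API; it just shows the shape of the idea: every generated answer carries the list of pages it drew from, so the reader can always see who did the reporting.

```python
# Hypothetical "source trace": an answer object that always carries its citations.
from dataclasses import dataclass, field

@dataclass
class Citation:
    outlet: str
    author: str
    title: str
    url: str

@dataclass
class TracedAnswer:
    summary: str
    sources: list[Citation] = field(default_factory=list)

    def render(self) -> str:
        """Append a human-readable source list to the summary."""
        lines = [self.summary, "", "Sources:"]
        for c in self.sources:
            lines.append(f"- {c.title}, by {c.author} ({c.outlet}): {c.url}")
        return "\n".join(lines)

answer = TracedAnswer(
    summary="The city approved a new transit budget on Tuesday.",
    sources=[Citation("The Seattle Times", "Jane Doe",
                      "Council passes transit budget",
                      "https://example.com/transit-budget")],
)
print(answer.render())
```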
Looking Ahead: More AI, More Responsibility
The more AI search becomes part of daily life, the more we risk turning news into a soulless “just the facts” feed. CJR’s study is a reminder that for all their cleverness, AI tools still rely on real human writers and data-gatherers—and those creators deserve more than a passing nod.
Bottom Line: Next time you see a quick “AI summary” of a news item, pause and ask, “Who actually reported this?” Because if nobody’s acknowledging the behind-the-scenes journalism, we might be hurting the very people who keep us informed.