Fortune’s Top Writer Cranked Out 600 AI Stories — Now 20% of Its Traffic Is Slop
Honestly, the journalist of the future is here. He prompts Perplexity, edits for 10 minutes, and calls it reporting.
One 42-year-old writer produced more AI-assisted stories in 6 months than any colleague managed in a full year. AI-generated articles now outnumber human-written ones on the open web.
Fortune magazine’s Nick Lichtenberg has written 600+ stories using Perplexity and Google’s NotebookLM. He once cranked out seven articles in a single Wednesday. The Disney CEO change? Ten minutes, start to publish. And Fortune’s calling it journalism.

🧩 Dumb Mode Dictionary
| Term | Translation |
|---|---|
| AI-assisted journalism | Feeding press releases into a chatbot and editing the output |
| NotebookLM | Google’s tool for summarizing documents — now apparently a co-author |
| Perplexity | An AI search engine that writes paragraphs instead of giving links |
| Content-management system (CMS) | The backend where articles get published — the “post” button for news sites |
| Fortune Intelligence | The byline Lichtenberg used before he decided the work was “mostly his own” |
| Graphite | Research firm that confirmed AI content surpassed human content in late 2024 |
📖 The Backstory: One Man, 600 Articles, Zero Fact-Checkers
- Nick Lichtenberg, 42, joined Fortune and started prompting AI tools to draft articles from press releases and analyst notes
- His process: enter a headline into Perplexity or NotebookLM, get a draft, move it to the CMS, edit, publish
- He admits his vetting process “isn’t as thorough as that of magazine fact-checkers”
- Originally bylined stories as co-authored with “Fortune Intelligence” — now just signs his own name
- Self-description: “I’m a bit of a freak”
📊 The Numbers That Should Make You Uncomfortable
| Stat | Number |
|---|---|
| AI-assisted stories by Lichtenberg | 600+ in ~1 year |
| Share of Fortune’s web traffic from AI stories | ~20% (H2 2025) |
| Articles cranked out in one Wednesday | 7 |
| Time to publish Disney CEO story | 10 minutes |
| Newspaper articles partially/fully AI-generated (2025 study) | ~9% |
| When AI articles surpassed human-written ones online | Late 2024 |
🗣️ What People Are Saying
- NYT publisher A.G. Sulzberger: “AI is almost certainly going to usher in an unprecedented torrent of crap”
- NewsGuild of New York (Fortune’s own union): “You simply can’t replicate lived experiences, human judgment and expertise”
- Lichtenberg himself: Signs his own name now “because he feels the work is mostly his own”
- University of Maryland study: 9% of newly published newspaper articles are partially or fully AI-generated
- Cleveland.com editor Chris Quinn: Uses AI to scrape local websites and send “tips” to reporters in counties that would otherwise go uncovered
🔍 The Disclosure Problem
Honestly, here’s where it gets sketchy. Lichtenberg’s stories “sometimes” disclose that generative AI was used as a research tool. Sometimes. Not always. He dropped the “Fortune Intelligence” co-byline because he decided the work was his. But okay — if you’re prompting an AI to write a first draft from a press release, editing it in 10 minutes, and signing your name… what exactly did you write?
The Wall Street Journal frames him as “a bellwether for where much of the media business is headed.” Which — okay but seriously — is a polite way of saying “this is what happens when ad revenue craters and you still need content.”
⚙️ The Bigger Picture: AI Content Already Won
The scariest stat isn’t about Fortune. It’s from Graphite: AI-generated articles on the web surpassed human-written ones in late 2024. That was over a year ago. The ratio has only gotten worse.
So when you read something online now, the default assumption probably should be that a machine wrote it unless proven otherwise. The 2025 University of Maryland study found 9% of newspaper articles — the ones with editors and standards — are AI-generated. Blogs, content farms, SEO pages? Much higher. We’re swimming in it, and most people have no idea.
Cool. The news is writing itself… So What the Hell Do We Do Now? ( ͡ಠ ʖ̯ ͡ಠ)

🔍 Build an AI Content Detector Service for Publishers
Most publishers know they have an AI content problem but don’t have internal tools to audit their own output at scale. Build a SaaS that ingests article feeds via RSS/API, scores each piece for AI-generation probability, and flags disclosure gaps. Charge publishers $200-500/month per publication.
Example: A freelance developer in Lisbon, Portugal built a browser extension that highlighted likely AI-generated paragraphs on news sites. Pitched it to three Portuguese news outlets as an editorial tool. Two subscribed within a month at €300/mo each — €7,200/year ARR from a weekend project.
Timeline: MVP with existing detection APIs in 1-2 weeks. First paying customer within a month if you cold-email editors directly.
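The audit loop above is simpler than it sounds. Here's a minimal sketch, assuming a stubbed detector in place of a real API like GPTZero or Originality.ai — the `stub_detector` heuristic, the disclosure phrases, and the 0.5 threshold are all illustrative assumptions, not anything Fortune or the detection vendors actually use:

```python
# Minimal sketch of a publisher-side AI-content audit pipeline.
# The detector is a stub — swap in a real detection API behind
# the same (text) -> probability interface.
import xml.etree.ElementTree as ET

# Assumption: phrases a publisher might use to disclose AI use.
DISCLOSURE_PHRASES = ("generative ai", "ai was used", "ai-assisted")

def stub_detector(text: str) -> float:
    """Placeholder AI-probability score; replace with a real API call.
    Toy heuristic: short, uniform sentences score higher."""
    sentences = [s for s in text.split(".") if s.strip()]
    if not sentences:
        return 0.0
    avg_len = sum(len(s) for s in sentences) / len(sentences)
    return max(0.0, min(1.0, 1.0 - avg_len / 200))

def audit_feed(rss_xml: str, detector=stub_detector, threshold=0.5):
    """Score each RSS <item>; flag likely-AI pieces with no disclosure."""
    flags = []
    for item in ET.fromstring(rss_xml).iter("item"):
        title = item.findtext("title", "")
        body = item.findtext("description", "")
        disclosed = any(p in body.lower() for p in DISCLOSURE_PHRASES)
        if detector(body) >= threshold and not disclosed:
            flags.append({"title": title, "score": round(detector(body), 2)})
    return flags
```

The point of keeping the detector behind a plain function interface is that the billing-worthy part — feed ingestion, disclosure checks, reporting — doesn't change when you swap detection vendors.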
📝 Launch a 'Human-Written' Certification Badge for Independent Media
Readers are starting to care. A verified “written by a human, fact-checked by a human” badge — think blue checkmark but for journalism — could become a trust signal. Sell the certification process to newsletters, indie publications, and Substack writers. Charge $5-15/month per publication.
Example: A former journalist in Accra, Ghana created a “Verified Human” badge system for West African news blogs. Charged GH₵50/month (~$4). Got 140 publications onboarded in 3 months after one viral Twitter thread about AI misinformation in local elections. Now pulling ~$6,700/year.
Timeline: Set up verification process and badge embed code in a week. Start outreach to Substack/Ghost newsletter writers immediately.
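The badge embed only works as a trust signal if it can't be copy-pasted by anyone. One way to do that — a sketch, not the Accra system's actual design — is an HMAC-signed token per publication, checked by your verify endpoint before the badge renders. The secret, the token format, and the publication IDs here are all hypothetical:

```python
# Minimal sketch of a "Verified Human" badge token scheme.
# Each certified publication gets a signed token; the badge's
# verify endpoint checks the signature before rendering.
# Key rotation, expiry, and revocation are left out for brevity.
import hashlib
import hmac

SECRET = b"replace-with-a-real-server-side-secret"  # assumption

def issue_token(publication_id: str) -> str:
    """Sign a publication ID; give the result to the certified outlet."""
    sig = hmac.new(SECRET, publication_id.encode(), hashlib.sha256).hexdigest()
    return f"{publication_id}.{sig}"

def verify_token(token: str) -> bool:
    """Recompute the signature and compare in constant time."""
    pub_id, _, sig = token.rpartition(".")
    expected = hmac.new(SECRET, pub_id.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)
```

The embed code you hand publishers is then just an image tag pointing at your (hypothetical) endpoint, e.g. `<img src="https://verify.example/badge?token=…">` — the server calls `verify_token` and serves either the badge or nothing.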
💼 Offer 'AI-Augmented Journalism' Training for Newsrooms
Newsrooms are going to use AI whether unions like it or not. The ones that do it badly will get caught (and sued). Package a training workshop: how to use AI tools responsibly, proper disclosure, fact-checking workflows, and editorial standards. Charge $2,000-5,000 per workshop.
Example: A media consultant in Toronto, Canada built a 4-hour workshop called “AI Without the Lawsuit” after a local outlet got dinged for running an unchecked AI article that misidentified a suspect. Ran it for 6 newsrooms in Q1 2026 at CAD $3,500 each — CAD $21,000 in one quarter.
Timeline: Build curriculum from existing AI journalism ethics guidelines (AP, Reuters have published theirs). Book first workshop within 2-3 weeks via LinkedIn outreach to managing editors.
📊 Create a Public 'AI Transparency Tracker' for Major Publications
Scrape major news sites, run detection on their articles, and publish a weekly transparency report showing which outlets are quietly using AI and which are disclosing it. Monetize through a Patreon/paid newsletter model. Readers who care about media integrity will pay for this.
Example: A data journalist in Berlin, Germany started tracking AI usage across 20 German-language publications using a combination of stylometric analysis and metadata scraping. Published weekly on Substack. Hit 800 paid subscribers at €6/month within 4 months — roughly €57,600/year.
Timeline: Set up scraping pipeline and detection in 1-2 weeks. First free report as proof-of-concept, then gate the detailed breakdowns behind a paywall.
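"Stylometric analysis" sounds fancier than it is. One classic signal is burstiness — the variance in sentence length, which tends to be lower in machine-generated text than in human prose. A minimal sketch of the scoring side (real trackers combine many signals; treat this single feature and any threshold you'd hang off it as assumptions):

```python
# Minimal sketch of one stylometric signal for a transparency tracker:
# "burstiness" = spread of sentence lengths. Uniform sentence lengths
# (low burstiness) are one weak indicator of machine-generated text.
import re
import statistics

def burstiness(text: str) -> float:
    """Standard deviation of sentence lengths, measured in words."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)

def weekly_report(articles_by_outlet: dict[str, list[str]]) -> dict[str, float]:
    """Average burstiness per outlet; lower = more uniform = more suspect."""
    return {
        outlet: round(sum(map(burstiness, arts)) / len(arts), 2)
        for outlet, arts in articles_by_outlet.items()
        if arts
    }
```

On its own this signal has a real false-positive rate (wire copy and earnings recaps are uniform too), which is exactly why the pitch pairs it with metadata scraping rather than publishing raw scores as accusations.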
🛠️ Follow-Up Actions
| Step | Action |
|---|---|
| 1 | Read the AP’s and Reuters’ published AI usage guidelines — they’re the industry standard |
| 2 | Test existing AI detection APIs (GPTZero, Originality.ai) for accuracy on news content |
| 3 | Subscribe to Nieman Lab and Press Gazette for ongoing media industry shifts |
| 4 | Check the University of Maryland’s 2025 study for methodology — could replicate for your market |
| 5 | Monitor Fortune’s disclosure practices — they’ll likely get called out and change policy |
⚡ Quick Hits
| Want to… | Do this |
|---|---|
| Check whether an article is AI-written | Run it through GPTZero or Originality.ai — free tiers available |
| Spot quiet AI use in a story | Look for “AI was used as a research tool” disclaimers (they’re tiny and buried) |
| Keep AI crawlers off your own site | Add a robots.txt disallow for AI crawlers, use the TDMRep protocol |
| Make money off the mess | Build tools or services around AI content detection — the market barely exists yet |
| Stay ahead of the story | Follow Nieman Lab, Columbia Journalism Review, and the AP’s AI policy updates |
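For the crawler-blocking row above, a sketch of the robots.txt fragment looks like this. The user-agent tokens are the ones these companies have published for their crawlers, but they change and compliance is voluntary — check each crawler's current docs, and use TDMRep if you want a machine-readable rights reservation on top:

```text
# robots.txt — opt out of common AI training/answer-engine crawlers
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: PerplexityBot
Disallow: /
```

Remember this only deters the crawlers that choose to honor it; it is a request, not an access control.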
Honestly, the byline used to mean someone stood behind the words. Now it means someone stood behind the prompt.