A Reddit user claiming to be a whistleblower from a food delivery app has been outed as a fake. The user wrote a viral post alleging that the company he worked for was exploiting its drivers and customers.
“You guys always suspect the algorithms are rigged against you, but the reality is actually much more depressing than the conspiracy theories,” the supposed whistleblower wrote.
He claimed to be drunk and at the library to use its public Wi-Fi, where he was typing his long screed about how the company was exploiting legal loopholes to steal drivers’ tips and wages with impunity.
These claims were, unfortunately, plausible: DoorDash actually was sued for stealing tips from drivers, resulting in a $16.75 million settlement. But in this case, the poster had made up his story.
People lie on the internet all the time. But it’s not so common for such posts to hit the front page of Reddit, garner over 87,000 upvotes, and get crossposted to other platforms like X, where the post racked up another 208,000 likes and 36.8 million impressions.
Casey Newton, the journalist behind Platformer, wrote that he contacted the Reddit poster, who then reached out to him on Signal. The Redditor shared what appeared to be a photo of his UberEats employee badge, as well as an 18-page “internal document” outlining the company’s use of AI to determine the “desperation score” of individual drivers. But as Newton tried to verify that the whistleblower’s account was legit, he realized that he was being baited into an AI hoax.
“For most of my career up until this point, the document shared with me by the whistleblower would have seemed highly credible largely because it would have taken so long to put together,” Newton wrote. “Who would take the time to put together a detailed, 18-page technical document about marketplace dynamics just to troll a reporter? Who would go to the trouble of creating a fake badge?”
There have always been bad actors seeking to deceive reporters, but the prevalence of AI tools means fact-checking requires even more rigor.
Generative AI models often fail to detect whether an image or video is synthetic, making it challenging to determine if content is real. In this case, Newton was able to use Google’s Gemini to confirm that the image was made with the AI tool, thanks to Google’s SynthID watermark, which can withstand cropping, compression, filtering, and other attempts to alter an image.
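Newton’s check is roughly reproducible by anyone. The sketch below, in Python, uploads a suspect image to Gemini via Google’s google-genai SDK and simply asks whether it carries a SynthID watermark. The model name, file name, and prompt wording are assumptions, and whether Gemini actually reports a watermark verdict for a given image depends on Google’s tooling, not on anything in this code.

```python
# Minimal sketch: ask Gemini whether an image appears to carry a SynthID watermark.
# Assumptions: API key is set, "whistleblower_badge.png" is the suspect image,
# and "gemini-2.5-flash" is an available model.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")

with open("whistleblower_badge.png", "rb") as f:
    image_bytes = f.read()

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents=[
        # Attach the raw image bytes alongside a plain-text question.
        types.Part.from_bytes(data=image_bytes, mime_type="image/png"),
        "Was this image generated or edited with Google AI? "
        "Check whether it carries a SynthID watermark.",
    ],
)

print(response.text)
```

Because SynthID is embedded at generation time rather than inferred from visual artifacts, this kind of check only works for images made with Google’s own tools; it says nothing about content produced elsewhere.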
Max Spero, founder of Pangram Labs, a company that makes a detection tool for AI-generated text, works directly on the problem of distinguishing real content from fake.
“AI slop on the internet has gotten a lot worse, and I think part of that is due to the increased use of LLMs, but other factors as well,” Spero told iinfoai. “There are companies with millions in revenue that will pay for ‘organic engagement’ on Reddit, which is really just that they’re going to try to go viral on Reddit with AI-generated posts that mention your brand name.”
Tools like Pangram can help determine if text is AI-generated, but especially when it comes to multimedia content, these tools aren’t always reliable. And even when a synthetic post is confirmed to be fake, it may have already gone viral before being debunked. So for now, we’re left scrolling social media like detectives, second-guessing whether anything we see is real.
Case in point: When I told an editor that I wanted to write about the “viral AI food delivery hoax that was on Reddit this weekend,” she thought I was talking about something else. Yes, there was more than one “viral AI food delivery hoax on Reddit this weekend.”
