Two Stanford researchers have fallen down a LinkedIn rabbit hole and found more than 1,000 fake profiles with AI-generated faces at the bottom.
Renée DiResta and Josh Goldstein from the Stanford Internet Observatory made the discovery after DiResta was messaged by a profile purporting to belong to a “Keenan Ramsey”. The message looked like a normal software sales pitch at first glance, but upon further investigation, it became apparent that Ramsey was an entirely fictitious person.
While the picture appeared to be a standard corporate headshot, it also included multiple red flags pointing to an AI-generated face of the kind produced by sites like This Person Does Not Exist. DiResta was specifically tipped off by the alignment of Ramsey’s eyes (dead center of the photo), her earrings (she was wearing only one) and her hair, several strands of which blurred into the background.
This isn’t the first time a ring of AI-faced bots has taken to social media. In 2021, multiple accounts ostensibly belonging to Amazon warehouse employees were banned from Twitter, with many of their profile pictures appearing to come from the same type of AI as the latest LinkedIn ones. Amazon denied any link to, or responsibility for, the profiles and their tweets.
The Twitter incident is different and arguably worse: Instead of trying to make a sale in a less-than-forthright manner, Twitter bots from Amazon and other sources typically push misinformation and propaganda, both for corporations and governments.
“It’s not a story of mis- or disinformation, but rather the intersection of a fairly mundane business use case w/AI technology, and resulting questions of ethics & expectations. What are our assumptions when we encounter others on social networks? What actions cross the line to manipulation?” DiResta said on Twitter.
NPR looked into DiResta and Goldstein’s claims and found more than 70 businesses linked to the fake profiles. Several of the businesses said they had hired outside marketers, but expressed surprise when told about the fake LinkedIn profiles. The businesses also denied authorizing the campaigns.
Accounts like Ramsey’s are used by companies to pitch software to potential new customers, and whenever a target responds, they’re redirected to a real person. With this technique, companies are able to greatly broaden their reach without having to hire new people, NPR said.
What’s in an AI face?
The fake faces used by Ramsey and the countless army of bots like her are produced by generative adversarial networks, or GANs. A GAN pits two neural networks against each other, one generating fake faces and the other trying to detect them, to produce the best possible results: Only when the detector can’t distinguish a fake face from a real one is the image passed along.
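That adversarial feedback loop can be shown in miniature. The sketch below is not how a real image GAN is trained (those use gradient descent over deep neural networks on image data); it is a toy where a “generator” proposes numbers, a “discriminator” scores how fake they look, and only proposals that fool the discriminator better than before are kept. All names and numbers are invented for illustration.

```python
import random

random.seed(42)  # fixed seed so the toy run is reproducible

REAL_MEAN = 5.0  # toy stand-in for "real data": samples centered on 5.0


def generate(bias):
    """Generator: propose a fake sample around its current best guess."""
    return random.gauss(bias, 1.0)


def fake_score(x):
    """Discriminator: score how fake a sample looks (0 = looks real).
    In this toy, simply the distance from the real data's center."""
    return abs(x - REAL_MEAN)


def train(steps=5000, lr=0.01):
    """Adversarial loop: the generator keeps only the moves that fool
    the discriminator better than its current output does."""
    bias = 0.0  # generator starts far from the real distribution
    for _ in range(steps):
        candidate = generate(bias)
        if fake_score(candidate) <= fake_score(bias):
            bias += lr * (candidate - bias)  # nudge toward the better fake
    return bias
```

Running `train()` walks the generator's output from 0 toward the real data's center, mirroring the article's description: output is only "passed along" once the discriminator can no longer easily flag it.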
It can be tricky to tell a GAN-generated face from a real one, but there are some telltale signs:
- Backgrounds are often indistinct, blurry, or irregular
- Clothes often appear irregular, with inconsistent collars, imprecise lines and similar artifacts
- Teeth can appear irregular or blend into lips
- Hair appears to have excess flyaways, which can vanish and reappear, while longer hair can look imprecise
- Reflections and lighting can be irregular
- Skin can show glitches
- Accessories can be missing or irregular
- The person’s eyes are perfectly centered in the image
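The last sign lends itself to a simple automated check. The sketch below is a toy heuristic, not a real detector: it assumes you already have eye coordinates from some face-landmark tool (dlib, MediaPipe, etc.), and the tolerance value is an invented placeholder, not a tested threshold.

```python
def eyes_look_gan_aligned(left_eye, right_eye, width, height, tol=0.02):
    """Flag images whose eyes sit suspiciously close to the image's
    center, as GAN-generated headshots tend to.

    left_eye, right_eye: (x, y) pixel coordinates from a face detector
    width, height: image dimensions in pixels
    tol: allowed deviation, as a fraction of each image dimension
    """
    mid_x, mid_y = width / 2, height / 2

    # Both eyes at (roughly) the same height, near the vertical middle
    on_midline = (abs(left_eye[1] - mid_y) < tol * height
                  and abs(right_eye[1] - mid_y) < tol * height)

    # The eye pair centered left-to-right in the frame
    pair_center_x = (left_eye[0] + right_eye[0]) / 2
    centered = abs(pair_center_x - mid_x) < tol * width

    return on_midline and centered
```

A real photo can of course also have centered eyes, so a check like this only raises suspicion; it's the accumulation of signs from the list above that identifies a generated face.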