“LinkedIn doesn’t know me anymore,” someone complained to me recently. “What do you mean?” I asked. She explained that the platform has replaced the old “recommended jobs” section, which used to show her quite useful job openings based on her previous searches and CV, with an AI search engine that asks you to describe your ideal job in freeform text. The results it brings up aren’t nearly as relevant.
This is just one of many ways in which the professionals’ social media platform, which has embraced artificial intelligence with ferocious zeal, is being gradually “enshittified”, to borrow tech writer Cory Doctorow’s phrase. Each new embrace of AI tools promises to make hiring, job searching, networking and even posting a bit easier or more fruitful. Instead, AI seems to have made the user’s experience more alienating, and to have helped foster a genre of LinkedIn-speak which bears all the hallmarks of the worst AI writing on the internet.
Let’s start with my opening example—which, to be fair, is in beta testing mode and can be switched off. Instead of an intuitive digital servant that pulls up the best jobs based on your ruminations, the AI assistant confronts users with a new and annoying chore: crafting prompts for it. The non-AI search bar worked perfectly well as it was.
Then there is the AI writing assistant, which is available to users who pay for the platform’s £29.99 per month premium service to help them craft their posts. LinkedIn’s CEO Ryan Roslansky recently admitted that users aren’t using the tool as much as he anticipated. It seems that sounding like a human being to your colleagues and clients is put at, well, a premium.
And then there are the ways in which users are deploying outputs from external AI chatbots on the platform, something with which LinkedIn is struggling to cope. According to the New York Times, the number of job applications submitted via the platform increased by 45 per cent in the year to June, now clocking in at an average of 11,000 per minute. Some of that increase is attributable to more people applying for jobs as employment growth tapers, but generative AI tools have undoubtedly contributed. Meanwhile, applicants are using AI to apply for more jobs than they used to, at the expense of quality and suitability. Many employers are now inundated with useless CVs.
Perhaps proactively searching for candidates might yield better results. But can you be sure the people in your search results are real? In 2022, the Stanford Internet Observatory uncovered more than 1,000 LinkedIn profiles using photographs created with AI, often with completely made-up names and CVs. Such accounts can be made to spread spam or misinformation, or to generate “leads” for marketing and sales professionals (ie a bot will send you a private message asking if you’re interested in a product or service). LinkedIn subsequently investigated and removed many of the profiles identified by Stanford. But according to the platform’s most recent transparency report, the problem has intensified. Administrators had to remove more than 100 million fake accounts in 2024—80.6 million at the point of registration, 19.7 million after registration but intercepted proactively by LinkedIn, and 265,700 following user complaints.
LinkedIn is more than a hiring service these days, however; it’s a social media network with more than a billion users. After many professionals fled Elon Musk’s X due to his divisive politics and platform changes, LinkedIn was one of the places, alongside Bluesky, where many émigrés went.
There is a good chance the posts in your LinkedIn news feed are also AI-generated, or at least created with the assistance of a chatbot. AI-detection platform Originality.ai recently found that 54 per cent of the long-form LinkedIn posts in an 8,795-post sample were AI-generated or -assisted.
The platform risks losing users’ trust. LinkedIn’s news feed does not rely on posts being innately interesting—indeed, it arguably relies on this least among social platforms—but on the authority of each user’s voice based on their professional background. People trust the research and development manager of a major publicly listed company more than an anonymous X account with an anime avatar. The platform’s algorithm prioritises well-connected and credentialed people over companies for this reason (and to force firms to cough up for advertising). But if users don’t trust that the posts they are reading contain the original thoughts of their professional connections—or that the author is even a real person—that authority collapses, as does the value of LinkedIn as a social network.
Last year, James Ball wrote in Prospect about the Dead Internet Theory—the suspicion that you might be the only human left online in an internet full of bots. As the deluge of AI slop increases exponentially, Ball suggested there is more than a grain of truth to this conspiratorial joke. Today’s internet is filled with fake AI accounts posting and responding to each other. If we are witnessing the slow death of the internet, LinkedIn might just be the frontier, where AI-generated supportive comments (“congrats on the promotion Greg”) respond to fake posts by company executives who are swamped by fake job applications.
LinkedIn’s executives take a much sunnier view, however. The company’s head of feed relevance, Adam Walkiewicz, responded to headlines about Originality.ai’s research by reiterating that the site encourages the proper use of AI to “help with review of a draft or to beat the blank page problem [i.e., writer’s block]”.
Roslansky, meanwhile, champions the use of AI at the firm so fervently that he admits to using it himself when writing emails to his boss, Microsoft’s CEO Satya Nadella (Microsoft purchased LinkedIn in December 2016 for US$26.2 billion). Roslansky had better be a true believer: in June, Nadella gave him the additional responsibility of leading Microsoft’s “Copilot” AI product, which is integrated into many Microsoft products.
AI positivity is deeply ingrained in LinkedIn’s executive history and culture. Founder Reid Hoffman, who is no longer at the company but remains a Microsoft board member and retains a stake in LinkedIn through his investment firm Greylock Partners, is perhaps the most prominent booster of AI’s “good side” in public debate. He recently co-wrote a book titled Superagency: What Could Possibly Go Right with Our AI Future. Perhaps not coincidentally, he is heavily invested in various AI companies.
Hoffman has even created an AI “twin” of himself called ReidAI, which was built on ChatGPT and trained on 20 years of Hoffman’s books, speeches, podcasts and other content. ReidAI recently “presented” at an industry conference in Silicon Valley. That’s one way to get out of a boring professional development seminar, I suppose. Appearing on The Late Show with Stephen Colbert, the real Reid implored viewers not to prematurely judge chatbots. “I think everybody should try to use AI,” he told a sceptical Colbert, who replied, “People should also try heroin!”
Unfortunately for Hoffman and his LinkedIn protégés, regulators are starting to share Colbert’s cynicism. Last year LinkedIn had to suspend its use of UK user data for training its AI models after concerns were raised by the Information Commissioner’s Office.
If LinkedIn’s pivot to AI is causing regulatory headaches and not enthusing its user base, why are its executives still so bullish? Perhaps they are ahead of the curve; more likely, it comes down to their longing for higher growth.
The company’s revenue growth is healthy (8 per cent) but has slowed since 2023 and remains reliant on premium subscriptions and recruitment products. Many predict that LinkedIn’s future growth lies in advertising. For advertisers to fork out more money to a platform which has historically had much higher “cost-per-click” metrics than its rivals, they need to grab and sustain audience attention. Thus, there has been an algorithm change to reward quality of content over quantity, and a pivot to prioritising videos, which have increased engagement time on other platforms. In the end, every platform is trying to become TikTok.
The brief promise of the Musk X-odus—that LinkedIn might become a leading forum for public debate—hasn’t been realised. Bluesky has overtaken LinkedIn as the preferred platform for the hip, intellectual and left-leaning. AI might fill some of the gap in terms of content volume, but based on Originality.ai’s research, its engagement is likely to be lower. People are still repelled by telltale signs of fake humanity.
The sad truth is that on LinkedIn, writing which appears automated is a problem even when users aren’t relying on ChatGPT. The professional “hustle” culture that has flourished on the platform often feels inauthentic and predictable, making it easily mimicable but ultimately unsatisfying, much like AI-generated writing.
Vapid “thought leadership” from dull corporate drones—sometimes dubbed “LinkedInfluencers”—too often competes for space on the platform with more insightful posts from earnest professionals, and is often prioritised by the algorithm. Meanwhile, users posting about more topical or controversial matters, such as the war in Gaza or racism, have complained about their posts being disproportionately deprioritised and even censored. And this week, LinkedIn quietly removed a line in its “hateful and derogatory content” policy that stated the company prohibits the misgendering and “deadnaming” of transgender individuals. Such curatorial choices reflect the platform’s dual and sometimes conflicting purposes—as a place to find genuinely useful information in your field, and as a place to show off an inflated and sanitised personal brand to prospective employers or clients, for whom political controversy might be a turnoff.
The tone of writing from LinkedInfluencers is often so formulaic, it’s as if LinkedIn has a house style guide: begin long posts with a quirky anecdote, then over-use line spaces to conjure profundity from essentially mundane details. About halfway in, insert a pivot line relating the story to a vague lesson readers can apply to their professional lives: “And then it hit me... the qualities of a good dog owner are exactly the same qualities we need among leaders in in-house litigation teams.”
There are many examples of this format being taken to hilarious extremes. My favourite is from Bryan Shankman, who last year began a post like so: “I proposed to my girlfriend this weekend. 💍 Here’s what it taught me about B2B sales...” A close second is Janney Hujic, a former banker at DBS Bank in Singapore, who posted about how she bumped into the bank’s ex-chairman Piyush Gupta and shared an inspirational conversation, only for the real Gupta to comment, “Sorry to disillusion you. That isn’t me!” These examples are particularly absurd, but they typify a broader sense of inertia with LinkedInfluencers. As Zat Astha wrote about Hujic’s post, “the problem isn’t fakery. It’s how boring, predictable, and polished we’ve all become in order to survive the feed.”
To see how easily AI could reproduce this dominant genre of post, I asked ChatGPT to write me a long LinkedIn post relating a non-corporate occurrence to professional life (see below). The result was uncanny. Ask yourself: can you tell the difference between what’s real and fake on LinkedIn?
Last weekend, while walking through a park after a morning run, I saw a crumpled soda can lying near a bench. Dozens of people had passed by—it wasn't mine, and technically, it wasn't "my problem." But I picked it up and tossed it into a nearby recycling bin.
It was such a small act. But it stayed with me all day.
Because it reminded me of something bigger, something I see (and try to embody) in professional life:
True leaders pick up the trash. Even when no one's watching. Even when it's not in their job description.
It made me reflect on moments at work that seem just as minor:
- Fixing a broken link in a team document, even though you didn't create it
- Jumping into a meeting early to make sure the tech is working
- Staying five minutes late to help a colleague meet a deadline
- Offering constructive feedback even when it's uncomfortable
These actions don't come with trophies. No promotions are handed out for taking ownership of someone else's mess. But they send a quiet, powerful signal:
"I care. I'm invested. I take responsibility, even when I don't have to."
And here's the paradox:
It's often these small, "unseen" actions that define how trusted, respected, and ultimately successful we become.
No one puts "picks up trash" on a resume. But it shows up in how others talk about you.
"She always follows through."
"He doesn't let things slip."
"They make the whole team better."
In a world obsessed with scaling, optimizing, and delegating, it's easy to overlook the power of doing the small things well. But in my experience, how we handle the seemingly insignificant often determines what we're trusted with next.
So, whether it's a can on the ground or a problem outside your job title, consider picking it up. Not because you have to. But because it says something about the kind of professional (and person) you are.
Sometimes the path to bigger opportunities begins with the smallest of actions.
#Leadership #ProfessionalGrowth #Ownership #WorkCulture #PersonalDevelopment