Your private LinkedIn messages might not be so private after all.
The professional networking platform faces a major lawsuit over an alleged AI data leak: the unauthorized use of private user information to train AI models.
Premium subscribers discovered their private communications were secretly used to train AI models – without their knowledge or consent.
And the numbers paint a concerning picture. LinkedIn, with its billion-plus users and $1.7 billion in premium subscription revenue, now stands accused of betraying user trust at a massive scale.
Here’s what happened: LinkedIn quietly added a privacy setting in September 2024 that automatically opted users into AI data sharing. The platform then updated its privacy policy a month later to reflect these changes.
But by then, the damage was done.
The consequences? Users aren’t taking this lightly. They’re demanding $1,000 per person in damages under the federal Stored Communications Act.
Plus, they want extra compensation for the reduced value of their premium subscriptions.
LinkedIn’s AI Data Collection Controversy
If you’re outside the European Union, European Economic Area, or Switzerland, LinkedIn’s September 2024 privacy update automatically signed you up for its AI training program.
Here’s what LinkedIn’s new terms actually mean for you:
Your posts, profile information, LinkedIn messages, and other content are all fair game for AI training. LinkedIn partnered with Microsoft’s Azure OpenAI service to power these AI models. Sure, they claim to use “privacy-enhancing technologies” to strip personal data from training sets, but what does that really mean?
The platform has rolled out several data controls:
- An opt-out switch for “Data for Generative AI Improvement.”
- Controls for Social, Economic, and Workplace Research.
- A form to object to machine learning data processing.
Privacy experts aren’t happy either—they argue users shouldn’t need to constantly check every company’s data policies.
Interestingly, reports show LinkedIn started these changes before they even updated their privacy policy.
That’s 830 million members outside protected regions who had their data collected without clear notice.
AI Data Leak: Legal Implications and User Rights
The legal storm has hit LinkedIn hard. A class action lawsuit now sits on a San Jose federal court desk, packed with allegations about privacy rights violations and data protection breaches.
Premium users are also seeking extra compensation for their devalued subscriptions, along with damages for violations of California’s competition laws.
LinkedIn didn’t just bend the rules—they broke several legal agreements:
- Their own Subscription Agreement (Section 3.2) says no sharing of confidential information
- The Data Protection Agreement (Sections 5.1 and 5.5) restricts data processing
- The Stored Communications Act strictly forbids the unauthorized sharing of communications [8]
The platform’s legal troubles don’t end there. Remember the €310 million fine LinkedIn received from Ireland’s Data Protection Commission for earlier privacy violations?
This new controversy could trigger even stricter oversight. After all, we’re talking about “incredibly sensitive and potentially life-altering information about employment, intellectual property, and compensation.”
💡 Read more about: Ethical considerations and best practices for AI implementation
Business Impact and Trust Issues
Want to know how badly data breaches hurt? Studies show that 46% of organizations suffer reputation damage, and 87% of customers would jump ship to competitors after such incidents.
Companies that lose more than 5% of their customers watch $3.94 million vanish from their revenue.
Even those losing just 2% of customers face a $2.67 million sales drop. Stock prices don’t escape either — they typically plunge 5% after breach announcements.
But money isn’t the only thing at stake. Users talk — and they’re not happy:
- 85% share their bad experiences with others.
- 34% take their complaints to social media.
- 20% speak up directly on company websites.
LinkedIn built its empire on trust — the same trust that’s now hanging by a thread.
In fact, a PwC survey drops another bombshell: 69% of consumers already think organizations can’t protect them from cyber-attacks.
What’s next for LinkedIn? They’re scrambling to patch up user trust with better security and clearer communication. The clock’s ticking, and their market position — not to mention their user base — hangs in the balance.
While EU users enjoy strong privacy protection, the rest of LinkedIn’s global community remains exposed.
References
[1] – https://www.reuters.com/legal/microsofts-linkedin-sued-disclosing-customer-information-train-ai-models-2025-01-22/
[2] – https://www.techmonitor.ai/data/linkedin-faces-class-action-lawsuit-ai-data-sharing-practices/
[3] – https://www.silicon.co.uk/e-regulation/legal/linkedin-sued-over-alleged-use-of-private-messages-to-train-ai-596630