Hong Kong’s privacy watchdog has urged LinkedIn users to review privacy settings as the platform resumes using personal data for generative AI training on November 3, 2025. Users in Hong Kong can opt out via account settings to control data usage, ensuring compliance with local privacy laws.
- Hong Kong’s Office of the Privacy Commissioner for Personal Data issued a reminder for users to check LinkedIn’s updated policy on AI data training.
- The platform will use profiles, posts, and public activity from regions including Hong Kong, but excludes private messages and users under 18.
- Opting out involves navigating to the Data privacy settings and disabling the generative AI improvement toggle, as guided by the watchdog.
What is LinkedIn’s Plan for Using Personal Data in Generative AI Training?
LinkedIn plans to incorporate member profiles, posts, resumes, and public activity into the training of its generative AI models. Starting November 3, 2025, this practice resumes after regulatory pauses, affecting users in Hong Kong, the EU, the UK, the EEA, Switzerland, and Canada. The initiative aims to improve AI features while offering user control through opt-out options intended to align with privacy standards.
How Can Hong Kong LinkedIn Users Opt Out of AI Data Training?
Hong Kong’s privacy authority, the Office of the Privacy Commissioner for Personal Data, has detailed the opt-out process to empower users. Individuals should access the “Data privacy” section in their LinkedIn account settings, locate “Data for Generative AI Improvement,” and toggle off the option labeled “Use my data for training content creation AI models.” This step ensures personal information remains excluded from AI development, reflecting ongoing dialogues between the watchdog and LinkedIn since late 2024. According to statements from Privacy Commissioner Ada Chung Lai-ling, such controls are essential for informed consent under the Personal Data (Privacy) Ordinance. The authority has monitored compliance, confirming that LinkedIn will process data only with explicit permission, excluding sensitive elements like private messages. This measure addresses earlier concerns over default opt-in settings that raised privacy alarms in 2024.
Steps to opt out of AI data training. Source: LinkedIn

The resumed AI training follows LinkedIn’s September 2025 announcement, which outlined the scope of data involvement. Public content and professional details will fuel improvements in AI-driven recommendations and content generation, but the platform assures adherence to regional regulations. Hong Kong users, in particular, benefit from these safeguards, as the watchdog’s intervention in late 2024 temporarily halted the process. Subsequent engagements from October 2024 through April 2025 led to commitments from LinkedIn for enhanced transparency and user autonomy. Expert analyses from privacy advocates, such as those cited in regulatory reports, emphasize the importance of granular controls in an era of widespread AI adoption. Statistics from global privacy surveys indicate that over 70% of users prioritize data control on social platforms, underscoring the relevance of these updates.
In a broader context, this development mirrors trends across tech giants. For instance, Meta’s resumption of similar practices on Facebook and Instagram after regulatory reviews last year highlights a pattern of balancing innovation with privacy. LinkedIn’s approach, shared with affiliates like Microsoft—which has invested heavily in OpenAI—aims to refine AI without compromising user trust. Neema Raphael, Goldman Sachs’ chief data officer, recently noted in industry discussions that AI models like ChatGPT and Gemini face data scarcity, potentially hindering progress unless new ethical sources are tapped.
The Office of the Privacy Commissioner continues its oversight, vowing to protect Hong Kong residents’ data rights amid evolving AI landscapes. This vigilance is intended to ensure that platforms like LinkedIn operate within legal bounds, fostering a secure digital environment for professionals worldwide.
Frequently Asked Questions
What Data Will LinkedIn Use for Generative AI Training in Hong Kong?
LinkedIn will utilize publicly available user profiles, posts, resumes, and activity data for AI training, excluding private messages and content from users under 18. This applies to Hong Kong members starting November 3, 2025, with opt-out options available to prevent inclusion, as mandated by local privacy regulations.
How Does Hong Kong’s Privacy Watchdog Ensure LinkedIn Complies with Data Laws?
The Office of the Privacy Commissioner for Personal Data engages directly with LinkedIn, reviewing its policies and pressing for opt-out mechanisms under the Personal Data (Privacy) Ordinance. Through continuous monitoring and dialogue since 2024, the watchdog works to secure user consent and data protection, aligning AI practices with Hong Kong’s privacy standards for a safe online experience.
Key Takeaways
- Review Privacy Settings Promptly: Hong Kong LinkedIn users should immediately check and adjust data privacy options to opt out of AI training, starting November 3, 2025, to maintain control over personal information.
- Regulatory Oversight Matters: The privacy watchdog’s interventions since late 2024 have secured promises from LinkedIn for compliance, excluding sensitive data and minors, demonstrating effective global privacy enforcement.
- AI Data Challenges Ahead: As models exhaust traditional data sources, platforms may innovate with agentic AI; users should stay informed and proactive to navigate emerging privacy risks in professional networking.
Conclusion
The Hong Kong privacy watchdog’s proactive stance on LinkedIn’s use of personal data for AI training exemplifies how technological advancement can be balanced with the protection of user rights. By urging reviews of privacy settings and pressing for accessible opt-outs, the authority helps ensure compliance amid global AI expansion. As data scarcity pushes innovation toward autonomous systems, professionals are encouraged to prioritize consent and vigilance, shaping a more secure digital future for networking and beyond.