Defend your Privacy from AI giants | Turtles AI

Defend your Privacy from AI giants
Isabella V, 24 June 2024

PROTECTION OF PERSONAL DATA FROM ARTIFICIAL INTELLIGENCE MODELS


Privacy protection is an increasing challenge as large language models trained on big data develop. Meta, one of the world’s biggest data companies, announced on June 10, 2024 that all posts and images on Instagram and Facebook, excluding comments and private messages, could be used to train its AI systems.

This sparked protests from digital rights groups, such as Austria’s Noyb, leading Meta to backtrack.

European users can object to the use of their data under the GDPR, although there is no guarantee that Meta will grant such requests.

The reasons for objecting are many: privacy protection, the risk of exposure to cybercriminals, and the fact that Meta profits from data it uses for free.

Even if Meta promises to respect objections, it may still use the data in some circumstances, such as when it appears in content shared by other users.

If you want to object, we show you how to do so on each platform below.

FACEBOOK: Facebook users can fill out a form to object to the use of their data, indicating their country of residence and the reason for the objection. Meta’s response is not guaranteed, and even if the request is accepted, it does not offer retroactive protection.

ADOBE: Adobe also analyzes user content to improve its products and services. This option can be disabled in the account settings.

GOOGLE GEMINI: users who do not want to allow the company to use their conversations can disable this feature in the activity settings.

QUORA: the platform currently does not use user content to train its language models, but it still offers an option in the settings to prevent this in the future.

LINKEDIN: uses various user data to train its generative AI tools; the consumer association Altroconsumo has filed a complaint with the Privacy Guarantor to stop this practice.

SUBSTACK: allows users to block the use of their content for AI training in the settings, but warns that this may limit the visibility of their publications.

For all other companies, it is advisable to check the privacy settings of the respective platforms to manage how your data is used.
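Readers who publish content on their own websites have one additional, if voluntary, lever: asking known AI crawlers not to collect their pages via robots.txt. The sketch below is an illustrative fragment, not a guarantee; compliance depends on each crawler honoring the file, and the list of user agents (GPTBot, Google-Extended, CCBot, ClaudeBot) reflects crawlers publicly documented at the time of writing, which may change.

```
# robots.txt — illustrative sketch: ask common AI training crawlers
# to stay away from the whole site. Honoring this is voluntary.

User-agent: GPTBot          # OpenAI's web crawler
Disallow: /

User-agent: Google-Extended # Google's AI-training opt-out token
Disallow: /

User-agent: CCBot           # Common Crawl, widely used as training data
Disallow: /

User-agent: ClaudeBot       # Anthropic's web crawler
Disallow: /
```

Place the file at the root of the site (e.g. example.com/robots.txt). Note that this only addresses future crawling; it does not remove content already collected.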