Our notion of privacy changes over time and with the evolution of technology. When Web 2.0 swept the world, social networks transformed the concepts of private and public, exposing us all to new social rules. The advent of programmatic media, e-commerce, and virtual interactions between brands and people set off an avalanche of data in the digital environment. What we consume, what we like, who we are, where we are going: it's all somewhere on the web.
In Brazil specifically, the scenario began to change with the arrival of the General Data Protection Law (LGPD), which followed the European Union's GDPR and, since 2019, has been setting limits in what had been a no man's land. For companies that collected large volumes of information about customers and prospects to sharpen their marketing strategies, these data protection laws were a game changer.
Under the new rules, data captured or acquired without a legal basis could no longer be freely used for brand communication, at the risk of sanctions or fines. What initially seemed like a problem, though, turned into an opportunity: the LGPD has made companies' approach to consumers more precise and effective, increasing conversion rates. Yes, talking only to those who want to listen to you really does work.
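To make that rule concrete, here is a minimal sketch in Python of how a marketing audience might be filtered so that only contacts with a documented legal basis receive brand communication. The legal bases, field names, and helper function are illustrative assumptions, not part of any official LGPD tooling.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative set of legal bases; the LGPD recognises several, consent being the best known.
VALID_LEGAL_BASES = {"consent", "contract", "legitimate_interest"}

@dataclass
class Contact:
    email: str
    legal_basis: Optional[str]  # None means no documented basis for using this person's data

def eligible_for_marketing(contacts: list[Contact]) -> list[Contact]:
    """Keep only contacts whose personal data has a documented legal basis."""
    return [c for c in contacts if c.legal_basis in VALID_LEGAL_BASES]

audience = eligible_for_marketing([
    Contact("ana@example.com", "consent"),
    Contact("joao@example.com", None),  # captured without a legal basis: excluded
])
print([c.email for c in audience])  # ['ana@example.com']
```

The point is the filter itself: communication flows only to people whose data the company can lawfully use, which is exactly why conversion rates tend to improve.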
A new turning point may now be at hand with the arrival of ChatGPT and the discussions about ethics, copyright, productivity, and the future of work that surround artificial intelligence. But the truth is that while these conversations unfold, there is already simple, clear guidance that companies and individuals can follow to share data safely on AI platforms.
Some high-level principles are easy to adopt: stay conscious of the potential risks associated with AI and proactively introduce measures that ensure responsible and ethical use. With those safeguards in place, innovation can thrive with both human oversight and collaboration.
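As one concrete, easy-to-adopt measure, the sketch below shows a hypothetical pre-processing step that redacts obvious personal identifiers from a prompt before it is sent to any AI platform. The patterns and function names are illustrative assumptions; a real deployment would need far more robust detection.

```python
import re

# Illustrative patterns only; production systems need far more robust PII detection.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "cpf": re.compile(r"\b\d{3}\.\d{3}\.\d{3}-\d{2}\b"),  # Brazilian taxpayer ID format
    "phone": re.compile(r"\+?\d{2}\s?\(?\d{2}\)?\s?\d{4,5}-\d{4}"),
}

def redact_personal_data(text: str) -> str:
    """Replace obvious personal identifiers with placeholders before sharing text externally."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

prompt = "Summarize the complaint from joao.silva@example.com, CPF 123.456.789-09."
print(redact_personal_data(prompt))
# Summarize the complaint from [EMAIL REDACTED], CPF [CPF REDACTED].
```

Small habits like this one keep the benefits of AI tools while ensuring that personal data never leaves the company's control in the first place.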