What to Know About Data Clean Rooms and Generative AI
Data clean rooms are increasingly a go-to for data collaboration between partner companies or internal enterprise teams, enabling joint analysis while remaining compatible with most privacy requirements. The prospect of integrating new technologies like generative AI into these environments has raised questions about consent, bias and risk. When it comes to upholding protections of consumer data, LiveRamp recommends avoiding untested technologies.
Machine learning, artificial intelligence and, most recently, generative AI have brought exciting opportunities to the world of marketing. These technologies have benefited marketers in a number of ways, automating everything from customer experience and SEO to content creation and image generation.
While this automation is undoubtedly enticing, our industry must proceed with caution as stewards of customer information. After making massive strides to strengthen protections of individuals’ data in recent years, we can’t let that progress be undone by over-indexing on untested technologies.
Generative AI Does Not Meet Data Privacy or Ethics Standards (Yet)
The transition away from third-party cookies has been an immense undertaking for the advertising function, but as a result, the way companies engage with consumers has become more transparent and effective, a win-win for all. The same could be said of favouring data pseudonymisation over solutions like hashed emails (HEMs), which, when used on their own and without proper controls, carry a risk of re-identification of personally identifiable information (PII). More recently, the adoption of privacy-enhancing technologies (PETs) ensures data collaboration can occur while minimising the movement of data. For these reasons, people-based identifiers, pseudonymisation of data, and PETs are core tenets of LiveRamp’s offerings as a Data Collaboration Platform.
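To see why a bare hashed email is not anonymous, consider this minimal sketch (the addresses and segment label are hypothetical). Because the set of plausible email addresses is enumerable, anyone holding a candidate list can hash each entry and match it against the “anonymised” records:

```python
import hashlib

def hem(email: str) -> str:
    """Hashed email (HEM): SHA-256 of the normalised address."""
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

# A partner shares "anonymised" records keyed only by HEM.
shared_records = {hem("jane.doe@example.com"): {"segment": "auto-intender"}}

# Anyone with a candidate list of emails (a CRM export, a breach dump)
# can re-identify the records simply by hashing each candidate.
candidates = ["john@example.com", "jane.doe@example.com"]
reidentified = {e: shared_records[hem(e)] for e in candidates
                if hem(e) in shared_records}
print(reidentified)  # links the "anonymous" record back to Jane
```

This is why the article treats HEMs alone as insufficient: hashing hides the address only from someone who cannot guess it, which is exactly the control that pseudonymisation with proper key management is meant to add.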
The marketplace is now seeing rapid adoption of data clean rooms to propel the capabilities of data forward in a privacy-centric and secure way. Introducing something like ChatGPT into a clean room environment poses an immense risk to privacy and credibility, because there is no ability to control for accuracy, bias, fakes or lack of consent in the underlying training data, which is scraped from across the open internet.
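The privacy-centric behaviour of a clean room can be sketched in a few lines: queries return only aggregates, never row-level identifiers, and results below a crowd-size threshold are suppressed. The function name and threshold below are illustrative assumptions, not any vendor’s actual API:

```python
MIN_AUDIENCE = 50  # crowd-size threshold: suppress small, re-identifiable cells

def clean_room_overlap(advertiser_ids: set, publisher_ids: set):
    """Return only the aggregate overlap count, never the matched IDs,
    and suppress any result below the privacy threshold."""
    overlap = len(advertiser_ids & publisher_ids)
    return overlap if overlap >= MIN_AUDIENCE else None

a = {f"id{i}" for i in range(100)}        # advertiser's matched audience
b = {f"id{i}" for i in range(40, 200)}    # publisher's matched audience
print(clean_room_overlap(a, b))           # 60 -- large enough to report
print(clean_room_overlap(a, {"id1", "id2"}))  # None -- suppressed
```

A generative model dropped into this environment would sit outside these controls: its training data was never matched, consented, or thresholded, which is the mismatch the paragraph above describes.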
Protecting Your Clean Room Data
There’s a reason voices across industries are sounding the alarm. Concerns about generative AI and consent were a hot topic at the International Association of Privacy Professionals (IAPP)’s Global Privacy Summit, notable tech founders have penned an open letter urging caution, and governments from Italy to the White House have either enacted bans or proposed rules.
If companies choose to incorporate generative AI into their marketing functions today, they should be asking their partners very specific questions to avoid risk. These will vary by business and use case, but may include:
- Does the data being used to inform the algorithm have individuals’ consent?
- How does the use of generative AI control for biases in the data, including systemic biases such as demographic skew, or the selection biases that arise when data isn’t adequately randomised?
- What checks and balances are in place to ensure accuracy of data analysis?
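The bias question above can be made concrete with a simple demographic-parity check: before data feeds any model, compare outcome rates across groups and flag large skews. The field names and rows below are hypothetical, and real audits go much further, but this is the basic shape of such a check:

```python
from collections import defaultdict

def selection_rates(records, group_key, outcome_key):
    """Outcome rate per group -- a basic demographic-parity check."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for r in records:
        counts[r[group_key]][0] += int(r[outcome_key])
        counts[r[group_key]][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

training_rows = [
    {"region": "north", "included": 1},
    {"region": "north", "included": 1},
    {"region": "south", "included": 1},
    {"region": "south", "included": 0},
]
rates = selection_rates(training_rows, "region", "included")
print(rates)  # {'north': 1.0, 'south': 0.5} -- a 2x skew worth flagging
```

A check like this is answerable and auditable for first-party data inside a clean room; for a general-purpose generative model trained on scraped web data, no one can run it, which is the core of the concern.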
Generative AI is a promising disruptor in marketing and adjacent operations, but in its current form it can do more harm than good. Marketers and their partners must reflect on how they’re using this technology and whether the benefits outweigh the risks.
At LiveRamp, we hold ourselves accountable to the highest standards of privacy, security and compliance. We look forward to seeing generative AI mature in hopes of eventual readiness in the privacy-centric world.
Data Clean Room Use Cases and Fundamentals: Learn More
Enhanced data clean rooms offer a privacy-conscious way for marketers to collaborate and achieve better outcomes. For those seeking practical guidance on how to get started, check out The Rise of the Data Clean Room.