The Clearview AI Ruling Is Troubling


Backstory

Ever heard of Clearview AI?

It’s an AI company that has scraped 30 billion facial images from social media. They probably have yours. Its facial recognition is staggeringly accurate, and it rides the same wave of tech innovation currently leading us into an AI-first world of work.

They sell to governments and law enforcement agencies: a point of growing significance given the current climate of geopolitical change. They also recently won an important victory in the UK courts, overturning a ruling by the ICO (the UK’s data and privacy regulator). There are implications.

A Tale Of Our Times

So why am I writing about Hoan Ton-That’s resignation, and should you even care about it?

It’s a perfect story for the times we now live in. Disruption is breaking out everywhere: politically, economically, socially and technologically. These forces all interconnect, which magnifies the impact. If ever there was a time to spin up an in-house PEST framework, it is now!

For the same reasons, it is also the right time to be investing in your leadership’s foundation understanding of AI technologies. This is because deeper understanding results in better organisational decision making. If disruption is like a storm that demands expert navigation, then Clearview is just one blip on the radar that needs interpreting and responding to.

Without broad executive understanding of how and why this generation of AI disrupts, organisations lack the agency to shape whether it is going to empower them or not.

What do I mean by that?

The default version of AI’s impact, as told in the public narrative and endorsed by powerful vested interests, is not going to work for the benefit of the majority. But this isn’t inevitable. It’s just the most commonly reported version right now of how things might end up.

As ever, there is choice in how we apply AI and the benefits we want from it. That is, provided we can articulate our needs, understand the significance of stories such as Clearview’s, and adapt as a result.

So back to the Clearview story and what’s been going on.

Ton-That’s departure, after years of legal battles, regulatory fines exceeding $100 million, and struggles to secure federal contracts under the Biden administration, shows the challenges facing organisations that operate at the intersection of artificial intelligence and privacy rights.

AI Readiness Is More Than The Technology

This is an issue lurking under the surface of all AI-powered customer engagement, yet it is still largely being ignored as an industry debate.

His replacement by co-CEOs Hal Lambert and Richard Schwartz – both with deep connections to the current US administration – suggests a deliberate shift toward leveraging anticipated policy changes in biometric surveillance and immigration enforcement.

This leadership transition has happened at a time of unresolved ethical debates about Clearview’s core technology, which scraped 30–60 billion online images without consent to build its facial recognition database.

Of course, they are not alone. In another post, I’ll cover the growing legal backlash towards the way data is acquired to train LLM foundation models.  

But back to the issue of your face being scraped amongst 30+ billion others. Is consent required? It depends. We live in a world of differing regulatory philosophies, and organisations can expect to deal with yes, no and maybe as the emerging legal responses.

Are there other consequences to be aware of?

For instance, does any liability flow downstream from LLM builders to corporate users in these judgements? Last year, Microsoft’s top legal officer promised to cover the costs of any customer caught on the wrong side of a legal judgement while using its AI. No doubt a promise shareholders hope is already forgotten!

Might public sentiment swing from indifference to greater concern, once people start to feel tangible threats to their freedoms and privacy? Just a few years ago most of us would flick away such an idea as being highly improbable. But catch up on current news for five minutes and things now look much less certain!

As a promoted use case, Clearview can identify the masked faces of individuals exercising their democratic right to protest, by matching them against corresponding social profiles in its AI-optimised database. At that point, personal data becoming a commodity for state control may start to feel like a conscious threat to many more people.
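For those wanting a concrete intuition, here is a minimal sketch of how embedding-based face matching of this kind typically works: each face is reduced to a numeric vector, and a probe image is compared against stored vectors by similarity. All names, the vector size and the threshold below are hypothetical illustrations of the general technique, not Clearview’s actual implementation.

```python
# A minimal sketch of embedding-based face matching, assuming a pre-computed
# database of embedding vectors. Names, sizes and the 0.6 threshold are
# illustrative assumptions only.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face-embedding vectors (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_face(probe: np.ndarray,
               database: dict[str, np.ndarray],
               threshold: float = 0.6) -> list[tuple[str, float]]:
    """Return (profile_id, score) pairs whose stored embedding resembles the probe."""
    scores = [(pid, cosine_similarity(probe, emb)) for pid, emb in database.items()]
    return sorted((s for s in scores if s[1] >= threshold),
                  key=lambda s: s[1], reverse=True)

# Toy demo: a noisy (e.g. partially occluded) view of profile_42 still matches it.
rng = np.random.default_rng(0)
db = {f"profile_{i}": rng.normal(size=128) for i in range(1_000)}
probe = db["profile_42"] + rng.normal(scale=0.3, size=128)
print(match_face(probe, db)[:3])  # profile_42 tops the list
```

The uncomfortable part is the scale: the comparison logic is the same whether the database holds a thousand profiles or 30 billion; only the indexing infrastructure around it changes.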

As stated earlier, AI can empower organisations, or not. We can use it in ways that people trust, or not. Becoming an AI-first organisation is a different mountain to climb if stakeholder trust in AI starts to sour from stories like this one.

Of course, there are too many variables to have certainty on any of these emerging scenarios. And taking sides is not the point either.

My point is that AI readiness is not just about the rewiring. It’s about understanding and responding to the broader consequences for people, organisations and the way they are regulated.

With that in mind, here’s more on the backstory, which illustrates the challenges organisations face in crafting relevant AI policy.

How Clearview Wriggled Free

Clearview’s core ethical challenge remains its creation of a biometric database without individual consent. The company’s automated web scraping harvested 30 billion public images, later expanded to 60 billion, sourced from social media, employment sites and news platforms.

It resulted in the UK Information Commissioner’s Office issuing a £7.5 million fine and an enforcement notice against Clearview. That decision was then successfully appealed.

Clearview AI’s legal team argued that its facial recognition services were provided exclusively to non-UK law enforcement and national security agencies, thereby positioning its activities as extensions of foreign sovereign operations. The tribunal agreed.

That fancy legal footwork had consequences.

It now insulates Clearview from UK GDPR compliance obligations, and in doing so creates a permissive environment for unchecked biometric harvesting. As a result, individuals cannot exercise GDPR rights such as erasure or access against Clearview.

It also enables other multinational AI organisations to use the same argument: that data collection and algorithmic processing reside outside UK oversight if end-use aligns with foreign government functions. And there are no prizes for guessing how attractive that becomes.

Of course, the story does not end here. Here’s the counter punch.

The ICO warned the tribunal that its decision undermined the UK’s ability to protect residents’ biometric data from mass scraping by offshore entities. In response, proposed amendments to the UK Data Protection and Digital Information Bill could expand the ICO’s jurisdiction over foreign entities processing UK data, irrespective of end-use.

The Consequences

Parallel efforts in the EU’s AI Act to classify facial recognition databases as high-risk systems may also pressure Clearview to alter its scraping practices.

Unsurprisingly, 78% of surveyed EU privacy experts condemn the practice, while legal scholars argue that treating publicly available images as free for commercial exploitation violates the OECD’s AI guidelines and their call for “respect for human rights and democratic values.”

Aligned with this pushback, ethical AI frameworks from the Institute of Electrical and Electronics Engineers and the Algorithmic Justice League increasingly treat non-consensual facial recognition as a form of data violence.

On the other side of this hotly contested debate, the company claims its First Amendment rights protect web scraping. And JD Vance’s recent message to Europe was that AI regulation undermines its competitiveness.

So how should organisations respond?

Field Awareness and Foundation Understanding

Are these just competing views that will resolve themselves over time, and therefore not something organisations need to consider? Or is a more proactive approach needed?

This depends on your values and priorities as an organisation.

For instance, independent audits reveal Clearview’s algorithms exhibit significant racial and gender bias, with false positive rates up to 10% higher for African American women compared to white males. 
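To make that statistic concrete, here is a hedged sketch of how a bias audit might compute the false-positive-rate gap between demographic groups. The group names and outcome counts are invented purely for illustration; they are not the audit data cited above.

```python
# A minimal sketch of a bias-audit metric: false positive rate per group.
# Group names and outcome counts below are hypothetical, for illustration only.

def false_positive_rate(outcomes: list[tuple[bool, bool]]) -> float:
    """outcomes: (predicted_match, true_match) pairs for one demographic group."""
    false_positives = sum(1 for pred, truth in outcomes if pred and not truth)
    negatives = sum(1 for _, truth in outcomes if not truth)
    return false_positives / negatives if negatives else 0.0

# Hypothetical audit results: every pair here is a true non-match,
# so any predicted match is a false positive.
groups = {
    "group_a": [(True, False)] * 2 + [(False, False)] * 98,   # 2% FPR
    "group_b": [(True, False)] * 12 + [(False, False)] * 88,  # 12% FPR
}

rates = {name: false_positive_rate(outcomes) for name, outcomes in groups.items()}
gap = max(rates.values()) - min(rates.values())
print(rates, f"disparity: {gap:.0%}")  # a 10-point gap, mirroring the audits cited
```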

These disparities stem from training data skewed toward overrepresented demographics on scraped platforms like LinkedIn and Facebook. And to be clear, this type of algorithmic bias is not exclusive to Clearview. But, given current US policy on DEI, many organisations are now reassessing their own policies. It might be a live debate in yours as well.

Whatever your answer, is it underpinned with clear reasoning? And does it align with the way you are developing your AI strategy and its intended impact on customers and colleagues?

This is a clear example of why organisations cannot afford to ignore these types of issues and should be proactively equipping themselves to engage.

Navigating disruption and enabling safe passage to an AI-powered version of your organisation requires foundation understanding. Anyone in any kind of leadership role needs a mental model and a vocabulary for how AI works, the challenges it generates and the North Stars that are up for grabs.

AI readiness is not just about a measurable end state in the sense of milestones and maturity models. It is also about having sufficient field awareness to understand what’s going on and decode the signal from the noise. Agentic AI is an obvious example right now.

AI readiness is about engaging with the world as it changes and picking out the issues that matter to your organisation’s ability to thrive in an AI first world.

And even though it feels like a race without brakes, sometimes it’s worth re-equipping before trying to advance further.