Why do we need to rehumanise technology, and why should you care?
Human-centred values are imperative to the continued development and success of new technology in the fourth industrial revolution. But with the pace of change in today’s technology-driven world, it has become difficult to tell whether new technology products are built to serve us or to use us.
Technology needs to be re-humanised. We need to go back to core human values when we teach machines and build the products and services of the future, so that they benefit more people than they harm. This means that when we come up with an innovative technology solution to a problem we want to see solved, we need to think not just about the ends, but about the means and the potential negative implications as well. And organisations need to live and breathe inclusive human values publicly and continuously if they are to retain consumer trust. Here is why.
From the mid-20th century and the invention of the computer, machines and digital automation made our jobs easier, but not obsolete. Through the connectedness of the internet, social media platforms sprouted in small, closed communities such as university campuses to enhance the networks we already had in our physical proximity (Facebook being the obvious example).
Exponential change
The fourth industrial revolution is happening all around us. What makes it unprecedented is that, unlike the gradual, relatively evenly paced linear progression we are used to and can keep a degree of control over, the progression of new technological developments is exponential. At first, technology touted as having an enormous impact doesn’t live up to the initial hype, and its progress seems disappointingly slow. But while most of us dismiss such technologies as science fiction and carry on with our daily lives, those in the field continue to chip away until small incremental iterations compound and snowball to bring their visions to life.
Artificial intelligence (AI), autonomous vehicles, 3D printing and genome sequencing, to name just a few. Your smartphone has millions of times more computing power than the NASA computers that sent Apollo 11 to the moon. For several years now, Google’s driverless cars have had fewer accidents than human drivers have ever managed, and my five-year-old daughter will likely never experience the rite of passage of passing a driving test to get a licence, because her car will drive itself. This kind of change is happening across the globe, in almost every industry and every country. Its speed and consequences are too much and too fast for us to keep up with, let alone get a grip on or control over.
Our data feeds the machines
Almost all organisations now rely on user data to drive decision-making, be it for marketing or to determine how to develop their products. Beyond this, machine learning, a way to automate products or services, relies on vast swathes of historical data to train machines to make the judgments and decisions that humans used to make. Once trained, algorithms, rather than people, decide everything, from what you watch next on Netflix to whether you get the loan or the job you’ve applied for.
High-profile instances of large technology companies misappropriating personal data (such as the harvesting of Facebook, now Meta, users’ data by Cambridge Analytica) have started to open our eyes to how much value is derived from data that we give away for free. But how our data is used by organisations, particularly the larger ones that purport to give us ‘free’ products, is rarely made abundantly clear. For example, not many realise that when Facebook runs campaigns prompting users to post pictures of themselves, as it did with its “ten year challenge” in 2019, it is widely thought to be training its facial recognition algorithms. Similarly, until recently, every time we identified traffic lights or road signs to prove that “I am not a robot” when completing online forms, we were helping Google train the image recognition systems behind its autonomous vehicles. Yes, many products give us a degree of convenience, but these are clear examples of technology using us rather than serving us.
Regulation doesn’t cut it
When something moves this fast, regulators and lawmakers can’t keep up. And we’re not well set up to make laws that govern the whole world. With rapidly evolving technologies, despite the best of intentions, legislating too early and too rigidly can not only end up leaving end-users like you and me without the protections we deserve, but can further perpetuate the issues that the regulation is intended to address.
In 2018, the EU led the way with the GDPR (General Data Protection Regulation), the most robust data privacy legislation we had seen. Whilst the GDPR is a European law protecting individuals resident in the EU, any international organisation which has, or markets to, end-users or customers in the EU falls within its remit. Now other states and nations, from California to China, are seeking to overhaul their privacy laws to align more closely with the GDPR. In addition, there is a push to regulate AI more generally. The EU, Singapore, and now even China have proposed regulations to govern the use of AI, largely built on pillars of core principles and on assigning AI systems to risk categories that determine how much human oversight they require.
But important questions need to be asked about who these regulations serve. And if they are intended to serve the end-users of these technologies and keep them protected, how easy is it for these laws to hit the mark?
For starters, lawmakers are not typically technologists. Despite their virtuous intentions to protect individuals, the methods used to implement the rules follow a one-size-fits-all approach that doesn’t account for an organisation's size, resources, or technology infrastructure. The Googles, Facebooks and Amazons of our world have the power and resources to attack from all angles, whether by paying to comply with what they can, lobbying against whatever they don’t like, or brazenly breaching those laws and paying the fines, which, however high, are not enough to make a significant dent in their bottom lines. For instance, Amazon was fined a record 746 million euros over its personal data processing practices in July 2021, and a few months later Facebook’s WhatsApp was fined 225 million euros for failing to properly explain its data processing practices in its privacy notice.
And the startups and creative innovators that seek to disrupt the technology market can’t really afford to comply.
The consumers the laws are intended to protect now have to deal with reams of terms, conditions and privacy notices in the struggle to understand how their data is being used. And for those who do know their rights, those rights are not always so simple to enforce. This is where we need to come back to first principles: what it means to be human, what drives us, and how technology can continue to serve us. We can’t beat the machines, so we need to double down on our capacity for creativity and compassion to direct the machines in the right way.
This article was first published in SAARI Collective; you can read it here.