The future trap
It seems as though everyone is obsessed with the future. What will future digital markets look like? Will artificial intelligence (AI) create a positive or negative future? What will the future of work look like? How can we better prepare our businesses and families for the future? In our rush to peer forward, we are missing many of the early indicators of how our current digital culture and our present actions are shaping the very future we are scrambling to predict.
New products, new markets and new technologies do not happen in a vacuum. They emerge as a direct product of the cultural context of innovation, which includes our relationship with technology and our attitude about its value and role in our lives. Take Big Data, hailed by many executives as ‘the new oil.’ At first, this description seems apt: data is a valuable and plentiful resource that can be extracted, refined and sold. It is easy to think of data as something that is impersonal and separate from the humans who are creating it.
And yet, every day we share our locations, our search queries, our caloric intake, our sleep quality, our hormonal state, our favourite movies and books, our closest relationships, our purchasing patterns, our conversations and our habits. We live in an age of normalised digital intimacy, where we share our most private thoughts with our devices without a second thought. Data is not the new oil; it is the new blood.
To put it another way, the data we are sharing is an extension of our identities: who we are, our hopes, our fears and our deepest secrets. We overlook this human cost, forgetting that the ‘insights’ we are chasing to improve business functions are being ‘mined’ from real people. If we do not treat data as a precious human by-product deserving of respect, then none of the products and services created within this technological context will reflect that understanding either. If human-centricity is not at the core of your innovation strategy today, how can you expect your products and services to create a human-centric tomorrow?
“We live in an age of normalised digital intimacy, where we share our most private thoughts with our devices without a second thought. Data is not the new oil; it is the new blood.”
Our technological landscape is dominated by a handful of powerful companies that have created information pipelines, propelling the rise of the first global digital culture through the unprecedented exchange of ideas, content and philosophies across borders and oceans. As policymakers struggle to counteract the rise of misinformation, protect consumer data privacy and address the misuse of social networks by malicious actors, they often only examine how the technology is being applied instead of deep-diving into the values and worldviews that are embedded within the code of the platforms themselves.
Technology is the manifestation of belief systems, and each of these organisations has a specific idea of what the future should look like. Whether it is Mark Zuckerberg’s push for radical transparency, Jeff Bezos’ ambition of a globally connected supply chain or Jack Dorsey’s belief that complex ideas can be summarised in 280 characters, their views are actively shaping the world we live in today. Technology not only enables the global exchange of ideas; it also carries values and norms that influence our behaviours, and with them entire economies and societies. As individuals, our use of these platforms amounts to an endorsement, and we become active co-creators in bringing that vision to life.
We talk about the future as though one day we will wake up and it will have suddenly arrived. The irony is that in order to predict a better future, we must invest in a better today. Leaders must be concerned with the technological conditions we are actively creating in the present. Instead of asking ‘will AI be good or bad for the future,’ we should be asking: ‘whose vision of the future are we currently supporting through our investment of capital, attention, and time?’
“We talk about the future as though one day we will wake up and it will have suddenly arrived. The irony is that in order to predict a better future, we must invest in a better today.”
Consider how, in our rush to ‘future-proof’ careers, we have funnelled kids towards STEM (science, technology, engineering and maths) disciplines at the expense of the arts, history, sociology and other human-centric fields. What is the result? We have a generation of brilliant technical minds that know how to code, but not what to code; that understand the algorithms, but not the human context needed to deploy them ethically. Will we be surprised when the world they dream up also prioritises technology over empathy? Companies are already struggling with the unintended consequences of bias. We have seen hiring algorithms exclude women from recruiting pipelines and social media filters that ‘beautify’ users by whitening their skin tones. We all have biases, and we need to be extra careful about which ones we pass on to machines as we teach them to interpret our complex world. Otherwise, how can we be surprised when those same biases are amplified at scale in the tools we build for tomorrow?
The good news is this: it is not too late to make positive changes. This is not a technology issue; it is a leadership issue that requires focusing on the implications of today’s decisions. As investors, we can insist on supporting organisations whose vision of the future is sustainable and ethical. It is the only thing we can do, and it is the only thing that matters. If we do not actively push for diversity, ethical standards, data protections, transparency, oversight and equality today, then we should not expect them to appear magically in the future.