Computing Fast, Thinking Slow & Long Term
From landings on Mars to political deepfakes, exponential advances in computing are expanding our world and confounding our sense of reality. Progress in machine learning and artificial intelligence has swiftly outpaced expectations in performance and potency. The risk is that we become hypnotized by this power and surrender our responsibility for harnessing it toward what matters, and for understanding the boundary between what technology can do for us and what remains our exclusive duty. Should we choose to, we have an opportunity to engage the potential of machine learning, virtual reality, and augmented reality to connect us, broaden our horizons, and enhance the impact of our decisions.
In his book Thinking, Fast and Slow, Daniel Kahneman describes human thinking as a constant collaboration between two systems. The first, which is fast, intuitive, and approximate, is always on. The second, which is slow, deliberate, and meticulous, engages only when called upon. The former is the locus of our expertise, but also of our heuristics and biases. It is the system we use for day-to-day navigation of our world, for performing routine actions, and for delivering snap decisions. The latter is conscious and focused, demanding our full attention to both process and outcome. As we grow and mature, we increase our control over these two systems: we regulate when to rely on the speed and efficiency of our intuitive system, and when to be more cautious and employ our deliberative, slower, more precise thinking. It is this thoughtful regulatory capacity that takes advantage of both systems and balances their qualities.
Computing systems have infiltrated almost every decision, policy, and plan made by us and, increasingly, made for us on our behalf. Like individuals relying on two systems of judgment with different attributes, human societies now irreversibly rely on two systems of judgment and decision making: the computing system and the human system. Computing systems are fast, and their scope and domain of impact have been expanding thanks to progress in machine learning and high-performance computing. More often than not, however, these systems have amplified human intentions, good and bad, influencing human thinking rather than complementing it. We are still far from a well-regulated computing culture in which complementary decision-making processes balance each other and correct each other's shortcomings.
Indeed, our computing systems often reproduce, magnify, and legitimize disparities and inequities through the people who build them, the data they are trained on, and the interests they serve. The relatively narrow profile of system designers has led to face-recognition software that works well for people like them (mostly white males of European descent) but very poorly for everyone else. Most natural language processing systems derive their semantics from word associations in existing texts in which gender, racial, and other stereotypes are deeply embedded. They perpetuate those stereotypes, and they do so opaquely and often with serious consequences.
Systems often serve stakeholders bereft of altruism. Hospital emergency triage systems sponsored not by health professionals but by insurance companies calculate priority based on the estimated future cost of care, an estimate derived from historical data. Beyond the questionable ethics of such a metric, it ended up encoding another bias: Black patients have historically been prescribed less costly care. As a result, they were given lower priority, and the triage systems magnified this inequity without bringing it to light. We continue to discover the many ways in which our computing systems, particularly machine learning systems, amplify not only our intentions but also unintentional and subconscious biases that are deeply embedded in our ways of thinking and not always readily accessible to us.
The above state of affairs is not inevitable but a choice, albeit often a tacit one. Computing systems have the potential to complement and develop our thinking capacity beyond what we have been using them for. I focus here on two examples where computing systems can expand and develop human thinking and lead to a richer and better-regulated overall system: system-level thinking and long-term thinking, both necessary components of an equitable and sustainable world for current and future generations.
In the last century, science has been primarily reductionist, built on the premise that every system is the sum of its parts and that understanding a system's components leads to understanding the whole. In this approach, phenomena are repeatable and explainable in terms of action and immediate reaction. Such a model is very appealing: it reduces complexity, and it enabled most of the scientific discoveries and innovations of the last 100 years.
As powerful as it is, this model has limited effectiveness, especially for so-called complex systems. In these systems, actions and consequences are not related one-to-one; instead, effects are cumulative. It is not a single action but the accumulation of repeated actions that eventually produces a reaction. The reaction is not immediate but arrives with non-negligible delay, clouding the connection, and it is not deterministic at the level of detail but emerges in the form of patterns. Many natural and human-made systems are complex systems. They cannot be understood in a reductionist fashion, and as a result they evade our intuitive thinking and challenge deliberate analysis. Most of the challenges we face as a society and as a civilization are complex-systems issues, whether climate change, race relations, or socio-economic and geographic inequities. In all of these cases, human thinking is limited or untrained in tracing the remote but real connections between our actions and inactions and the systemic consequences we observe and experience. Computer simulation tools and agent-based models have developed significantly in the last decade, enabling us to better understand, reason about, and experiment with complex systems.
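The delayed, cumulative dynamics described above can be made concrete with a toy agent-based simulation. This is purely an illustrative sketch, not a model from the text: the agents, thresholds, and delay are all hypothetical, chosen only to show how many individually negligible actions can accumulate and trigger a system-level reaction long after the fact.

```python
# Toy agent-based model (illustrative only): many small, individually
# harmless actions accumulate until a threshold is crossed, and the
# system-level reaction appears only after a delay, obscuring the link
# between action and consequence. All parameters are hypothetical.

class Agent:
    def __init__(self, emission=1.0):
        self.emission = emission  # tiny, seemingly harmless contribution per step


def simulate(n_agents=100, steps=50, threshold=2000.0, delay=5):
    """Return the step at which the delayed system-level reaction first
    appears, or None if it never does within the simulated horizon."""
    agents = [Agent() for _ in range(n_agents)]
    accumulated = 0.0
    history = []        # accumulated total at each step
    reaction_step = None
    for t in range(steps):
        # No single agent's action matters; only their accumulation does.
        accumulated += sum(a.emission for a in agents)
        history.append(accumulated)
        # The reaction responds to the state `delay` steps in the past,
        # so cause and effect are separated in time.
        lagged = history[t - delay] if t >= delay else 0.0
        if reaction_step is None and lagged >= threshold:
            reaction_step = t
    return reaction_step
```

With the defaults, the accumulated total crosses the threshold at step 19, but the reaction only registers five steps later, at step 24; an observer watching step 24 sees no unusual action at that moment, which is exactly the pattern that defeats intuitive, reductionist reasoning.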
The second area in urgent need of cultivation is long-term thinking. Humans' propensity to prioritize the urgent over the important, and their appetite for short-term gratification, has been supported and encouraged by technology. As a result, we are making decisions today that endanger the voiceless humans of tomorrow. Our brains are unable or unwilling to grasp the connection between our actions today and the pain and suffering they generate in the future. Roman Krznaric, author of The Good Ancestor: A Radical Prescription for Long-Term Thinking, describes our behavior as "the present colonizing the future," whereby we treat the future as "a distant colonial outpost where we dump ecological degradation, nuclear waste, public debt and technological risk." At the same time, Krznaric sees technology as a powerful resource that can be directed toward more long-term thinking and greater generational equity. Scenario planning, simulations, and virtual and augmented reality have proven to be effective tools for making the invisible visible, the abstract concrete, and the faraway immediate and present. The Future Energy Lab created by Superflux allowed UAE government officials to breathe the air of 2034 under different scenarios. This led the government to make significant investments in renewable energy.
Humans rely heavily on the complementarity of two thinking systems: a fast, intuitive, and approximate one, and a slow, deliberate, and meticulous one. These two systems are not fixed: as we regulate their use and learn new deliberate, analytical approaches, we also hone and evolve our intuitive system. Similarly, societies rely on computing systems that are fast, efficient, and able to process large quantities of data to handle increasingly complex tasks. We must not surrender our responsibility to harness this power to educate, hone, and complement human thinking, and to focus it on what is both important and urgent.