It's common to frame discussions of societal transitions by focusing on the new skills that become essential. But instead of looking at what we're learning, perhaps we should consider the opposite: what becomes safe to forget? In 2018, Science magazine asked dozens of young scientists what schools should be teaching the next generation. Many said that we should reduce the time spent memorizing facts and give more space to creative pursuits. As the internet grows ever more powerful and comprehensive, why bother to remember and retain information? If students can access the world's knowledge on a smartphone, why should they be required to carry so much of it around in their heads?

Civilizations evolve through strategic forgetting of what were once considered vital life skills. After the agrarian revolution of the Neolithic era, a farm worker could afford to let go of much woodland lore, skills for animal tracking, and other knowledge vital for hunting and gathering. In subsequent millennia, as societies industrialized, reading and writing became vital, while the knowledge of plowing and harvesting could fall by the wayside.

Many of us now rapidly get lost without our smartphone GPS. So what's next? With driverless cars, will we forget how to drive ourselves? Surrounded by voice-recognition AIs that can parse the most subtle utterances, will we forget how to spell? And does it matter? Most of us no longer know how to grow the food we eat or build the homes we live in, after all. We don't understand animal husbandry, or how to spin wool, or perhaps even how to change the spark plugs in a car. Most of us don't need to know these things, because we are members of what social psychologists call "transactive memory networks."

We are constantly engaged in "memory transactions" with a community of "memory partners," through activities such as conversation, reading, and writing. As members of these networks, most people no longer need to remember most things. This is not because that knowledge has been entirely forgotten or lost, but because someone or something else retains it. We just need to know whom to talk to, or where to go to look it up. The inherited talent for such cooperative behavior is a gift from evolution, and it expands our effective memory capacity enormously.

What's new, however, is that many of our memory partners are now smart machines. But an AI — such as Google search — is a memory partner like no other. It's more like a memory "super-partner," immediately responsive and always available. And it gives us access to a large fraction of the entire store of human knowledge.

Researchers have identified several pitfalls in the current situation. For one, our ancestors evolved within groups of other humans, a kind of peer-to-peer memory network. Yet information from other people is invariably colored by various forms of bias and motivated reasoning. People dissemble and rationalize; they can be mistaken. We have learned to be alive to these flaws in others, and in ourselves. But the way AI algorithms are presented inclines many people to believe that they are necessarily correct and "objective." Put simply, this is magical thinking.

The most advanced smart technologies today are trained through a repeated process of testing and scoring, in which human beings still ultimately sense-check and decide on the correct answers. Because machines must be trained on finite datasets, with humans refereeing from the sidelines, algorithms have a tendency to amplify our pre-existing biases — about race, gender, and more. An internal recruitment tool used by Amazon until 2017 is a classic case: trained on the decisions of the company's HR department, the algorithm was found to be systematically sidelining female candidates. If we're not vigilant, our AI super-partners can become super-bigots.

A second quandary relates to the ease of accessing information. In the non-digital realm, the effort required to seek out knowledge from other people, or to go to the library, makes it clear to us what knowledge lies in other brains or books, and what lies in our own heads.
But researchers have found that the sheer immediacy of the internet's responses can lead to the mistaken belief, encoded in later memories, that the knowledge we sought was part of what we knew all along. Perhaps these results show that we have an instinct for the "extended mind," an idea first proposed in 1998 by the philosophers David Chalmers and Andy Clark. They suggested that we should think of our mind as not only contained within the physical brain, but also extending outward to include memory and reasoning aids: the likes of notepads, pencils, computers, tablets, and the cloud.

Given our increasingly seamless access to external knowledge, perhaps we are developing an ever-more extended "I" — a latent persona whose inflated self-image involves a blurring of where knowledge resides within our memory network. If so, what happens when brain-computer interfaces, and even brain-to-brain interfaces, become common, perhaps via neural implants? These technologies are currently under development for use by locked-in patients, stroke victims, and those with advanced ALS, also known as motor neurone disease. But they are likely to become far more common when the technology is perfected — performance enhancers in a competitive world.