As educators, we find ourselves at a unique juncture – one where technological innovation and the quest for human identity converge. Within this context, understanding the intersection of human beingness and generative AI's presence holds significant implications for how we guide our students towards a meaningful future.
Tomas Chamorro-Premuzic describes the human needs that are driving the pervasive integration of AI into our lives.
The foundations of our hyper-connected world are largely the same universal needs that have always underpinned the main grammar of human life. These three basic needs can help us understand the main motives for using AI in everyday life.
First, AI fulfills our relatedness need, that is, the desire to connect and get along with others, widening and deepening our relationships and staying in touch with friends... One of the consequences of being a hyper-social and group-oriented species is an obsession with understanding or at least trying to interpret what people do and why. Whether we realize it or not, this obsession has fueled the vast application of AI to social networking platforms.
Second, AI can be seen as an attempt to boost our productivity and efficiency, and improve our living standards... To be sure, we can (and should) examine whether this has been accomplished or not, but the intention is always there...
Third, AI is also deployed to find meaning, translating information into insights, helping us to make sense of an otherwise ambiguous and complex world. For better or worse, most of the facts, opinions, and knowledge we access today has been curated, organized, and filtered by AI, which is why AI can be equally powerful for informing or misinforming us.
Beyond these universal needs, how do we define and maintain the uniqueness of human life and continue to express our humanity?
Brian Christian was already asking these questions in 2011 in his book The Most Human Human: What Artificial Intelligence Teaches Us About Being Alive.
Is it appropriate to allow our definition of our own uniqueness to be, in some sense, reactionary to the advancing front of technology? And why is it that we are so compelled to feel unique in the first place?
Is this retreat a good thing or a bad thing? For instance, does the fact that computers are so good at mathematics in some sense take away an arena of human activity, or does it free us from having to do a nonhuman activity, liberating us into a more human life?
The latter view would seem to be the more appealing, but it starts to seem less so when we can imagine a point in the future where the number of "human activities" left to be "liberated" into has grown uncomfortably small. What then?
It is our dispositions, distinct from intellectual traits like knowledge or skills, that wield the power to govern how our human assets are utilised.
Empathy (as will be mentioned later on) is one important aspect; however, it is not enough, because we are wired to feel empathy for those who are similar to us, which can hinder diversity and inclusion. To achieve an inclusive society, relying solely on instincts is insufficient. We need to develop these human capabilities by actively engaging in intentional and deliberate kindness towards others.
Another crucial factor is self-awareness, which humans possess more than machines. Understanding ourselves better and recognising how we are evolving or potentially devolving in the age of AI is an intrinsically human characteristic. AI models like ChatGPT may respond that they lack self-awareness, but their awareness of their own lack of self-awareness is intriguing. On the other hand, humans often claim to be self-aware, even when their everyday actions and interactions can demonstrate little signs of it.
Understanding human cognition is another important element. According to Annie Murphy Paul, we think best when we think with our bodies, our spaces, and our relationships. Research emerging from three related areas of investigation has convincingly demonstrated the centrality of extra-neural resources to our thinking processes.
First, there is the study of embodied cognition, which explores the role of the body in our thinking: for example, how making hand gestures increases the fluency of our speech and deepens our understanding of abstract concepts.
Second, there is the study of situated cognition, which examines the influence of place on our thinking: for instance, how environmental cues that convey a sense of belonging, or a sense of personal control, enhance our performance in that space.
And third, there is the study of distributed cognition, which probes the effects of thinking with others - such as how people working in groups can coordinate their individual areas of expertise (a process called "transactive memory"), and how groups can work together to produce results that exceed their members' individual contributions (a phenomenon known as "collective intelligence").
Finally, creativity is significant. We can still inject creativity into our own lives by being less predictable and engaging in behaviours that go beyond what AI models anticipate. If algorithms know everything about us, we might as well be automated.
Upping the Human Element at Work
In terms of professions, several authors propose that incorporating a greater human element into one's work is vital across all knowledge-intensive occupations.
Adrian Wooldridge, for example, offers these ideas for columnists.
1. Live an interesting life. Most columnists these days proceed from school to university to the newsroom without pausing to get their boots dirty. This immerses them in a world of experiences and opinions that ChatGPT can easily recycle. Great columnists have rich personalities formed by a cacophony of experiences.
2. Cultivate ever-changing hinterlands... But having a few hinterlands is not enough: You need to keep acquiring new ones, lest AI learns to imitate them. The great management guru Peter Drucker recommended that people should devote time every day to studying new subjects that were completely unrelated to their daily lives.
3. Generate new knowledge. A worrying number of columnists spend their days doing what AI can do better and faster: searching the internet. The only salvation in the long term is to put new stuff into the machine rather than extracting old stuff. There are two ways of doing this.
One way is to immerse yourself in the real world: Interview new people (and observe their peculiarities) or visit new places (and smell the air) or stroll the factory floor.
The second way is to think fresh thoughts. This involves not just immersing yourself in the world of thought but also mixing and matching ideas from different intellectual traditions.
4. Avoid predictability. AI thrives on predictable patterns in opinion or rhetoric: The only way to survive is to keep surprising us. If you think Trump is the devil, try getting inside the skin of a Trump supporter; if you’re an anti-wokist, try seeing what the world looks like to a struggling trans teenager.
Some of the best columnists embrace a host of mutually incompatible beliefs and struggle to reconcile them.
5. Most important of all: Be funny. Humor seems to be the most inimitably human of all attributes. Certainly, AI’s attempts to be funny have thus far been dismal. It is also the acme of the columnist’s art.
What applies to columnists applies to all knowledge workers. The answer to this challenge is not to destroy the chatting machines, tempting though that is, but to rediscover what makes us human.
We need to immerse ourselves in the world rather than communing with computers all day: AI cannot feel the direction of the wind changing or sense the atmosphere of a shareholder meeting. We need to cultivate sources in the human world over dinner and a drink: Chatbots can’t get people drunk and wheedle out indiscretions. We need to relish the imperfections of the human species…
Bryan Alexander suggests the following roles for educators:
1. Teaching prompt engineering (showing how to use the tech: the best ways to write a prompt, how to iterate results, how to go beyond simple content generation). This includes teaching learners how to interact with AI so it can teach them best, apart from the human instructor.
2. Instilling a critical stance about technology. This should certainly include criticizing AI, which can take place in various ways and through different disciplines – i.e., science and technology studies, rhetoric and composition, computer science, etc.
3. Offering students emotional support, both in the class context and in their lives.
4. Facilitating group work. Right now Bard, Bing, etc. are good at interacting with a single user, but don’t seem to have much capacity for wrangling clusters of students.
5. Guiding students through a curriculum, or answering the “what to learn now?” question. Teachers do this within classes, as well as through advising.
6. Related to 5: teaching students what they need to know and aren’t interested in. This might be according to an instructor’s views, or what a larger authority (state government, community) prefers. For one example, it could take the form of encouraging an arts-loving student to learn math.
7. Structuring learning over the long haul. It’s easy now to learn something small on demand (what’s the French word for “cat”? what happens inside a biological cell?), but people have a harder time persisting in learning over weeks and years.
8. Protecting students in their learning process. This can be defense against political attacks (example: studying evolution in a creationist context) as well as in terms of social, interpersonal issues. (Related to #3 above)
9. Nurturing curiosity. Generative AI can satisfy one’s curiosity, but how to spark and support it?
10. Teaching critical thinking. It’s not easy to find consensus among educators about what that means or how we do it, and I think we overstate how much we actually do this, but it’s something we tend to value highly. (#2 above is part of this.)
11. Teaching, inspiring, and supporting creativity. There are other sources for this, but teachers can be good at helping students exercise and explore their creative sides.
12. Modeling. Former student and current teacher Justin Kirkes thoughtfully explained on Facebook: “Modeling vulnerability in the learning process, excitement in exploration, curiosity in the unknown. These aren’t behaviors that are always innate in learners, but can be called forth.”
Upping the Human Element in Class
Finally, David Brooks recommends that students in higher education develop the following distinctly human skills and dispositions that machines will find hard to replicate.
A distinct personal voice.
A.I. often churns out the kind of impersonal bureaucratic prose that is found in corporate communications or academic journals. You’ll want to develop a voice as distinct as those of George Orwell, Joan Didion, Tom Wolfe and James Baldwin, so take classes in which you are reading distinctive and flamboyant voices so you can craft your own.
Presentation skills.
The ability to create and give a good speech, connect with an audience, and organize fun and productive gatherings seems like a suite of skills that A.I. will not replicate.
A childlike talent for creativity.
“When you interact for a while with a system like GPT-3, you notice that it tends to veer from the banal to the completely nonsensical,” Alison Gopnik, famed for her studies on the minds of children, observes. “Somehow children find the creative sweet spot between the obvious and the crazy.” You want to take classes — whether they are about coding or painting — that unleash your creativity, that give you a chance to exercise and hone your imaginative powers.
Unusual worldviews.
A.I. can be just a text-prediction machine. A.I. is good at predicting what word should come next, so you want to be really good at being unpredictable, departing from the conventional. Stock your mind with worldviews from faraway times, unusual people and unfamiliar places: Epicureanism, Stoicism, Thomism, Taoism, etc. People with contrarian mentalities and idiosyncratic worldviews will be valuable in an age when conventional thinking is turbo powered.
Empathy.
Machine thinking is great for understanding the behavioral patterns across populations. It is not great for understanding the unique individual right in front of you. If you want to be able to do this, good humanities classes are really useful. By studying literature, drama, biography and history, you learn about what goes on in the minds of other people.
Situational Awareness.
A person with this skill has a feel for the unique contours of the situation she is in the middle of. She has an intuitive awareness of when to follow the rules and when to break the rules, a feel for the flow of events, a special sensitivity, not necessarily conscious, for how fast to move and what decisions to take that will prevent her from crashing on the rocks. This sensitivity flows from experience, historical knowledge, humility in the face of uncertainty, and having led a reflective and interesting life. It is a kind of knowledge held in the body as well as the brain.
Education has traditionally emphasised knowing. However, in recent years, there has been a growing recognition that beingness - attending to the core human condition, who we are rather than what we know - should also be a central attribute of education. This involves turning our focus towards recognising interconnectedness and the inherent messiness of our personal narrative, sustaining attention, navigating complexity and uncertainty, developing compassion towards self and others, and strengthening our ability to make sense and meaning of the world around us, at an individual, group and societal level.
As generative AI becomes a co-agent in our knowledge work, the endeavor to nurture and elevate the human element will ensure the continued relevance of higher education.