Let us now assume, for the sake of argument, that these machines are a genuine possibility, and look at the consequences of constructing them. To do so would of course meet with great opposition, unless we have advanced greatly in religious toleration from the days of Galileo. There would be great opposition from the intellectuals who were afraid of being put out of a job. It is probable though that the intellectuals would be mistaken about this. There would be plenty to do, trying to understand what the machines were trying to say, i.e. in trying to keep one’s intelligence up to the standard set by the machines, for it seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers. There would be no question of the machines dying, and they would be able to converse with each other to sharpen their wits. At some stage therefore we should have to expect the machines to take control, in the way that is mentioned in Samuel Butler’s ‘Erewhon’. – Alan Turing, “Intelligent Machinery: A heretical theory”, 1951
Introduction
The business of machines conversing with each other “to sharpen their wits” is what I wrote about in the previous post in this column. In this article I will explore the issue in more detail.
First, it’s worth clarifying just what happens in the novel Erewhon, because, in fact, in that story the machines don’t take control. Rather, the Erewhonians purge their society of all modern machinery in a preemptive strike.
One of the most learned professors of hypothetics wrote an extraordinary book (from which I propose to give extracts later on), proving that the machines were ultimately destined to supplant the race of man, and to become instinct with a vitality as different from, and superior to, that of animals, as animal to vegetable life. So convincing was his reasoning, or unreasoning, to this effect, that he carried the country with him; and they made a clean sweep of all machinery that had not been in use for more than two hundred and seventy-one years (which period was arrived at after a series of compromises), and strictly forbade all further improvements and inventions[.] – Samuel Butler, “Erewhon, or Over the Range”, 1910 revised edition
Recently, concerns voiced by Professor Stephen Hawking about machines taking over have been circulating in the popular press, followed closely by the announcement of a “One-Hundred Year Study of Artificial Intelligence” to rigorously assess AI’s place in society.
It seems to be a good time to consider Butler’s fiction again, and I’ll do that briefly next. The article then turns to a (superficially) different topic, namely our current technologies and techniques of education. At the conclusion of the article, I’ll bring these themes together with some closing comments.
The most striking thing to confront the foreign visitor to Erewhon is that illness is regarded as immoral, and is punished as a crime. Typhus, or pulmonary disease, for example, generally leads to a sentence of hard labour. This calls to mind a eugenics programme, and Butler hints at this in his descriptions: “The men were as handsome as the women beautiful.” Conversely, what we typically think of as everyday immorality is in turn regarded as an illness to be treated. This is generally handled as a personal matter, requiring a robust and highly physicalised form of psychotherapy; theft or graft, for example, does not result in state intervention. People unable to afford private treatment are treated in hospital. On the other hand, lapses of judgement, misfortune, and hardship are punishable by law, much like physical illness.
The broader theme underlying this complicated transvaluation of values is that a society can regulate itself in a range of very different ways, and that culture decides which features belong to the personal sphere, which belong to the public sphere, and which belong to the state.
One can well imagine the relevance of these considerations to Alan Turing, given his treatment at the hands of 1950s British society (cf. Regina [E II R] v. Turing and Murray). Social stigma, medicalisation, and the eventual political reinterpretation of homosexuality as a “disability” were compounding features of the then-current mode of regulating sexual behaviour.
We can read Butler’s imagined holocaust of machinery against the evolution of a real-world “social machine” through his Victorian time frame and into our own (cynically so-called) Orwellian age. One does indeed find echoes of the same Erewhonian trauma in Orwell’s Newspeak, in which social engagement is reduced to political participation in its most impoverished sense. In reality things have progressed somewhat differently, although Orwell’s dark vision remains relevant.
Turing for his part would seem to side with Butler against the “tyranny of conventional propriety,” but his perspective on intelligent machinery is anything but Erewhonian. As argued in the earlier post, there is much that remains to be done to bring Turing’s vision to fruition. This particularly concerns the “education” of machines, but extends to other aspects of their lifecycle as well.[1]
But considering all the contention surrounding the issue, one might well ask why we would wish to develop intelligent machines in the first place. There seems to be no shortage of pop-culture prognostication on the theme of cybernetic revolt. How far do we want to go down this route? Could we stop if we tried?
One way to begin to think through these questions is to look at how Turing’s machines have contributed to transforming our society so far. The effect has been quite comprehensive, surely more so than that of steam in its day. The following section explores these themes as they relate to contemporary approaches to the education of human beings.
Concerning Co-Learning and Authority
This section draws on a panel discussion between Howard Rheingold, Mia Zamora, Alec Couros, Lee Skallerup Bessette, Charlotte Pierce, and myself about contemporary educational practices, both in the classroom and online. Even without AI, this gives a glimpse of things to come. The panel was convened as part of the Connected Courses initiative, “a collaborative community of faculty in higher education developing networked, open courses that embody the principles of connected learning and the values of the open web.” A video version of the conversation is online, and I added a transcript in the comments. As the transcript is rather long, I will not reproduce it here, but will instead summarise some of the highlights and my reflections. The theme of the discussion was co-learning and authority, and questions for discussion included:
- What is co-learning? We have heard the term “peer learning” – is there a distinction between these two terms?
- What is it about this moment in education that might call for significant paradigm shifts in both learning and teaching? Why co-learning now?
- How can teachers empower students as co-learners? How does the instructor support a co-learning environment? What strategies do we use to break traditional hierarchical roles in teaching?
Howard, Charlotte, and I were also aiming to spread the good word about the freely available Peeragogy Handbook, our collaboratively written guide to peer learning, created together with about 30 other co-authors, and now entering its third annual edition. According to our enthusiastic and inclusive point of view, the precedents for this project range from Plato’s Academy to the Peer-2-Peer University, from Reggio Emilia to the Royal Society, and from Lev Vygotsky to Silicon Valley. But of course, we’re biased.
I also put in a plug for Po Bronson and Ashley Merryman’s Nurture Shock, which I had just finished reading. It covers quite a few topics that seemed relevant to our theme, including an overview of the remarkably effective Tools of the Mind curriculum, which teaches disciplined thinking through creative, collaborative play. It also includes the interesting observation that kids and teenagers with a high degree of social sensitivity can be total jerks.
The connection between popularity, social dominance, meanness and cruelty is hardly a surprise to any teacher – the dynamic is plainly visible at most schools. It’s long been an archetype in literature and movies, from Emma to Heathers and Mean Girls. … Aggression is not simply a breakdown or lapse of social skills. Rather, many acts of aggression require highly attuned social skills to pull off, and even physical aggression is often the mark of a child who is “socially savvy,” not socially deviant. – Po Bronson and Ashley Merryman, “Nurture Shock”, 2009 (pp. 189, 191)
During the panel discussion, we saw what happens when several socially sensitive and reasonably like-minded people negotiate their own relationships of, and to, authority. Lee Skallerup Bessette pointed out that
The first ones to embrace the peer learning and co-learning style that I’m trying to advance in the classroom are the smart-alecky students. The kids who are going to stand up to me, back talk, all that kind of stuff. Once I get them co-learning, then they start pushing me further – and they say “Oh, I can do this? This is what I’ve been waiting for my entire life.”
To which I felt I had to make the rather smart-alecky response:
So, that’s really cool. Do you think you could invite some of those students to edit the Peeragogy Handbook, and that they would be excited by that?
And here Lee replied:
If I were still teaching, I would be doing it. There’s an irony to being in a Center for Teaching and Learning. “You’re such a great teacher: now come out of the classroom, and you will now do curriculum or professional development.” But there’s something to be said for teaching the teachers in that sense.
A hopeful outlook in this last sentence! Nevertheless, among all of the like-minded people from the education sector who have made public statements in support of peer learning or co-learning, or even made contributions to the Handbook, none have yet been prepared to adopt the Handbook as a classroom text, to be used and collaboratively co-written in the process. As Lee suggests, such an activity may be more suitable for education students than for English composition students, but the model of collaborative investigation is relevant across the disciplines.
The shared sentiment among panel discussants was that life is routinely sucked out of students in “mainstream” education. As Mia put it, regardless of how progressive and student-centred we are with preschool education, “when they get to grade school, we see this hidden curriculum of right and wrong answers.” One should not assume that this curriculum is politically neutral.[2] When authority becomes authoritarian, creativity becomes the noteworthy exception to a rather dismal norm.
Preschool children, on average, ask their parents about 100 questions a day. Why, why, why – sometimes parents just wish it’d stop. Tragically, it does stop. By middle school they’ve pretty much stopped asking. It’s no coincidence that this same time is when student motivation and engagement plummet. They didn’t stop asking questions because they lost interest: it’s the other way around. They lost interest because they stopped asking questions. – Bronson and Merryman, “The Creativity Crisis”, 2010
This brings me back to the theme from the first section. Without a doubt, education, and culture more broadly, is a form of machinery, albeit a “soft” machine. It is, moreover, self-replicating: reproducing a global society with all of its evolving organs and cellular divisions. And this machine is changing rapidly. As Howard Rheingold remarked in the panel discussion:
Now, I challenge you, type in “How to do…” anything, and you will find a YouTube video with a 14 year old explaining how to do it. That didn’t exist 1000, 100, or 20 years ago. For the people I’m getting in my classrooms, the web has always been there. Increasingly, we will find students coming into the classroom who know that learning can happen in a collaborative way.
YouTube represents only one of many ways to socialise online, one of many quasi-public information goods accessible in a read-write manner to anyone with a computer. Anyone online is accessible to everyone else in a mediated fashion. This is an excellent prototype for Turing’s “machine thinking method.”
The web is, paradoxically, entirely collective and entirely individualised. This mirrors broader effects of globalism and rampant competitive consumerism. Zoe Williams, writing recently in The Guardian, argues that childrearing is increasingly “removed from the public domain … and recast as individual responsibility.” Today’s technologies bring us together in a mediated fashion, tearing apart and utterly refashioning the fabric of society.
Institutions and authorities designed for the far simpler reality of just a hundred years ago have burst their banks; have found their timeworn principles inadequate to a flash influx of insight and revelation, an unruly torrent carrying us all struggling towards the edge of a Niagara future in amidst our driftwood debris of outmoded ideologies. – Alan Moore, in “Dear Chelsea Manning: birthday messages from Edward Snowden, Terry Gilliam and more”, 2014
Conclusion
The precipitous nature of current affairs suggests that we would do well to learn what we can from the last hundred years, where the data is already in. This would also teach us to be humble in our efforts to predict the future.
Neither the internet nor global warming was at stake for Turing, much less for Butler. But both are part of the legacy of the machine age. It behoves us Orwellians to think “machinery” together with “intelligence,” since in a world of unintelligent machinery we seem liable to drown in our own waste. As global temperatures rise, the question of how to make intelligent machinery becomes increasingly pressing. We cannot go backwards, so we must try to find a way forwards. The question is how.
One hint comes from a recent study that suggests that the way to make a group smarter is to add more women to the team.
Part of that finding can be explained by differences in social sensitivity, which we found is also important to group performance. Many studies have shown that women tend to score higher on tests of social sensitivity than men do. So what is really important is to have people who are high in social sensitivity, whether they are men or women. – Anita Woolley and Thomas W. Malone, “Defend your research: What makes a team smarter? More Women”, 2011
As we saw above, social sensitivity is not a panacea for social ills, but all too often it hasn’t even been part of the picture. In particular, the result mentioned above is reminiscent of the Gilligan-Kohlberg Controversy in philosophical ethics. Carol Gilligan advocated for an “ethic of care” alongside Lawrence Kohlberg’s “ethic of justice.” According to the ethic of care, it is more important to take into account the relationships that obtain in a given situation than to follow more abstract principles and norms coming from society at large. These are more or less gendered ethical stances:[3]
… not because care is essentially associated with women or part of women’s nature, but because women for a combination of psychological and political reasons voiced relational realities that were otherwise unspoken or dismissed as inconsequential. – Carol Gilligan, 1995, quoted in Leena Kakkori and Rauno Huttunen, “Gilligan-Kohlberg Controversy and Preliminary Conclusion”, n.d.
Intelligent machinery would have a long way to go before it would become aware even of social norms, much less the contingencies of relationships, which among humans generally have more to do with embodied life than with logic and rules. The idea that intelligence and social life can only be comprehended together is perhaps a startling claim. The primitive state of contemporary AI gives us a sense of how much we have left to learn about the way our own minds work.
In the meantime, it is all well and good for a computer system to generate a poem – or a novel – but we in turn should ask what purpose this serves, and for whom.[4] For example, the computer program may succeed in impressing other programs and secure its chances in the genetic crossover mating pool. Or perhaps it may impress human readers in a mad grab for cryptocurrency.
Compared with the 1950s, computers are faster; they are good at chess, Go, and Jeopardy; they are good for email and whatever goes beyond email, like Google Hangouts. But as I indicated in the previous post, we’re just now starting to build machinery that can go beyond this.
In the trailer for the forthcoming feature film “Chappie”, Hugh Jackman’s character says that “the problem with artificial intelligence is that it’s way too unpredictable.” Another common complaint is that AI is too predictable: computers do what you tell them to do, nothing more. (And this is how we will outwit them after the cybernetic revolt, or so the story goes.) In “Intelligent Machinery” Alan Turing pointed out that interesting computational systems would make mistakes; and 16 years later, Marvin Minsky elaborated on the point that computers can exhibit complex interactions with their world, and can write their own endlessly complex programs in response. AI can indeed be unpredictable.
However, it is still the case that, outside of fiction, machines aren’t likely to evolve on their own for some time to come. As Turing explained, people would need to be concerned with their education. But this is not just a matter of stuffing in ever more facts, as per the “banking model” of education justly derided by Paulo Freire. The same concerns that apply to human children apply here. Indeed, morphogenesis would be a better word than “education” for the process at stake. It is notable that this concept was of interest both to Alan Turing in some of his last published work[5] and to contemporary therapists and social workers.[6]
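Since not every reader will know the paper cited in footnote [5], here is a minimal sketch of the model at its heart, written in modern textbook notation rather than in Turing’s own symbols (the names u, v, f, g, D_u, and D_v are generic placeholders, not his): two chemical “morphogens” with concentrations u and v react with one another while diffusing through a tissue,

\[
\frac{\partial u}{\partial t} = f(u, v) + D_u \nabla^2 u,
\qquad
\frac{\partial v}{\partial t} = g(u, v) + D_v \nabla^2 v,
\]

and spatial pattern can emerge when the two diffusion rates differ enough to destabilise an otherwise uniform equilibrium. The point of the analogy is that form arises from ongoing interaction rather than from a blueprint imposed from outside – which is also what separates education-as-morphogenesis from the banking model.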
If it takes a village to raise a child, it will take a global village to raise intelligent machines. This will only proceed together with an effort to shape and understand our own (collective) social intelligence.
1. In “Darwin Among the Machines” (1863), Butler mentions “flirtation, courtship, and matrimony.”
2. “When you control a man’s thinking you do not have to worry about his actions.” Carter Godwin Woodson, “The Mis-Education of the Negro”, 1933.
3. In “Erewhon Revisited” (1916), Butler makes the charmingly non-PC remark: “in all times and places it is woman who decides whether society is to condone an offence or no.”
4. “They only have six plots, but they swap them round a bit.” George Orwell, “1984” (1948).
5. Turing, A. M. (1952). “The Chemical Basis of Morphogenesis”. Philosophical Transactions of the Royal Society of London 237 (641): 37–72.
6. “Systems theorists have employed the terms ‘morphostasis’ and ‘morphogenesis’ to describe a system’s ability to remain stable within the context of change, and conversely to change within the context of stability.” Francis J. Turner, “Social Work Treatment: Interlocking Theoretical Approaches”, 2011.