Possible Futures: Utopias, Dystopias, AI, and Artificial Personhood

Introduction

Tom Flynn

Even in this time of COVID-19 and economic dislocation, of social unrest and climate threats, it pays to consider the farther future. Assuming that humanity survives our admittedly staggering short-term challenges, what might await us in the long run—a utopia of technological and cultural promise? A dystopia of ever more thoroughly crushed illusions? After decades of being “ten years away,” meaningful and powerful artificial intelligence seems to be on our threshold. Does that portend an infinite canvas for human potential, or will we become the house pets of our machines? Artificial intelligence is one thing; artificial personhood is another. Are we on the cusp of that breakthrough? And if so, what does it foreshadow? (In a limited way, this feature represents a further extension of the cover feature “About Those Other Apocalypses …” [FI, June/July 2020].)

On a yet larger scale, what does the future hold for the human enterprise writ large? Some have theorized that the reason we haven’t detected other intelligences among the stars despite decades of searching is that technological civilizations are inherently unstable; perhaps most last mere decades, then destroy themselves in warfare or by consuming the natural resources on which their technologies depend. If true, this principle might account for the absence of other detectable civilizations at this moment in the cosmos. It also hints that our own civilization’s time is limited. On the other side of the ledger, could future breakthroughs propel us into a quasi-Teilhardian utopia in which human intelligence, or its descendant, perfuses the cosmos?

In this feature, three divergent thinkers ponder such “big questions.”

In “How to Build a Conscious Robot,” Henry Grynnsten examines the practicality and the ethics of creating a true synthetic person. For many of us, our first exposure to a synthetic person took the form of Lieutenant Commander Data, the android character in the TV science-fiction series Star Trek: The Next Generation (1987–1994). The ethical—and for that matter, legal—implications of Data’s personhood were hashed out in the courts of the fictional United Federation of Planets only as the series moved forward. Also, coincidentally or not, Data’s being housed in a humanoid body echoes one of Grynnsten’s contentions: if we set out to build a conscious robot, we must begin with a body-and-brain architecture.

In “Is Intelligence Toxic?,” computer scientist and entrepreneur Paul Bassett probes deeper into the question of whether intelligence, particularly technological intelligence, is self-limiting. Bassett marshals compelling evidence that the human project may be doomed to end in failure, but he is not wholly without hope. “Part of the problem is that a way out may exist, but no one has thought of it yet,” he writes near the conclusion of his essay.

Finally, in “Post-Humans on a Sterile Promontory,” Center for Inquiry Communications Director Paul Fidalgo offers a sweeping perspective that encompasses both utopian and dystopian views of the human future, each of which he has entertained at various points in his life. It’s one of the lengthiest articles Free Inquiry has ever published and just might be one of the most profound. If its thrust can be captured in a single sentence, maybe it’s this one: “It is a choice for humanity to embrace its lot as a quintessence of dust, destined only to sleep and feed or to be infinite in faculty, angelic in action, and godlike in apprehension.”

Readers, you are invited to set the virus, the economy, racial justice controversies, and near-term climate crises aside, just for an interval, and ponder a trio of long views.


Tom Flynn (1955–2021) was editor of Free Inquiry, executive director of the Council for Secular Humanism, director of the Robert Green Ingersoll Birthplace Museum, and editor of The New Encyclopedia of Unbelief (2007).
