Too many books today get the risks of AI all wrong. Many vendors and dreamers tell us there is no risk at all, that everything will be peachy and positive. Others screech that AI will inevitably lead to the Apocalypse.
Richard Susskind gets it right. His new book, How To Think About AI: A Guide for the Perplexed, analyzes the risks in a balanced and thoughtful way. It is one of the most thought-provoking and useful books in years. Here’s an excerpt from my review at LLRX.com:
In Chapters 8 and 9, Susskind analyzes AI risks using the following chart:
| Categories of AI Risk | Description |
| --- | --- |
| Category 1: Existential Risks | Threats to the long-term survival or potential of humanity. |
| Category 2: Risks of Catastrophe | Large-scale disasters or societal disruptions short of extinction. |
| Category 3: Political Risks | Impacts on democracy, governance, surveillance, and autonomy. |
| Category 4: Socio-Economic Risks | Effects on employment, inequality, social cohesion, and bias. |
| Category 5: Risks of Unreliability | Issues arising from AI errors, inaccuracies, or “hallucinations.” |
| Category 6: Risks of Reliance | Dangers of over-dependence or inappropriate trust in AI systems. |
| Category 7: Risks of Inaction | Negative consequences of failing to develop or deploy beneficial AI. |

Having laid out the risks, Susskind provides suggestions for dealing with them. He emphasizes measured urgency rather than end-of-the-world hysteria. His message is that policymakers and the public need to grasp the size and speed of current AI shifts, not because disaster is inevitable, but because decisions made in the next few years will ripple for decades. The subtext: Burying your head in the sand isn’t a neutral act — it quietly hands the steering wheel to whoever is paying attention.
The final three chapters turn to philosophical ideas and speculation about what the future may hold for AI — and for humanity. Discussions of Plato’s allegory of the cave, umwelten, and Kant’s distinction between phenomena and noumena most likely won’t engage every lawyer’s attention, but Susskind’s conclusion probably will:
“My guess is that we have at least a decade to decide what we want for humanity and then to act upon that decision — if necessary, emphatically and pre-emptively — through national and international law. [O]ur future will depend largely on how we react over the next few years. Conflict resolution or prevention of legal problems could reduce or replace litigation as we know it.”
