Eric Drexler of Oxford’s Future of Humanity Institute has published a new book called Reframing Superintelligence: Comprehensive AI Services as General Intelligence.
The book’s basic thesis is that the discussion around super-intelligence to date has suffered from implicit and poorly justified background assumptions — particularly the idea that advanced AGI systems would necessarily take the form of “rational utility-directed agents”.
Drexler argues a number of key points, including:
- One can, in principle, build super-intelligent, highly general, and programmable sets of services that are useful to humans and embody broad knowledge about the real world, without creating rational utility-directed agents — thereby sidestepping the problem of “agent alignment” and leaving humans in full control.
- Architectural principles for achieving this can be sketched through an abstract systems model Drexler terms Comprehensive AI Services (CAIS).
- Key conceptual distinctions need clarifying — for example, between reinforcement-learning-based training and the construction of “reward-seeking” agents, which are sometimes conflated.
- Certain safety-related concerns remain pressing and do need attention, even within the proposed framework.
In addition, Drexler provides a fun conceptual synthesis of modern AI, including a chapter on “How do neural and symbolic technologies mesh?”, which touches on many of the concerns raised by Gary Marcus in his essay “Why robot brains need symbols”.
I will be keen to see whether any coherent and lucid counter-arguments emerge.