GarciaMarzaCalvo2024-SecondAgeOfArtificialIntelligence
Domingo García‑Marzá and Patrici Calvo (2024) The Second Age of Artificial Intelligence
Bibliographic info
García‑Marzá, Domingo, and Patrici Calvo. “The Second Age of Artificial Intelligence.” In Algorithmic Democracy: A Critical Perspective Based on Deliberative Democracy (Philosophy and Politics – Critical Explorations, vol. 29), 25–36. Cham, Switzerland: Springer Nature Switzerland AG, 2024. https://doi.org/10.1007/978-3-031-53015-9_2
Commentary
“Perhaps how wonderful! Think, that for all time, all conflicts are finally evitable. Only the Machines, from now on, are inevitable!”
—Isaac Asimov, I, Robot (1950, p. 218)
The epigraph invites a feeling that machines make conflict disappear and that their rise is a matter of fate. This chapter pushes against that mood by telling a story about conditions. Early artificial intelligence generated bold promises, interest waned when those promises outran available methods and hardware, and momentum returned only when processing power, learning techniques, and abundant data began to line up. I read the chapter as a causal account. Under the right technical and social conditions, automating parts of cognition becomes possible, and once it spreads the effects reach research, culture, business, and democratic life.
Within the first period, the text does not claim that philosophy settled the question of machine minds. It shows how critics used well known arguments to puncture confidence in strong AI as it was then conceived, especially in disembodied, logic-first programs. Gödel’s incompleteness is presented through Penrose as a challenge to the idea that a single formal procedure could capture all human mathematical insight. That is not a proof against AI, but it made sweeping predictions look fragile. The embodiment criticism targets approaches that ignored the body and real-world interaction. If a program manipulates abstract representations without contact with the environment, the chapter says those representations lack semantic content, which again undercuts claims that symbol shuffling alone would yield general intelligence. Searle’s point about intentionality adds a third pressure. A system can follow rules while having no aboutness, so success at computation is not yet mindedness. None of these arguments close the case. In context they explain why expectations cooled in the mid-1970s, alongside tight limits in computing capacity and data.
The second period is defined more by enablers than slogans. Increased processing and memory, brain-inspired learning, and the convergence of the Internet of Things with Big Data supply the fuel that earlier systems lacked. With that fuel, deep learning moves from theory into everyday applications. Public examples, such as a machine-generated research summary or AI-assisted chip design, signal uptake. They are not proofs that institutions have been remade, and the chapter is careful to separate illustration from explanation. That distinction matters for Asimov’s line. What looks inevitable in the epigraph is, on this reading, the contingent result of capacity, method, and data aligning.
As the scope widens, the authors describe a world where physical and behavioural reality can be reduced to data, processed into information, and applied as knowledge. The promise is clear. There are gains in scale, speed, and monitoring. The risks are clear as well. Treating people as data streams invites reification. If decision systems mediate participation or draft public positions, there is a danger of mistaking calculated agreement for public deliberation. This is where the epigraph returns as a warning. Making conflict evitable by calculation is different from working through disagreement among equals. For a digital-ethics audience, the practical response is to require contextual limits on data collection and inference, provenance and disclosure for generated content and aggregation rules, and a right to reasons with a path to contest outcomes before AI-mediated results are treated as legitimate in public settings.
Excerpts & Key Quotes
From over-promise to a reset
- Page 28:
However, in the mid-1970s, many theorists and practitioners of Artificial intelligence began to realize that both the predictions made by the founders of the discipline and the expectations they had generated over the following three decades were overly optimistic, unachievable, lacking in conceptual rigour or highly unrealistic, and this led to public loss of interest.
Comment:
I use this passage to mark the moment the first-age hype breaks. The authors list the objections that dented confidence (Gödel, embodiment, intentionality) and point to hard limits in data and compute. Together, that mix made strong-AI predictions look shaky, so many researchers pulled back.
What the “second age” means
- Page 29:
a second era of Artificial intelligence, one whose main characteristic is the progressive automation of human beings’ cognitive activities
Comment:
I treat this as the working definition of the second age. By “automation of cognition” the authors mean systems that draft, classify, predict, and coordinate in places where humans did the thinking. That scope is why the chapter steps beyond labs into politics and culture.
Why the turn happened
- Page 30:
A second era of AI has dawned, and this can be linked to the following factors: computers’ increased processing and memory capacity; our deeper knowledge of the human brain’s structure and the way it functions, which has been attained through the emergence of the diverse branches of neuroscience and their ability to generate, collect and process huge amounts of data about everything by exploiting the potential of hyperconnectivity, datafication and the algorithmization of reality; and finally the synergistic convergence of AI with two other disciplines of scientific knowledge and technological application, the Internet of Things (IoT) and Big Data.
Comment:
This is the why. More computing power lets models train, brain-inspired methods give them a way to learn, and IoT and Big Data supply the raw material. Together, that is what moved deep learning from an idea to something we actually use.
Promise and tension of datafication
- Page 31:
We are therefore at the dawn of a new era in which algorithms equipped with Artificial intelligence are capable—or will be capable in the short and medium term—of reducing all physical and behavioural reality to mass data and metadata, as well as processing and converting this data into relevant information and then into applicable knowledge
Comment:
I read this as power and risk at once. Turning lives into data helps with scale and monitoring, but it can shrink people to numbers. If a system gates services or shapes policy, I want tight purpose limits, no sensitive inference, and a record of who was excluded or burdened and why.
The democratic red line
- Page 36:
Finally, as will be seen in the chapters that follow, it is very important to highlight concerns about the current trend towards the progressive replacement of people by AI algorithms in political and economic deliberation and decision-making processes (Calvo 2019, 2020b).
Comment:
This is my line for legitimacy. When algorithms mediate or decide, someone has to be answerable, people need reasons they can understand, and there must be a real way to challenge the outcome. Without that, computation crowds out the human work of reaching a shared position.