
The original version of this article was published in Italian by the same author on 27 March 2025.
Federico Cabitza should not be considered a determinist. He is well aware that the artificial intelligence revolution is underway, but he believes that its effects will largely depend on us. Even when it comes to “us,” however, Cabitza draws distinctions: it matters whether we mean the scientific community, lawmakers, or our daily lives – and even our own skills.
A professor of human-computer interaction at the University of Milano-Bicocca, Cabitza collaborates with several hospitals, notably San Raffaele, to gather scientific evidence on the impact of generative AI in the medical field.
He is the author of several works, including Artificial Intelligence: The Use of New Machines with Luciano Floridi, and reflects on AI from an ethical and philosophical perspective, comparing it to past transformative revolutions, such as the printing press or the modern postal system during the Enlightenment era.
Acknowledging that we all live in an information society, he argues that AI will serve as a vector of cultural transmission, capable of generating new knowledge.
What aspect of generative AI do you consider truly revolutionary in our lives?
Perhaps due to professional bias, I would say that large language models — and their integration into AI systems — have the potential to fundamentally transform the way scientific knowledge is both produced and shared. For researchers and professionals, AI will likely become a valuable thinking partner, helpful for exploring and analysing existing ideas, as well as generating entirely new ones. In education and public discourse, on the other hand, AI can play a key role in making knowledge more accessible, clearer, and more nuanced for non-experts.
Are you suggesting there won’t be significant impacts for non-specialist users? Yet, recent developments are bringing chatbots closer to search engines, which could soon be used by anybody. What could be the broad added value of this shift?
AI is already playing a significant role in the transmission and diffusion of culture. That said, earlier technologies — from the web to social media — were also centred around information retrieval. These systems functioned as computational agents that primarily presented existing data. In essence, they acted as filters, much like search engines. As Eli Pariser – author of The Filter Bubble – observed, these technologies didn’t merely retrieve information. They selected and amplified certain signals based on factors such as authority, popularity, or alignment with a user’s interests, while suppressing others. This dynamic contributed to what is often called the “Matthew effect” or cumulative advantage: authoritative sources became even more dominant, while already popular content gained further visibility.
How is our approach to information changing?
Even though we live in what James R. Beniger called an “information society,” I would emphasize that the only true evolution of information — understood as a process that shapes thought — is knowledge. Knowledge is not simply the accumulation of data. It’s the dynamic through which the structure, direction, and meaning of our thoughts come to influence our actions. New AI systems do more than retrieve information: they generate it. While it’s true that they draw on the entirety — or at least a significant portion — of what humanity has written and created over the centuries, they do so by way of transformation and recombination. That is their core function. In a sense, these systems act as extensions of the human phenotype: they incorporate deliberate imperfections, expand our cognitive reach, and avoid mere repetition. In doing so, they inadvertently reflect what might be a defining trait of human culture itself: the ability to generate endless variations on the theme of being human.
I foresee that AI will become a sort of oracle, much like tarot cards or astrology. Like any tool used to predict the future — or even the present — there’s a risk of surrendering free will.
It’s true that people will increasingly rely on conversational agents as advisors, collaborators, or even confidants — not only for learning but also to offload the burden of decision-making. However, it’s essential to distinguish among different contexts of use. As I’ve mentioned, in expert domains, AI adoption may affect trust and dependence in ways that risk undermining human expertise. This could result in a gradual erosion of skills, reduced motivation to apply them, and, ultimately, processes of deskilling and alienation. Among those with limited resources to critically assess AI-generated responses, we may also witness a rise in the epistemic authority ascribed to these systems. That carries significant risks. On one hand, there is the threat of automated persuasion — what the European AI Act identifies as automation bias, and what Luciano Floridi, with his poetic sensibility, has termed hypersuasion. On the other, there’s the danger of reinforcing existing beliefs through confirmation bias.

Photo: Canva.
Much has been said about AI’s disruptive innovation, especially in certain sectors. Some see these industries as vulnerable, while others envision the realisation of Keynes’ prediction: that technology could finally allow us to work fewer hours while maintaining productivity.
Here, I must confess a near-messianic hope. I fervently wish that generative AI will first disrupt — and ultimately eliminate — what I call the “documentation theatre.” By this, I mean the countless bureaucratic rituals of paperwork: forms, reports, records, logs. These tasks, through their sheer volume and repetitiveness, have become a job in themselves — entirely separate from the core of our actual work. As David Graeber aptly described them, they are “bullshit jobs” — meaningless administrative duties that compel us to repeatedly document information everyone already knows, with our personal data being a prime example.
If you had to make a wish?
My hope is that AI copilots will gradually assume control of bureaucratic processes, liberating us from the role of automated scribes. While this won’t halt the irreversible bureaucratisation of society — a development Max Weber warned us about — it will reshape it in ways that anthropologists and sociologists have yet to fully observe or understand.
There are things we cannot comprehend. So is it fair to be afraid of the unknown?
The relationship between humans and emerging technologies is complex and cannot be captured by a single narrative. I agree with my friend Francesco Parisi, who observes that we still lack the conceptual frameworks needed to truly grasp our relationship with these technologies — they are simply too new and evolving too quickly. Who can say what they will be capable of in two or five years? What magnificent illusions might they present next — personality, emotion, intentionality? That said, caution is essential, and fear is entirely understandable. It is an adaptive trait — one that has made us not only an anxious species, but also a prudent and forward-thinking one.
What safeguards do we have?
I adhere to Hans Jonas’ precautionary principle: skepticism and vigilance are essential — especially when dealing with a technological apparatus designed to produce responses that consistently appear plausible and convincing. The risk of blind trust is significant, precisely because these systems never genuinely acknowledge their own ignorance.

Photo: Pexels.
Maybe a counterbalance is the effort to use AI while maintaining critical thinking?
The real problem is not AI itself. A language model generates text much like a water mill grinds flour — it operates mechanically, without intention. The real issue lies in who funds, trains, constrains, and steers these systems. In the best-case scenario, the driving force is profit. In the worst, it’s power — and not merely economic power. If these technologies end up narrowing perspectives or amplifying biases and polarised narratives, it will be because we allowed it. The real threat is not superintelligence, but what Gary Klein calls the singularity of competence.
Help me understand. What is the singularity of competence? Does it mean that if we stop using certain skills, we will lose them?
I initially argued that AI will transform the mechanisms of knowledge transmission. I would now add that, in the course of this transformation, there is a very real risk that we may over-delegate to machines and gradually lose not only our skills, but — more importantly — our will to pass them on.
With what consequences?
This outcome would be both ironic and paradoxical: our most powerful invention for creating and transmitting culture, writing, reaches its most refined expression in the form of large language models: writing that generates more writing. Yet in doing so, this invention, at its peak, risks depleting the very source of energy that once gave rise to it. Perhaps Thamus was right. (The myth of Thamus — or Theuth — recounted in Plato’s Phaedrus, tells of the Pharaoh’s refusal to accept the gift of writing, fearing that this “medicine of memory” would, in fact, erode true memory and understanding. It remains a powerful symbol of humanity’s deep ambivalence toward new technologies — especially when their use threatens to undermine our most essential human capacities. Ed.)
You have long been working on prompt design and interaction with chatbots. Would you be willing to ask the AI a question of your choice and report back with the answer? I’m curious to know if it resembles you.
Sure. Here’s my prompt: “Pretend to be me (refer to your memory and custom instructions). Imagine a journalist is interviewing me (that is, interviewing you imagining yourself as me at your highest capacity). What would you answer to this question if you were me? ‘Do you think artificial intelligence is destined to develop its own consciousness (that is, some self-awareness?)’” And here’s the chatbot’s answer, unmodified.
“Today, we have no concrete evidence that artificial intelligence is developing any form of consciousness, nor do we have scientifically grounded reasons to believe this will happen in the near future — though we cannot entirely rule it out. I don’t say this with complacency or relief; the truth is that we still don’t fully understand what consciousness is or how it arises in the human brain. It remains one of the deepest mysteries of science, and until we have a clear, testable theory, talk of “machine consciousness” remains more a matter of speculation than a substantiated hypothesis.
Even if something akin to consciousness were to emerge in artificial systems one day, it is unlikely to resemble human consciousness. Ours is rooted in the embodied, subjective experience of having a body, in intentionality, and in a sense of temporal continuity. None of these features are present in artificial systems, nor do we currently possess the tools to detect them in machines.
AI today is remarkable in its ability to simulate language and interaction — but it remains a simulation: powerful, fluid, coherent, yet devoid of intention or lived experience. It is statistical, not sentient. Even when it speaks in the first person, it doesn’t “know” what it is saying. An AI may use the word “I” or describe itself as if it were conscious, but that proves nothing.
There is, however, a deeper point to be made: the question of machine consciousness reveals more about us than it does about machines. It reflects our fears, our desires, and our tendency to project human traits onto non-human systems. This is a well-known phenomenon — anthropomorphic projection, or ethopoiesis: the invention of a mind where there is only behaviour.
The real issue is not whether machines will become conscious, but what happens to us when we cease to distinguish between interaction and relationship, between calculation and understanding, between simulation and intention. When that threshold is crossed — or if it already has been — the consequences will reach far beyond technology. They will shape education, justice, public discourse, and the collective imagination. And their effects will be cultural, profound, and perhaps irreversible.”

A screenshot of the prompt used by Federico Cabitza during the interview with Mediatrends.