Building AI literacy: from wonder to understanding

Published on 28 April 2025
Rethinking education in the age of algorithms: beyond public debate, we need deeper knowledge, stronger critical thinking, and a clearer grasp of technology’s opportunities and limits — starting with schools
The original version of this article was published in Italian by the same author on 26 April 2025.

Artificial intelligence is no longer a mere futuristic projection, confined to the realm of science fiction or highly specialized fields of research. Instead, it has become a pervasive technology, cutting across every sphere of human activity and profoundly reshaping the dynamics of work, communication, knowledge production, and social relations.

This contribution stems from the belief that, to navigate this epochal transformation with awareness, neither apocalyptic visions nor uncritical enthusiasm are helpful.

What we urgently need, instead, is a deep understanding of the mechanisms, potentials, and limitations of AI, grounded in a solid cultural and pedagogical foundation.

My outlook on this field remains confident and positive.

Image: Canva.

What’s missing

Nevertheless, a growing concern must be highlighted: the widespread lack of literacy on AI topics, both within educational institutions and among companies, professional environments, and the general public.

Although public discourse now seems saturated with references to artificial intelligence — from conferences to scientific articles, from social media to school programs — in reality, most people lack the conceptual and critical tools needed to engage independently and knowledgeably with its growing presence in daily life.

It is therefore essential that we make a decisive shift together: from surprise to awareness.

In public debate, those who work daily in the field of AI are sometimes perceived as modern alchemists, guardians of arcane knowledge and mysterious predictive powers.

Yet the rapid evolution of AI is not the result of magic, but of decades of scientific research, algorithmic experimentation, progress in statistics and applied mathematics, increases in computational power, and the availability of large-scale data.

The algorithms that power today’s machine learning and deep learning systems are extremely sophisticated tools, but they are, above all, human creations — the result of intelligence, intuition, and interdisciplinary collaboration.

Artificial intelligence has now reached a phase of technological maturity.

Its applications are now present in crucial sectors: from medicine to economics, from justice to environmental sustainability, all the way to education, where AI is emerging as a strategic resource for renewing teaching paradigms.

However, to use it ethically and effectively, it is essential to understand its foundations, recognize its structural limitations, and debunk false beliefs.

There are no systems capable of learning autonomously without human input, nor AI capable of generating new knowledge from scratch without supervision.

Generative models, the technology underlying today's generative AI applications, are predictive and synthetic tools, not autonomous or conscious entities.

This is precisely where every educational journey into artificial intelligence should begin: by fostering a culture of limits — not as a deterrent, but as an opportunity to develop critical thinking, digital citizenship, and technical awareness.

In this sense, the role of schools and education — at every level — is crucial.

It is necessary to systematically integrate moments of reflection and workshop activities that introduce the basics of how AI models work, the key concepts of machine learning, the ethical and legal implications, as well as the opportunities offered for teaching, personalized learning, and support for inclusion.

Photo: Canva.

Tailored education

One of the most promising areas for applying AI in education lies in the possibility of personalizing learning paths based on the needs, cognitive styles, timing, and interests of each individual learner.

AI-based adaptive systems, for example, allow content to be adjusted according to the student’s level of competence, to provide immediate and personalized feedback, and to suggest alternative strategies when difficulties arise.

This approach — known as adaptive learning — is particularly relevant in complex and diverse educational contexts, such as those marked by cultural heterogeneity, disabilities, specific learning disorders, or socio-economic disadvantage.
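
As a purely illustrative sketch, the adaptive loop described above can be reduced to three steps: estimate the learner's current mastery from recent answers, adjust the difficulty of the next item, and choose the kind of feedback or alternative strategy to offer. The thresholds and labels below are invented for the example; real adaptive platforms rely on far richer learner models.

```python
# Minimal, hypothetical sketch of one adaptive-learning step.
# Real systems use richer learner models (e.g. knowledge tracing);
# the basic loop, however, is the same: estimate mastery, adapt, give feedback.

def estimate_mastery(recent_scores: list[float]) -> float:
    """Rough mastery estimate: a weighted average favouring recent answers."""
    if not recent_scores:
        return 0.5  # no data yet: assume medium mastery
    weights = range(1, len(recent_scores) + 1)  # later answers weigh more
    return sum(w * s for w, s in zip(weights, recent_scores)) / sum(weights)

def next_step(recent_scores: list[float]) -> dict:
    """Choose difficulty, feedback style, and an alternative strategy if needed."""
    mastery = estimate_mastery(recent_scores)
    if mastery < 0.4:
        return {"difficulty": "easier", "feedback": "guided hints",
                "strategy": "switch to a worked example"}
    if mastery > 0.8:
        return {"difficulty": "harder", "feedback": "brief confirmation",
                "strategy": "offer an enrichment task"}
    return {"difficulty": "same", "feedback": "targeted correction",
            "strategy": "continue current path"}

# Example: a learner who struggled early but is improving.
print(next_step([0.2, 0.4, 0.7, 0.9]))
```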

AI can thus serve as an enabling tool for inclusion, as long as it is embedded within a clearly defined ethical and pedagogical framework.

The goal is not to automate the educational relationship, but to enhance the teacher’s ability to respond in a targeted way to individual needs, reducing administrative burdens and freeing up time and resources for human relationships and personalized educational paths.

Moreover, through data analysis — learning analytics — it is possible to identify early signs of difficulty, dropout, or demotivation, and intervene promptly with corrective actions and dedicated support.
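
To give a hedged sense of what such an early-warning rule can look like, the fragment below combines a few engagement indicators into a simple flag that invites the teacher to look closer. The field names and thresholds are invented for the example and do not come from any real platform.

```python
# Hypothetical early-warning check based on simple engagement indicators.
# Thresholds and field names are illustrative only.

def needs_support(student: dict) -> bool:
    """Flag a student for follow-up when several warning signs co-occur."""
    signals = [
        student["days_since_last_login"] > 7,   # prolonged inactivity
        student["missed_assignments"] >= 2,     # work not handed in
        student["avg_quiz_score"] < 0.5,        # persistently low scores
    ]
    return sum(signals) >= 2  # two or more signs: suggest a human check-in

students = [
    {"name": "A", "days_since_last_login": 10, "missed_assignments": 3, "avg_quiz_score": 0.45},
    {"name": "B", "days_since_last_login": 1, "missed_assignments": 0, "avg_quiz_score": 0.90},
]
flagged = [s["name"] for s in students if needs_support(s)]
print(flagged)  # ['A'] -> the teacher, not the system, decides what happens next
```

The flag only triggers a human conversation; it does not take any decision on its own, which is exactly the division of labour argued for in this article.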

However, this promise of a more equitable and student-centered school cannot be fulfilled without a profound revision of the current organizational and educational models.

A systemic investment in teacher training is essential to develop advanced digital skills and critical capacities in the use of technology.

Active engagement from school communities is equally crucial to ensure that digital transformation is not imposed from above, but built collectively, while fully respecting the identities, autonomies, and specificities of each educational context.

Photo: Pexels.

A new pedagogical alliance

One of the most stimulating aspects introduced by the spread of artificial intelligence — and in particular of Large Language Models like ChatGPT — is the enhancement of new cognitive practices.

The work of the future will consist less in providing answers and more in asking the right questions, a skill the scientific literature refers to as "prompt engineering."
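
To make the idea concrete, compare a vague request with a structured one. The wording below is purely illustrative: specifying role, audience, constraints, and the desired output usually yields more useful answers from a language model.

```python
# Illustrative only: the same request to a language model, phrased two ways.
# A structured prompt states role, audience, constraints, and output format.

vague_prompt = "Explain photosynthesis."

structured_prompt = (
    "You are a science teacher preparing material for 12-year-old students.\n"
    "Explain photosynthesis in no more than 150 words, using one everyday\n"
    "analogy, and finish with three short questions to check understanding."
)

# Either string would be sent to a model through its API; the structured
# version makes the purpose, audience, and success criteria explicit.
```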

This epistemological shift brings back to the center of the educational process certain transversal skills that have always been crucial: reflection, curiosity, critical thinking, the ability to analyze and synthesize, to design and experiment.

What is emerging, then, is a new alliance between human and artificial intelligence, where the goal is not replacement but mutual enhancement.

AI-based tools can help free up time and mental resources from repetitive and administrative tasks, allowing us to focus on activities where our humanity — understood as empathy, ethical judgment, and imagination — is irreplaceable.

Authentic educational relationships, personalized guidance, support during critical stages of growth: these are the areas where technology must step back to leave room for the individual.

The ethics of responsibility in the future of education

The ethics of responsibility must therefore be central to the use of artificial intelligence in education.

Regulating these technologies, protecting student privacy, controlling algorithmic biases, and ensuring transparency in operational logic are crucial challenges that require coordinated normative, cultural, and pedagogical action.

AI is not neutral: it reflects the values, intentions, and shortcomings of those who design it.

This is why it is essential that every innovation be accompanied by critical thinking, by collective reflection on the purposes of education and on the role we want technology to play in our society.

From this educational perspective, it is crucial to embark on a journey — a path of learning, reflection, and discovery — through which we can collectively build a new educational pact between natural and artificial intelligences, in service of a fairer, more open, and more human school.


Emanuele Frontoni is a full professor of Computer Science at the University of Macerata and co-director of the VRAI Vision Robotics & Artificial Intelligence Lab. His research covers artificial intelligence and computer vision, human behaviour analysis, augmented reality and sensitive spaces, and artificial intelligence applied to human health. He has authored over 250 international papers and collaborates with numerous national and international companies on technology transfer and innovation. He serves as an expert for the European Commission in the evaluation of H2020, Horizon Europe, IPCEI CIS, and MSCA projects and is currently involved in several ongoing EU projects (e.g. H2020: DWC, INCREASE, ULTIMATE, and the ROSIN sub-call; MSCA: TRUST). He is a member of the European Association for Artificial Intelligence, the European AI Alliance, and the International Association for Pattern Recognition.