
The original version of this article was published in Italian by the same author on 18 March 2025.
The boundary between human and machine creativity is becoming increasingly blurred thanks to the evolution of Generative Artificial Intelligence, commonly known as GAI.
This technology has the potential to redefine how we approach education, business management, institutional operations, and even critical sectors such as healthcare, influencing diagnosis and treatment.
However, the goal is not to replace humans but to support and prepare us for a future in which human activity is complemented, and even augmented, by automated systems.
This shift paves the way for a more efficient and productive landscape, even in highly specialised fields.
Italy’s approach to artificial intelligence is shaped by interdisciplinary innovation and a rich pool of talent. Many individuals who have studied in Italian schools and universities are now contributing to global AI advancements, leaving a significant mark on the field.
The state of play
Generative artificial intelligence is emerging as a cornerstone of AI evolution.
Unlike traditional systems that are merely reactive or operate based on predefined rules, it creates entirely new and original content.
Its capabilities extend beyond answering questions or executing tasks: generative AI can autonomously generate art, music, text, and problem-solving solutions by leveraging vast amounts of data and advanced deep learning techniques.
This progress is driven by three key factors.
First, the increase in computational power, enabling AI to tackle complex challenges and process massive datasets in ways that would have been unimaginable just a few years ago.
Second, improved deep learning algorithms, which – despite their current limitations – demonstrate intriguing generalisation capabilities.
Finally, the growing availability of increasingly multimodal datasets, opening new possibilities for the development of generative AI approaches.

Emanuele Frontoni, Professor of Computer Engineering at the University of Macerata and Co-Director of the VRAI Lab (Vision, Robotics & Artificial Intelligence).
How it works
To understand how an AI method, and machine learning in particular, works, we must look at supervised learning, one of its core methodologies. This approach involves training a model on a pre-labelled dataset.
In this context, “supervised” means that, during training, the model has access to both input features and their corresponding output labels.
The primary objective is to learn a functional mapping between inputs and outputs so that, once trained, the model can accurately predict the output label for new, unseen inputs.
This technique is widely used in tasks such as image classification, time-series forecasting, and text analysis.
The effectiveness of a supervised learning model depends largely on the quality and quantity of the training data, as well as on the choice of model architecture and hyperparameters. These are the aspects our work concentrates on.
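To make this concrete, here is a minimal sketch of supervised learning in Python, using the scikit-learn library and its bundled handwritten-digits dataset purely as an illustration. It is not the pipeline used in our lab, just an example of training on labelled data and then predicting labels for unseen inputs.

```python
# Minimal supervised-learning sketch (illustrative only, not the VRAI Lab pipeline).
# Each sample has input features (pixel values) and an output label (the digit shown);
# the model learns a mapping from features to labels and is then tested on unseen data.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

X, y = load_digits(return_X_y=True)          # pre-labelled dataset: features X, labels y

# Hold out part of the data to measure how well the learned mapping generalises.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

# Model architecture and hyperparameters (here, the number of trees) strongly
# influence the result, as noted above.
model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)                  # "supervised": training sees both inputs and labels

predictions = model.predict(X_test)          # predict labels for new, unseen inputs
print(f"Accuracy on unseen data: {accuracy_score(y_test, predictions):.3f}")
```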
Journalism and its (generative) AI
In today’s media and journalism landscape, artificial intelligence plays an increasingly significant role.
Generative AI, in particular, is transforming how journalistic content is produced and distributed, unlocking new operational and qualitative frontiers, especially in multimedia content creation.
However, it is crucial to clarify that AI’s role is not to replace human creativity and judgment but to amplify them, enhancing accuracy, speed, and transparency in information management and news dissemination.
Today, generative AI enables the creation of realistic and personalised textual, audiovisual, and multimedia content with unprecedented efficiency.
Advanced algorithms, neural networks, and large language models now autonomously generate articles, financial reports, sports event summaries, and weather updates, expanding the editorial capabilities of newsrooms.
Moreover, AI facilitates the production of highly personalised content, increasing user engagement and enriching the overall reading experience.
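As a purely illustrative sketch of this kind of workflow, the example below asks a large language model to draft a short weather update from structured data. The model name, the data, and the prompt are placeholders, and the result is a draft for editorial review, not publishable copy.

```python
# Illustrative sketch of LLM-assisted drafting for a newsroom.
# Model name, weather data, and prompt are placeholders; the output is a draft
# that goes to a human editor, not directly to publication.
from openai import OpenAI

client = OpenAI()  # assumes an API key is configured in the environment

weather_data = {"city": "Macerata", "date": "2025-03-18",
                "min_c": 4, "max_c": 14, "conditions": "partly cloudy"}

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "You draft short, factual weather updates for a local newsroom."},
        {"role": "user",
         "content": f"Write a three-sentence weather update from this data: {weather_data}"},
    ],
)

draft = response.choices[0].message.content
print(draft)  # the draft is then verified and edited by a journalist before publication
```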
Yet, human oversight remains indispensable.
Journalists still ensure the truthfulness, ethical integrity, and quality of the final product, preventing an overreliance on efficiency at the expense of creativity.
The challenge lies in fostering intelligent collaboration between humans and algorithms.

The Berlaymont building in Brussels hosts the headquarters of the European Commission. Photo by Canva.
Ethics behind the AI Act
With the AI Act, the European Union positions itself as a global leader in ethical AI regulation.
This legislative framework establishes risk-based AI governance, defining requirements for transparency, accountability, and the comprehensibility of automated decision-making processes, an area known as Explainable AI (XAI).
In journalism, this translates into clear and traceable responsibilities when using algorithms, safeguarding society from potential digital manipulation, such as deepfakes and misleading content.
Since February 2025, the European regulation has introduced its first mandatory training requirements, an important step, particularly for newsrooms.
We hope that, soon, newspapers and publishers will establish and communicate clear AI usage guidelines to their audiences and stakeholders while adopting new methodologies to manage original content, ensuring a clear distinction from AI-generated material.
AI in universities
For the past year, generative AI has been permitted in our academic theses, provided students explicitly indicate which parts were generated with the support of an LLM.
Moreover, footnotes – traditionally used for bibliographic references – must specify the tool and the prompt that led to the generated text.
Our experience shows that the questions posed to an AI often reflect how well a student has understood a topic.

Adobe headquarters in San Jose, California. Photo by Flickr.
Adobe’s fight against disinformation
Among the various initiatives promoting high-quality online information is the Content Authenticity Initiative (CAI), a project led by Adobe in collaboration with several tech companies, media organisations, and NGOs.
Its goal is to ensure transparency and reliability in digital content.
This initiative aims to fight disinformation and digital manipulation by providing tools that allow users to verify the origin, history, and any modifications of multimedia content.
For journalism, CAI is particularly important, offering a technological solution to strengthen readers’ trust in the news by enabling editorial teams to certify the provenance and authenticity of their content.
Through CAI, news outlets can track and communicate clear information to their audiences regarding the creation, modification, and publication of texts, images, audio, and video, enhancing editorial transparency and significantly reducing the risk of misleading or altered content.
This initiative serves as a key tool in safeguarding journalism’s credibility in the digital age, improving users’ ability to distinguish between authentic content and manipulated or artificially generated material.
The Content Credentials defined within CAI enable the public to verify, in real time, the origin, authenticity, and any modifications of digital content.
In the future, this kind of verification will be a routine skill for every digital citizen.
Enhancing investigative journalism
Among emerging applications, investigative journalism stands to gain particularly from collaboration between humans and AI.
The concept of Human-AI Teaming is undeniably intriguing.
AI’s ability to rapidly analyse vast amounts of data and detect complex patterns or anomalies makes it an invaluable asset.
With AI assisting in large-scale data processing, journalists can uncover information more quickly and efficiently, exposing corruption schemes, fraud, or hidden conflicts of interest.
This technological support allows reporters to dedicate more time to in-depth analysis, contextualisation, and the ethical verification of sources.
As data-driven investigations evolve, source validation could reach new levels of accuracy. Understanding this evolution – and the emerging professional roles within it – will be essential and fascinating.
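As a hypothetical illustration of such Human-AI Teaming, the sketch below runs an off-the-shelf anomaly detector over a small, invented table of public payments. The columns, thresholds, and injected outliers are all made up, and the flagged rows are leads for a journalist to verify, not findings.

```python
# Hypothetical Human-AI Teaming sketch: an anomaly detector surfaces unusual payments
# for a journalist to investigate; the data, columns, and contamination rate are invented.
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
payments = pd.DataFrame({
    "amount_eur": rng.normal(10_000, 2_000, 500).round(2),
    "days_to_approval": rng.integers(10, 60, 500),
})
# Inject a few suspicious records: very large amounts approved unusually fast.
payments.loc[:4, ["amount_eur", "days_to_approval"]] = [[95_000, 1]] * 5

detector = IsolationForest(contamination=0.01, random_state=0)
payments["flag"] = detector.fit_predict(payments[["amount_eur", "days_to_approval"]])

# -1 marks anomalies: these rows become leads for human verification, not conclusions.
print(payments[payments["flag"] == -1])
```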

Photo by Canva.
Looking ahead
Despite its vast potential, artificial intelligence, and particularly its generative capabilities, also presents significant challenges.
Ethical concerns remain at the heart of the debate: to what extent should we allow machines to “create”? How can we ensure that AI’s outputs align with widely accepted moral and social values?
For citizens, the greatest risk lies in ignorance and fear. We must approach this field with curiosity, exploring both its opportunities and dangers.
However, with balanced oversight, continuous ethical reflection, and regulatory frameworks like those pioneered by the EU, AI has the potential to enrich how we inform, debate, learn, work, and create – delivering solutions that would have been beyond reach not long ago.
And we hope to continue shaping this evolution, keeping humanity at the centre, together with many curious and motivated young minds.