AI models have limited reliability, but they can become tireless teachers

28 April 2025
Photo: courtesy of Jacopo Tagliabue.
“Unlike fake news,” in science “theorems generated by artificial intelligence can be automatically verified by another machine,” says NYU professor Jacopo Tagliabue
The original version of this article was published in Italian by the same author on 19 April 2025.

Jacopo Tagliabue is one of those rare experts who can make complex topics easy to grasp—even for those with no background at all.

He’s perfectly capable of diving deep into highly technical details, especially when it comes to machine learning. But with the same clear, straightforward approach he uses with his students at New York University, he explained what it means to work in AI at a startup and take it all the way to an IPO, as he did with Coveo.

“My parents thought I was stupid because I didn’t speak at one year old. The real, and unconventional genius in the family is my sister!”

He jokes about the academic journey that took him from the Santa Fe Institute to the Massachusetts Institute of Technology, before venturing into Silicon Valley, where he co-founded Tooso — the “startup of philosophers”— well before AI became a hot topic.

To stay afloat in that narrow strip of land called San Francisco, he relied on three defining traits: curiosity — "everything is interesting if you dig deep enough," he says, quoting physicist Richard Feynman — an acute sense of humor, as shown by the biography he wrote to describe himself, and a certain tolerance for risk, without which talent gets wasted on small optimizations instead of bold, ambitious projects.


Photo: Canva.

Alongside his talent for “all things computational,” he nurtures a deep love for literature — because logic and ethics, at the end of the day, are fundamentally the same thing: a duty to oneself.

And it’s with logic and ethics that we should also approach AI.

What aspect of generative AI do you find truly revolutionary for our lives?
Right now, these models still have limited reliability, especially in certain areas. But I still find a lot of potential in them: it’s empowering to ask the dumbest questions to a professor who never gets tired, and without any embarrassment. I myself constantly ask ChatGPT questions about programming, especially when I’m exploring new technologies.

Well, those don’t sound like dumb questions at all. Can you give an example of the technologies you’re referring to? And is ChatGPT able to help you out?
Absolutely. My company develops software mainly in two programming languages: Python and Golang. It’s a bit like saying English and Spanish: in that metaphor, I’m fluent in English and currently learning Spanish. One trick I use to learn is asking ChatGPT to translate a concept I know well in “English” (Python) into “Spanish” (Golang).
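A minimal sketch of what that translation trick can look like in practice (a hypothetical example, not taken from the interview): a small Python idiom on top, and the loop-based Go equivalent a chatbot would typically suggest, shown here as a comment because Go has no list comprehensions.

    # Hypothetical example: keep the squares of the even numbers in a list.
    numbers = [1, 2, 3, 4, 5, 6]
    even_squares = [n * n for n in numbers if n % 2 == 0]
    print(even_squares)  # prints [4, 16, 36]

    # Asking ChatGPT for the Go version usually yields an explicit loop:
    #
    #   var evenSquares []int
    #   for _, n := range []int{1, 2, 3, 4, 5, 6} {
    #       if n%2 == 0 {
    #           evenSquares = append(evenSquares, n*n)
    #       }
    #   }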

Speaking of translation, and thus, as the Italian saying goes, of treason: what’s your view on the impact of AI in the field of journalism?
I think basic news reporting — the kind that sticks to the five Ws: who, what, where, when, why — is not doing particularly well. We can’t underestimate the issue of fake news, because it can be catastrophic at all levels of society. There’s a very real risk that, within just a few years, most videos, images, and text online will be AI-generated — with all the limitations and dangers that come with that. There are technical issues, such as the infamous hallucinations these models can produce, but also ethical concerns — mainly that bad actors might use these tools for less-than-noble purposes.

What’s one possible solution?
It might sound basic, but I foresee one main answer: train your critical thinking. With all due respect to the aorist I learnt when studying Greek in high school, the ability to assess and revise is the most useful thing I was taught as a kid. In a world where knowledge is no longer a scarce resource, knowing how to distinguish, to doubt, to prioritise — that’s everything.


The Silver Center at New York University in Manhattan’s Greenwich Village. Photo: Wikimedia Commons.

You teach at NYU. Are you already seeing AI’s impact on faculty work, especially on how students engage with learning?
At NYU, we actively encourage students to use AI, but with a critical mind. There’s simply no point in trying to ban it. We don’t see any harm in getting a bit of help writing code for an assignment. What we strongly discourage, though, is mindless copy-pasting. Whether our guidelines and our students’ judgment are enough to keep this balance, well, that remains to be seen. I grew up memorising European capitals — something I can now ask Siri in two seconds. And frankly, I don’t think that’s entirely a bad thing. What worries me the most is the idea of a generation that is never compelled to think independently, or to write an essay from start to finish on their own.

Beyond misinformation, in what other areas might generative AI create power imbalances?
As always, with great power comes great responsibility, at every level. As I mentioned, some issues are already unfolding: copyright, censorship, misinformation. But there are deeper, thornier questions. Take the growing monopolies of major cloud providers, which hold immense, concentrated volumes of data. That kind of scale creates new asymmetries in itself. And then, at the other end of the spectrum, you have individuals. Very different kinds of individuals. On one side, my student who uses a chatbot to do their homework instead of learning the material. On the other, a lawyer who runs a contract through an AI system for analysis. That said, I’ll always go back to Isaac Asimov’s words: “If knowledge can create problems, it is not through ignorance that we can solve them.”

If you had to comment on one recent AI development, what would you choose? Even something simple, like “apps that changed your life this month.”
I’m a rather boring user, to be honest — I mostly rely on generative AI for programming, where it’s genuinely transformative. As a failed logician, I’ve been fascinated by recent news around AI at math olympiads. It feels like a glimpse into a future where “doing math” becomes a much more collaborative process — human-to-human, and human-to-machine. Unlike fake news, after all, mathematical theorems generated by AI can actually be verified—by another machine. And that’s a game-changer.
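To make that last point concrete (this illustration is mine, not part of the interview): a proof assistant such as Lean checks a proof mechanically, so it makes no difference whether a human or a model wrote it. A minimal, hypothetical Lean 4 example:

    -- The kernel verifies these proofs mechanically, regardless of who (or what) wrote them.
    theorem one_plus_one : 1 + 1 = 2 := rfl

    theorem add_comm_example (a b : Nat) : a + b = b + a :=
      Nat.add_comm a b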


An aerial view of Silicon Valley. Photo: Wikimedia Commons.

You’re a regular in Silicon Valley. Are there any promising AI startups or tools worth watching?
Absolutely. There’s an explosion of innovation happening right now, and even the average consumer can find something compelling. Just among companies close to my own network, I’d highlight Runway, which is building creative tools that use AI to generate videos from still images — think of it like the animated portraits in Harry Potter. Looking at larger players, Duolingo has recently launched a virtual speaking companion designed to help users practice phone conversations and learn languages more naturally. In the conversational AI arena, there’s Character.AI, which lets users interact with customizable artificial personalities. Ever wondered what you’d say to a digital clone of Socrates?

Ask yourself a question and answer it, then ask ChatGPT the same question and compare.
Here’s the prompt: “Who’s better, Blur or Oasis?”

What’s the answer?
As always, ChatGPT took the “centrist position.” It said that it depends on your taste but suggested considering both sides: Blur for their innovation and experimentation, Oasis for their melodies and emotional punch. It says: “If you prefer more varied, experimental, and refined music, Blur might be your pick. If you’re drawn to melodic, instantly catchy anthems with emotional and cultural resonance, then Oasis could be for you. In short, there’s no objective answer — it depends on what you value in music. What matters more to you?”

And you?
Blur.


Ludovica Taurisano is a PhD candidate in Global History and Governance at the Scuola Superiore Meridionale in Naples, with a research project on popular publishing and political information in the 1960s and 1970s. Trained in political theory and communication, she has worked on how public opinion is constructed; she has collaborated with the Osservatorio sulla Democrazia and the Osservatorio sul Futuro dell’Editoria of Fondazione Giangiacomo Feltrinelli. She is now a Program Manager at The European House – Ambrosetti. She writes about politics and the performing arts for Birdmen Magazine, Maremosso, Triennale Milano, il Foglio, Altre Velocità, and anyone who asks. Every now and then she also does things on stage.