AI challenges the university: how teaching and assessment are changing
AI, increasingly present in universities, is forcing teachers to rethink curricula and assessment methods. A recent study from the University of Illinois analyzes how ChatGPT performs in engineering courses, highlighting both opportunities and open challenges for modern education.
Key Points:
- Inconsistent AI performance: ChatGPT performs well on structured exercises, but fails on tasks that require critical thinking and design.
- Rethinking curricula: Teachers must redefine what to teach, balancing practical skills with the development of independent thinking.
- AI as an inevitable tool: Teaching must aim to educate students to use AI consciously and critically.
- Open technological and ethical challenges: Questions persist about the economic sustainability of AI, data privacy and copyright protection.
The entry of AI into university classrooms is forcing the academic world to thoroughly reconsider its practices. According to a pilot study conducted by Melkior Ornik, associate professor in the Department of Aerospace Engineering at the University of Illinois at Urbana-Champaign, the free version of ChatGPT managed to earn a grade equivalent to a B- in a mathematics for autonomous systems course. The analysis, however, revealed significant inconsistencies in performance depending on the type of exercise: while the AI performed admirably on structured tasks, such as multiple-choice quizzes and numerical calculations, achieving results close to 100%, it showed marked deficiencies on complex problem-solving and design tasks, often earning the equivalent of D-level grades. Ornik, supported by doctoral student Gokul Puthumanaillam, designed the experiment to assess realistically how far a student with no real grasp of the material could rely solely on AI to pass a college course. The results, collected in a preprint titled "The Lazy Student’s Dream: ChatGPT Passing an Engineering Course on Its Own", raise questions about the nature of the knowledge to be transmitted to students today.
Comparing the current situation to the introduction of the calculator in schools, Ornik suggests that, as then, the best approach is not to resist technological progress but to recalibrate teaching, prioritizing reasoning, the ability to choose among alternative methods, and the recognition of the limits of machines. Much as happened with the abandonment of printed tables of logarithms and trigonometric functions, accepting AI today means asking what is still essential to learn by heart and what can instead be delegated to digital tools. This dialogue between tradition and innovation also extends to primary education, where the teaching of multiplication tables and mental arithmetic continues to be defended as fundamental for cognitive development, despite the availability of calculators and smartphones.
Ornik identifies three possible approaches to integrating AI into teaching: counter it by designing assessments that are difficult for machines to replicate, such as oral exams or creative tasks; embrace it as a resource, teaching students to use it effectively; or treat it as an inevitable presence, training students to use AI tools critically and consciously. The latter option, the one Ornik feels closest to, entails educating students to doubt, to verify information independently, and to understand the potential fallacies of algorithms.
In the context of this debate, reflections also emerge on the real prospects for AI's development. Despite the widespread enthusiasm, the economic sustainability of current models remains uncertain: operating costs often exceed revenues, and some observers compare the current euphoria to the dot-com bubble of the early 2000s. Ornik himself is skeptical of the overblown marketing that labels as "smart" even products, such as automated barbecues, built on technologies that have been established for decades.
Beyond economic considerations, there are still open ethical and legal questions: the management of personal data, copyright protection, and the reliability of generated content remain central issues. In this evolving landscape, the University of Illinois is preparing to extend the study to a broader range of engineering courses, exploring changes to teaching materials and assessment methods to adapt them to the era of generative AI.
The goal is twofold: on the one hand, to develop modules dedicated to critical thinking about AI, showing students concrete examples of errors made by models; on the other, to review curricular content in light of the fundamental question: what still makes sense to teach, and how.
In parallel, the first steps toward integrating AI into school curricula are also being taken at the political level: President Donald Trump has announced initiatives to promote the adoption of AI starting from kindergarten, coordinating efforts among the government, academic institutions, the private sector and foundations. However, the slip in which Education Secretary Linda McMahon confused "AI" with "A1" during a recent public appearance underscores how much work is still needed to bridge the digital skills gap at all levels.
As AI expands in education, the real test will be the ability of the education system to evolve, without losing sight of the essence of human development.