AI Disruptions in Higher Education: Evolutionary Change, Not Revolutionary Overthrow
Cristo Leon, James Lipuma, Maximus Rafla
This paper offered a systems-theoretical analysis of large language models (LLMs) in the context of higher education. It began by clarifying the conceptual landscape, introducing key definitions that frame LLMs not as revolutionary threats but as evolutionary developments grounded in decades of natural language processing and machine learning research. It then examined how the integration of LLMs prompted institutions to seek new forms of homeostasis, balancing innovation with stability through adaptive regulatory feedback loops.
Next, the analysis explored intersections with broader concepts such as agency, authorship, commodification, and cybernetic governance. It argued that LLMs act as boundary objects whose meanings are negotiated across educational, industrial, and policy domains. It then responded to critiques framing LLMs as epistemically corrosive or ethically destabilizing by emphasizing the role of institutional reflexivity in mitigating risks.
Finally, the study concluded that LLMs do not fundamentally disrupt the mission of higher education; instead, they reveal its structural inertia. Their integration highlights the need for recalibrated pedagogical and assessment frameworks centered on learning processes. Rather than resisting technological change, institutions should evolve into feedback-responsive ecosystems that uphold human-centered values while embracing permissible forms of automation to enhance, rather than displace, intellectual and creative engagement.