In a lecture hall at a Moscow university, a fifth-year student enters a prompt for a marketing case study analysis just thirty minutes before the deadline. The neural network generates a structured response complete with examples and references. After refining the phrasing, she submits her work.
A study recently published in the journal Frontiers in Education examined these exact practices. Researchers surveyed over a thousand students across several countries to analyze how they integrate generative models into their daily studies. Rather than focusing on abstract trends, the research documents specific scenarios ranging from seminar preparation to thesis writing.
The process generally follows a specific pattern. A student defines the task, receives a draft, and then compares it against their own notes and lecture materials. The critical step is not copying, but rather editing and fact-checking. This cycle reduces the cognitive load on working memory, allowing students to shift more quickly from information gathering to actual analysis. The analogy is simple: the neural network acts as a rough assistant arranging furniture in a room, while the student decides where to leave the open space.
According to the study's findings, approximately 65% of respondents have used AI for academic tasks at least once. Positive effects include faster preparation times and a better grasp of complex topics. However, the authors note several limitations, such as a sample consisting primarily of technical and economics students and data collected over only a single semester. Long-term observations on how regular use of these models affects the depth of critical thinking are still lacking.
This points to a much broader issue. When a digital tool becomes more accessible than a professor's guidance, a gap emerges between students who can craft sophisticated prompts and those who treat AI as a shortcut to finished answers. The higher education system, built on assessing independent work, is losing its bearings over a basic question: should it evaluate the final result, or the process used to achieve it?
The question is no longer whether to permit neural networks, but how to redesign assignments to demand what AI cannot yet replace: a personal perspective and accountability for the arguments chosen.