
Saturday, August 30, 2025

80% of Students Now Use AI: AI Has Come To Reshape Education

 


by Tyler Durden
Tuesday, Sep 09, 2025 - 02:55 AM

Authored by Lika Kobeshavidze via the Foundation for Economic Education (FEE),

In the last few years, everything has changed for college students. Applications capable of writing assignments suddenly became a part of everyday life.


What is the real impact of artificial intelligence (AI)? Is it a convenient tool for personalized learning or a path to academic dishonesty?

Out of nowhere, AI became students’ best friend.

A tool created in 2022 is now a daily habit. Professors may see dishonesty, but students see efficiency. Is AI additional help or a shortcut to avoid learning?

The real problem is a decline in educational standards. Will over-reliance on AI make students smarter, or does it come with darker consequences?

According to a new study by scientists at the Massachusetts Institute of Technology (MIT), the latter might be true.

Researchers found that writing essays with ChatGPT can lead to “cognitive debt” and a “gradual decline in the quality of written assessments.”

Over four months, MIT researchers asked 54 adults to write essays in three groups: those who used ChatGPT, those who used a search engine, and those who used only their own skills. The team tracked brain activity and analyzed the writing to see how engaged participants were. The results were interesting: those who relied on AI showed much less brain engagement and even struggled to remember their own quotes. When later asked to write without AI, they performed the worst of all.

The study was small, with only 18 participants making the final round, but it raises a big question: Does over-reliance on AI make it harder to think for ourselves?

Artificial intelligence is still a relatively new tool, but its rise has created major challenges for academic integrity. This skepticism is not new: people had similar concerns when electronic calculators emerged in the 1970s, and those were also dismissed as "easy fixes." But exams were eventually redesigned to match the new reality; instead of calculating by hand, students were expected to use calculators and tackle more complex problems.

The real challenge is that institutions haven't updated their standards, or don't know how to. Teachers still assign the same tasks and expect the same outcomes as five years ago, ignoring the fact that a powerful new tool now exists.

It is essential that current and future generations can think critically and creatively and solve problems. However, AI reshapes what this means. Writing essays by hand is no longer the only way to demonstrate critical thinking, just as performing long division by hand is not the only proof of numerical skill.

Already, 89 percent of US college students admit to using ChatGPT for homework, despite its limitations. Adaptation is urgent. Some universities, like Stanford, Barnard, and New York University, have begun offering AI literacy courses, where students learn to evaluate the consequences of AI and understand when not to use it.

Europe is taking a slightly different path. The European Commission’s guidelines emphasize transparency, accountability, and fairness in AI education. Instead of banning the tool, they are trying to integrate it into the learning process. Students may use AI to brainstorm ideas, correct grammar, or support research. The goal is for students to understand how it works, its risks, and capabilities.

Asia, however, is the leader of integration. In South Korea and Singapore, students use AI in classrooms and in assessments. Singapore, through its second National Artificial Intelligence Strategy (NAIS 2.0), has positioned itself as a global leader. The plan outlines 15 steps over the next 3–5 years to advance AI in manufacturing, finance, healthcare, education, and public services.

AI is a huge part of today’s world. No matter how much universities or schools wish it never existed, it seems like it’s here to stay.

Just as calculators took over routine arithmetic, AI can support learning.

But the real problem is not reliance on AI, but misuse.

Pretending that this technology doesn’t exist or banning it weakens education.

This is not 1955; not all students write exams by hand anymore. They need to be challenged, and taught to use AI responsibly and ethically.

In fact, mastering AI should be part of academic success.

Ignoring it leaves students unskilled, unprepared, and ultimately less competitive.

Views expressed in this article are opinions of the author and do not necessarily reflect the views of The Epoch Times or ZeroHedge.





Over 80% of Middlebury College students use generative AI for coursework, according to a recent survey I conducted with my colleague and fellow economist Zara Contractor. This is one of the fastest technology adoption rates on record, far outpacing the 40% adoption rate among U.S. adults, and it happened in less than two years after ChatGPT’s public launch.

Although we surveyed only one college, our results align with similar studies, providing an emerging picture of the technology’s use in higher education.

Between December 2024 and February 2025, we surveyed over 20% of Middlebury College’s student body, or 634 students, to better understand how students are using artificial intelligence, and published our results in a working paper that has not yet gone through peer review.

What we found challenges the panic-driven narrative around AI in higher education and instead suggests that institutional policy should focus on how AI is used, not whether it should be banned.

Contrary to alarming headlines suggesting that “ChatGPT has unraveled the entire academic project” and “AI Cheating Is Getting Worse,” we discovered that students primarily use AI to enhance their learning rather than to avoid work.

When we asked students about 10 different academic uses of AI – from explaining concepts and summarizing readings to proofreading, creating programming code and, yes, even writing essays – explaining concepts topped the list. Students frequently described AI as an “on-demand tutor,” a resource that was particularly valuable when office hours weren’t available or when they needed immediate help late at night.


We grouped AI uses into two types: “augmentation” to describe uses that enhance learning, and “automation” for uses that produce work with minimal effort. We found that 61% of the students who use AI employ these tools for augmentation purposes, while 42% use them for automation tasks like writing essays or generating code.
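Because each student can report several uses, the augmentation and automation shares overlap and can sum past 100%, as the 61% and 42% figures above do. A minimal sketch, using made-up response data and hypothetical category labels (not the survey's actual questionnaire items), shows how such overlapping shares are tallied:

```python
# Sketch of tallying overlapping AI-use categories from survey responses.
# The category labels and response data below are hypothetical illustrations.

AUGMENTATION = {"explain concepts", "summarize readings", "proofread"}
AUTOMATION = {"write essays", "generate code"}

# Each entry is one AI-using student's set of reported uses.
responses = [
    {"explain concepts", "proofread"},
    {"write essays", "summarize readings"},
    {"generate code"},
    {"explain concepts", "write essays"},
]

n = len(responses)
# A student counts toward a category if any reported use falls in it,
# so the same student can count toward both categories at once.
aug_share = sum(1 for r in responses if r & AUGMENTATION) / n
auto_share = sum(1 for r in responses if r & AUTOMATION) / n

print(f"augmentation: {aug_share:.0%}, automation: {auto_share:.0%}")
# Here both shares are 75%, summing to 150% -- overlap, not an error.
```

The same logic explains why 61% plus 42% exceeding 100% in the Middlebury data is expected rather than a reporting mistake.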

Even when students used AI to automate tasks, they showed judgment. In open-ended responses, students told us that when they did automate work, it was often during crunch periods like exam week, or for low-stakes tasks like formatting bibliographies and drafting routine emails, not as their default approach to completing meaningful coursework.

Of course, Middlebury is a small liberal arts college with a relatively large portion of wealthy students. What about everywhere else? To find out, we analyzed data from other researchers covering over 130 universities across more than 50 countries. The results mirror our Middlebury findings: Globally, students who use AI tend to be more likely to use it to augment their coursework, rather than automate it.

But should we trust what students tell us about how they use AI? An obvious concern with survey data is that students might underreport uses they see as inappropriate, like essay writing, while overreporting legitimate uses like getting explanations. To verify our findings, we compared them with data from AI company Anthropic, which analyzed actual usage patterns from university email addresses of their chatbot, Claude AI.

Anthropic’s data shows that “technical explanations” represent a major use, matching our finding that students most often use AI to explain concepts. Similarly, Anthropic found that designing practice questions, editing essays and summarizing materials account for a substantial share of student usage, which aligns with our results.

In other words, our self-reported survey data matches actual AI conversation logs.

Why it matters

As writer and academic Hua Hsu recently noted, “There are no reliable figures for how many American students use A.I., just stories about how everyone is doing it.” These stories tend to emphasize extreme examples, like a Columbia student who used AI “to cheat on nearly every assignment.”

But these anecdotes can conflate widespread adoption with universal cheating. Our data confirms that AI use is indeed widespread, but students primarily use it to enhance learning, not replace it. This distinction matters: By painting all AI use as cheating, alarmist coverage may normalize academic dishonesty, making responsible students feel naive for following rules when they believe “everyone else is doing it.”

Moreover, this distorted picture provides biased information to university administrators, who need accurate data about actual student AI usage patterns to craft effective, evidence-based policies.

What’s next

Our findings suggest that extreme policies like blanket bans or unrestricted use carry risks. Prohibitions may disproportionately harm students who benefit most from AI’s tutoring functions while creating unfair advantages for rule breakers. But unrestricted use could enable harmful automation practices that may undermine learning.

Instead of one-size-fits-all policies, our findings lead me to believe that institutions should focus on helping students distinguish beneficial AI uses from potentially harmful ones. Unfortunately, research on AI’s actual learning impacts remains in its infancy – no studies I’m aware of have systematically tested how different types of AI use affect student learning outcomes, or whether AI impacts might be positive for some students but negative for others.

Until that evidence is available, everyone interested in how this technology is changing education must use their best judgment to determine how AI can foster learning.

The Research Brief is a short take on interesting academic work.