ChatGPT: College's Smartest Unenrolled Student

Jack Caputo
12 Mar 2026
HST401
I pledge my Honor that I have abided by the Stevens Honor System

As a university student, I use ChatGPT all the time.

As a physics and math student, I often turn to ChatGPT when I'm stuck on a problem. If I can't figure out what to try after consulting my notes, lecture slides, the internet, etc., I feed the question into ChatGPT. I don't blindly copy down its steps. Instead, I stop reading at the first point where ChatGPT does something differently, work out what that step is and why it applies, then try my hand at the problem again. Normally, these little "hints" are enough, and they are great time savers. Rather than bypassing struggle, a key aspect of true learning, this method lets me pinpoint the relevant topic or technique and its surrounding context without slogging through unrelated chapters of a dense textbook.

I also use it as a know-it-all problem solver, like a friend who's really good at trivia. Want to find an incredibly niche word or idea that you have no clue how you'd pass into Google? BAM! ChatGPT immediately tells me: negative capability, ambiguity tolerance. Need to troubleshoot setting up Git/GitHub and don't know how to navigate the terminal? WHAPOW! I now start all human interaction with `git pull origin main`. Doing a senior project in a field you're unfamiliar with (machine learning)? WHAZAZA! Concise, formatted explanations of the exact information you need, tailored to the context of your research.

[Image: How the author thinks he looks using ChatGPT]

I sometimes use it for writing. I start any official piece of writing (including this one!) by dumping my stream of consciousness into a long, bulleted list. While there is certainly a well-defined structure, because I'm just spilling everything out, I don't worry about connecting ideas, so similar ideas that belong together end up at disjoint parts of the list. If the list is long and tangled enough, I have ChatGPT consume it and spit out a concise version with no repeated ideas. Its pattern-matching does very well at identifying the similar ideas strewn across different parts of the list, and it even connects ideas the way I wanted them connected. Because I scrutinize its output intensely, this resembles the physics/math use case: I use ChatGPT merely for suggestions – good suggestions, but suggestions nonetheless.

Through my testimonials, and almost certainly your own experience, we can surely agree that generative large language models – the only type of AI I use – are extremely powerful and extremely useful. Are they too good at what they do?

Since I am a student, I'll narrow the scope to university students. In a conversation, one professor briefly framed this in terms of game theory: students want to "win" – get good grades – with the least amount of work. Using AI is therefore merely the path of least resistance in this optimization problem.

Students have, of course, been cheating forever. Before AI, ancient students likely slipped stone tablets chiseled with notes into their togas on exam day; more recently, there were fraternity files of past exams, then Chegg and other online answer banks. But this information revolution really is different from any before it. Books and the printing press, then radio, then the internet made information cheaper and more accessible, but people still had to read, listen, or surf the web themselves. Since Chegg, and even more so with AI, students not only can cheat easily, but can do so without accidentally learning the topic in some capacity along the way.

Of course, AI can simply be used to get information more efficiently, and is not always some cheat-monster. But can we trust students to use it that way? Should we not only halt progress on AI [1], but shut it down completely [2], because it's too powerful for our own good? That proposal was put forward almost three years ago to the day, but it clearly never went anywhere. I think this is a good thing.

The College Board reported at the end of 2025 that most high schoolers use generative AI in their assignments [3]. Beyond my own testimonials, a meta-summary from Campbell University (March 2025) [4] and research from the National Institutes of Health (NIH) (October 2024) [5] recognize generative AI's ability to improve student understanding and rate of learning while enabling more personalized learning experiences. However, people have their reservations. The College Board reported that students, parents, and educators were all evenly split on whether AI is more beneficial than harmful [3]. Clearly, some direction is needed. Interestingly, both Campbell University and the NIH choose to see AI as a force for good, advocating the careful implementation of AI into curricula, starting with AI literacy and responsible-use training for both students and instructors. Even Forbes agrees, stating that institutions should begin writing AI policies to preserve the benefits of AI through responsible use [6].

All of the above are structural changes. On a recent campus visit for a PhD program, I attended an interesting talk by Prof. Diana Sachmpazidi from the Rochester Institute of Technology. In the context of physics departments, she introduced the idea that systemic change requires a change not only in structure but also in culture. This observation has stuck with me, and it's interesting to see how universal it is. In our case, the structural changes mentioned above would not stick unless paired with a change in cultural attitudes toward AI.

The West's emphasis on efficiency above all else incentivizes students to shun true learning in favor of the more efficient but less robust learning AI offers. The culture perpetuates idealizations of efficiency, and attitudes toward efficiency are primarily what must change.

Perhaps we should stigmatize the overuse of AI. Stigmas can be beneficial, like those around cigarettes or cheating in a romantic relationship. "Wow, dude, you really had to use ChatGPT as a dictionary? That's pretty embarrassing. It's just totally overkill." "Dude, you didn't even try the homework problem. I mean, I understand using it if you need help, but what are you doing in college if you're not even trying to learn?" We should stigmatize reaching for it so readily. Like any powerful advancement, it's unhealthy to abuse it – moderate, responsible use is the name of the game.

To be pragmatic: we won't shut down AI development. However, we can certainly change our attitudes. This is a watershed moment, so we should think carefully about our next steps. I hesitate to put responsibility on consumers, because it is corporate tech people creating AI, with the government encouraging it (remember, this technology is used to boost the economy and to wage war). Focusing on changing consumer habits misdirects the blame, so that the people actually responsible for creating the problem are not held accountable.

The tech giants have adopted an attitude of relentless pursuit of progress at the expense of all else. Ceaseless competition between these companies, born simply of the desire to "win", is – so they seem to believe – justification enough for the full development of AI technologies with no reservations. While consumers aren't the root of the problem, informed and responsible consumption is something we as consumers should practice, even if it means more work for ourselves. This holds especially true for the most powerful technologies, such as AI.

I suggest that we trade our harmful impulse toward efficiency and optimization for deliberate, long-term solutions before it is too late. Humanity is at a crossroads. The ubiquity of AI means it touches all information, all classes of people, all forms of media. The implications could bring salvation or catastrophe; in either case, they would be unimaginable. It's up to us to choose wisely.

--------------------------------------------------------------------

[1] https://futureoflife.org/open-letter/pause-giant-ai-experiments/
[2] https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/
[3] https://newsroom.collegeboard.org/new-research-majority-high-school-students-use-generative-ai-schoolwork
[4] https://sites.campbell.edu/academictechnology/2025/03/06/ai-in-higher-education-a-summary-of-recent-surveys-of-students-and-faculty/
[5] https://pmc.ncbi.nlm.nih.gov/articles/PMC11505466/
[6] https://www.forbes.com/sites/avivalegatt/2025/09/18/90-of-college-students-use-ai-higher-ed-needs-ai-fluency-support-now/
