How ChatGPT spoiled my semester (2024)

77 points by edent | 58 comments | 8/7/2025, 5:00:32 AM | benborgers.com ↗

Comments (58)

bradley13 · 9h ago
ChatGPT didn't ruin anything. Lazy students did.

I'm a prof, and my experience so far is that - where AI is concerned - there are two kinds of students: (1) those who use AI to support their learning, and (2) those who use AI to do their assignments, thus avoiding learning altogether.

In some classes this has flipped the bell curve on its head: lots of students at either end, and no one in the middle.

ViscountPenguin · 9h ago
Quantity has a quality all its own: it's significantly easier to be lazy with LLMs than with old-school cheating methods, and this shifts the equilibrium point at which people will cheat. So if you're a student, you're now even more likely to be dragging along deadweight than you were a decade ago.
mattgreenrocks · 7h ago
Yes, I think many aren’t realizing this. It is much easier to be lazy, much faster to get something plausible, and not culturally looked down upon. It is also easier to say, “I’ll ask ChatGPT to help me with my main points,” and fall into having it write the essay for you. Changing the defaults (as they were) changes the system, given enough time.

Perhaps the worst aspect of LLMs is they can support us in our “productivity,” when we’re actually trying to avoid the hard work of thinking. This sort of technology-assisted self-delusion feels very similar to social media: “I’m not hiding from life and people, I’m talking to them on Twitter!”

Frieren · 8h ago
> ChatGPT didn't ruin anything. Lazy students did.

That's a very unhelpful way of looking at it. It solves nothing.

ChatGPT has created a problem; we need to look for ways of solving it. Unregulated technology is harmful, and we need regulations that limit the bad use cases.

Just blaming people so that, let's be real, your shares in big tech corporations go up is destructive for society. These tools, like social media, are harming people and should be regulated to work for us. If they cannot, then shut them off.

veltas · 7h ago
> That's a very unhelpful way of looking at it. It solves nothing.

They're not trying to give you facts that solve things, they're just giving you facts, as they see it. You can do with that info what you will.

pipes · 8h ago
And who gets to decide that?
delusional · 8h ago
Democracy
pipes · 1h ago
Where does democracy start and personal choice end?
rrgok · 6h ago
Ahah good one...
PeterStuer · 7h ago
"Unregulated technology is harmful, we need regulations that limit the bad use cases"

Unfortunately, most often the cure is worse than the poison.

_Algernon_ · 7h ago
The solution is obvious:

Go back to pen-and-paper examinations at a location where students are watched. Do the same for assignments and projects.

rrgok · 5h ago
Better: a 15-30 minute oral examination. I loved those: I could reason through the problem without worrying about writing correct sentences, and without hoping that the other person understood what I wrote and that I understood the question.
integralid · 5h ago
As a person who used to teach at uni a bit: oral exams sound great, but I dread the amount of work it takes to actually spend 20 meaningful minutes with every student in a row to test their knowledge. Grading the assignments was already the most exhausting thing of the whole year.
integralid · 5h ago
I'm not saying I disagree, but that won't work too well for coding assignments, for example
qnleigh · 7h ago
Something like this does seem like the only viable option. I wonder how much it will actually happen.
nullc · 6h ago
This article is about adults who are paying considerable amounts of money to be educated, significantly undermining the education they're paying for by offloading the exercises onto chatbots.

And your response is that we need regulations??

Institutional policy, changes to lesson practices, covering the risk of wasting your education in intro materials... sure!

But the state is not your parent, and it's certainly not mine. Geesh.

bradley13 · 8h ago
Seriously? Regulate use cases? That's sort of like the very short-lived attempt to regulate automobiles, by requiring someone to walk in front of every vehicle, ringing a bell.

Actual, useful AI is a disruptive technology, just as the automobile was. Trying to regulate use cases is the wrong solution. We need to find out how this technology is going to be integrated into our lives.

CalRobert · 7h ago
That was actually pretty useful regulation when the streets were full of people, horses, commerce, kids, etc.
lmm · 7h ago
It would be pretty useful regulation to bring back, then maybe the streets would be full of people, horses, commerce, and kids again.
suddenlybananas · 8h ago
This is why we should allow kids to drive cars in gym class.
isaacfrond · 7h ago
Actually, gym classes are a dumb leftover from the turn of the century (no, not that turn; the 1900 turn). They serve no purpose anymore. Modern students get no meaningful skills and no lasting health benefits. It's mandatory exertion for a grade.

Learning to drive a car at school would do a whole lot more good!

integralid · 5h ago
What's wrong with making sure every kid gets to move a bit (exert themselves, as you put it)? It doesn't matter for kids who are already fit, but I think it's great for the health of the bottom 30% who don't exercise and don't have a proper diet (both their parents' fault, to be clear).
hvb2 · 8h ago
You seem to think that before ChatGPT there were no students cheating?

There will always be people who try to outsmart everyone else and not do the work. The problem here is those people, nothing else.

lm28469 · 7h ago
I never understood this argument... "something was somewhat possible before, hence anything enabling more of it is not a problem and we shouldn't even mention the possibility of it being harmful"
integralid · 5h ago
"It was always possible to buy meth if you knew where to look, so it's not a problem that meth is now sold in grocery stores. The people abusing it are at fault here" yeah, ok.
hvb2 · 4h ago
My point was that there has always been cheating; the tools just change. I'm sure cellphones were a tool of choice 10 or 15 years ago.
lm28469 · 1h ago
Literally everything on this planet is a slight iteration of something preexisting; that's hardly an argument. The Internet is just a better mail system. A container ship is just a fast mailman. A nuke is just a very big grenade. Yet all of these things revolutionised the world.
mnmalst · 7h ago
Totally agree, especially in cases where new technology lowers the bar to entry considerably, resulting in potentially many more people using it.
Terr_ · 7h ago
Any sufficiently sharp quantitative change is also a qualitative change.

That applies here, in these academic-cheating scenarios: this is not just an incrementally cheaper and more convenient way to hire somebody to write your paper, as you could for decades.

rekenaut · 7h ago
Is it possible that past methods of cheating were a lot less effective, riskier, or more expensive? If a tool like ChatGPT makes cheating much easier and less risky, of course more students will use it.


meander_water · 8h ago
Google, Anthropic and OpenAI are well aware of this. They've pushed out features like study mode and guided learning, but they're barely a band-aid fix. The people who want to bypass learning and just get the degree now have a cheaper way: instead of paying someone else to do their homework, they pay a bot.
moffkalast · 8h ago
Well given that a large portion of the jobs today are meaningless busywork that only exist so people have something to do, I guess we now also have the education level to match them.
BOOSTERHIDROGEN · 9h ago
Students in category 1 can also fall into the temptation of not trying hard enough to solve the homework/assignments themselves. This is an interesting era, I think.
throwaway290 · 9h ago
No, now there are more people who are not learning. Previously students had to learn. Group 2 was very small because you had to be very lucky to get your assignments done for you without learning at least something yourself. And if you half-assed your assignment, it would be noticeable to the lecturer. Now it is a no-brainer for many people to be in group 2.
senectus1 · 8h ago
yup, AI is next-level useful when treated as a rubber duck + search engine.

It's absolutely a backstabbing saboteur if you blindly trust everything it outputs.

Inquisitive skepticism is a superpower in the age of AI atm.

nomius10 · 6h ago
> ChatGPT didn't ruin anything. Lazy students did.

This is a prime example of thinking exclusively along the lines of rugged individualism. It assigns all blame to the individual, whilst ignoring any systemic or collective causes.

It ignores the socio-economic realities of the students. Especially if they come from a challenged background. To them the important thing is getting the high paying job which represents a ticket out of the lower class, and if that can be optimized, it's a no brainer that they would take that route.

It ignores the fact that the actual credential paper is more important to recruiters than the knowledge gained through the program. Or that networking and referrals carry a much larger weight in recruiting than raw skill, more than we'd like to admit from our meritocratic self-image.

It ignores the fact that maybe the module itself is not that valuable? We're talking about the US here, and people literally pay out of pocket for education. And yet they cheat/skip it in a heartbeat. The only valid rationale is that there is no value there from an economics lens. They'd rather spend that time doing extracurricular activities that actually improve their chances of getting employed.

It ignores the fact that since the industrial revolution the education system has not evolved at all (merely adding a computer lab does not mean the system was reworked, it's the other way around, the new technology was adapted into the existing system).

The education system has flaws. The incentives in the job marketplace have flaws. There are many factors at play here, and simply arguing that "it's the student's fault" is the equivalent of an ostrich sticking its head in the sand.

sudohalt · 7h ago
The vast majority of people go to school to be "employable" (or because it's what you're "supposed to do"), not to learn. The only thing that matters is your grade, therefore people will optimize for that. With grades being the only thing that matters, plus time pressure (you must graduate by this date), it's no surprise that students offload their work to ChatGPT. They don't care about learning; they care about what is actually valued: their grade and graduating. If those things didn't matter, people wouldn't use ChatGPT, because their incentive would be to learn. You see this with older people who go back to college or take community college classes: they aren't cheating, because there is no incentive to cheat.
T4iga · 6h ago
Except that since getting my degree, no one but my ego has ever cared about the grades I received (which weren't good, btw). This is my European perspective; I don't know how it is in the US.
lysecret · 7h ago
I honestly think the times when you became employable simply with decent grades and a degree are over.
tossandthrow · 8h ago
This

> Some of the sections were written to answer a subtly different question than the one we were supposed to answer. The writing wasn’t even answering the correct question.

is my absolute biggest issue with LLMs, and it is really well written.

It is like two concepts are really close in latent space, and the LLM projects the question to the wrong representation.
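That latent-space intuition can be sketched with a toy nearest-neighbour computation. The vectors below are hand-made illustrations, not real model embeddings; the names and numbers are purely hypothetical:

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Hand-made toy "embeddings" for two closely related concepts
# (purely illustrative, not the output of any real model).
concept_asked  = [0.90, 0.10, 0.40]  # the question actually posed
concept_nearby = [0.85, 0.20, 0.45]  # a subtly different question

prompt = [0.86, 0.18, 0.44]  # a paraphrase of the question as written

# The paraphrase sits slightly closer to the neighbouring concept, so a
# model that answers "by similarity" drifts to the wrong question.
drifts = cosine(prompt, concept_nearby) > cosine(prompt, concept_asked)
print(drifts)  # True for these toy vectors
```

With these particular vectors the prompt is nearer the neighbouring concept, which is the failure mode being described: two questions so close in representation space that the model answers the wrong one.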

skeaker · 9h ago
Not sure this was the fault of ChatGPT as much as it was the fault of disinterested students bullshitting a class for credits. I've seen similar bad work in group projects when I was a student well before ChatGPT was a thing.
augment_me · 8h ago
I am a PhD student who teaches part-time in some courses, and a difference I have personally run into is that disinterested students still had agency over what they had written. When you present something, or hand something in, that you have written, you still display the information you know at that time. This makes feedback helpful, because you can latch it onto SOMETHING.

When I get obvious LLM hand-ins, who am I correcting? What point does it have? I can say anything, and the student will not be able to integrate it, because they have no agency over or connection to "their" work. It's a massive waste of everyone's time; I have another five students in line who actually have engaged with their work and who will be able to integrate feedback, and their time is being taken by someone who will not be able to understand or use it.

This is the big difference. For a bad student, failing and making errors is a way to progress and integrate feedback; for an LLM-student, it's pointless.

what-the-grump · 7h ago
I think only time will tell? Let me flip this on its head.

LLMs allow me to tackle stuff I have no business tackling because the support from the LLM for the task far exceeds google / stack overflow / [insert data source for industry or task].

Does the concept sink in? Yes and no; I am moving too fast most of the time to retain the solution.

When the task is complex enough and the LLM gets it wrong, oh boy is it educational: not only do I have to figure out why the LLM is wrong, I now have to correct my understanding and learn to reason against it.

I was a very bad student; most of the classes didn't make sense to me, bored me out of my mind, and I failed a lot. Do I ever feel that way when talking to ChatGPT about a task I have no idea how to solve? No, and guess what: we figure it out together.

Another data point: my English writing has improved by using ChatGPT to refactor/reformat: more examples, mostly correct English structure. Over time, stuff sinks in even if you are not writing it; you are still reading and editing it.

Let's take code for a minute: is it easier to edit someone else's code or your own? So everyone who has to dive deep into troubleshooting ChatGPT's code is somehow dumb/lazy? I don't think so; they are at least as smart as the code.

What would happen if we built a curriculum around using ChatGPT? How far would I get in Chem 1 if I spent 90 minutes with ChatGPT prompts prepared by a professor and a machine that never gets tired of explaining/rephrasing until I get it?

augment_me · 7h ago
I get your point and I agree with it, but I don't think this flips anything. What you are describing is an engaged, motivated student who is using the tools for learning when the traditional system did not suit them. I am actually the same: I had shit grades and found traditional lectures pointless.

What I am describing, and what I and many colleagues have run into, are students who are not engaged or motivated in their work because there is a path of much less resistance, and who are using LLMs to pass learning moments with minimal effort.

When you choose to edit the code of the LLM instead of feeding it all back to it with the added prompt "it does not work, fix it", you have already made a choice in learning.

Edit: I do agree about the curriculum change. However, there is a time window now, before any consensus on the new ways of learning and before political action from the universities, in which learning is to a much higher degree in the hands of students than of the universities. And this power can cut both ways, to a greater extent than before, when the university was in control.

tossandthrow · 8h ago
Chatgpt introduces a new vector of being a bad collaborator.

Previously you could write a lot of text that was wrong, but claim that you at least tried.

Now we need to get used to putting "they contributed AI slop" in the same category as "they did not contribute anything".

This is one of the reasons why juniors will vanish - there is no room for "I tried my best".

Edit: clarity

jatins · 9h ago
I am seeing this on the job as well, where there is increasingly a trend of submitting code that the authors haven't thoroughly reviewed themselves and can't reason through.
wfhrto · 9h ago
Why should that be a problem? Code no longer needs to be understood. If there are any problems, code can be trivially regenerated with updated descriptive prompts.
jeffhuys · 8h ago
Oh the future is going to be BRIGHT for hackers!
sfn42 · 9h ago
Is this sarcasm or are you really that naive?
nlitsme · 9h ago
I think your 'group' is not communicating very well. Just telling a groupmate 'Now I will take over your work' is not very supportive. When people start editing each other's work out of the blue, there seems to be no healthy discussion at all.
michaelchicory · 9h ago
This sucks, though it’s formative. An experience that I value highly from my time studying was working on a group project in a team with misaligned goals: it teaches you how much it matters to find good people to work with in the real world!
nuthje · 7h ago
Might be worthwhile to have an actual conversation with your peers instead of just deleting their work and running over it. Maybe they thought your level of work was lower than that produced by GPT? Maybe they thought it useful to have a filler draft instead of starting from a blank sheet? Maybe they have a sick parent at home and needed to fill it in just to move on? Complaining externally and not communicating is just a very tiny step up from what your classmates are doing.

Also, having your entire semester spoiled by a few incidents caused by random passersby? Come on, university is for growing up, so start.

PeterStuer · 7h ago
Ah, the hell of group projects. In my days, if it was a group assignment you would still end up doing all the work yourself 80% of the time, as the group just couldn't be arsed to produce anything and would rather play a game of chicken.

Worse yet, in 10% of the cases you'd get some clueless but very opinionated student wanting to be 'manager' and 'editor in chief', contributing nothing but bossing everyone around.

And yes, in the other 10% of cases, you would have another student who was actually smart and helpful.

So is it better to have LLM slop rather than nothing at all? Probably not, but it is not like those people would have turned in good contributions otherwise.

self_awareness · 9h ago
If it weren't ChatGPT, those students would more than likely be the kind that buy solutions so they still don't have to work.

Some people somehow think that having more while working less is an act of resourcefulness. To some extent maybe it is, because we shouldn't work for work's sake, but making "working less" a life goal doesn't seem right to me.

tossandthrow · 8h ago
The difference is $20 vs. $200 plus significantly more effort.

I don't think these students necessarily would have bought.

rekenaut · 7h ago
The difference is much greater than this. It's $20/month for a machine that can provide instant answers to any prompt on any topic hundreds of times a month, vs. $200/assignment that may take days to receive and that you have to edit yourself if you want a change made.

I think it’s quite clear that most students who are using AI now to generate assignments would not have bought.

rrgok · 5h ago
Why should "working less" not be a life goal? I got an undergraduate degree because I can earn more, but I don't need more money; I need more free time. With an increased salary I can work less.
PeterStuer · 7h ago
Before LLMs they copy pasted straight from Google or Wolfram Alpha.