As of now, every undergraduate program can be passed with AI models. Some models not only pass, but pass with flying colors.
The harsh reality is that academia as a whole needs to be revamped. The easy solution would be to go back to paper-only exams and physical attendance - but that would also exclude a ton of students. A huge number of modern students are online students, or in similar programs where you don't need to show up physically. Moreover, I don't think universities and colleges themselves want to go back, as it would mean hiring more people, spending more on buildings, etc.
SecretDreams · 4h ago
So you gave the easy solution. What's the hard solution?
Honestly, the pervasiveness of LLMs looks to really erode the critical thinking of entire future generations. Whatever the solution, we need to be taking these existential threats a lot more seriously than how we treated social media (the plague before this current plague).
theptip · 4h ago
> What's the hard solution?
Ask students to solve harder problems, assuming they will use AI to learn more effectively.
Invert the examination process to include teaching others, which you can’t fake. Or rework it to bring the viva voce into evaluation earlier than PhD.
There are plenty of ideas. The problem is that a generation of teachers likely needs to be cycled through for this to really work. Much harder for tenured professors.
Every technical revolution “threatened to erode the critical thinking of a generation”, and sure, the printing press meant that fewer texts were memorized by rote… not to say there are no risks this time, but rather that it’s hard to predict in advance. I can easily imagine access to personalized tutors making education much better for those who want/need to learn something.
I’m more worried about post-truth civilization than post-college writing civilization for sure.
SecretDreams · 3h ago
> Every technical revolution “threatened to erode the critical thinking of a generation”
Objectively, many of them did erode some amount of critical thinking, but led to skill transfer to other domains so maybe it was neutral. Some of them were productivity boons and we got the golden age that boomers hail from. Other revolutions have just been a straight degradation in QOL. Social Media and LLMs seem to be in that vein. I'd also throw in gambling ads/micro-transactions and smoking as things that haven't exactly helped society. Out of those four examples, we only tried to course correct on smoking and, after a long period of time, we can see it's a net benefit to not smoke.
> I’m more worried about post-truth civilization than post-college writing civilization for sure.
These are the same civilizations on the same timeline.
theptip · 3h ago
> LLMs seem to be in that vein
My opinion is that even if capabilities halted now, LLMs would be more economically valuable than the internet (compared over the same 50 year trajectory). And I predict that they will not halt any time soon.
Maybe this yields more resources to invest in education like the OP author’s, and we end up more enriched than ever before:
> I teach at a small liberal-arts college, and I often joke that a student is more likely to hand in a big paper a year late (as recently happened) than to take a dishonorable shortcut. My classes are small and intimate, driven by processes and pedagogical modes, like letting awkward silences linger, that are difficult to scale.
The only thing I’m confident about is volatility: the range of outcomes is wide.
SecretDreams · 3h ago
> Maybe this yields more resources to invest in education like the OP author’s, and we end up more enriched than ever before:
Maybe maybe maybe
Should we gamble on the lives of future gens for some economic maybes or should we take a minute to think through all probable outcomes and build out some safeguards?
GeoAtreides · 3h ago
the correct assumption is not "AI will coach the student" but "AI will do the homework in its entirety"
anyone who claims otherwise doesn't remember their school days
futureshock · 4h ago
I think the hard solution is to massively increase expectations. Think Star Trek where the grade schoolers are learning quantum mechanics. If everyone has access to the oracle of all human knowledge, then you should teach and test to the maximum of what a student could do with all that power. Find the frontier where the AI fails and the human adds value and teach there.
spacemadness · 1h ago
So many on here keep saying stuff like this but it seems to just ignore any theory of learning. “Just make it harder”. Sure, any examples of how that’d work? “Quantum physics.” OK then, problem solved. That isn’t really explaining anything about how this should work.
SecretDreams · 26m ago
Yes.. but quantum physics does sound pretty great for my kids to learn!
csa · 1h ago
> Honestly, the pervasiveness of LLMs looks to really erode the critical thinking of entire future generations.
Yes and no.
Upper middle class parents as a group will still instill critical thinking skills in their kids.
But the above comment reveals more about SES (socioeconomic status) and education in general than about something specific to critical thinking or LLMs. The current education environment in the US heavily favors kids from higher SES families for a number of reasons. LLMs won’t change this.
The challenge for the education system, imho, is to find a way for lower SES kids to thrive in an LLM environment. Pre-LLM, this was already a challenge, but it was possible. Post-LLM, the LLM crutch may be too easy for some lower SES folks to lean on, such that they never develop the foundations they need to build higher-order skills.
koakuma-chan · 4h ago
Don't forget to ban calculators
rescripting · 4h ago
Calculators _are_ banned when students are learning basic arithmetic.
They are only allowed once students can do it on their own, because by then students have a foundational understanding and the tool just speeds them up.
SecretDreams · 4h ago
If we don't get AI correctly regulated, future gens will probably not be able to work a calculator. Calculator producers will then go out of business - so no need to ban 'em.
Thanks for the insightful comment!
Tadpole9181 · 3h ago
Yes, every single math class I've ever had in my life - primary or secondary education - banned calculators or (in engineering) required us to perform a full memory reset in front of the TA.
Using a machine to do the very thing you are supposed to be demonstrating a proficiency in is cheating and harms the legitimacy of the accreditation of the school.
javiramos · 3h ago
> the pervasiveness of LLMs looks to really erode the critical thinking of entire future generations
That is exactly the threat of AI. With regard to jobs, we'll have a shock, but we will adapt as with any other wave of automation.
bbarnett · 4h ago
Indeed.
I suspect this is the true Fermi paradox. Once a civilization reaches a certain point, automation becomes harmful to the point that no one knows how to do anything on their own. Societal collapse may take us back to the Bronze Age, if not further.
pbronez · 3h ago
You don't need AI for this. So much individual productivity depends on the civilization-level platform. Even when you decide to bootstrap stuff and do it from scratch, you're still operating in an environment deeply shaped by the billions of other people around and before you.
Classic illustration of this: https://www.thomasthwaites.com/the-toaster-project/
I hold remote interviews, and I can tell when candidates use AI to answer questions in real time with the camera on. They repeat my technical question, pause for a few seconds, their voice drops to a monotone and they quickly recite a bulleted list of low level technical details that sounds like a wikipedia page. I worry that candidates will learn to act more subtly, maybe configure their LLM to return an anecdote around the tech in question, and practice "selling" their vocal communication.
msgodel · 1h ago
Most of this stuff isn't that hard to learn and apply. If they put that much effort into setting up an LLM cheating tool they could have just built something with whatever tool you're talking about.
dvh · 4h ago
Students cheat when grades are more valuable than knowledge.
esafak · 3h ago
He's talking about graduates. There are people out there trying to hold down multiple remote jobs by cheating through interviews and collecting as many paychecks as possible until they get caught.
spacemadness · 1h ago
I would say most technical interviewers cheat as they ask questions they themselves couldn’t solve on the spot.
shinycode · 4h ago
Exactly, the problem is not the students but the amount of effort the education system has to pour in to change and make things better (and take advantage of LLMs as a learning opportunity).
jcranmer · 4h ago
Students cheat when they think grades are more valuable than knowledge.
We already have a real-world scourge of tech bros who view learning about other subjects as beneath them and who rate their ignorance as more valuable than experts in that subject. This wonderful new world of AI is only going to exacerbate that problem.
Biologist123 · 4h ago
Perfect.
Dracophoenix · 4h ago
> They repeat my technical question, pause for a few seconds, their voice drops to a monotone and they quickly recite a bulleted list of low level technical details that sounds like a wikipedia page.
Can you provide an example?
runamuck · 3h ago
Me: "Tell me about your favorite tech stack."
Them: "(Pause, eyes scan camera, monotone voice) There are many alternatives for a tech stack, each with their positives and negatives. Let's take a look at some of the more popular options..."
Not even willing to do remote interviews any more.
jerf · 4h ago
One of the things I would say I saw coming from a long ways away is that a lot of effort was put into detecting the "default" voice of LLMs, but people would eventually figure out how to kick them out of it, since it is, after all, literally just a matter of asking. It took some months, but word does seem to be finally getting out; I'm seeing the idea emerge in more and more places.
I gave it the prompt "Suppose you are in a job interview for a front-end web position and someone asks you about how you use the React library and the hardest problem you ever had to solve with it. How might you react, along with a somewhat amusing anecdote?"[1] and it did pretty well. I think I'd play with it a bit to see if I can still suppress some of the LLM-isms that came out, but a human could edit them out in real-time with just a bit of practice too... it's not like you can just read it to your interviewer, you will need to Drama Class 101 this up a bit anyhow. It'll be easier to improv a bit over this than a bare Wikipedia list.
In other words, as with the question the article title asks, the question isn't about what happens "when" this starts being possible... the capability has run ahead of all but the most fervent AI user's understanding and it is already here. It's just a matter of the word-of-mouth getting around as to how to prompt the AIs to be less obvious. I also anticipate that in the next couple of years, the AI companies will be getting tired of people complaining about the "default LLM voice" and it'll shift to be something less obvious than it is now. Both remote interviews and college writing are really already destroyed, the news just hasn't gotten around to everybody yet.
(In fact I suspect that "default LLM voice" will eventually become a sort of cultural touchstone of 2024-2026 and be deliberately used in future cultural references to set stories in this time period. It's a transient quality of current-day LLMs, easy to get them out of even today, and I expect future LLMs to have much different "default voices".)
[1]: And in keeping with my own philosophy of "there's not a lot of value of just pasting in LLM responses" if you want to see what comes out you are welcome to play with it yourself. No huge surprises though. It did the job.
tempodox · 4h ago
The cynic in me says there's no problem with that. Either their jobs will be just as LLM-heavy and prompt-jockeying was the right skill to learn, or they won't be able to hold any job because they didn't learn anything relevant, which would be just payment for the cheating.
collingreen · 4h ago
But it wastes a lot of the resources of the people trying to hire.
tempodox · 3h ago
I imagine this would only be a gradual change. Applicants were lying on their resumes just fine before LLMs. Finding good people has never been easy.
msgodel · 1h ago
Most of the important classes I took had:
1) Written in person exams that were most of the grade (this includes "blue book" exams where you have to sit in front of the professor and write an essay on whatever topic he writes on the board that morning as well as your typical math/algorithms tests on paper.)
2) Written homework where you have to essentially have a satisfactory discussion on the topic (no word range, you get graded on creative interpretation of the course subject matter.)
Language models could maybe help you with 2 but will actually kill your ability to handle 1 if you're cheating on homework with them. If anything language models will mean the end of those retarded make-work cookie cutter graded homework assignments that got in the way of actually studying and learning.
juujian · 4h ago
Don't even need to go that far. Provide a locked-down computer and have students write essays in a dedicated space. I have personally done that and it was a reasonably good experience.
m0llusk · 3h ago
Which raises the issue of what education is for. Is it to know things and solve specific problems in a controlled environment, or is it to work with available tools and resources across a range of dynamically changing contexts? Does being at a locked-down computer in a dedicated space match likely work settings?
hyperjeff · 1h ago
At a minimum, an education should leave you with an ability regardless of the tools at your disposal. There may come a time when whatever tool you have grown accustomed to will not be available. (If a war breaks out, for example, don’t count on any of your electronic devices to work.) By all means learn to use tools effectively, but that’s not enough.
zerkten · 2h ago
You can be sensitive to both, and good institutions have a duty to check instructors who go strongly in a single direction. I think the point is only to exclude the crutch in contexts where it affects learning. Being prepared for work is only one element of education. Its weight varies by course, institution, and many other factors.
Further, there are likely situations where the participants avail of AI to different extents based on how they feel about the situation (cf. different degrees of doping in athletics), and students will sometimes be limited by their means in use of AI tools.
stvltvs · 2h ago
This assumes that education's sole goal is to prepare students for work.
What would be the impact on democratic systems if voters always turn to an LLM for answers because schools didn't require them to think on their own?
rich_sasha · 1h ago
Exams were never the pinnacle of what a grad can do. They were an efficient test, under severe time constraints, that correlated well with overall ability in humans.
That AI can pass these tests doesn't mean it is as smart and capable as a grad. I mean, it might be, or if not today then in a few years, but not because it can pass exams, having digested past exams and sample solutions into its belly.
boerseth · 4h ago
It may get resolved on its own. These days people study to get good grades in order to prove to future gatekeepers (like employers, or higher rungs of academia) that they know the material well. Post-AGI, however, the gatekeepers may not be so interested in humans anymore, and we might not need grades at all. Studying anything could become something done exclusively for one's own interest, and the only point of a grade would be to give oneself a goal to achieve.
Alternatively, if we still want to cling on to this ritual of measuring the performance of students, you could give each and every one of them oral examinations with AI professors.
esafak · 4h ago
We are not post-AGI. We still need to hire people who did not cheat their way through school and interviews with ChatGPT. Even post-AGI, we would want to hire qualified people.
EGreg · 4h ago
Boom, this.
Institutions that prepare people for future jobs have an even harder time justifying what they’re doing than the people who are looking for jobs right now. It’s just inertia at this point.
Not to mention that AI can educate the people better by solving Bloom’s Two Sigma Problem.
So colleges are obsolete except as four year cruises for entertainment and networking.
spacemadness · 1h ago
Ah yes, the AGI that will happen any day now but we can’t even define. Please buy the stock.
Balgair · 1h ago
The cheap version of learning is dead, and AI killed it.
Not that we were learning all that much to begin with. I mean, walk into any sorority and ask to see the test bank. The students and Profs were phoning it in for a while, by and large. Not all of them were though, and good on yah.
But now that the fig leaf is torn away, we're left with the Oxbridge model and not much else - small classes, under 10, likely under 5, with a grad level tutor, social pressure making sure you've done the work. The great thing about this though is that you'll have an AI listening in all the time and helping out, streamlining the busywork and allowing the group to get down to business.
But that version is very expensive. You're looking at ~$50k / student year [0] at a baseline Oxbridge model in secondary school on up - ~$400k / student from 9th to university graduation.
Assume a 6% loan rate for 30 years (a mortgage, essentially), and you've got roughly $2,400 in monthly payments for all your working life, about $29k/year down the drain. How in the hell are you going to manage student loans like that and then try to live a life without a really good job? How the hell is a nation going to be expected to pay for that per kid if you make school free for them?
Cheap learning wasn't good, but it sufficed. The new models of education must answer to the fundamental question of education: How much does it cost?
[0] 2 hours 3x a week per class; 4 classes per tutor per week. Assume $100k/tutor and 5 students/tutor. So $5k / student / tutor. 4 classes / student. So $20k / student in just raw tutors. At least double that for overhead if not triple.
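A quick sanity check of those numbers (a sketch in Python; the salary, group size, class load, principal, rate, and term are all the comment's own assumptions, and the loan uses the standard fixed-rate amortization formula):

    # Sanity check of the estimate above. All inputs are the comment's own
    # assumptions: $100k/tutor, 5 students and 4 classes per tutor, 4 classes
    # per student, and a $400k loan at 6% over 30 years.

    def tutor_cost_per_student(salary=100_000, students_per_tutor=5,
                               classes_per_tutor=4, classes_per_student=4):
        per_student_class = salary / (students_per_tutor * classes_per_tutor)
        return per_student_class * classes_per_student  # raw tutor cost / year

    def monthly_payment(principal=400_000, annual_rate=0.06, years=30):
        r, n = annual_rate / 12, years * 12  # monthly rate, number of payments
        return principal * r / (1 - (1 + r) ** -n)  # fixed-rate amortization

    raw = tutor_cost_per_student()  # $20,000/student/year before overhead
    pmt = monthly_payment()         # ~$2,398/month, ~$28.8k/year
    print(f"tutors: ${raw:,.0f}; with 2-3x overhead: ${2*raw:,.0f}-${3*raw:,.0f}")
    print(f"loan: ${pmt:,.0f}/month, ${12*pmt:,.0f}/year")

The raw tutor cost comes out to $20k/student/year, which the 2-3x overhead multiplier turns into the ~$50k figure; the loan works out to roughly $2,400/month, or about $29k/year.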
slightwinder · 3h ago
> The easy solution would be to go back to paper-only exams and physical attendance - but that would also exclude a ton of students.
Which students?
If it's just about travel distance, maybe schools could organize themselves to offer local test centers where students could attend exams under observation. Reusing existing facilities in this way has been common in my country's education system for decades.
ninetyninenine · 4h ago
There’s a huge volume of revenue coming from remote students.
atonse · 4h ago
Even though I majored in CompSci, I still remember my college essay class and learning about primary and secondary sources and their relative quality, how to craft an argument, and how to articulate your argument to be persuasive. Outside of just writing, those skills have been useful in other scenarios too (like when subconsciously evaluating someone else's argument)
Of course, I still treated it like a lazy college student: I did it in 2.1 or 2.2 line spacing to hit the page requirements, and flipped my thesis because it was easier to research (I started out arguing against the US invading Iraq, but found it way easier to find sources that supported an invasion... well, we all know how reliable those sources were).
elmean · 4h ago
"but found it way easier to find sources that supported an invasion..." Hahahaha, US Army Psy Ops approves this message
ilaksh · 4h ago
AI isn't destroying anything. Don't blame the technology for what humans do with it.
AI should allow every student to have personalized instruction and tutoring. It should be a massive win.
If, instead of taking advantage of that, everyone refuses to do any work and decides to lie and pass the AI's output off as their own, that is not something the AI did. The students did that.
tempodox · 4h ago
> AI should allow every student to have personalized instruction and tutoring.
I admire your optimism.
Funny how everyone has their own dream of the miracles that “AI” should perform. It's just the perfect silver screen for everyone to project their wishes on.
marcellus23 · 3h ago
In the same vein, AI is a boogeyman that people can pin all their fears for the future on.
tempodox · 3h ago
Naturally, “perfect silver screen” works for any purpose.
javier123454321 · 4h ago
We turned higher ed into a qualification-producing factory subsidized by the government at the expense of the kids' financial future. We overemphasized passing over learning, as the education became about the title, not the knowledge. It's not the students' fault that we created this incentive structure. The students who want to learn can still learn; those who come to higher education with a transactional mindset can now just pay for their degree. The truth is, we are at the point where the logic of the commodification of our higher education system is being taken to its logical conclusion, which is its own undoing.
spacemadness · 1h ago
Is this the AI investor version of avocado toast?
eviks · 4h ago
> AI should allow every student to have personalized instruction and tutoring
But it won't, so all we are left with is the loss: cheating replacing learning.
_DeadFred_ · 2h ago
Let's throw away the potential of society because young adults are lazy and AI must be empowered. Or, we could realize the realities of human behavior and INTELLIGENTLY integrate AI. But nah, fuck society/fuck young adults for having the typical young adult mentality.
ImHereToVote · 4h ago
Blame is for small children and God. Moloch will require his sacrifice regardless of whom you choose to blame.
fullstackchris · 4h ago
Second this. I'm sick of seeing posts like this, because correlation =/= causation, as has been proved time and time again. It's just too easy to 'relate' these two things, and it leads to lazy writing and perpetuates this narrative, which has in no way been proven true yet.
IMO the underlying cause has much more to do with a hiring cycle issue: the boom of the low-interest / free money / I-don't-need-to-pay-for-an-office covid years is now leading to the relative hiring "bust" (even though it's not really a bust, unemployment is at 4.2%, certainly nothing out of the ordinary for the US)
Biologist123 · 4h ago
A college friend from the University of Oxford, where students write one or two essays a week, got the top first (best mark) in his history degree. Initially impressed, one day I asked him his exam method - where each student must produce 3 essays in 3 hours (or did then) across about 5 or 6 papers. My friend's approach was to thoroughly research 12 essay questions and pre-write 16-page essays for each paper, which he would then learn verbatim and trot out word-for-word whichever best fit each exam question.
This compared to my method of reading widely, learning quotes and ideas and then writing each essay fresh in the exam hall - and I would typically manage about 3-4 pages per essay. (Reader, I did not get a top first).
I relate this anecdote as I don’t really see my friend’s method as being much better than using AI. Although I do acknowledge his 16 page essays must have been reasonably good.
pcrh · 3h ago
Your friend's approach doesn't sound like cheating; after all, he wrote the original essays.
It's more similar to spending hours preparing small exam cheat sheets, and then realizing that you didn't need them during the exam, as you had learnt the material.
Biologist123 · 45m ago
It definitely wasn’t cheating. But I felt it was not in the spirit of the exam system which I believed - maybe wrongly - was designed to test one’s ability to write a fresh essay from scratch under timed conditions.
What would you say about someone getting AI to write high-standard essays and simply learning those word-for-word?
It’s also not cheating but not in the spirit of the thing I think.
eviks · 4h ago
> friend’s method as being much better than using AI
Why not? He wrote all the essays himself, after all, and in a setting that's much more relevant to real life vs. the artificial constraints of a shorter exam. With AI he would've written/learned nothing himself.
Biologist123 · 42m ago
It’s a fair point, but as a thought experiment, how would you feel about AI writing the essays and the student simply regurgitating those? Legal, but not in the spirit of things, I think.
compacct27 · 4h ago
The leverage has been flipped. We all had awful college classes teaching next to nothing, and now that you can get good grades without attending, what's left? "We lost critical thinking!" No, we were barely getting that in the first place. Now, classes need to be more valuable.
javier123454321 · 4h ago
This is exactly it. Are we surprised that civil engineering students forced to take a humanities class satisfied by psych 101 and having to pay thousands of dollars for the 3 credit hours are cheating on their term paper?
ModernMech · 28m ago
It's not surprising, as there are plenty of technical-minded people who believe they should never have to study anything related to "soft sciences", and will do anything to get out of it. But I don't think that people doing so with AI justifies the idea that civil engineering students should not be taught any humanities.
VeritySage · 6h ago
Rather than grading polished essays, instructors will use in-class writing sprints and live peer reviews to track idea development in real time.
mistrial9 · 4h ago
yes agree - proctored writing assignments are happening now
What if education became research? If, in the hypothetical future, the AI can answer any question about any book or scientific theory, perhaps the educational system could focus on teaching people how to come up with good ideas to research, and how to do that research effectively. Rather than making the questions about historical information more difficult, or answering them in person or writing them in bluebooks, make the process about learning how to create new knowledge.

Educators would become people who teach you how to learn, how to design questions, and how to research those questions to produce factual answers. We've known for decades that lectures are the worst way to teach. Why maintain that failed system?

If the reductionist goal of the college system is a degree that certifies you as an expert in historical knowledge, maybe we can just throw that away, since the AI can handle that part now, and instead certify that people know how to ask the right questions of the AI, and how to interpret its answers to create new knowledge for humanity.
pcrh · 3h ago
AI is going to increase the value of prestige education over middle-of-the-road education.
Middle of the road colleges will not have the resources to ensure that students learn despite AI, whereas the Oxbridges, etc, will retain their tutorial systems and smaller class sizes, where AI is of no use whatsoever.
A comparable phenomenon perhaps exists in the news publishing world. It was envisaged that easy access to information would be the death of pay-to-read news. However, the huge volumes of mediocre and politically-driven output that swamped the internet, airwaves, and printing presses instead increased the relative value of thoughtful and well-sourced news and writing, e.g. the FT, Guardian, BBC, etc., even the New Yorker...
dagw · 3h ago
The question is: can a really good student who knows and understands the topic at hand write a better essay with the help of AI than a student who doesn't know the material and is just relying on AI?
I can easily tell code written by a novice programmer naively 'vibe coding' an app from code written by an experienced developer using AI to help him. Can a history professor tell the difference between a purely AI essay and one written by someone who knows what they're talking about and is assisted by AI to make the essay better?
jcranmer · 3h ago
> I can easily tell code written by a novice programmer naively 'vibe coding' an app from code written by an experienced developer using AI to help him. Can a history professor tell the difference between a purely AI essay and one written by someone who knows what they're talking about?
Yes. That you consider this a question worth asking is a sign of your contempt for the craft of writing an essay. If an AI is that bad at mimicking expertise in your field, why shouldn't it be that bad at mimicking expertise in others' fields?
dagw · 3h ago
> contempt for the craft of writing an essay
I did not mean to disparage the craft of writing, by being imprecise with my own writing. My base assumption was that it should be easy, but if it is this easy, why is everybody freaking out?
HDThoreaun · 2h ago
Then what's the problem? AI users will get bad marks for their essays and non-AI users will get better marks. Seems to be working as it should.
nickk81 · 4h ago
Hopefully AI articles and papers can skip things like:
> Alex has wavy hair and speaks with the chill, singsong cadence of someone who has spent a lot of time in the Bay Area. He and Eugene scanned the menu, and Alex said that they should get clear broth, rather than spicy, “so we can both lock in our skin care.”
An interesting illustration of tradition meeting modernity:
> He then transcribed Claude’s points in his notebook, since his professor ran a screen-free classroom.
Molitor5901 · 3h ago
The future will be oral examination: similar to American law schools, students will have to write by hand and pass oral exams.
tlhunter · 4h ago
An obvious solution is to require the use of Google docs and include the history as part of the assignment. If there is no sign of sentence restructuring then fail the assignment.
This is the equivalent of asking students to show their work when they do math problems and that is how we thwarted those evil calculators.
whiplash451 · 4h ago
This is likely easily gamed by asking the LLM to provide a number of intermediate versions of the output. You still have to do some yak shaving in google docs, but nothing too hard.
tlhunter · 4h ago
That's true. Ultimately I suppose it's a relatively short game of cat and mouse... twenty-year-old calculators can show their work, after all.
tempodox · 4h ago
What, forcing everyone to become a Google user, just so they can study?
calebh · 4h ago
I wrote a blog [1] a couple years ago about this solution - it turns out it is possible to use timestamp authority servers in combination with hashing functions to create a verified edit history. Like the other comment said, it merely starts an arms race where the AI side is likely to win, which is why I haven't pursued this further.
For something like digital art creation, verifying the edit history is much more fruitful, since the diffusion process is nothing like how humans create art.
[1] https://helbl.ing/Written-Proof-of-Work/
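A minimal sketch of that idea, assuming SHA-256 snapshot chaining (the function names below are illustrative, not taken from the linked post; a real system would also have each digest countersigned by a timestamp authority, e.g. via RFC 3161, as the comment describes):

    # Minimal sketch of a hash-chained edit history (illustrative, not the
    # linked post's actual implementation). Each snapshot's digest commits to
    # the previous one, so a history fabricated after the fact won't verify.
    import hashlib
    import time

    def record_snapshot(history, text):
        prev = history[-1]["digest"] if history else "0" * 64
        digest = hashlib.sha256((prev + text).encode()).hexdigest()
        history.append({"time": time.time(), "digest": digest, "text": text})
        return digest  # in practice, also send this digest to a TSA

    def verify_history(history):
        prev = "0" * 64
        for entry in history:
            expected = hashlib.sha256((prev + entry["text"]).encode()).hexdigest()
            if entry["digest"] != expected:
                return False
            prev = entry["digest"]
        return True

    history = []
    record_snapshot(history, "First rough paragraph.")
    record_snapshot(history, "First rough paragraph, revised and extended.")
    print(verify_history(history))  # True; altering any snapshot breaks the chain

Because each digest depends on everything before it, a retroactively fabricated history means recomputing the whole chain, and the timestamp-authority countersignatures pin each step to a real point in time.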
Vendor lock-in is pretty common in courses at all levels.
eviks · 3h ago
This is already mentioned in the article
nh23423fefe · 4h ago
insane hoop jumping is the purpose of education?
politician · 4h ago
Universities have been criticized for ideological indoctrination. We might be able to quantify this: the increase in the use of AI to write essays should result in a weakening of this phenomenon, simply due to lower engagement with the material and reduced critical thinking, as shown in recent studies [1][2].
[1] https://time.com/7295195/ai-chatgpt-google-learning-school/ [2] https://www.microsoft.com/en-us/research/wp-content/uploads/...
In all of my school years, essays were THE WORST kind of exam there was. The grading was highly arbitrary anyway... Good that AI is killing that.
javier123454321 · 4h ago
Essays might not be a great tool for providing consistent grading, but as a tool for learning to think through an idea, to structure arguments coherently, and to research your point and find counterarguments, it is unmatched. Education __should__ optimize for learning, not grading.