AI makes the humanities more important, but also weirder

268 points by findhorn | 6/3/2025, 3:53:20 AM | resobscura.substack.com

Comments (291)

gdubs · 18h ago
There's a deeper educational problem that's been decades in the making, which is that students are trained to see school and work as a series of never-ending goals to achieve. The ultimate one is to 'get a job'. Well, now nobody can really even say what jobs will be available in 5 to 10 years with high confidence. Except maybe the trades – and we mostly cut those programs out of schools a long time ago.

If college students are using AI to breeze through their work rather than doing the reading themselves, developing grit – I don't see how we can blame them rather than ourselves for the educational and career system we've created. This problem didn't happen overnight, and it's not just AI.

stanford_labrat · 17h ago
Just like AI seems to be a convenient scapegoat for the layoffs and trimming that followed the end of ZIRP, now we see it being blamed for the failures of our modern education system.

Chief among those failures is that our system rewards only one thing in education: the grade. Not understanding, knowledge, or intelligence, but a single number that is more easily gamed than anything else. And this single number (your GPA) is the single most important thing at every level from middle school to college, where it will unironically determine your entire (academic/academic-adjacent) future.

realitybit3z · 12h ago
> Just like AI seems like it's being used as a convenient scapegoat for the layoffs and trimming following the end of ZIRP

there are like 4 factors besides ZIRP I've witnessed as a hiring manager:

- Laying off or not hiring locals in favor of H1B/H4

- Laying off or not hiring locals in favor of nearshore and offshore

- The Section 174 tax code on software: https://blog.pragmaticengineer.com/section-174/

- More productivity via LLMs, Code Assist

tharkun__ · 7h ago

    ... our system only rewards one thing in employment: the metric. Not understanding, knowledge, intelligence, but instead a single number that is more easily gamified than anything. And this single number (your metric value) is the single most important thing for every level from junior to principal where it will unironically determine your entire future.
TFTFY

throwawaybob420 · 17h ago
Our modern education system in the US is broken, but acting as if AI is a scapegoat is comical.

Capitalism is what has destroyed higher education in this country. The concept of going to school to get a job isn’t a failure of education but of economics.

AI is just another capitalist tool, made not only to extract wealth from you but also to get you to rely on it more and more so they can charge you even more down the road.

pjc50 · 16h ago
Even the Soviet Union made people go to school, and getting a degree was a route to higher status.

(there have been a few Communist revolutions against the concept of "university", for various political reasons, but China rebuilt theirs after the purges and Cambodia is a sad historical footnote)

realitybit3z · 11h ago
> Even the Soviet Union made people go to school, and getting a degree was a route to higher status.

Also people did that to avoid Dedovshchina

https://en.wikipedia.org/wiki/Dedovshchina

https://www.hrw.org/reports/2004/russia1004/2.htm

NoOn3 · 15h ago
At least in the Soviet Union, they were free...
thatoneguy · 13h ago
In Soviet Union, university education free but came at cost of living in Soviet Union

- Yakov Smirnoff, probably

fallingknife · 16h ago
Before college was a means to get a job, it was status signalling for the upper class: by spending four years not working and learning things with no economic value, they showed they could afford what few could. There was never a time when a large portion of society went to school past 18 for any reason other than economic or status gain, and why should they?
awkward · 12h ago
Economic OR status gain is putting a lot of work on the or.

We've put into place a context for intellectual achievement at scale. Why shouldn't status be apportioned to someone who is recognized by a panel of peers and teachers to have useful insight into their field?

aydyn · 12h ago
> Why shouldn't status be apportioned to someone who is recognized by a panel of peers and teachers to have useful insight into their field

Because many "fields" in colleges are not useful.

mrguyorama · 12h ago
>and why should they?

Because modern life is radically more complicated than humans can naturally deal with.

Your average peasant, for millennia, didn't need to understand information security to avoid getting phished. They didn't need to understand compound interest for things like loans and saving for retirement (they'd just have kids and pray enough of them survived). They didn't need a mental model for the hordes of algorithms deployed against us for the express purpose of capturing all of our available attention (a resource people had in such excess until a couple of decades ago that boredom was a serious concern) and selling it to anyone who wants to extract whatever dollars you have access to. They did not need to understand spreadsheets(!). Etc., etc.

Like, being productive in modern society is complicated. That's what education is for.
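The compound-interest point above is the kind of thing a short calculation makes concrete. A minimal sketch (the balance and rate are invented for illustration, not from the thread):

```python
# Illustrative only: a $10,000 balance carried at 20% APR, compounded
# monthly, with no payments made. Monthly compounding pushes the
# effective annual rate above the quoted 20%.
def balance_after(principal: float, annual_rate: float, months: int) -> float:
    monthly = annual_rate / 12
    return principal * (1 + monthly) ** months

start = 10_000.00
after_year = balance_after(start, 0.20, 12)
# after_year is roughly 12,194: an effective rate near 21.9%, not 20%
```

Intuitively obvious to anyone who has studied it, and invisible to anyone who hasn't, which is the commenter's point.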

fallingknife · 8h ago
You don't have to understand infosec to not get phished. And education doesn't do a damn thing to help you resist those algorithms.
nathan_compton · 14h ago
Learning stuff is cool?
RankingMember · 13h ago
I think learning stuff and making art just for the hell of it is going to become a lot more accepted as society continues on and more and more people's jobs get automated away. Obviously that's a huge simplification of a much more complex situation, but in general I think the best future is one where people are free to pursue interests without regard for those interests' ability to pay for their food and housing.
altruios · 10h ago
UBI: the dream!

Decoupling working from living: means only intrinsically valuable things get worked on. No more working a 9-5 at a scam call center or figuring out how to make people click on ads. There is ONLY BENEFIT (to everyone) from giving labor such leverage.

Not every job needs to or should even exist: everyone having a job isn't utopia. Utopia is being free to choose what you work on. This drives the market value of labor up. Work that actually needs to get done would be aligned with financial incentives (farming, janitorial work, and repair industries would soar to new heights).

UBI is a necessary and great idea: A bottom floor to capitalism means we all can stand up and lift this sinking ship.

s1artibartfast · 15h ago
There is nothing wrong with going to school to obtain knowledge and skills to secure a job.

The problem with the modern educational system is that it isn't very efficient at this task. Instead, most of the value comes from the screening that took place before the students even entered the institution, not the knowledge obtained while there.

kenjackson · 15h ago
Yep, this is a huge problem. I've long argued that we need value add metrics for colleges, and it probably won't be a single number, but rather a set of values depending on input values, e.g., some schools may deliver a lot of value for kids with 1550 SATs, but other schools may do better for kids with 1200.

Today we simply use college as a proxy for intelligence, so people just like to go to the highest rated college they can to be viewed as intelligent. What happens in the four years at the college is secondary.

csa · 3h ago
> Today we simply use college as a proxy for intelligence, so people just like to go to the highest rated college they can to be viewed as intelligent.

Hmmm… I would say college is a proxy for social currency, of which intelligence is one type. In most cases, intelligence is the least valuable (imho).

Pet_Ant · 13h ago
> There is nothing wrong with going to school to obtain knowledge and skills to secure a job.

That can't be the only goal. We also need to transmit culture, values, and teach them to become citizens.

aydyn · 12h ago
Teach them "values" as determined by a central governing authority?
goatlover · 11h ago
Yes, within reason. Not in an authoritarian or dismissive manner, but teaching the ideals of our Western liberal societies.
aydyn · 11h ago
What happens if that central authority becomes subverted (as many people here would argue is the case right now)?

Doesn't that single point of failure indicate a weakness?

Pet_Ant · 11h ago
Values are your culture. The Nazis were elected and supported (at least initially), so you are somewhat right, but the answer is multiple countries.

But a country without a culture and without shared values is a sled being pulled by dogs in different directions and not a real team (as many people would argue has been the case for quite some time).

You need common values to work together to achieve goals. That's what a country is, people working together. When you don't, you just become tenants with passports.

s1artibartfast · 10h ago
You don't need 4-10 years of post-secondary school at 100k/yr to do that.

Nor do our current institutions do a good job of what you describe.

aleph_minus_one · 17h ago
> Well, now nobody can really even say what jobs will be available in 5 to 10 years with high confidence.

I am very sure that jobs that have existed for a long time will continue to exist for a long time (and no: even if some disruption occurs, these jobs won't suddenly disappear, but will rather be phased out slowly; you thus have sufficient time to make a decent plan for yourself).

In other words: in my opinion one can predict rather well many jobs that will be available in 5 to 10 years.

The "inconvenient" truth is rather that many high-paying jobs in the new-economy sector don't satisfy this criterion of "existing for a long time". So by this criterion you might miss out on some hard-to-predict high-earning opportunities. Thus, if you are the kind of person who easily becomes envious when your friends suddenly experience a windfall, such a job perhaps won't make you happy.

eyesofgod · 15h ago
> I don't see how we can blame them rather than ourselves for the educational and career system we've created

I don't know, man, you do it every time.

tsunamifury · 18h ago
Trades would collapse if even 10% of the population switched to them. What does no one get about this?
Pet_Ant · 17h ago
Yeah, I feel like kids are encouraged to move not toward jobs that will be profitable to them (i.e. the children) but toward jobs the moneyed want to pay less for, because the number of people being swayed would collapse any market.

It's like a boat captain telling all his passengers to rush to one side of the boat. Any single side will tip the boat over.

pjc50 · 17h ago
Trades tend to do a good job of preventing people from switching in due to apprenticeship systems. Depends on your jurisdiction, but you probably can't just hang out a sign saying "plumber".
lostphilosopher · 14h ago
Related: there need to be individuals and businesses that want/need and can afford upgrades and repairs. If office workers are getting replaced with AI, we don't need to build and maintain offices and the ecosystems that support them (see also WFH/Covid), and those workers won't have income to pay plumbers, electricians, roofers, etc. for their personal property. A worst-case AI workforce revolution would attack the trades from both the supply side and the demand side.
cafp12 · 17h ago
This is a good point. Also, why are people so certain that superhuman-capable robots aren't right around the corner?
triceratops · 16h ago
But I wonder if lower labor costs would change the repair vs replace calculus by a lot.
guappa · 17h ago
Well, in Sweden it's forbidden to fix anything electrical on your own. You're allowed to change your lamps and little else.
the-mitr · 15h ago
Is it so? Is it true for other work also? Genuinely asking.
belorn · 14h ago
There is a concept called regulated professions. The US equivalent would likely be accredited or certified professions. Typically, anything in health, security, and education, but also things like accounting or commercial diving. The EU has Directive 2005/36/EC, which specifies how such laws may operate in the EU.
eyesofgod · 15h ago
> What does no one get about this?

It's all idiotic romanticisation. That's why all the out-of-their-ass anecdotes involve someone who's more of an entrepreneur than a tradesman.

There's some bizarre analog of the noble-savage myth, but applied to blue-collar work.

mtillman · 17h ago
I’m convinced plumbing would pay better than programming in certain neighborhoods right now.
Workaccount2 · 17h ago
Plumbers never realized that they should be selling service plans and leasing toilets to people.

Software's golden goose is not letting people own things.

pjc50 · 17h ago
The people who lease toilets to people are landlords, as part of leasing the rest of the building or unit.
toyg · 17h ago
I'm pretty sure plumbers do sell service plans to businesses. It's just that households don't really need regular work.
bongodongobob · 15h ago
Service for what? I'm in my 40s and now that I think about it, I've never had to call a plumber in my entire life. The only time I can think of is my parents calling one when they got a new dishwasher 25 years ago.
kenjackson · 15h ago
I've had the plumber out twice this year. Probably depends on how handy you are with plumbing issues. Me not so much.
NikolaNovak · 17h ago
That's not new though. My colleague does IT for a few years then works as a construction worker for a few years, depending on the market. He's ready for anything :-)
renewiltord · 13h ago
It's just viral memes. Most people are inferior to the average LLM in reasoning and repeat phrases wholesale without comprehension of what it means. Few, if any, have even approached understanding what they are saying. Among those, fewer still have validated their assumptions.

Many years ago we had to deploy an HFT trading strategy on a specific hardware platform because that was the one that was (in this field) closest in network topology to the exchange. I just read the released docs for the platform, and watched a couple of videos where they hinted at certain architectural decisions and then you could work something out from there. Latency was reliable and our strats made money.

But this was public information and we were actually late to it. It had been released for 3 years at the time. And many people I talked to had told me that what they had was as good as it gets. But they were each just repeating what the other guy said.

Many of the things you see on Hacker News are just that: it's deterministic parrotry.

tsunamifury · 9h ago
I agree, but the signal-to-noise ratio is slightly better here. It's been going downhill over the decade, though.
bluGill · 18h ago
Nobody could ever say with confidence what jobs will be available. Those who know the basics and are flexible can find something.
aleph_minus_one · 18h ago
> those who know the basics and are flexible can find something

I'm not so sure about that: I have a feeling that the current trend in quite a few jobs is to look for two kinds of people:

1. beginners who know the basics and are cheap

2. deep experts; these are paid well

What is in-between gets more and more hollowed out.

Thus the people who follow your advice will mostly stay in the "cheap" pool - I wouldn't consider this to be desirable.

Pet_Ant · 17h ago
> Thus the people who follow your advice will mostly stay in the "cheap" pool - I wouldn't consider this to be desirable.

Cheap... but employed. Expertise is only valuable if it's valued. That means you not only need to be an expert in the thing, but also in the market for the thing, and that's a lot to ask.

bluGill · 17h ago
You can move between groups. If your expertise turns out not to be needed, get a cheap job while learning something else.
panzagl · 17h ago
The only way to develop deep expertise is to do it as your job.
aleph_minus_one · 17h ago
In my opinion there exist rather few jobs where you will develop deep expertise in some area.
zaphar · 16h ago
I got my job/career precisely because I developed deep expertise on my own time. So no, that is not the only way to develop that expertise.
const_cast · 11h ago
For most people, if your expertise isn't guided by boards of already experienced people then it's probably not really worth it. Most people are shockingly bad at learning on their own. They're gonna go home and watch TV, go to the gym, and spend time with their loved ones.

Look, I would not want a doctor who did not do a residency to perform my surgery. I don't care if they carved up 1,000 cadavers in their free time. I want somebody where the board of their specialty has said "yup, this guy's good". I'm not gonna spend the time trying to vet the doctor myself, because that's really, really hard. I'm not a doctor, I don't know shit. I have to rely on institutions of trust to do that work for me.

And that's really what universities are at their core: institutions of trust. When you get a degree, there's trust that you understand the material to an appropriate degree. When you pass a residency, there's trust that you understand the material to an appropriate degree. If we lose that trust, such as by letting students cheat with AI, that is a big problem.

Could I hire someone who says they're an expert, with no degree, and just give them a leetcode problem? Sure. But if I hire someone with a degree, I have a much greater level of certainty they can actually code. Same goes for work experience.

panzagl · 14h ago
I guess it depends on what you consider deep expertise- I assumed a 10000 hours/PhD level of expertise, which would be hard to achieve in a couple years while working.
pjc50 · 17h ago
This is at odds with professional specialisms which take years to learn.

I don't think it's likely that AI will obliterate the job category of "lawyer", but that's what its boosters are claiming, and people need to make a car-to-house-sized investment at age 18 depending on how true that is.

SoftTalker · 16h ago
I think the profession of "lawyer" may come to an end sooner rather than later for many, unless they are able to get legislators to ban AI from practicing law. A lot of lawyering is writing in a particular legal vocabulary and style, and forming arguments based on precedent, things that an LLM should be able to get very good at.

It won't be long before an impoverished criminal defendant will be able to choose between an AI that can spend practically unlimited time on his case and has perfect recall of any precedent that might be relevant, or an overworked public defender who has 2 hours a week to spare in his schedule.

jjmarr · 15h ago
Go read lawyers complaining on Reddit about pro se litigants using AI. The models hallucinate precedents the same way they hallucinate APIs, or get confused about what is supported.

If you think SWEs that build the same CRUD apps every day are vulnerable but SWEs that do "real work" aren't, apply that logic to lawyers.

Except lawyers don't have an automated compiler to check for fabricated precedents as case law databases are monopolized by one company.

freeone3000 · 14h ago
It costs money but KeyCite is real.
bluGill · 17h ago
It always has been. You make what looks like a good bet and, if you are wrong, adjust.
bluefirebrand · 16h ago
How this works for most people in reality is "if you are wrong, you're screwed"

Most people do not have the ability to adjust like that, for one reason or another

And even if you do, it still means your life is likely to hit an extremely rough patch while your adjusting catches up to where you were before

realitybit3z · 12h ago
> There's a deeper educational problem that's been decades in the making which is that students are trained to see school and work as a series of never ending goals to achieve. The ultimate one is to 'get a job'.

This one is really bad. I have waves of local grads begging for unpaid internships or work now. They all were AP students, good GPAs, good schools, "learned to code", took on 200k of college debt -- only to find the promised job is just not frigging there at the end of the line. Meanwhile, they are told there is a "massive shortage of coders" and see overseas workers hired into those same jobs. How do younger people trust the system anymore? Further, in light of this, why would any steel-worker (metaphorically speaking) want to retrain and learn to code if even the geeks learning to code face dismal outcomes?

thomastjeffery · 15h ago
This is rooted in the idea that a person should need permission to work. The reason we do this is because our capitalist society is structured around competition instead of collaboration. Because all work must be framed as a competition, the only way collaboration is allowed is by explicitly joining a team.

This rule is made explicit with copyright: derivative work is, by default, illegal. The only way that derivative work is allowed to be made is after making a contract with the rightsholder.

This system can only be enforced with incompatibility. You can't actually stop someone from making derivative work unless your help is actually required. The most familiar instance of this requirement is software: it's simply not pragmatic for someone to extend your code when they only have access to a compiled binary. Software work can only be extended from a shared context: what you write must be compatible with what is written. That's generally infeasible when what is written is a compile-time-optimized binary executable.

Incompatibility is not a perfect barrier. It's possible to decompile software. It's possible to edit text, images, video, audio, etc. Copyright depends on incompatibility, and LLMs are simply the newest, most compelling way to call that bluff.

bigbacaloa · 16h ago
Most students see education as a video game - the goal is to score as many points as possible in a framework controlled by some apparently arbitrary rules. That educators fail to distinguish school from video games is mostly the fault of educators. And video games are more fun.
simonw · 1d ago
The first comment on that article caught my attention:

> I am a grad student in the philosophy department at SFSU and teach critical thinking. This semester I pivoted my entire class toward a course design that feels more like running an obstacle course with AI than engaging with Plato. And my students are into it.

It would be interesting to take a class of students and set them an assignment to come up with assignments for their fellow students that could not be completed using ChatGPT.

About ten years ago I went to a BarCamp conference where one of the events was a team quiz where the questions were designed to be difficult to solve using Google - questions like "What island is this?" where all you got was the outline of the island. It was really fun. Designing "ChatGPT-proof assignments" feels to me like a similar level of intellectual challenge.

aleph_minus_one · 17h ago
> Designing "ChatGPT-proof assignments" feels to me like a similar level of intellectual challenge.

Designing a "mostly ChatGPT-proof class" is in my opinion actually rather simple: just be inspired by the German university system:

Each week you have to complete exercise sheets, which are often hard. But being capable of solving a certain quota of the exercises is just the beginning: it merely qualifies you to take the actual (oral or written) exam.

In this sense (as many professors will tell you), the basically sole purpose of the quota of (often hard) exercises is to prevent students from the "self-inflicted harm" of taking an exam they are not yet prepared for.

And yes: while it is forbidden to "cheat" (e.g. with ChatGPT) on your exercise sheets, this policy is typically not strongly enforced (you will nearly always just get an unequivocal, strongly worded oral reprimand from the tutor). But if you do cheat, you will for sure not be prepared for the upcoming exam, and will flunk it (and most students are very aware of this).

Oh yeah, I didn't mention yet that if you flunk the same exam typically 3 times (depending on the university), you have "finally failed" (endgültig nicht bestanden) and are no longer allowed to study the same degree course at any German university.

Balgair · 15h ago
> Designing a "mostly ChatGPT-proof class" is in my opinion actually rather simple: just be inspired by the German university system:

Now the real question is the costs. That is, can you somehow make this system work at scale with the same profit as before? I know that the German system is a lot different from the one in the US, including things like tracking in 9th grade and below. But the real question is whether the universities make as much, if not more, cash. Because if the answer is 'no', then it's unlikely to be adopted.

> Oh yeah, I didn't mention yet that if you flunked the same exam typically 3 times (depending on the university), you have "finally failed" (endgültig nicht bestanden), and are not allowed anymore to study the same degree course at every German university.

That is an insanely awesome and clever idea. I love it. It puts real stakes there, even if only perceived ones. I imagine that in the US, the number of students who continue on in a major after flunking its classes twice or more is probably pretty low already, though.

fc417fc802 · 11h ago
It's not clever, it's pointless red tape. I don't know if it's different in Germany but at least at US universities most higher level courses are part of a sequence and are thus only offered at a specific time of year. Thus failing delays your degree by a year and creates all sorts of logistical issues for the student.

Doing that more than 3 times would be absurd, and anyway the sort of student who has 3 F's on his transcript within his chosen major is unlikely to be maintaining a GPA above the minimum for his program (or any program for that matter). Rather than transferring to another degree such a student is likely to be forced out of the university entirely in short order.

aleph_minus_one · 9h ago
> but at least at US universities most higher level courses are part of a sequence and are thus only offered at a specific time of year. Thus failing delays your degree by a year and creates all sorts of logistical issues for the student.

This often also holds in Germany. And indeed if you thus fail an exam, your degree is delayed and you might have logistical issues.

The moral of this: learn hard so that you don't fail exams. :-)

SkyBelow · 14h ago
A complete ban feels like overkill. A cooldown period of X years would give someone time to reconsider and grow as a person.

Granted, this is without considering the financial side of things.

oceanplexian · 16h ago
> I didn't mention yet that if you flunked the same exam typically 3 times (depending on the university), you have "finally failed" (endgültig nicht bestanden), and are not allowed anymore to study the same degree course at every German university.

When Europeans wonder why they are falling behind the United States both economically and technologically (especially in the AI race), here's a perfect example of why. A culture that turns you into some kind of permanent failure because you failed to check a box some arbitrary number of times isn't one that produces innovation.

xquce · 16h ago
And when Muricans wonder why the rest of the world laughs at your simplistic, greed-based view of "winning", this comment is a perfect example of why.

What a weird way to try and connect intercontinental economics, with private-company valuation as the sole metric for success or achievement, to test-taking at a university level in Germany...

mebizzle · 14h ago
It's weird to consider how the idiosyncrasies of a deeply rooted developmental system might affect the sensibilities or perspectives of the adults that said system develops?
fc417fc802 · 11h ago
I want to agree with you. In fact I do agree with the point you're making about how policies, cultural values, and outcomes are intertwined.

However you need to realize that the country you're holding up as a counterexample brands people as felons over some fairly absurd things and more generally has a remarkably dysfunctional justice system that exhibits precisely the same cultural failures that you're objecting to here.

tstenner · 16h ago
Of course, the next European neighbor country where you can continue your studies (even taking most of your credits with you) is often only a few hundred kilometres away.
aleph_minus_one · 16h ago
If you really think that a university degree course is still the right thing for you, you typically just choose another "related, but different" degree course at some university in the same country.

Concerning "next European country", on the other hand, keep in mind that in this country often a different language is spoken; in particular for your daily life, you cannot assume that everybody understands English or even your native language.

dleeftink · 21h ago
Among others, Howard Rheingold has been active in this space. For those interested, check out the Peeragogy Handbook and the post that sparked the idea[0].

> The more I give my teacher-power to students and encourage them to take more responsibility for their own learning, the more they show me how to redesign my ways of teaching

[0]: https://clalliance.org/blog/toward-peeragogy/

dirkc · 21h ago
It's fun to see how old that article is and that the ideas still apply! Power dynamics are not considered enough when people talk about education. My belief is that the more you balance the power dynamic, the more learning is prioritized over education.
dleeftink · 19h ago
For an even earlier account, see the learning networks described by Illich (1971) in Deschooling Society [0].

[0]: https://en.m.wikipedia.org/wiki/Deschooling_Society

homarp · 19h ago
https://bsky.app/profile/hrheingold.bsky.social/post/3lprsqu... points to a compiled syllabus and links to recordings of lectures and video chats in a free pdf, about an "intro to cooperation studies": https://rheingold.com/texts/IntroToCooperationStudies.pdf
lynx97 · 1d ago
> where all you got was the outline of the island

As a blind person I can't help but notice that this "innovative" question design is inherently inaccessible to people like me. Yes, I know I'm not the norm. However, I can't help but notice that this avoidance of text based assignments is making accessible teaching even more impossible. Say welcome to a new generation of Digital Divide.

math_dandy · 18h ago
As a university professor, I admit with some shame that accessibility issues for specific problem types are not on my radar. "Innovation" isn't the main culprit here.

Fortunately, my university has a good accessibility center that takes care of accommodation issues (large print versions of tests, etc.). I just send them my tests and they take care of it. It’s a great service, and absolutely crucial because I simply don’t have the time to customize assessments. I assume they would get in touch if they were unable to retrofit accessibility onto an assessment, but that hasn’t happened in my fifteen years of employment.

simonw · 17h ago
Yes, it's true that the one example I presented from a quiz game I played at a conference 10+ years ago was not accessible to some people. If I could remember the other questions they used in that quiz I would share them here, but that's the only one that stuck in my brain.
contravariant · 18h ago
The good news is that avoiding text-based assignments isn't likely to last. Multimodal agents should be more than capable of identifying islands by shape, or will be soon.
svachalek · 12h ago
I gave Gemini a mockup of a UI and it generated the stupid thing perfectly in code and CSS. Recognizing an island seems trivial in comparison.
protocolture · 22h ago
Maybe they could 3d print the outline to make it more accessible? Or perhaps the goal is to use the image file with the outline to find the island, something that could/should be achieved via code?

I wouldn't be surprised if a novel solution to the problem emerged from students with diverse abilities. I don't think that brute-forcing the problem by visually comparing the image of every island on the planet against the outline is the best path forward. It's probably something like: create a ratio of the size of each of the bays and then probe for values that fit those quantities on Wikipedia.
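The bay-ratio idea amounts to matching a scale-invariant shape signature against known outlines. A minimal sketch in Python (the signature choice and the toy candidate outlines are illustrative assumptions, not a real geocoding pipeline):

```python
import math

def shape_signature(points):
    """Scale- and rotation-insensitive signature of a closed outline:
    each edge length as a fraction of the perimeter, sorted."""
    n = len(points)
    lengths = [
        math.hypot(points[(i + 1) % n][0] - points[i][0],
                   points[(i + 1) % n][1] - points[i][1])
        for i in range(n)
    ]
    perimeter = sum(lengths)
    return tuple(sorted(round(l / perimeter, 3) for l in lengths))

def best_match(outline, candidates):
    """Return the name of the candidate whose signature is closest
    (L1 distance) to the target outline's signature."""
    target = shape_signature(outline)
    def distance(item):
        return sum(abs(a - b) for a, b in zip(target, shape_signature(item[1])))
    return min(candidates.items(), key=distance)[0]
```

With a square target scaled up tenfold, `best_match` still picks the square over a long thin rectangle, since the signature ignores absolute size. A real solution would need far richer descriptors (and outlines with comparable point counts), but the "probe by ratios" intuition is the same.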

WhyIsItAlwaysHN · 1d ago
What would you need to tackle this assignment?

I think you're right to complain about the design of the question. It used Google's disadvantage in that it couldn't process visual information to add difficulty to the task.

However, I'm pretty sure there are many of these challenges in an average STEM subject. An example would be information-dense diagrams which describe some chemical process.

I'm genuinely curious to understand how someone manages to understand these without sight. On my side, I can barely visualize so it was always extremely hard to decipher them and even worse remember them.

mantas · 1d ago
It’d be bad if there was no alternative. But different people can get different assignments and that’s ok.

Complaining about this is like me complaining about people drinking milk since I’m lactose intolerant.

mootothemax · 1d ago
> It’d be bad if there was no alternative. But different people can get different assignments and that’s ok.

While there are possibly noble goals behind your suggestion, in practise this puts anyone outside the mainstream in the category of "other," people to be managed separately. I'll leave to your imagination how much work is often put into supporting these "other" assignments and how up-to-date they're kept vs the mainstream.

> Complaining about this is like me complaining about people drinking milk since I’m lactose intolerant.

If this is genuinely your approach, you are being part of the problem; if taking a step back to reassess feels like too much work, I’d encourage you to explore why it feels that way, what emotions is this bringing up internally?

imawakegnxoxo · 22h ago
I think "everyone gets the exact same test" is a whole level of hell worse than "everyone gets a test that makes sense for their abilities and goals"

Are you just concerned about the administration side of things?

I think the GP here has a pretty reasonable outlook here

"Some people can't do some things the same way as others and that's ok, people are different"

Is the vibe I got from that post

fallingknife · 16h ago
Either the test must be the same for everybody or it is not a meaningful signal.
mantas · 15h ago
Yes. But there will always be an obscure case you can’t cover. Let’s say a test is written. Someone can’t write, but can dictate answers. Is it still the same test or not?

On the other hand, how far can society tiptoe around such alternative takes? Could he go study in a field where handwriting is expected and society would still be forced to adapt?

poulpy123 · 22h ago
> in practise this puts anyone outside the mainstream in the category of “other,” people to be managed separately.

So what? This is literally the case. The commenter has to be managed separately

Pet_Ant · 17h ago
The commenter (rightly or wrongly) is, I think, suggesting that assignments etc. should from the outset be designed to be accessible in their original form for everyone, and not just "okay, this works for 90% of people, let's worry about the rest separately".

A personal example: when designing board games I have now internalised never to use colour alone to communicate anything. Never solid colour, which can be difficult for the colour-blind to distinguish; instead, each colour gets its own stripes or patterns embedded in it. So if you can't see the colour, you can recognise the stripe spacing; if you can't see the stripe spacing due to low visual acuity, you can see the colour... and ideally different textures for the blind (not practical, as they would wear off in the shuffling, but the idea is there).

mantas · 17h ago
> If this is genuinely your approach, you are being part of the problem; if taking a step back to reassess feels like too much work, I’d encourage you to explore why it feels that way, what emotions is this bringing up internally?

How am I part of the problem? Do you mean I should make a fuss if I get limited options? Or have emotions towards what... My genetical makeup? All in all, I've zero emotions about my lactose intolerance. Being mindful about what I eat is just natural for me. Just like for other people with food allergies/intolerances.

> in practise this puts anyone outside the mainstream in the category of “other,” people to be managed separately

Is your goal to get everyone down to the same level by removing all learning material that may be inaccessible to somebody?

Should people just stop consuming anything with lactose to make it more universal for people like me? But then a friend of mine will ask to stop eating fish. Someone else will take away chocolate. At the end of the day all of us will be much worse off.

UncleMeat · 1d ago
> Designing "ChatGPT-proof assignments" feels to me like a similar level of intellectual challenge.

This is indeed true. But a challenge is that few professors are being given the time and training to do this. When you are on a 4/4 and just keeping your head above water, you don't tend to have enough time to adopt experimental pedagogies and completely replan your courses. And professors are largely being tasked with doing this independently rather than having universities offer training or support with adjusting methods to be more AI-resistant. And unfortunately the fast pace of development of these tools is making good ideas obsolete quickly. I know some people who switched to having students make podcasts rather than write papers as a final assignment, and then we started seeing "create your own podcast" tools appear, which made it roughly as easy to cheat on this assignment as on a traditional paper.

lr4444lr · 22h ago
How does making podcasts solve anything? Students can just read out what the AI writes into an audio recording.

Why is this problem being fretted over? In-class written and oral tests should be fine to assess students. If AI helps them learn or even cram the material, great!

mapt · 21h ago
One of the skills that students are typically terrible at is convincingly recreating the flow of a conversation that is written down. Think of Shakespeare spoken in a meterless monotone, with none of the prosody that indicates actual speech by a human being communicating something; prosody you can often decode even in a language you have no knowledge of. If they manage to fake it convincingly for an hour-long podcast, they have taught themselves acting on a level that is commercially useful and honestly understand much of what is being said.

If they are just working from AI notes and improvising the conversation, arguably they are simply doing the work.

What I don't know is whether an instructor would be allowed to earnestly grade an hour-long podcast subjectively.

simiones · 21h ago
Homework and/or project assignments are often a major component of scoring for many courses.
UncleMeat · 21h ago
You can do different podcast structures than a single person reading a script, which can produce some useful social barriers to cheating.

A lot of kids also suck at cheating so the small barrier of "ask ai to generate a script and then read it" appears to stop a bunch of cheaters.

There are limitations to timed evaluations such that oral exams or blue books are not panaceas.

simonw · 1d ago
My suggestion here isn't to have the professors do this: it's for the professors to challenge their students to do this.
UncleMeat · 1d ago
I am friends with an unusually large number of professors, many of whom are at very top institutions. The "make your own pedagogy" approach for students leaves huge portions of the classroom doing zero work whatsoever.
simonw · 19h ago
That's useful, thanks. Sounds like this approach won't help most students.
fallingknife · 16h ago
What was the point of having those students in the classroom in the first place?
UncleMeat · 13h ago
There really are students that will work hard when given direction but will flounder when asked to fully self direct.
eyesofgod · 15h ago
Taking their (or their parent's) money of course
tehjoker · 15h ago
alternative strategies will have them doing work and learning
fallingknife · 14h ago
And then forgetting. If you're not interested, you cram for the test and then forget. I see little point in forcing people who don't want to learn into classrooms. Worth a read https://www.astralcodexten.com/p/a-theoretical-case-against-...
fc417fc802 · 11h ago
Interesting essay. Makes some very good points, however I have to disagree with this one.

> ... they keep some sort of overall concept of learning. This is a pretty god-of-the-gaps-ish hypothesis, and counterbalanced by ...

The author is really missing the obvious here. Learning difficult subjects fundamentally changes how you think about and approach things. People aren't born able to engage in critical thinking or being able to reason algebraically or with an ability to navigate formal logic. That doesn't mean school is the only way to impart such skills, but it is certainly one of the ways.

Of course if the metric you use is "ability to answer trivia" then you are going to fail to capture that aspect.

thfuran · 21h ago
>on a 4/4

That's teaching four distinct courses?

UncleMeat · 20h ago
A 4/4 means that you teach four classes each semester. This is a pretty typical schedule for non-tenure-track or adjunct professors in the humanities, or even tenure-track faculty at SLACs (small liberal arts colleges), though I've seen 5/5s before. It is a lot of work to update this many classes to adapt to AI.
bee_rider · 18h ago
That seems like a crazy amount of work. Four classes, or four sections of the same class?
UncleMeat · 17h ago
Depends, but it is often four distinct classes. And yeah, it is a lot of work. This is why reusing preps is so valuable and why it is a bit ridiculous to see people demanding that professors suddenly figure out (on their own) how to rewrite their syllabi to mitigate cheating with AI.
bee_rider · 17h ago
Yikes.

My professor often taught classes with a couple sections, I’d help out sometimes—even that was a ton of work, pretty hard to do the main job (research) during those semesters. I wonder if (given the demographics of the board) folks are suggesting things from a similar place of informed ignorance to me—coming from research oriented STEM universities. In that case it could be reasonable to say “well, the situation with our classes has gotten so dire with AI, it might make sense to sacrifice a semester of research to sort that all out.” Of course if you are already prepping for four classes, there is not much slack to sacrifice…

In that case a lot of us would be specifically more wrong than a random person plucked off the street, somebody who thinks the main job of every professor is teaching.

yupitsme123 · 18h ago
If a teacher knows the topic that they're teaching, then a 30 second talk with a student is enough to determine if the student actually knows anything. Maybe "assignments" aren't the best method of building and verifying knowledge.
lapcat · 17h ago
That comment struck me as odd if only because Plato is not commonly part of the standard critical thinking curriculum. Introduction to philosophy or history of philosophy yes, but critical thinking no.

This is not to say that Plato didn't think critically. Of course he did, as do all (or most) philosophers. I'm just talking about university courses and textbooks.

arvindh-manian · 1d ago
I find it difficult to think of examples that aren't just based on niche information (what did we talk about on this day? what analogy does the teacher make repeatedly?). Maybe multimodal stuff?
pixl97 · 19h ago
Even multimodal stuff is advancing pretty quickly. And with the more widespread incorporation of things like this, the information will be fed back into AI systems as training data, making finding new things more difficult.
ModernMech · 14h ago
As a professor of Computer Science, I'm about ready to allow my students to use AI -- if you can't beat 'em, join 'em. But they're not going to like it, because what I'm going to do is significantly raise the bar.

For the last several years I've been allowing students (college sophomores) to do take home exams where they essentially have to build a networked system in C++. I tell them not to use AI but I have to assume they are.

Even still, the solutions are not all that great, despite unlimited time and resources, they still submit work that falls into a roughly B average. Which, if they didn't have AI, maybe that would be a C average, so perhaps the standards just need to go up. Still, it's not like they're all getting 100%.

AI is still really only as good as the person driving it, and it still, despite all the hype, hallucinates like crazy, such that if they don't pay close attention and constantly course correct, they can't expect the project to actually work.

aprilthird2021 · 1d ago
Yeah, that does sound like a good idea and a good way to prove one has learned the ideas presented in the course and the relationship between them well
exe34 · 21h ago
Reminds me of https://xkcd.com/810/
ivape · 21h ago
“Cannot be done with chat gpt”

Is a military base the only place “no devices” can be enforced or something? How deep is this addiction? I’m scared to ask what the academic version of fizz-buzz would be at the end of a 4 year degree, “hey write one paragraph describing a simple paradox, an example of irony, or an example of a metaphor”.

fc417fc802 · 11h ago
How broadly do you propose to enforce such a ban? Most students do their homework at home so I'm not sure how you're imagining this would work.

Even for exams I had my cellphone and backpack on me. You just weren't allowed into them. The only exception to that in my experience was exams proctored in a dedicated testing center where there are lockers, multiple human observers, and lots of cameras.

r2_pilot · 18h ago
I mean, before you get to military bases, there are Pearson's testing centers that have lockers for your phone/gear and proctored exams.
pixl97 · 19h ago
>Is a military base the only place “no devices” can be enforced or something?

I mean with the force of law, yea. Businesses get away with a little bit more in places because they pay you to show up.

But if you think anyone is going to university/college to have all their devices taken away all the time and pay for the privilege then you might be confused.

cvwright · 17h ago
No but people do pay universities for (among other things) a meaningful credential that testifies to their knowledge and abilities.

If taking away phones is what’s required to make that credential meaningful again, then the universities must do it.

I think of it like a personal trainer. Do you pay them to make you sweaty and sore? Not really, but sweat and soreness are consequences of doing what it takes to build muscle/endurance/etc.

pixl97 · 17h ago
>meaningful credential that testifies to their knowledge and abilities.

No they don't, or at least a significant percentage of them don't. They pay the university because it is a paywall between them and even getting an interview in the first place.

aydyn · 12h ago
But that isnt a good long term strategy. If degrees hold no real signal, employers are going to start noticing and the value of a degree decreases.

So its in universities best interest to actually test for competence.

... In theory.

sampl3username · 21h ago
I am a zoomer. Is it really that hard for everyone to NOT use AI? Using AI very clearly atrophies your cognitive skills.
sdenton4 · 18h ago
I'm a millennial who has watched the complete failure of my generation to realize the promise of those Obama 'Hope' posters.

"Everyone don't do X" runs into problems when there are clear incentives to do X. Like, say, grades which eventually will be used to determine who gets to graduate, who gets into which schools, who ultimately gets easier access to higher paying jobs. And simple convenience is an incentive in itself - if doing X saves time, it will be the default.

This same dynamic is why we still have a climate crisis, even though everyone has known about the problem for thirty years.

"Everyone just do Y" is what we call a collective action problem. When there are clear incentives to defect, it is also a free rider problem.

Animats · 1d ago
The author is talking mostly about teaching history. But what he describes as teaching history is more like history appreciation. It's not about how to use history as an aid to prediction. It's more about studying the "classics", from Cicero onwards.

Military officers study much history, but in a different way. They look for mistakes. Why did someone lose a battle or a war? Why did they even get into that particular battle? Why was the situation prior to WWI so strategically unstable that the killing of a minor player touched off a world war? The traditional teaching of history tends to focus on the winners, especially if they write well. Reading what the losers wrote, or what was written about them, can be more productive than reading the winners.

If you want to understand Cicero's era, for example, read [1]. This is a study of Cicero by an experienced and cynical newspaper political reporter. The author writes that too many historians take Cicero's speeches at face value. The author knows better, having heard many politicians of his own day.

This sort of approach tends to take one out of the area where LLMs are most useful, because, as yet, they're not that good at detecting sycophancy.

[1] https://archive.org/details/thiswasciceromod0000henr

UncleMeat · 1d ago
> It's not about how to use history as an aid to prediction.

This is not what academic historians generally do. The study of history tells us about the story of humanity and exists for wider reasons than direct application to decision making today. This is even true for military history, which is among the slowest subfields at adopting new methods and practices.

sdoering · 1d ago
Thanks. Came here to say that. Having studied history - esp. at a time when Hayden White's assertion that "history is fiction" was strongly (and sometimes emotionally) discussed among students and professors alike - taught me that, while we might learn from history, that is not the reason to "do" history.

I also learned to appreciate history on a fundamentally different level than I did, when I loved history as a subject in school. But that would lead too far out from this thread here.

I can relate to the original piece, because - while I use AI daily for work and in private life - I also see the dangers. Yes, it is a new medium - and I heard roughly the same critique about the internet, when I was at university - but I think it is quite a fundamentally different paradigm shift that we are living through.

I feel (used intentionally here) it is more like the invention of the printing press (from a societal impact standpoint) than the dissemination of the internet into the wider society. But that is just my current working hypothesis.

jhanschoo · 18h ago
> Hayden White's assertion that "history is fiction"

Your comment made me dig up "Historical text as literary artifact".

I have several thoughts on that, but one very important analogy stood out to me and is succinct enough to comment here. In statistics, one talks about the population, collecting data (sampling from the population), and building a model from the collected data. You may have guessed that the collected data alone does not typically make much sense, but for the patterns encapsulated by the model, which then has explanatory power; yet among the three classes I mentioned, it is the most fictitious and constructed for a particular goal, the most inauthentic.

vkou · 14h ago
The thing is, this statistics data-to-model only works on large datasets.

History deals with very small, very heterogeneous datasets. Finding patterns in them (if they even exist) is very difficult, and it's why historical models and grand unified theories are so often bunk.

jhanschoo · 8h ago
> this statistics data-to-model only works on large datasets.

> deals with very small, very heterogeneous datasets.

The population itself is a very heterogeneous "dataset", that becomes (perhaps) apprehensible after you interrogate it with respect to the statistical question you have in mind. Let's not forget that the neatness of the data is constructed from the narrowness of the question asked of it, and even still the data is expected (in the colloquial sense) to take on a minimally-informative, max-entropy distribution, except in respect to the statistic of interest. The critical questions levied against historiography also apply here: perhaps the question you are collecting data for is not related to the population the way you think it is.

> why historical models and grand unified theories are so often bunk.

A lot of history practiced today are self-aware of just how much narrative has to be invented to "make sense" of historical events, and so historians value diverse perspectives. One understands that the same events may not hold the same meaning across communities, and so seeks to record and compare how they are perceived across communities or individuals; that perception in that community's narrative is at least subjectively authentic for that community or individual.

To the extent that the historical goal is to predict political/cultural/economic future, perhaps political/cultural theorists/economists are better positioned.

quanto · 1d ago
> while we might learn from history - that that is not the reason to "do" history.

If we don't learn from history, why do we do history? Is it a form of pure entertainment, i.e. of arts? If so, does that give more credence to White's argument?

I have had a PhD colleague who genuinely believed that history ought to be (in the philosophical normative sense) contributing to national propaganda, thus of national interest. By extension, this is why history departments should be funded by tax dollars.

roenxi · 1d ago
A fun party game is to get some pure mathematicians together, speculate loudly that they must be motivated in their work by the numerous practical applications of maths and watch cheekily to see which of their faces fail to conceal the spasm of fury. 1 point per asymmetric eye twitch.

They're academics, their job is to make true statements and maintain a culture that cares a lot about whether their colleagues can prove them false. Beyond that, it is hard to figure out what purpose they might be fulfilling.

oersted · 23h ago
That's a good point, beyond prediction, history does have utility for surfacing "shared stories" that bring people together. Let's not underestimate the real-world impact of such stories. Most big societal movements, both very good and very bad, have been fueled by shared history, or the making of shared history.

What's tricky is that indeed such stories need not be factual to be powerful, just as long as they resonate well enough with the current "common sense" and have some kind of authority behind them giving them credence.

Historians do have an important responsibility here in making sure that people are aware of the real story so that they are not easily manipulated, just like journalists have a similar responsibility in a democracy, fourth estate and all.

CaptArmchair · 22h ago
Oh, it's not just the "big stories". History is a pretty big flag that covers a lot of territory. At heart, history is about asking perennial questions like "where do we come from?", "how did the past shape us today?" and "how could the past inform us?".

This is true on an individual as well as a collective level, and goes well beyond academia. Consider genealogy & family history, local and regional culture and traditions, remembrance,... There is always a personal connection, and that tends to become extremely tangible in individual stories. Whether that's finding a lost relative, honoring one's culture, or just being able to empathize with the lives of people who are centuries gone and discovering that they weren't all that different from us today.

Historians do carry a big responsibility. That's why accountability is at the heart of anyone who does historical research on a professional level; or are motivated to spread their interpretation of the historical record well beyond a few listeners. That's why historians are instilled with a reflex to keep a pragmatic attitude and ask critical questions.

galaxyLogic · 1d ago
I agree that the goal should be to "learn from history", but not in the narrow sense of how a specific historical catastrophe could have been avoided. We need a general understanding of history because it helps us understand human behavior in general. It helps us understand ourselves.
UncleMeat · 1d ago
> If we don't learn from history, why do we do history?

Historians uncover and communicate the story of humanity in as rich and diverse of a way as possible. This is, in my mind, somewhat comparable to the process of doing pure math research or fundamental physics research. While there may be practical outcomes for today (either unexpected or intended as a goal of a particular research direction), we also understand that doing math for its own purposes is valuable.

The process of doing archival research, putting sources in dialog with other research, and even simply reading secondary source writing achieves some positive outcomes. It widens and deepens empathy for the rich diversity of human behavior. It builds skills for critical analysis of media and communication. It can provide narrative and argument for people advocating for change today (both good and bad). But these do not need to be the reasons why we embark on history research. They are side outcomes of undirected analysis.

I'll also add that concern about "history departments funded by tax dollars" is just factually unfounded. My wife is a historian and the size of grants is hilarious coming from my background in CS. Like, a grant for $2,000-$5,000 would be considered chunky. And grants are often coming from weird places like random corporations or donor funding rather than from the government. The NEH was already basically dead before Trump 2.0 and now is dead and buried. People upset about academic history can rest easy knowing that there aren't historians living large on your tax dollars.

IggleSniggle · 18h ago
One of the primary advantages of having a software engineer (or lawyer, for that matter) on staff isn't that they produce software, not exactly. It's that they guide the flow of process, and can adapt the river as needed to keep the business running smoothly.

Historians, imho, serve a similar purpose for society at large. They digest the information of the past to make sense of it for today, and sometimes they build dams that redirect the informational flow of future history.

surgical_fire · 1d ago
> But what he describes as teaching history is more like history appreciation. It's not about how to use history as an aid to prediction. It's more about studying the "classics", from Cicero onwards.

How are those things dissociated?

I always saw history as a way to explain "why things are the way they are". Whatever we are now, this amalgamation of good or bad things, is a consequence of what came before.

Is "studying the classics" from a historical standpoint even done for "appreciation"? I always presumed that one should do so critically, and that's more or less what I found from actual historians.

UncleMeat · 23h ago
The “explain how we got here” approach to history has some pretty big limitations. The historical actors had no idea where the future was going. When we examine their actions from the perspective of the single contingency that ended up happening we will almost necessarily miss meaningful understanding and make things seem inevitable when they really weren’t.

There’s interesting writing to be done in this mode but it is definitely not the primary mode.

surgical_fire · 22h ago
> The historical actors had no idea where the future was going.

Does it matter in order to explain how things came to be?

Understanding their motivations, the incentives that led them to do what they did, the sociopolitical context they were inserted into, the limitations of a historical perspective (quite often the accounts of past historical figures were written by people invested in portraying it in one way or another).

All those things would help, when looking at history critically, to make some sense of the present and where things might be going in the future.

In a sense, things that happened were inevitable because they came to pass. We are not talking about a possibility, we are talking about a certainty long after the fact. Understanding that it might have been avoided in some ways can be helpful, but also is an exercise in wishful thinking and guesswork.

UncleMeat · 20h ago
> Does it matter in order to explain how things came to be?

Yes. Writing on historiography has been detailed about the ways that this sort of framing can be limiting.

BeFlatXIII · 14h ago
Studying the winners is also a surefire way to learn only survivorship bias.
_elephant · 1d ago
It makes a good point by separating history as strategic analysis from history as cultural appreciation. Most teaching today leans toward the latter, which is also easier for AI to replicate. But the most valuable historical thinking often comes from uncomfortable questions: failure, unintended consequences, and perspectives that usually get ignored.

No comments yet

spankibalt · 1d ago
> Military officers study much history, but in a different way. They look for mistakes.

Amongst other things...

FranzFerdiNaN · 1d ago
> It's not about how to use history as an aid to prediction.

Because this isnt the goal of history. You cant use history to predict the future.

> If you want to understand Cicero's era, for example, read [1]. This is a study of Cicero by an experienced and cynical newspaper political reporter. The author writes that too many historians take Cicero's speeches at face value. The author knows better, having heard many politicians of his own day.

No, they don't. This is yet another example of an outsider looking at another field, thinking he knows better and making a fool of himself. Also, why would you ignore 80 years of scholarship about Cicero and read a book from 1942?

pixl97 · 18h ago
>You cant use history to predict the future.

I mean this statement is somewhat false.

If I show you a picture of a cup from 100 ms ago and it's in midair, probability favors that the cup is on its way to the ground and its demise. Statistically this will be true.

History is filled with analogous situations where, by watching the present, you can make predictions far better than flipping a coin.

HexPhantom · 1d ago
When history is approached as a pattern of decisions, risks, and consequences (not just narratives) it starts to feel a lot more relevant to other complex domains, including AI itself. And yeah, studying the losers (or the overlooked) often yields more insight than lionizing the winners.
LurkandComment · 17h ago
1. PhD Defenses in the humanities have both a written thesis and an oral on the spot defense. You can't ChatGPT that easily because a professor will know how to curve ball to things that are seemingly unrelated but comparable to a human.

2. As an MA student I was solving semantic problems for engineers for analysis that they couldn't fathom. They were very smart at technical things (and great writers), but when language problems come up, it was a challenge. You can be a great communicator but not understand language itself.

3. Most people in positions for AI are being evaluated by things AI is good at. So as a candidate with a very good understanding of language, the AI wouldn't know how to evaluate my ability. I would really have to outline a problem in language that AI has to face and explain it to a human, then get them to understand its value.

kenjackson · 14h ago
I think all PhDs have an oral defense. Not only that, it's common to have quals where you have to present the state-of-the-art research in your field and answer oral questions about it. Even if you had ChatGPT it would be tough, because a lot of questions are like, "Why did XYZ do ABC after seeing result 123?" The problem is that often it's an untrue statement they are trying to get you to justify -- XYZ didn't do ABC, or didn't do it after seeing 123. This is something I imagine most LLMs would struggle with today: saying, "No, that didn't happen -- here's what actually happened," when it's in a nuanced field.
UncleMeat · 13h ago
I have never even heard of a trick or misleading question for quals or for a defense. That’d be ridiculous. You don’t need to do that.
wisty · 1d ago
A lazy physics teacher can make every question a math question in disguise. If a physics teacher is worried that a better calculator will make their test trivial, maybe they need to teach physics instead of testing math.

A lazy humanities teacher can make every question a writing question in disguise. If a better spell checker will make a humanities assessment trivial, then maybe they need to teach humanities instead of testing essay writing.

I'm saying this in a kinda inflammatory way, but does the quality of one's ideas really correlate well with a well-written essay?

UncleMeat · 23h ago
I agree that professors can probably develop AI-resistant evaluations.

But there is minimal institutional support for this. Everybody is going it alone. And the AI tools are changing rapidly. Telling an NTT faculty member teaching a 4/4 that they just need to stop being lazy and redo their entire evaluation method is not really workable.

Iteration is also slow. You do a new syllabus to try to discourage cheating. You run the course for a semester. You see the results with now just a few weeks to turn around for your spring semester courses. At best you get one iteration cycle per six months. More likely it is one per year since it is basically impossible to meaningfully digest the outcome of what you tried and to meaningfully try something different mere weeks later.

wrp · 22h ago
I've been urging this kind of adaptation for the past couple of years, with very little success. Switching to evaluating only on-site work was simple to do, but creating new forms of work has been largely a non-starter. Young professors are focused on their publication count, and most old codgers (like me) have their eyes on retirement, hoping to escape before accountability hits the fan. When I offer to help explore restructuring their teaching materials, the typical response is glazed eyes and a sudden awareness of the need to be elsewhere.
JKCalhoun · 22h ago
Yeah, I've been thinking about when the calculator arrived. I was in high school physics when (rich) kids started bringing in "scientific" calculators. There was a flap then about whether they should be allowed at all.

I'm not sure they are analogous to LLMs, but a compelling argument at the time was that the students were going to use them in the field anyway. (That certainly seems to be the future for software engineering students.)

this15testingg · 23h ago
Equating ChatGPT to "a better spell checker" is wild; how is writing not also a skill that should be taught?
teaearlgraycold · 23h ago
It’s the thinking that precedes writing that should be the bulk of your grade. Possessing what looks like an artifact of organized thought isn’t enough these days to evaluate a student.
ordinaryradical · 16h ago
The “thinking that precedes the writing” is not where the gold is. That kind of thinking is often confused, internally inconsistent, liable to miss critical details or nuance, and full of deductive leaps which may or may not pan out. Writing demands rigor of thought, it forces us to question premises, find evidence for a point of view, discern between the hypothetical and the factual, and try to organize these such that they cohere.

The thinking _is_ the writing. To be able to write is to be able to think, and if you are surrendering the writing to a machine that’s not ultimately what you’re surrendering—you’re giving up independent thought itself.

teaearlgraycold · 12h ago
You misunderstand. I mean the thinking that you say “_is_ the writing”. But the final words aren’t the thoughts that you should be graded on (aka “this is not a pipe”, but your essay is not your thoughts). What a new assessment could look like is an open book debate with a human (or, hell, with a rubber duck LLM). If at the end your thoughts are organized and self consistent you’ve done well.
cmrdporcupine · 18h ago
What needs to happen is a return to a more Socratic method of teaching the humanities.

Instead of being about the consumption and production of texts -- texts which only the TA and maybe the prof will read ... ever -- the class should be more about the dialectical process of discussing the ideas in the texts and lectures the students consume.

An LLM will be able to easily rattle off an undergrad level paper. But it can't sit in the classroom and have an intelligent conversation with peers.

Of course none of this will happen because it doesn't scale economically nearly as well. It is far more labour intensive.

shortrounddev2 · 23h ago
I think if you removed the economic incentive to cheat then humanities wouldn't have the problem it does with LLMs
wizzwizz4 · 23h ago
There's always an incentive to cheat: economic, reputational… people cheat in single-player video games. People cheat in sudoku puzzles. By removing all incentives, all you do is destroy the concept of "cheating", not prevent the behaviour.
Timwi · 23h ago
I think you've listed several disparate incentives that should be looked at separately.

Economic incentives are what this thread is about. If those are removed, some instances of cheating will subside.

Reputational incentives are similar and harder to address, but they are also less strong because the cheating has a much higher chance to backfire. If you are found out, your economic benefits might continue, but your reputation is immediately damaged.

Singleplayer games are an entirely different situation. There's clearly no economic incentive here, and reputational incentive only applies if you are looking to share your singleplayer result with other people, in which case I could argue it's no longer singleplayer in the game theoretic sense. If it's not that, the only remaining motivation to cheat at something like Sudoku or a singleplayer videogame is to learn or at least satisfy an itch or curiosity, which I think is a perfectly legitimate motivation.

pixl97 · 18h ago
>Economic incentives are what this thread is about.

You cannot remove economic incentives for the most part; 'economy' is attached to all facets of people's lives, especially where rival goods exist.

teaearlgraycold · 23h ago
It’s definitionally not cheating if it’s a single player environment. That’s like saying skipping to the end of a movie is cheating.
JKCalhoun · 22h ago
You forgot laziness.
dr_dshiv · 1d ago
What most people don’t know about history & humanities is how much work there is to do.

People get (rightly!) excited about decoding burnt scrolls in Herculaneum. But most don't realize that less than 10% of Neo-Latin texts from the Renaissance to the early modern period have been translated into English.

One example: Marsilio Ficino was a brilliant philosopher who was hired by Cosimo de' Medici to translate Greek texts (Plato, Plotinus, the Hermetica, etc.) into Latin. This had a massive impact on the Renaissance and the European Enlightenment. But many of Ficino's own texts have not been translated into English!

LLMs will, of course, have a massive impact on this… but so can students! Any student can make a meaningful contribution if they care to. There is so much to be discovered and unpacked. Unfortunately, so much of the humanities in high school is treated as an exercise rather than real discovery. I judge my students now based on how much I learn from them. Why not?

wizzwizz4 · 23h ago
The transformer architecture was originally developed for translation tasks, but massive, overfit generative models are awful at translation. (Seriously, part-of-speech classification + dictionary lookup + grammar mapping – an incredibly simplistic system with performance measurable in chapters per second – does a better job, and you even get confidence intervals.) If you want a translation tool, use something like Project Bergamot (https://browser.mt/), not generative AI.
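For the curious, the kind of system described above can be sketched in a few lines. This is only a toy illustration with a hypothetical four-word lexicon and one grammar rule; real rule-based systems (e.g. Apertium) use full morphological analyzers and large transfer-rule sets.

```python
# Toy sketch of "part-of-speech + dictionary lookup + grammar mapping":
# a miniature Spanish-to-English translator.

# Hypothetical lexicon: word -> (translation, part of speech)
LEXICON = {
    "el": ("the", "DET"),
    "gato": ("cat", "NOUN"),
    "negro": ("black", "ADJ"),
    "duerme": ("sleeps", "VERB"),
}

def translate(sentence: str) -> str:
    # Dictionary lookup with part-of-speech tags; unknown words pass through.
    tagged = [LEXICON.get(w.lower(), (w, "UNK")) for w in sentence.split()]
    # Grammar mapping: Spanish puts adjectives after nouns ("gato negro"),
    # English before ("black cat") -- swap NOUN+ADJ pairs.
    out, i = [], 0
    while i < len(tagged):
        if i + 1 < len(tagged) and tagged[i][1] == "NOUN" and tagged[i + 1][1] == "ADJ":
            out += [tagged[i + 1][0], tagged[i][0]]
            i += 2
        else:
            out.append(tagged[i][0])
            i += 1
    return " ".join(out)

print(translate("el gato negro duerme"))  # -> "the black cat sleeps"
```

Unlike a generative model, such a system fails loudly (unknown words stay untranslated) rather than silently inventing fluent text.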

> Unfortunately, so much of the humanities in high school is treated as an exercise rather than real discovery.

I, too, find this exceptionally annoying.

bonoboTP · 16h ago
That's an absurd claim. Show me the translation benchmark where you saw such a result.
dr_dshiv · 11h ago
wizzwizz4 · 12h ago
Does "every time I've challenged people about this on Hacker News" count as a benchmark? See e.g. https://news.ycombinator.com/item?id=35531558.
fc417fc802 · 10h ago
No, it doesn't. I don't find the linked thread remotely convincing regarding your (frankly preposterous) claim that preexisting non-deep-learning solutions are better at translation in the general case.

Don't get me wrong. I'm sure that once fine tuned by a human for a specific language pair that such systems are better at performing literal translation. But the value proposition of deep learning here is that you don't need a large team of experts to laboriously train a given language pair, that the entire training process is largely unsupervised, and that the translations aren't hopelessly literal. The ML algorithms can pick up on idioms given a sufficiently large dataset.

wizzwizz4 · 9h ago
Found the other good post in that thread: https://news.ycombinator.com/item?id=35532941. Google Translate and DeepL were capable of mostly-accurate Japanese translation two (or even 5) years ago (as seen in my previous link). Multiple people ran this text through GPT-4, and none of its translations were any good, as assessed by this Japanese-speaker.

There's a reason we needed the Rosetta Stone to give us an in for translating Egyptian hieroglyphics. You can't just throw Big Data at the problem and expect accurate results. Dedicated translation setups can get good results, but ChatGPT's ilk doesn't.

Convincing translations are not the same thing as accurate translations. RLHF optimises for convincing, not accurate.

fc417fc802 · 8h ago
You originally made a broad claim about the technology as a class though. It's not particularly surprising that chatgpt (from 2 years ago no less) isn't state of the art at translations given that it isn't optimized for that (at least that I'm aware). The same approaches that are used to construct LLMs can be applied to construct a model intended for machine translation. That is where the value proposition lies.
wizzwizz4 · 6h ago
I'm not sure whether you're missing the point by a mile, or whether I am.

The transformer architecture, later modified to create GPT models, was originally designed for translation. The modifications to make it do predictive text in a chatbot style make it much, much worse at translating: one of the biggest issues is that generative systems fail silently. Using the tech appropriately gives you things like Project Bergamot: https://browser.mt/.

dr_dshiv · 1h ago
I get your point about the surprising news that GPT-4 does so poorly at translation. I didn't know that!

However, I think the idea is that LLM technologies have improved considerably since then. Do you still feel that Claude or ChatGPT perform worse than DeepL? It would be really nice to have an objective benchmark for comparison

verisimi · 23h ago
The history we have inevitably passes through bottlenecks. Much is left out, edited away, embellished.

We have no idea what happened over 500 years ago, but the idea of a brilliant scholar translating the Greek texts on behalf of the Medicis shouldn't simply be accepted as stated. If you are running the world (as the Medicis did), history would be a lever of control. It seems inevitable that the stories would be directed (carefully) towards whatever ends.

History is actually a present day activity - it provides the backdrop to the present. Altering that backdrop has present value.

I can't see how AI will help us in the endeavour of trying to get a better handle on what happened in the past. I do see how it would provide modern-day Medicis a way to change the backdrop more quickly.

UncleMeat · 23h ago
Paleography, transcription, and translation are hellishly difficult to learn and really make it difficult for interested people to explore the archive. LLMs have become pretty darn good at doing this out of the box. Prior to this you needed specially trained ML models and before that there was no automation whatsoever.
vintermann · 21h ago
I can't find any big AI model which can read historical Kurrent-family handwriting well out of the box. You still need specially trained models (i.e. transkribus) which generalize terribly.
UncleMeat · 21h ago
Transkribus is definitely still the best option in some cases. But there are a bunch of cases where it was necessary two years ago and isn't necessary anymore, which is pretty remarkable.
dr_dshiv · 23h ago
> I can't see how AI will help us in the endeavour of trying to get a better handle on what happened in the past.

Well, AI democratizes the power of translation (and of interrogating texts). Before, only a few specialists could go directly to the source. Now anyone can try to make sense of it.

verisimi · 17h ago
We don't get to the source. Have you ever personally seen any historical sources? E.g. the sources for classical writings or the Bible? At best you get high-resolution imagery provided by suspicious folks such as RB Toth.

If the source is corrupt - as I suggest and as is possible because the Medicis provided/sanctioned their version - all you have is interpretation based on flawed data. Endless production of information (by ai) based on flawed sources (this is our history) only serves to increase the haystack rather than helping you to converge on truth.

quanto · 1d ago
> Today, engineers working on AI systems also need to think deeply and critically about the relationship between language and culture and the history and philosophy of technology. When they fail to do so, their systems literally start to break down.

Perhaps so. But not in the (quasi-)academic sense that the author is thinking. It's not the lack of an engineer's academic knowledge in history and philosophy that makes an AI system fail.

> Then there’s the newfound ability of non-technical people in the humanities to write their own code. This is a bigger deal than many in my field seem to recognize. I suspect this will change soon. The emerging generation of historians will simply take it for granted that they can create their own custom research and teaching tools and deploy them at will, more or less for free.

This is the lede buried deep inside the article. When the basic coding skill (or any skill) is commoditized, it's the people with complementary skills that benefit the most.

treyd · 1d ago
> When the basic coding skill (or any skill) is commoditized, it's the people with complementary skills that benefit the most.

I think that "knowing how to ask good questions" that you then solve has always been a valuable skill.

lwo32k · 1d ago
The kids are not even trying to do either. They are already gravitating towards multidisciplinary teams because, unlike past generations, they are dealing with a rate of change at a totally different level. In such an environment, people get to see their own limitations much faster, no matter the quality of their training, and they end up having to rely on others.

The big challenge is getting very different people with ever growing different skillsets and interests to coordinate, stay in sync and row in one direction.

sdoering · 1d ago
The field of history had its "oh, this is a thing" moment quite a few decades ago with Hayden White. It is a good example of the underlying issue.

Hayden White's assertion that "history is fiction" was (and still is) a complex one, not intended to dismiss the factual accuracy of historical narratives (as it is more often than not portrayed).

Instead, it highlights the interpretive nature of historical writing and the way historians shape their accounts of the past through literary and rhetorical techniques. White argues that historians, like novelists, use narrative structures and stylistic devices to construct meaning from historical events.

HexPhantom · 1d ago
The failures we see in AI systems usually come from neglecting real-world complexity
amelius · 23h ago
Basically a reformulation of Joel Spolsky's "commoditize your complement".
ReptileMan · 1d ago
>The emerging generation of historians will simply take it for granted that they can create their own custom research and teaching tools and deploy them at will, more or less for free.

And they will spend 12 hours trying to figure out which Python library or citation is real and which one the LLM has hallucinated. Vibe coding is just WYSIWYG on steroids, for good and bad. WYSIWYG didn't go anywhere.

mastazi · 1d ago
> which Python library or citation is real and which one the LLM has hallucinated. Vibe coding is just WYSIWYG on steroids

Maybe you haven't used AI coding tools in a while; the latest ones can run build tools, write and run unit tests, run linters, and will try to fix any errors that arise during those steps. Of course it's possible that a library may have been hallucinated, but this will just trigger an error during the build job and the AI agent will go back and fix it. Same thing for failing unit tests.

Just last week I saw Copilot fixing a failing unit test, then running the test, then making some more changes and repeating the process until the test was running successfully. At some point during this process, it asked me if it could install a VS Code extension so that it could run the test by itself, I agreed then it went from there until the issue was resolved. This was with the bottom-tier free version of Copilot.

Of course there are limits to what AI tools can do, but they are evolving all the time and at some point in a not too distant future they will be good enough in most cases.

Regarding hallucinated citations, I imagine that the problem can be solved by allowing the LLM to access and verify citations, then the agent can fix its own hallucinations just like most coding agents already do.
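The verify-and-retry loop described above can be sketched roughly like this. Everything here is hypothetical scaffolding: `resolve_doi` is a stub standing in for a real lookup against something like Crossref, and `generate` stands in for a model call that returns text plus the DOIs it cited.

```python
# Hypothetical sketch of a citation-verification loop: only accept model
# output once every cited DOI resolves against an external source.

def resolve_doi(doi: str) -> bool:
    # Stub: pretend only this one DOI exists. A real agent would query
    # Crossref or a library catalog here.
    return doi in {"10.1000/real-paper"}

def verified_output(generate, max_retries: int = 3) -> str:
    """generate() returns (text, list_of_dois); retry until all DOIs resolve."""
    for _ in range(max_retries):
        text, dois = generate()
        bad = [d for d in dois if not resolve_doi(d)]
        if not bad:
            return text
        # In a real agent, `bad` would be fed back into the next prompt
        # so the model can correct or drop the unverifiable citations.
    raise RuntimeError("could not produce output with verifiable citations")
```

This mirrors what coding agents already do with compilers and test runners: the external checker turns a silent hallucination into a loud, fixable error.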

ReptileMan · 23h ago
You have one skill that no historian possesses: the intuition for what could go wrong, how, and what to do when there is a hiccup in this chain.
vineyardmike · 1d ago
> WYSIWYG didn't go anywhere

Like MS Word?

These are pretty easy problems to solve, tbh. LLM tools already exist that can work around "hallucinating libraries" effectively, not that this is a real concern. It's not magic, but these tired skeptic takes are just not based on reality.

It’s much more likely that LLMs will be used to supercharge visualizations with custom UIs and widgets, or in conjunction with things like MS excel for data analysis. Non-engineers won’t be vibe-coding a database anytime soon, but they could be making a PWA that marketing can use to add a filter on photos or help guide a biologist towards a python script to sort pictures of specimens based on an OpenCV model.

bonoboTP · 16h ago
It's really embarrassing to stake claims on these. It's like when Gary Marcus said that an image generator won't be able to draw a horse riding an astronaut and will make the astronaut ride the horse, because that is the more frequent pattern in the training set. Or when an urban legend (academic legend?) started that state-of-the-art classifiers misclassify cows on sandy beaches and only recognize them on grass (this happened in some cases with small datasets and shortcut learning, but no SOTA image classifier had such glaring and straightforward errors; still, it trickled down to popular consciousness and grade-school teaching materials on AI limitations). Now it's about hallucinating nonexistent libraries. But reasoning models, RAG, large contexts, and web search make this much less of an issue. The limitations everyone points at, which trickle down to a soundbite that everyone repeats, usually don't turn out to be fundamental limitations at all.
ReptileMan · 15h ago
It is not about fundamental limitations. It is about black boxes and magic. If you don't understand a system then you know not what lurks within and how it can bite you in the ass. Black boxes break - and when they break, you are helpless.

If you already know how to build software, LLMs are a godsend. I actually had quite a nice case recently where an LLM invented some quite nice imaginary GraphQL mutators. I had enough experience in the field not to waste time debugging; a historian who hadn't shipped software before wouldn't.

There was WYSIWYG before; before that, visual programming. We have tried to abstract away that pesky complexity forever, so far with no success. I don't see anything in LLMs/gen AI that will change that. It will make good people more productive and sloppy people sloppier; it won't make people who are bad at solving problems good at it.

bonoboTP · 13h ago
Yes, the consequences will be that mediocre devs will have a harder time finding jobs. The higher skilled will be fine (for some time at least), but the trend is clear. LLMs are going to make the top engineers more effective and the less skilled will have less and less to contribute. The skill floor for useful contributions is rising. Just like in other industries. Most of the world's software projects aren't at the forefront of humanity's knowledge that needs brilliant minds. It's integration, dependencies, routing around edge cases, churning out some embedded code for another device, mostly standard stuff but enough differences that previous automation paradigms couldn't handle the uniqueness. LLMs plus a few highly skilled people (who know how to prompt them effectively) will get it done in less time than a team does today.

It's not that LLMs turn low skilled people into geniuses, it's that a large segment of even those with enough cognitive skills to work in software today will no longer have marketable skill levels. The experienced good ones will have some, but a lot won't.

UncleMeat · 13h ago
I am generally pretty skeptical of ai coding. At my job I’m resisting it pretty constantly. However, my wife is a historian with minimal programming skills and these tools have allowed her to build some things that she’d never have been able to build without them (without spending a thoroughly unreasonable amount of time learning). Sometimes she comes to me with some code that is clearly just totally wrong but most of the time she’s able to stumble her way to something useful for exploring her documents and data. Then we just do a code review before anything goes into a publication to check that it is actually doing what she intends.
neilv · 1d ago
> > Maintain professionalism and grounded honesty that best represents OpenAI and its values.

I think a humanities person could tell you in an instant how that part of the system prompt would backfire catastrophically (in a near-future rogue-AI sci-fi story kind of way), exactly because of the way it's worded.

In that scenario, the fall of humanity before the machines wasn't entirely due to hubris. The ultimate trigger was a smarmy throwaway phrase, which instructed the Internet-enabled machine to gaze into the corporate soul of its creator, and emulate it. :)

thomastjeffery · 17h ago
That's what would happen if it was a logical system. It's not. This is where it gets interesting.

Instead, it's a statistical model, and including that prompt is more like a narrative weight than a logical demand. By including these words in this order, the model will be more likely to explore narratives that are statistically likely to follow them, with that likelihood determined by the content the model was trained on, and the extra redistribution of weights via training.

We don't really need to worry about technically misstating our objectives to an LLM. It doesn't follow objectivity. Instead, we need to be concerned about misrepresenting the overall vibe, which is a much more mysterious task.

carbonguy · 1d ago
I spent some years as a teacher and so have some first-hand experience here; my take, for what it's worth, is that LLMs have indeed blown a gaping hole in the structure of education as actually practiced in the USA. That hole is: much of education is based on the assumption that the unsupervised production of written artifacts is proof of progress in learning. LLMs can produce those artifacts now, incidentally disrupting the paid essay-writing industry (one assumes).

From this, I agree with the article - since educators now have to figure out another hook to hang evaluation on, the questions "what the hell does it mean to 'learn', anyway?" and "how the hell do we meaningfully measure that 'learning', whatever it is?" have even more salience, and the humanities certainly have something meaningful to say on both of them. I'll (puckishly) predict that recitations and verbal examinations are going to make a comeback - harder to game at this point, but then, who knows how long 'this point' will last?

Zorass · 1d ago
For a long time, schools assumed that if a student turned in a written essay, it meant they had learned something. But now AI can write those essays too, so that assumption doesn't hold up anymore.

The real question is: if writing alone doesn’t prove learning, then what does?

Maybe we’ll see a return to oral exams or live discussions. Not because they’re perfect, but because they’re harder to fake.

In a way, AI didn’t ruin education it just exposed problems that were already there.

dotancohen · 1d ago
The definition of learning has not changed. One of our first written records is a man complaining that this newfangled writing thing will make the students lazy, they will no longer have to remember their studies.

Education will adapt.

palmotea · 1d ago
> One of our first written records is a man complaining that this newfangled writing thing will make the students lazy, they will no longer have to remember their studies.

And if your summary is correct, he was right. If you don't remember what you've learned (i.e. integrate it into your mind), you haven't learned anything. You're just, say, a book operator.

HPsquared · 1d ago
The solution has been around a very long time: closed-book tests (verbal or written). Works against LLMs too!
pixl97 · 16h ago
"""Solution"""

The particular problem here is the number of staff needed to actually administer and grade these kinds of tests. We're already talking about how expensive education is, just wait till this happens.

dotancohen · 1d ago
Exactly my point. It sometimes takes time for education to adapt to new tools. Writing, calculators, search engines, etc. I was in the last semester to learn mechanical drawing, the next semester learned CAD. Education adapts.
bazoom42 · 20h ago
What record is that? Just out of curiosity.
thomastjeffery · 16h ago
I think the main subject of that hole is objectivity, and it affects much more than education.

By obsessively structuring education around measurement, we have implied that what is taught is objective fact. That's always been bullshit, but - for the most part - it's been consistent enough with itself to function. The more strict your presentation of knowledge is, the more it can pretend to be truly objective. Because a teacher has total authority over the methodology of learning, this consistency can be preserved.

The reality has always been that anything written is subjective. The only way to learn objective fact is to explore it through many subjective representations. This is obvious to anyone who learns mathematics: you don't just see x^2+y^2=z^2, and suddenly understand the implications of the Pythagorean theorem.

Because objectivity can't be written, objectivity is not computable. We can't give a computer several subjective representations of a concept and have it figure the concept out. This is the same problem as ambiguity: there is no objectively correct way to compute an ambiguous statement. This is why we write in programming languages: each has a formal, unambiguous grammar, in which everything must be completely and explicitly defined. To write a computable statement, we must subject the abstract concept in mind to the language (grammar and environment) we write it in.

Most of the writing in our society is done with software. Because of this, we are implicitly encouraged to establish shared context. If what we write is consistent with the context of others' writing, it can pretend to be objective, and be more computable. Social interactions that are facilitated by software are also implicitly structured by that software's structure. There can be no true free-form dialogue in social media.

The exciting thing about LLMs is that they don't have this problem. They conveniently dodge it, by not doing any logic at all! Now instead of the software subjecting you to a strict environment, it subjects you to a familiar one. There is no truth, only vibes. There is no calculation, only guesses. This feels a lot like objectivity, but it's just a new facade. The difference is that the boundary is somewhere new and unfamiliar.

---

I think the death of objectivity is good progress overall. It's been a long time coming. Strict structure has always been overrated. The best way to learn is to explore as many perspectives as you can find. We humans can work with vibes and logic, together.

empath75 · 18h ago
A shockingly large percentage of education in the US is teaching kids to produce bullshit essays, so it's no surprise that AI is blowing a hole in it.
kouru225 · 18h ago
Bingo. If you want to just write an essay, then ChatGPT is perfect. If you want to write an essay that says something very particular, then ChatGPT starts to give you issues
rwmj · 22h ago
I'm pretty sceptical about the entire use of computers in education. For myself, I seem to be incapable of learning anything unless I read it from paper and write down notes in the margin or in my (paper) notebook. Obviously I use screens every day (being a programmer ...) but for actual recall of new material, I need to use paper. Also if I'm in a physical meeting or conference, I never have a laptop open, and I take notes on paper. This makes me doubt that you learn anything if laptops or tablets are involved. Nothing specific to AI, just my general observation.
aniviacat · 22h ago
> This makes me doubt that you learn anything if laptops or tablets are involved.

This may be true for you, but it certainly isn't generally true.

I haven't written anything substantial on paper in years, and I certainly have learned new things in that time.

kraftman · 21h ago
I think this comes down to how you first learnt to learn. If I write notes in Notepad I don't remember them as well as if I write them by hand, but I think that's just a byproduct of years of school; other people have learnt their own ways and are just as effective.
ordersofmag · 19h ago
There's an entire generation that is evidence that your personal case is not the general case.
dgb23 · 18h ago
Gamified learning apps, in a broad sense, seem to be very effective for _practice_.
xwowsersx · 13h ago
I have no doubt AI will impact the field, as it will many (most) others. That said, some assertions that may warrant reconsideration IMO:

> The machines best us across nearly every subject

This feels overstated, given the well-known limitations of AI in reasoning, factual reliability, and depth in specialized domains.

Describing traditional scholarship as mere "archaeological artifacts" feels like prematurely dismissing the enduring value of books and other research practices

- The idea that AI offers "pure attention" seems to romanticize what is, in essence, statistical pattern recognition...potentially misrepresenting the nature of the exchange

> Factory-style scholarly productivity was never the essence of the humanities

maybe inadvertent, but devalues serious academic work that follows rigorous, methodical research traditions

Beyond that, there are a few claims that would benefit from some additional evidence:

- Claims of student paralysis rely heavily on anecdote; broader empirical data across varied institutions would strengthen the case

- His comparisons between AI and human experts feel selective... would be interested to see a more systematic evaluation of where AI meaningfully matches or falls short of expert performance

- I'm somewhat partial to the author's view, but in terms of long-term educational outcomes, there's no real evidence that AI integration improves learning or contributes meaningfully to human development

- The idea of a "singularity of self-consciousness" is pretty... under-argued?

- The supposed link between declining humanities enrollment and AI's potential lacks concrete data to support the correlation

Tycho · 20h ago
I would love to be a literature student today. Consume the classics in audiobook form while flâneuring or gaming. Red team every essay with ChatGPT. Become a top tier Genius.com contributor for poetry. Listen to all the online lectures from Oxford, Yale etc. Effortlessly pick up where past critics have left off because the AI can summarise all contributions on any topic. Find every reference to support your thesis in the reference text — without scanning for hours.
KineticLensman · 19h ago
(source: am a literature student - UK open university)

I just asked ChatGPT a question about the use of language in a specific book (China Miéville's Kraken). It gave a plausible answer but used a common English word ('knackering') instead of the word 'knacking' that is actually used (the word 'knackering' isn't in the text). I then asked it to give me a quote using 'knackering' and it gave me a quote with the word 'knacking'. I asked it to put the quote into Harvard reference style, which it did, but it didn't provide a page number, which it said I'd have to look up myself.

I think relying on ChatGPT for an essay would be fairly frustrating, and if you used ChatGPT to develop your thesis, you could very quickly get a generic, plagiaristic answer (based on the quotes everyone else used) that contained factual errors, rather than one that captured your own response to the text.

[Edit] It also said that "gods inhabiting USB sticks are presented matter-of-factly" in Kraken which is false (there are no mentions of USB). When I asked it for a quote to support its assertion, it replied "While the novel doesn't explicitly depict gods inhabiting USB sticks..."
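Hallucinated quotes of this kind are at least mechanically checkable when you have the text on hand. A minimal sketch in Python (my own illustration, not anything the commenter used): normalize whitespace and case, then test whether the claimed quote actually occurs in the source.

```python
def quote_in_text(quote: str, text: str) -> bool:
    """Cheap guard against hallucinated quotes: case- and
    whitespace-insensitive substring check against the real text."""
    def norm(s: str) -> str:
        return " ".join(s.lower().split())
    return norm(quote) in norm(text)
```

An exact-match check like this will also reject legitimate quotes the model lightly paraphrased, so a failure is a prompt to look the passage up, not proof of fabrication.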

piltdownman · 19h ago
Mieville is a wonderfully challenging prompt for ChatGPT. In the same vein, I would imagine 'The City and The City' would end up with a bunch of half-hallucinated forum-posting and reviews regarding Disco Elysium.
KineticLensman · 18h ago
> Mieville is a wonderfully challenging prompt for ChatGPT.

Indeed. I'm doing a close reading of Kraken and have been amazed at the number of 'weird' entities and plot events that Miéville introduces [0]. It would be very easy to hallucinate a few more.

There are more than 100 passages in Kraken where Miéville names people, places and events that occur in other media or culture. For example, in a list of people associated with submarine technology, he throws in the name of the underwater stuntman [1] who played the Creature from the Black Lagoon. Again, it would be easy to hallucinate additional content.

[0] For example, a secondary character who was originally an ancient Egyptian burial slave-statue who went on strike in the afterlife, and in modern London is now the trade-union convenor for the Union of Magicked Assistants.

[1] Ricou Browning, if anyone is interested.

Tycho · 19h ago
Did you upload the text of the novel? It’s under copyright so wouldn’t be in the training data or retrievable via search.
KineticLensman · 19h ago
I didn't upload the text; I was trying to see what happened if one asked a question about a specific book, as if I was someone trying to do an essay in a hurry. I know the text quite well so could easily spot the hallucinated answers but someone with less knowledge might have been fooled. The experience reconvinced me not to rely on this specific LLM for help with my essays.
Tycho · 18h ago
Well, you have to give the tool a chance. If it can’t even see the text you’re asking about, it’s going to rely on whatever quotes and references it can crib together from reviews and blogs and comments.

Try it for a text that is in the training data, or the public internet, or can be put into the context window - then it might help.

KineticLensman · 18h ago
Well, yes, I'm sure if I uploaded a full text and spent time on the prompts I could get better results, although these might be similar to other students doing the same thing, which might get me a plagiarism fail.

My point was that I would be very hesitant to rely on ChatGPT as an assistant in a real literature task. Many of the texts on my Eng Lit course are in copyright, as is all of the module material (the OU's course-specific textbooks). The hallucinations are a real show-stopper.

Tycho · 18h ago
Your position seems to be “if you use the tool like a total dumbass, you might get a bad outcome.” Like… maybe use it intelligently?
stanford_labrat · 17h ago
On the topic of hammers: "if you don't hold the hammer at exactly 2.6cm from the end of the handle and strike the nail with 6N of force at an angle of 58 degrees then of course you won't get a good nail strike into the wood. Oh and you must only use acacia sourced from the subtropics".

Give me a break, it's a hammer. This is a perfectly normal "use" of ChatGPT and a good example of how a literature student may opt to try and use AI to make their work easier in some way. It also conveniently demonstrates some of the shortcomings of using a LLM for this sort of task. No need to call them a dumbass.

nluken · 18h ago
> Consume the classics in audiobook form while flâneuring or gaming

I can't tell if you're serious or not, but this would be like saying you've "watched" a movie because you had it on your second monitor while you were playing Counter-Strike. There's a fundamental difference between hearing a book read to you and actually listening and engaging with the audio on a meaningful level. The latter requires focus, and you're not going to be able to do that if you're gaming away at the same time.

Put another way, why would someone who presumably loves reading bother studying literature if they don't actually care enough to pay attention to the books they supposedly loved?

Tycho · 17h ago
I’ve listened to plenty of audiobooks while driving or walking, so why would I not be able to do the same while playing GTA?

The reason to do this is that just sitting in a chair ploughing through books gets very unappealing after a while. And there’s a lot of books to get through.

nluken · 17h ago
If you're getting what you want out of your reading then more power to you, but I think you'll find that devoting your full attention to what you're doing fundamentally differs from doing it in the background. Why play GTA when you could just listen and enjoy the book? What is GTA adding to that experience?

I think a lot of people, including, I presume, most who willingly choose to study literature, enjoy reading as an activity unto itself and so don't feel the need to add additional distractions. Not everybody finds sitting down and "ploughing through" (how disdainful a phrase!) a book unappealing.

Tycho · 15h ago
A serious literature degree might prescribe a reading list of ~10 books per week. Some of it stuff you don’t even like but is “important context” or whatever. At a certain point it becomes a chore for anyone. You need to find ways to help you get through more.

The reading (listening) of course is not “in the background”. Maybe sometimes you’re distracted and have to skip back 30s and re-listen. Fine. If the game is too distracting, fine, play something simpler, or watch soccer, do chores, anything where the moment to moment continuity does not require effort to track, but still gives you some benefit.

piltdownman · 19h ago
What a wonderful bit of satire.

To those that missed the joke: 'consuming' the classics is the antithesis of a liberal arts education. The value lies in the engagement, the debate, the Hegelian dialectic involved in arriving at a true grok-level understanding of the text or topic.

It would be akin to reading the Sparknotes of Ulysses and being able to reference how it draws heavily on Homer's Odyssey, or utilises stream-of-consciousness narrative to great effect; and thinking that, as a result, you have the faintest understanding of the text, its conception, or its impact.

The OP almost hit on this with the 'Listen to all the online lectures from Oxford, Yale, MIT, etc.' Unlike coding bootcamps or similar, universities are not VOCATIONAL TRAINING, however much the American economy pushes them towards that end-goal. As just about any educator can attest, no amount of listening to YouTube lectures will replace the university experience, never mind the Oxbridge/Ivy League experience.

The pedagogical benefits are simply unrealisable via AI-prompt 'streamlining': being forced to read and engage with topics outside of your comfort zone to maintain your GPA, engaging and working with people from a diversity of outlooks and backgrounds, benefitting from the 1:1 and small-group sessions with the academics who often wrote the literal book on the subject in question.

If the intersection of JSTOR and machine learning didn't reduce the humanities to Cory Doctorow-level script-kiddyism, the hoi polloi throwing prompts into a hallucinatory Markov chain isn't likely to advance or diminish academia any more than the excess of 'MBA IN 5 DAYS!' or '...for Dummies' titles previously available.

yupitsme123 · 17h ago
I ask this as someone with a lot of respect for education and the Humanities: do the majority of Humanities students actually get this type of education?

Any conversations I've had with students or graduates of Arts, Literature, etc. indicate that their education was very much about consuming and regurgitating. Maybe the top 5% approach their studies the way you're describing, but I've never seen anyone like that in the wild.

Tycho · 18h ago
I’m not being satirical. And nowhere did I suggest that you don’t still read the actual text you’re studying. There are more ways to “engage” with literature than just going to seminars.
peterldowns · 19h ago
You can do this without being in a formal college program, go for it!
j33zusjuice · 19h ago
OK, that last sentence would be incredible. I think this is tongue in cheek, but I was an English lit major so I feel the need to defend my honor. XD

I don’t think you’d do well in 400-level classes this way. English Lit isn’t as much of a joke as STEM students make it out to be[1]: it gets a lot harder than the bullshit 101/201 courses everyone is required to take. You’re supposed to try to be original in your analysis, and it has to be rigorously proven within the text itself.

Probably as a grad student you’d start arguing against other critics’ points, but not as an undergrad. I think that would hold for almost all schools, because no one at that level in that field wants to hear from someone who doesn’t know how to analyze a text in the first place. It’d be like a high school student trying to tell you about software (or systems, network, data, etc.) engineering.

For similar reasons, AI summarizations for past contributions wouldn't work, either. If you’re arguing someone else’s analysis is wrong, you’re going to need to read and understand the whole thing. And if you’re just copying from AI, you’ll have a hard time defending your position.

Although, man, if you can understand the subtext of a book from listening to an audiobook *while gaming*, AND you have time to watch all the online lectures about a book!? I need to talk to you about time management, my friend!

1 - I have been involved in so many forums and subreddits where people try to analyze books, comics, TV, or movies. Based on what I have seen come out of people there, most of you MFers couldn’t pass 300-level classes.

People can’t analyze literature for shit, and I think it’s because everyone gets such a negative perception of literary analysis because high school and required college classes are junk. It’s actually really hard to read five novels in a month, keep track of all the characters and plots and themes and so on, and understand all of them well enough to write a coherent argument. I saw so many kids in my major crying about their GPA getting tanked because they weren’t ready for rigorous analysis. FWIW, I was 25 as a junior (third year of uni, in the US), and had spent the previous few years reading exercise physiology papers while bored at work. Seeing real science changed my life, and I wanted to apply that level of rigor to my own analysis of any kind. It’s why I’m good at my job now, tbh.

Tycho · 18h ago
You probably wouldn’t be arguing that previous critics are wrong but surely to get top marks you’d be expected to know what relevant literary criticism has already been published and then build on that in some fashion. No doubt just reading that stuff would sharpen your own insights. AI should make it much easier to find the most relevant criticism and put it in context.

AI should also be able to help you gather evidence from the reference texts, because it can exercise reasonable judgement without any constraint on patience or access or time. Consider the recent social media sensation about the lady who got a PhD for analysing how smell factors into the fates of literary characters. AI can quickly find thousands of examples, and then filter them as desired.

You could even have the AI write essays for you - “analyse this novel through the lens of ____ theory” - where no literary criticism already exists to review. You could have it generate any number of such essays, applying theories/frameworks you might not even know, but want to understand better.

I think it’s possible to “read” an audiobook while doing something monotonous like walking, driving, or, yes, gaming. The lectures you probably have to treat like podcasts and just play them in the background and pick up some ideas.

piltdownman · 19h ago
//For similar reasons, AI summarizations for past contributions wouldn't work, either. If you’re arguing someone else’s analysis is wrong, you’re going to need to read and understand the whole thing. And if you’re just copying from AI, you’ll have a hard time defending your position.

QFT. It's like the Sparknotes scenario I outlined in my post above - what you get from this level of engagement isn't insight, least of all a debatable position, it's just a loosely cohesive bunch of table-quiz facts.

//People can’t analyze literature for shit, and I think it’s because everyone gets such a negative perception of literary analysis because high school and required college classes are junk.

Because most of what is being examined is passive/active voicings, brain-dead symbolism, and ham-fisted and dated metaphors as literary vehicles.

Even at university level there should be an emphasis on a 101-level course hammering home the importance of Critical Theory in literary criticism as a framework and approach for dissecting texts. Without a basic understanding of the cultural, historical, and ideological dimensions under which a text was conceived and published, you haven't a hope of climbing the foothills of Beckett, Camus, Dickens, Dostoevsky, Eliot, Joyce, Kafka, Shaw...

meroes · 17h ago
Sounds like all ways to skirt around the actual learning bit. Consuming the classics while gaming? Really?
mathgradthrow · 17h ago
Opposite the Turing test, there is its dual: "Can you administer a turing test? Can you tell if you are talking to a person?"

There are lots of people who fail this test. Many of them are professional evaluators in the humanities. They should unquestionably lose their jobs as evaluators.

greenavocado · 17h ago
That is the reverse Turing test: https://www.youtube.com/watch?v=MxTWLm9vT_o
gitfan86 · 18h ago
For every Quentin Tarantino there are dozens of people who could make amazing films but never had the opportunity, for a wide variety of reasons that now mostly go away.
shermantanktop · 17h ago
And for each of those dozens, there are now millions of untalented people who can generate terrible films with a few clicks. How does a Tarantino, or even one of your new talents, ever get discovered?

The friction is a feature, not a bug.

physicsguy · 17h ago
> The friction is a feature, not a bug.

The friction means only wealthy people can do anything in a sphere.

shermantanktop · 10h ago
We've seen unprecedented democratization of media over the last 30 years, primarily due to the internet. I'm personally not seeing a corresponding positive impact on cultural creativity. I see a lot of tweets and a lot of Etsy stores, and the audience for long novels (for example) is aging out.
gitfan86 · 10h ago
Quality rises to the top organically.
throwaway75 · 21h ago
What is the general take on the ethics of using AI as a much more powerful search engine? For example, "Find all occurrences in the text where Odysseus' will is overridden by the Gods." The question is not something that would be directly set by an instructor, but might be required to substantiate an interpretation that a student is aiming for.

Finding something like this is difficult and requires reading the text closely. But with AI, you could get away with reading only the passages it returns and their surroundings.

pasc1878 · 21h ago
Will the AI find ALL the occurrences? Hallucinated ones are OK here, as the user can check each one; the real problem is the genuine occurrences it silently misses.
throwaway75 · 21h ago
Perhaps not. But it'll probably find sufficient passages/instances that are useful.
pasc1878 · 21h ago
Yes but the question did say ALL.

Finding some yes it might well be quicker than a manual read.

So it depends on what the actual requirement is.

throwaway75 · 21h ago
Let's say sufficient instances for the student to prove his/her point - the student used "all" in the AI prompt, but really meant as many as possible from which they could pick and choose. Is it ethical or should it be viewed as cheating?
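For what it's worth, the recall problem this subthread circles around (ALL vs. merely some occurrences) is partly mechanical: a model can't attend reliably to a whole epic at once. A hedged sketch of one common workaround: split the text into overlapping chunks, ask about each chunk separately, then merge the hits. `llm_find` here is a hypothetical stand-in for whatever model call you use; only the chunking and merging are concrete.

```python
def chunk_text(text: str, size: int = 2000, overlap: int = 200) -> list[str]:
    """Split text into overlapping windows so a passage straddling a
    boundary still appears whole in at least one chunk."""
    if size <= overlap:
        raise ValueError("chunk size must exceed overlap")
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap
    return chunks

def find_passages(text: str, question: str, llm_find) -> list[str]:
    """llm_find(chunk, question) -> list of quoted passages (may be empty).
    Query every chunk and de-duplicate, keeping order of first sighting."""
    seen, hits = set(), []
    for chunk in chunk_text(text):
        for passage in llm_find(chunk, question):
            if passage not in seen:
                seen.add(passage)
                hits.append(passage)
    return hits
```

Even this only raises recall; it cannot prove completeness, which is exactly the objection raised in this subthread.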
Leo-thorne · 22h ago
AI hasn’t destroyed the humanities, but it has definitely made things more complicated and harder to teach. I’ve seen students rely entirely on chatbots to write essays, and it really does make teaching harder. As the article says, we need to find ways to make AI and the humanities work together, not just hold on to the old way of doing things.
thijson · 19h ago
A lot of my engineering exams didn't allow certain calculators, so I guess that's what will happen for the humanities: supervised and timed essay writing.

I remember going to museums and seeing all the modern art, and wondering why art went to such weird places compared to the realism of the 1600s, and my conclusion was that the invention of photography made realism obsolete. I wonder if something similar will happen to the humanities.

HexPhantom · 1d ago
When effort becomes optional, so does growth. And the quote from the student who sees college as networking with a side of AI-cheating? Painfully on the nose. There's a real risk of LLMs flattening the educational journey into a kind of performance optimization game
phito · 23h ago
> There's a real risk of LLMs flattening the educational journey into a kind of performance optimization game

It's already been the case for a long time, it's just going to get worse.

thenthenthen · 20h ago
I have been saying this (not sure about the weird part), but is it not obvious that in an AI-saturated 'field', actual, factual, practical work should have an easy advantage?
seydor · 23h ago
Humans are mimetic creatures. The biggest damage (some call it influence) that LLMs will do is that the next generation will talk like an LLM. They might even normalize hallucination.
thijson · 19h ago
That reminds me of a professor that would play various forms of music compressed in different ways. The class actually preferred the distortion produced by mp3 over uncompressed.
krapp · 22h ago
We've already begun to normalize the belief that humans hallucinate as often as LLMs, and are categorically less reliable and less trustworthy than LLMs. The political and social ramifications of this are bound to be negative.
numpad0 · 22h ago
Is the hype-cycle effect hitting AI hard recently? The arguments haven't changed in years, and the emotional components in them feel progressively toned down.
bwfan123 · 17h ago
The hype cycle is running on FOMO ("as a developer, you will be left behind") and is sustained by fear tactics ("there won't be any developer jobs left").

Giant investments are being made, and a lot of people's outcomes rest on them, which incentivizes them to become evangelists for the cause.

At the end of the day it is a productivity-boosting tool like a search engine or a calculator; what's funny is seeing the human drama playing out around its adoption.

lapcat · 17h ago
> When I asked him why he had gone through so much trouble to get to an Ivy League university only to off-load all of the learning to a robot, he said, “It’s the best place to meet your co-founder and your wife.”

On the one hand, this guy is not wrong, from a personal perspective. On the other hand, guys like this are a total waste of time and space in the classroom.

Schools have always faced a fundamental internal conflict, i.e., dueling purposes, and recent developments such as the rise of ChatGPT and the rise of student loan debt may have finally brought the entire educational system to a crisis.

We like to think of education as the primary purpose of the educational system, hence the name. And education is indeed essential, not just for job training but also for the enrichment of human life, as well as the preservation of democracy and freedom. Arguably, the latter are more important. Yet we also rely on the educational system for social and economic sorting. It's a ranking system that has a monumental effect on the future prospects of students. And as the above quotation notes, it's a club, an exclusive club, where the lucky few get to meet the right people and enter into important social circles.

At the university level, things become even more muddled, because professors and scientists are performing important research that may have little or nothing to do with the education of undergraduate students. A lot of this research is important for society, and for industry, but it still comes into conflict with the other purposes of the university.

So how do we resolve the inherent conflict? Is there a way to split the educational system into separate entities that are truer to their individual purposes? Can we make a place for the social ladder climber that's separate from a place where real learning happens, where nobody is tempted to cheat because there's nothing to be gained from cheating?

It should also be noted that not everyone is ready to learn. Our current educational system is designed like an assembly line, based on age and social promotion. Everyone of the same age is expected to follow the same track. I personally wasn't ready to study and learn when I was a teenager; it took several more years for me to mature emotionally, develop ambition and discipline.

It's difficult to force-feed students: “Most assignments in college are not relevant,” he told me. “They’re hackable by AI, and I just had no interest in doing them.” Perhaps, though, there might be more motivation to learn if learning was not tied to future economic and social prospects. If students view school as merely a tool to achieve certain benefits, then of course they'll view anything else in school as a waste of time.

In a sense, our educational system finds failure to educate to be acceptable, even desirable, because failure fulfills the alternative purpose of social ranking.

pjc50 · 16h ago
> Can we make a place for the social ladder climber that's separate from a place where real learning happens, where nobody is tempted to cheat because there's nothing to be gained from cheating?

I don't think so. Let me suggest a set of hypotheses:

- the social ladder always exists; it's basically fundamental to human existence, especially when trying to do anything (see tyranny of structurelessness)

- people are always trying to climb the hierarchy

- the examocracy(+) was introduced by ancient China and/or Prussia in an attempt to bring actual competence into the social hierarchy, in large part to help the leader fight wars

- if there isn't a "fair" route to the top, competent outsiders will resort to unfair ones, which can be remarkably lethal and destructive

- the university provides its own small academic and intellectual hierarchy, but this exists in dialogue with the needs of a wider society and its demand for non-academic intellectually specialized work. Whether that's priests, lawyers, doctors, army officers, weapons designers, civil servants, or programmers

- this comes down to: does the wider society demand human competence in intellectual operations still, or does ALL of that get AI-automated away? Do we get "humans, but with a smarter Clippy" or do we get "AI runs the world for billionaires and everyone else is surplus peasantry".

- the secret third, worse, case is that AI destroys the concept of competence and things stop working on a large scale.

(+) I do not want to have a huge fight over the word "meritocracy"

lapcat · 16h ago
It's unclear why you think the educational system needs to be part of the social ladder.

I'm imagining for example that all citizens are required to receive an education, and demonstrate the same level of mastery in the subjects taught as everyone else, without grades or rankings. You might call it "pass-fail", but I resist that idea, because I don't consider failure to be an option, except in rare cases of learning disability.

umanwizard · 15h ago
Do you reject the idea that some people are better at things than others? 99% of people simply can’t learn the same level of mathematics that a pure mathematician does, no matter how hard they try or how competently they’re taught, and then what happens in your system to people who would have been pure mathematicians?
lapcat · 15h ago
> Do you reject the idea that some people are better at things than others?

No. Of course not.

> 99% of people simply can’t learn the same level of mathematics that a pure mathematician does

Why would this be a requirement for all citizens? Perhaps you misunderstood "demonstrate the same level of mastery". I meant that there would be a specified minimum requirement, though we definitely shouldn't set the bar too low. What I would reject is social promotion, where students get to move along to the next level as long as they have a D grade or higher. This does not demonstrate sufficient understanding of the material taught.

Grades are for ranking. I'm suggesting that we ditch grades and simply demand that everyone learn what we expect them to learn. If students learn more than what's expected, good for them, that's not a problem.

> what happens in your system to people who would have been pure mathematicians

I'm not sure what you mean. How am I preventing people from becoming pure mathematicians?

umanwizard · 14h ago
How do you decide who to filter into the more difficult and faster-paced classes that will prepare someone to be a pure mathematician, other than grades?
lapcat · 13h ago
I'm not sure exactly what you mean by "filter into" or at what level or stage of classes you're referring to.

To a large extent, it's self-filtering: who is going to voluntarily take advanced math classes except those who are interested in advanced math?

There are of course larger societal questions of how much schooling is funded by the government and how much needs to be self-funded.

I don't know if this is what you're talking about, but I had to take the Graduate Record Exam in order to get into grad school. But the GRE is not taught or administered by the university, so in that sense it's not part of the educational system. You don't actually have to be in the math department to take an advanced math class, though, and other students may take the classes as part of a PhD minor or just for some other interest.

umanwizard · 10h ago
You need to filter them way before that. Certainly by 16, probably even earlier, like 12 or 13, someone destined for a career in STEM will be bored and wasting time on any material that 90% of people are capable of learning.
lapcat · 9h ago
I still don't know what you mean by "filter". I asked, and you haven't elaborated.

The only thing I can think of is that you (mistakenly) assume that I'm suggesting all students must move at the same pace? But that would be a very weird interpretation of me, since I've already expressed strong criticism of that idea: "Our current educational system is designed like an assembly line, based on age and social promotion. Everyone of the same age is expected to follow the same track."

umanwizard · 9h ago
Sort students into different classes based on their ability and interest.
lapcat · 8h ago
I have no objection to that, and I don't know why you think I would.

You seem to be assuming that "all students must meet these standards" means that all students must meet these standards at the same age and time, but I never said or implied that.

umanwizard · 8h ago
Right, but how do you filter students, if not by grades? With no grades, how do you know which ones are capable of a more rigorous program? (Similarly, later, how do you know which ones should get into the best universities? And so on)
lapcat · 7h ago
> Right, but how do you filter students, if not by grades? With no grades, how do you know which ones are capable of a more rigorous program?

The concept here is mastery rather than grades. When a student has mastered a subject, they can move on to more rigorous subjects. If they haven't mastered a subject, then they need to continue studying that subject. When we socially promote students who receive a grade of D, C, or even B, I wonder: what is the student missing? Why are we giving up and moving on just because an arbitrary amount of time has passed? And if the student has failed to completely learn things at the current level, what's going to happen when the student moves on to higher, harder levels? That's a disaster waiting to happen, compounding ignorance over time.

It appears that the "answer" to these questions is that our society cares more about ranking students than it does about educating students. So we allow the education to be incomplete for many or perhaps most students, as long as we already have our ranking from "best" to "worst" (at a given age). This is, in my opinion, an unfortunate consequence of the multiple conflicting purposes of the educational system.

> (Similarly, later, how do you know which ones should get into the best universities? And so on)

I already talked about the GRE, for example, so I don't know why you're still confused about that. (I'm not actually defending the GRE, or the SAT, another standardized test, but there are clearly ways to evaluate people separate from schools and grades.)

I do have some problems with the inequality of opportunities implied by "the best universities", but that's a whole other discussion. I've already mentioned that the conflicting purposes of the educational system extend to the college level too, and are arguably even worse at that level. Think back to the very first quote I posted, from the A.I. cheater for whom the point of an Ivy League university was not to receive an education but to make the right social connections. How is academic achievement even relevant there? To me, this is a sign that the current system is fundamentally broken.

pjc50 · 16h ago
> why you think the educational system needs to be part of the social ladder.

Depends if you want the people running the country to be competent or not.

lapcat · 15h ago
> Depends if you want the people running the country to be competent or not.

In some sense, the goal of the educational system should be to make everyone competent. In any case, it's unclear what you mean by "running the country". Politically speaking, the people currently running the US are not competent, despite the fact that the educational system and the social hierarchy are inextricably entangled.

Whatever kind of -ocracy you're proposing (I'm personally not proposing here that we abolish democracy), I don't see any inherent reason why a social ranking system and the educational system can't be separate rather than combined into one.

When employers start looking at the alma mater of applicants, their grades in school, and such (reportedly Mark Shuttleworth of Canonical is obsessed with high school!), that's when the entire educational system gets warped. Learning gets cast aside in favor of credentialism, and consequently, cheating to get ahead.

renewiltord · 14h ago
Everyone's always claiming the humanities are important, but everyone I know in STEM is more learned than everyone I know in the humanities. More philosophy, more history, more classical literature. This means that humanities education is relatively bogus.
photochemsyn · 15h ago
Let's be honest: majoring in the humanities is the easy way to a college degree and thus attracts people put off by the hard sciences and all the mathematical rigor which can't be offloaded onto an LLM as you do have to solve problems in real time on exams without any external assistance.

If the student's goal at any Ivy League college is to 'meet their partner and their co-founder' then attending social functions where they can expand their networks and meet new candidates for those roles will take precedence over spending three hours diligently studying difficult material for every hour spent in lecture.

Of course, computer science students and others in the hard sciences have been gaming the system for decades, with many solutions to take-home programming exercises found in online forums, and there's always the option of paying a tutor to ease the way forward - and LLMs are essentially inexpensive tutors that vastly help motivated students learn material - a key aid when many university-level professors view teaching as an unpleasant burden and devote minimal time and effort to it, with little preparation of material and recycling of tests from a decade ago that are all archived in fraternity and sorority collections.

The solution to students using LLMs to cheat is obvious - more in-class work, more supervised in-section work, and devaluing take-home assignments - but this means more time and toil for the instructors, who are often just as lazy and unmotivated as the students.

[note the in-person coding interview seems to have been invented by employers who realized anyone could get a CS degree and good grades without being a good programmer via cheating on assignments, and this happened well before LLMs hit the scene]

paulpauper · 1d ago
When I was a postdoc at Columbia, I taught one of the core curriculum classes mentioned here, with a reading list that included over a dozen weighty tomes (one week was spent on the Bible, the next week on the Quran, another on Thomas Aquinas, and so on). There wasn’t much that was fun or easy about it.

This sums up the problem. It's not fun and the only thing that matters is the credential anyway, so unsurprisingly students are outsourcing the drudgery to AI. In the past it was CliffsNotes. The business model of college-to-career needs to be overhauled.

The humanities are important, yet at the same time, not everyone should be required to study them. AI arguably makes it easier to learn, so I think this is a welcome development.

timewizard · 12h ago
> Generative AI elevates the value of humanistic skills

No. It just steals them, whitewashes their copyright, then passes them off as a "machine creation." If there were no humans actively using language there would be no language models.

> Generative AI makes it harder to teach humanistic skills

Yea. That's what happens when you steal for a living.

> For this reason, it is vitally important that educators learn how to personally create and deploy AI-based assignments and tools that are tailored directly for the type of teaching they want to do.

All with zero evidence that this will improve outcomes for them. For this reason you should TEST YOUR HYPOTHESIS.

I hate AI hype.

bios444 · 1d ago
Yes
bgwalter · 1d ago
Another day, another Substack ad by this "science historian", naturally with a quote by AI shill Simon Willison. The entire world rests on his shoulders.
antithesizer · 1d ago
It's quaint how there are people who think something called "the humanities" exerts some occult guiding influence on the world. Studying the humanities would disabuse them of that notion.
Caelus9 · 1d ago
I think that when a new era comes, we should choose to embrace it actively. Today, the humanities face unprecedented challenges, but they also contain new opportunities. The rise of artificial intelligence has undoubtedly subverted traditional teaching methods, but it also forces us to rethink a fundamental question: What is the real value of the humanities? As a historian at Princeton University once pointed out, artificial intelligence can process data and generate text, but it can never replace human subjective experience, moral judgment, and our exploration of meaning, such as questions like “Who am I?” In the final analysis, the value of the humanities may not be “useful” in the traditional sense, but in helping us better understand ourselves and the world we live in.
throw8393949 · 1d ago
> My idea is that students will read Darwin’s writings first, then demonstrate what they learned via the choices they make in game. To progress, you must embody the real epistemologies and knowledge of a 19th century naturalist.

It is kind of funny, since modern sociology and humanities are in a deep denial of theory of evolution, Darwinism and any of its implications.

> producing a generation of students who have simply never experienced the feeling of focused intellectual work.

This was already blamed on phones. Some study said like 30% of university graduates are functionally illiterate; they are unable to read a book and hold its content in memory long enough to understand it!